| id (int64, 2.74B–3.05B) | title (string, 1–255 chars) | user (string, 2–26 chars) | state (2 classes) | labels (list, 0–24 items) | comments (int64, 0–206) | author_association (4 classes) | body (string, 7–62.5k chars, nullable ⌀) | is_title (bool, 1 class) |
|---|---|---|---|---|---|---|---|---|
2,893,421,709
|
[ROCm] Add TF32 option for Flex Attention for gfx90a
|
jataylo
|
closed
|
[
"module: rocm",
"open source",
"release notes: rocm",
"module: inductor",
"ciflow/inductor",
"ciflow/rocm"
] | 2
|
COLLABORATOR
|
Add a TF32 option for the flex attention kernels. Performance doesn't seem to always be better with TF32, so we will add this as an autotuning option.
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @hongxiayang @naromero77amd @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,893,293,603
|
set non_blocking to true in torch._foreach_copy_ to improve performance
|
aahehehe
|
open
|
[
"oncall: distributed",
"triaged",
"open source",
"release notes: distributed (fsdp)"
] | 4
|
NONE
|
The `non_blocking` parameter of the `torch._foreach_copy_` interface has a default value of False, which triggers synchronous copies by default. However, in `FSDP`, when the input tensors reside on the same device, synchronization is unnecessary. To improve performance, a check has been added: if the input tensors are on the same device, `non_blocking` is set to True.
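A minimal sketch of the proposed check (an illustration, not the exact PR diff):
```python
import torch

def foreach_copy_maybe_nonblocking(dsts, srcs):
    # Skip synchronization only when every destination/source pair shares a device.
    same_device = all(d.device == s.device for d, s in zip(dsts, srcs))
    torch._foreach_copy_(dsts, srcs, non_blocking=same_device)
```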
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,893,220,606
|
Introduce guard_or_true, guard_or_false
|
laithsakka
|
closed
|
[
"Merged",
"Reverted",
"ciflow/trunk",
"release notes: fx",
"fx",
"module: inductor",
"ciflow/inductor"
] | 32
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148430
some context in this document:
https://docs.google.com/document/d/18nJsj-F2C_QXO7ClwzPcAUENQ-B440B43W7DdDnlDt4/edit?tab=t.0#heading=h.pgebnyi7pocj
But the TL;DR: `guard_or_true` and `guard_or_false` are better than `guard_size_oblivious` because:
- It is easier to reason about what assumptions we are making while reading the code.
- They avoid size-oblivious complexity that is not needed.
- They avoid the unsoundness that could make `guard_size_oblivious(a==1)` be true when it is not true for some value `a` at runtime.
- Fewer data-dependent errors in some cases: e.g., when doing `guard_size_oblivious(a==1)` where we know `a` is a tensor size, if it is traced with `a=u1-u2`, `guard_size_oblivious(a==1)` will throw a data-dependent error, but `guard_or_false` will just return `False`.
### How is it different from statically_known_true?
**`if(cond)`:** (normal guarding) will try to evaluate the condition statically and otherwise guard on it, willing to restrict the input space to evaluate `cond`. If it fails to evaluate due to a data-dependent error, it will throw an exception (which can be converted to a graph break in some situations).
**`statically_known_true(cond)`:** used when you never want to add a guard (restrict your input space), but just want a best-effort check of whether something can be inferred to be true/false based ONLY on existing constraints.
**`guard_or_true(cond)` / `guard_or_false(cond)`:** used in situations where you prefer to guard and know the result of the expression over not guarding, but in case you hit a data-dependent error you are OK with just returning true or false.
Some reasons you might be OK with returning true/false instead:
1. It's an optimization; I do not want to fail just because the optimization could not be performed.
2. I am willing to deviate from the normal semantics when I have unbacked symbols, for the benefit of not failing (see the doc above for more details).
**`definitely_true(cond)`:** same as `guard_or_false(cond)` except it does not try static evaluation for unbacked symbols (we plan to deprecate it and replace its uses with `guard_or_false`, or make it an alias of `guard_or_false`).
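A minimal usage sketch of these semantics (assumes the new helper is importable from `torch.fx.experimental.symbolic_shapes` alongside the existing `statically_known_true`; the import path for the new helper is an assumption):
```python
# Sketch only: guard_or_false is the new helper from this PR;
# statically_known_true already exists.
from torch.fx.experimental.symbolic_shapes import guard_or_false, statically_known_true

def can_skip_broadcast(size) -> bool:
    # Never add a guard: pure best-effort static check.
    if statically_known_true(size == 1):
        return True
    # Prefer guarding; on a data-dependent (unbacked) size, return False
    # instead of raising, which simply means we skip the optimization.
    return guard_or_false(size == 1)
```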
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,893,213,072
|
Optimize `torch.distributions` Score function
|
zeshengzong
|
open
|
[
"triaged",
"open source",
"topic: not user facing"
] | 5
|
CONTRIBUTOR
|
Fixes #148253
## Test Result
### Before

### After

| true
|
2,893,138,944
|
DISABLED test_sys_modules (__main__.MiscTests)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 5
|
NONE
|
Platforms: mac, macos, rocm, asan, linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_sys_modules&suite=MiscTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/38144911987).
Over the past 3 hours, it has been determined flaky in 16 workflow(s) with 32 failures and 16 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_sys_modules`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `dynamo/test_misc.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,893,138,857
|
DISABLED test_capture_tracked_dynamic_shapes (__main__.DynamicShapesHigherOrderOpTests)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 8
|
NONE
|
Platforms: asan, linux, rocm, mac, macos
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_capture_tracked_dynamic_shapes&suite=DynamicShapesHigherOrderOpTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/38144264978).
Over the past 3 hours, it has been determined flaky in 12 workflow(s) with 24 failures and 12 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_capture_tracked_dynamic_shapes`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/dynamo/test_higher_order_ops.py", line 538, in test_capture_tracked
self._test_wrap_simple(f, default_args_generator((x, y)), arg_count)
File "/var/lib/jenkins/pytorch/test/dynamo/test_higher_order_ops.py", line 191, in _test_wrap_simple
self.assertEqual(len(wrap_node.args), expected_num_wrap_args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 4096, in assertEqual
raise error_metas.pop()[0].to_error( # type: ignore[index]
AssertionError: Scalars are not equal!
Expected 4 but got 9.
Absolute difference: 5
Relative difference: 1.25
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/dynamo/test_dynamic_shapes.py DynamicShapesHigherOrderOpTests.test_capture_tracked_dynamic_shapes
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `dynamo/test_dynamic_shapes.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,893,138,763
|
DISABLED test_empty_graph_nested_calls_fullgraph_False_dynamic_shapes (__main__.DynamicShapesReproTests)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 5
|
NONE
|
Platforms: mac, macos, linux, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_empty_graph_nested_calls_fullgraph_False_dynamic_shapes&suite=DynamicShapesReproTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/38141374275).
Over the past 3 hours, it has been determined flaky in 5 workflow(s) with 12 failures and 5 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_empty_graph_nested_calls_fullgraph_False_dynamic_shapes`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `dynamo/test_dynamic_shapes.py`
cc @clee2000 @malfet @albanD @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,893,084,753
|
[ROCM] `linalg.eigh` crash with `float64` dtype and shape `[8192,8192]`
|
Qubitium
|
open
|
[
"module: crash",
"module: rocm",
"triaged"
] | 12
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Platform: AMD `MI300X`
ROCM: `rocm/jammy,now 6.3.3.60303`
OS: Ubuntu 22.04
Torch: 2.7.0.dev20250228+rocm6.3
`linalg.eigh` crash with `float64` dtype and shape `[8192,8192]`
Run the following unittest to reproduce:
[GPTQModel: test_linalg.py](https://github.com/ModelCloud/GPTQModel/blob/main/tests/test_linalg.py)
As shown in the above unit test, the dtype + shape combo of `float64 + [8192,8192]` crashes PyTorch, but using the `magma` backend fixes the issue. So our unit test triggers a bug in the default `hip` `linalg` backend. We did not sweep all shapes, only a few, to show that this is shape specific.
Exception:
```
> torch.linalg.eigh(matrix)
E RuntimeError: hipsolver error: HIPSOLVER_STATUS_INTERNAL_ERROR, when calling `hipsolverDnDsyevd_bufferSize(handle, jobz, uplo, n, A, lda, W, lwork)`. If you keep seeing this error, you may use `torch.backends.cuda.preferred_linalg_library()` to try linear algebra operators with other supported backends. See https://pytorch.org/docs/stable/backends.html#torch.backends.cuda.preferred_linalg_library
```
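For reference, a minimal repro sketch (assumption: this mirrors the linked GPTQModel unit test rather than copying it verbatim):
```python
import torch

m = torch.randn(8192, 8192, dtype=torch.float64, device="cuda")  # AMD MI300X via ROCm in our case
m = m @ m.T  # symmetric input so eigh is well defined
torch.linalg.eigh(m)  # crashes via hipSOLVER with HIPSOLVER_STATUS_INTERNAL_ERROR

# Reported workaround: switch the preferred linalg backend to magma.
torch.backends.cuda.preferred_linalg_library("magma")
torch.linalg.eigh(m)  # succeeds
```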
### Versions
```
Collecting environment information...
PyTorch version: 2.7.0.dev20250228+rocm6.3
Is debug build: False
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: 6.3.42131-fa1d09cbd
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.12.9 | packaged by Anaconda, Inc. | (main, Feb 6 2025, 18:56:27) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-133-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: AMD Instinct MI300X (gfx942:sramecc+:xnack-)
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: 6.3.42131
MIOpen runtime version: 3.3.0
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 256
On-line CPU(s) list: 0-255
Vendor ID: AuthenticAMD
Model name: AMD EPYC 9534 64-Core Processor
CPU family: 25
Model: 17
Thread(s) per core: 2
Core(s) per socket: 64
Socket(s): 2
Stepping: 1
Frequency boost: enabled
CPU max MHz: 3718.0659
CPU min MHz: 1500.0000
BogoMIPS: 4892.29
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local avx512_bf16 clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq la57 rdpid overflow_recov succor smca fsrm flush_l1d
Virtualization: AMD-V
L1d cache: 4 MiB (128 instances)
L1i cache: 4 MiB (128 instances)
L2 cache: 128 MiB (128 instances)
L3 cache: 512 MiB (16 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-63,128-191
NUMA node1 CPU(s): 64-127,192-255
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.2.3
[pip3] pytorch-triton-rocm==3.2.0+git4b3bb1f8
[pip3] torch==2.7.0.dev20250228+rocm6.3
[pip3] torchaudio==2.6.0.dev20250228+rocm6.3
[pip3] torchvision==0.22.0.dev20250228+rocm6.3
[conda] numpy 2.2.3 pypi_0 pypi
[conda] pytorch-triton-rocm 3.2.0+git4b3bb1f8 pypi_0 pypi
[conda] torch 2.7.0.dev20250228+rocm6.3 pypi_0 pypi
[conda] torchaudio 2.6.0.dev20250228+rocm6.3 pypi_0 pypi
[conda] torchvision 0.22.0.dev20250228+rocm6.3 pypi_0 pypi
```
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,893,062,210
|
Temp test
|
CaoE
|
open
|
[
"module: mkldnn",
"open source",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ciflow/linux-aarch64"
] | 7
|
COLLABORATOR
|
For testing.
cc @gujinghui @PenghuiCheng @XiaobingSuper @jianyuh @jgong5 @mingfeima @sanchitintel @ashokei @jingxu10 @min-jean-cho @yanbing-j @Guobing-Chen @Xia-Weiwen @snadampal @voznesenskym @penguinwu @EikanWang @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,893,047,386
|
[Intel GPU][pt2e]: Collapse 3D input to 2D for matmul in qlinear_pointwise_binary fusion
|
ZhiweiYan-96
|
closed
|
[
"module: cpu",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"keep-going",
"ciflow/xpu"
] | 6
|
COLLABORATOR
|
# Motivation
During the `qlinear_pointwise_binary` lowering pass, dim collapsing only occurs when the post-op is `add`. It is the responsibility of the C++ kernels to handle the dimensions for the post-op `sum`.
# Details
This PR explicitly reshapes the input from 3D to 2D in the `qlinear_pointwise_binary` op. Besides, we refactor the implementation of `qlinear_pointwise_binary.tensor` to call `qlinear_pointwise_binary`, removing duplicated code.
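A hedged Python sketch of the dim-collapsing idea (the actual change lives in the C++ `qlinear_pointwise_binary` kernel, so the function below is illustrative only):
```python
import torch

def collapse_3d_for_linear(x, weight, bias=None):
    # View a 3D activation [B, M, K] as 2D [B*M, K] for the matmul,
    # then view the result back to [B, M, N].
    B, M, K = x.shape
    out_2d = torch.nn.functional.linear(x.reshape(B * M, K), weight, bias)
    return out_2d.reshape(B, M, -1)
```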
# UT testing
`python test/inductor/test_mkldnn_pattern_matcher.py -k test_qlienar_add_xpu`
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #148522
* __->__ #148423
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @voznesenskym @penguinwu @EikanWang @Guobing-Chen @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,893,031,576
|
[set_linter] allow x in {...}
|
jansel
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148422
| true
|
2,893,030,692
|
Add cutlass kernel for rowwise scaled mm on sm100
|
danielvegamyhre
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: cuda"
] | 9
|
CONTRIBUTOR
|
### Important
- Previous PR in stack https://github.com/pytorch/pytorch/pull/148274
- Despite the changes between sm90 and sm100 being fairly minimal, I created a separate kernel since we'll be making various arch-specific perf optimizations to the sm100 kernel next.
- This kernel has not been optimized yet. However, initial perf testing shows numbers which indicate the tensor cores are being utilized as expected (not just CUDA cores).
### Summary of changes
- This PR adds a new cutlass kernel for rowwise GEMM on sm100 (a reference-semantics sketch follows this list).
- sm100 kernel is based on sm90 kernel, with the following changes:
- Use new arch tag `cutlass::arch::Sm100`
- Do not use [large tile](https://github.com/pytorch/pytorch/blob/4eb0c45297555c53e948258a94e80f288a3f4cf0/aten/src/ATen/native/cuda/RowwiseScaledMM.cu#L203) schedule in CollectiveMainLoop or CollectiveEpilogue (causes build errors)
- SM90 vs SM100 kernel diff: https://www.diffchecker.com/ZCAPaFAg/
### Next steps
- Arch specific performance optimization
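For context, a reference-semantics sketch of a rowwise-scaled GEMM in plain PyTorch (assumption: this is only the usual rowwise FP8 dequantization math, not the CUTLASS sm100 kernel itself):
```python
import torch

def rowwise_scaled_mm_ref(a, b, scale_a, scale_b, out_dtype=torch.bfloat16):
    # a: [M, K] and b: [K, N] quantized operands (e.g. fp8),
    # scale_a: [M] per-row scales of a, scale_b: [N] per-column scales of b.
    acc = a.to(torch.float32) @ b.to(torch.float32)
    return (acc * scale_a[:, None] * scale_b[None, :]).to(out_dtype)
```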
| true
|
2,893,030,083
|
Float8_e4m3fn
|
wangcheng2013
|
open
|
[
"triaged",
"module: mps",
"module: float8"
] | 0
|
NONE
|
### 🐛 Describe the bug
Using Flux.1 on the MPS backend fails with:
`Trying to convert Float8_e4m3fn to the MPS backend but it does not have support for that dtype.`
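A minimal repro sketch (assumption: this is the conversion Flux.1 attempts under the hood):
```python
import torch

t = torch.zeros(4, dtype=torch.float8_e4m3fn)  # created on CPU
t.to("mps")  # raises: Trying to convert Float8_e4m3fn to the MPS backend ...
```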
### Versions
Trying to convert Float8_e4m3fn to the MPS backend but it does not have support for that dtype.
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen @yanbing-j @vkuzo @kadeng @penguinwu
| true
|
2,893,015,603
|
ci: Add sccache to manylinux images
|
seemethere
|
open
|
[
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"ci-no-td"
] | 6
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #149675
* #143672
* __->__ #148419
Adds sccache to our manylinux images; these are purposefully built
without the sccache-dist binary since we're not expecting to use that.
Another caveat of these builds is that they are built with the vendored
version of openssl.
This is to set the stage for us to be able to build binaries
sequentially.
Signed-off-by: Eli Uriegas <github@terriblecode.com>
| true
|
2,892,888,891
|
[2/N] Use Python 3.9 typing
|
cyyever
|
open
|
[
"oncall: distributed",
"triaged",
"open source",
"module: inductor",
"module: dynamo",
"ciflow/inductor",
"release notes: export"
] | 1
|
COLLABORATOR
|
Fixes #ISSUE_NUMBER
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,892,876,710
|
Add 'x in {...}' patterns to perf_linter
|
jansel
|
open
|
[
"Stale",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148417
* #148416
* #148415
* #148414
* #148413
* #148422
* #148412
| true
|
2,892,876,617
|
Add perf_linter to auto-fix some anti-patterns
|
jansel
|
open
|
[
"Stale",
"topic: not user facing"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #148417
* __->__ #148416
* #148415
* #148414
* #148413
* #148422
* #148412
| true
|
2,892,876,536
|
Automated perf_linter changes: x in (...)
|
jansel
|
open
|
[
"module: rocm",
"Stale",
"release notes: fx",
"topic: not user facing",
"fx",
"ciflow/mps",
"module: inductor",
"module: dynamo",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #148417
* #148416
* __->__ #148415
* #148414
* #148413
* #148422
* #148412
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,892,876,450
|
Automated perf_linter changes: list constructors
|
jansel
|
open
|
[
"module: rocm",
"Stale",
"topic: not user facing",
"fx",
"module: inductor",
"module: dynamo",
"ciflow/inductor",
"release notes: AO frontend",
"module: compiled autograd"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #148417
* #148416
* #148415
* __->__ #148414
* #148413
* #148422
* #148412
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov @xmfan
| true
|
2,892,876,368
|
Automated perf_linter changes: generators
|
jansel
|
open
|
[
"module: rocm",
"Stale",
"release notes: fx",
"topic: not user facing",
"fx",
"module: inductor",
"module: dynamo",
"ciflow/inductor",
"module: compiled autograd"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #148417
* #148416
* #148415
* #148414
* __->__ #148413
* #148422
* #148412
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov @xmfan
| true
|
2,892,876,294
|
Disable flake8 advice C416
|
jansel
|
open
|
[
"topic: not user facing"
] | 5
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #148417
* #148416
* #148415
* #148414
* #148413
* #148422
* __->__ #148412
This is not a good suggestion, since it is almost 2x slower:
```
>>> timeit.timeit("tuple(x for x in range(10))")
0.39464114885777235
>>> timeit.timeit("tuple([x for x in range(10)])")
0.21258362499065697
>>>
```
| true
|
2,892,850,518
|
Torch 2.6 doesn't have TCPStore::TCPStore symbol in cu126 binary, but it's available in headers
|
xwang233
|
closed
|
[
"module: binaries",
"triaged"
] | 2
|
COLLABORATOR
|
### 🐛 Describe the bug
Torch 2.6 doesn't have the TCPStore::TCPStore symbol in the cu126 binary, but it's available in the headers. This caused a runtime issue in our PyTorch extension: the extension builds, but it can't be imported and the error message points to a missing symbol.
The symbol is in cu118 and cu124 binary but not in cu126 binary.
```
ImportError: /opt/pyenv/lib/python3.12/site-packages/nvfuser/_C.cpython-312-x86_64-linux-gnu.so: undefined symbol: _ZN4c10d8TCPStoreC1ESsRKNS_15TCPStoreOptionsE
```
Reproduce with this bash script and docker
```bash
#!/bin/bash
set -x
script() {
echo "
set -x;
pip install torch --no-deps --index-url https://download.pytorch.org/whl/$1;
grep -r 'explicit TCPStore(std::string host, const TCPStoreOptions& opts = {})' /usr/local/lib/python3.12/site-packages/torch/include/;
nm -D /usr/local/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so | grep _ZN4c10d8TCPStoreC1ESsRKNS_15TCPStoreOptionsE;
c++filt _ZN4c10d8TCPStoreC1ESsRKNS_15TCPStoreOptionsE;
"
}
docker pull python:3.12
docker run -i --rm python:3.12 bash -c "$(script cu118)"
docker run -i --rm python:3.12 bash -c "$(script cu124)"
docker run -i --rm python:3.12 bash -c "$(script cu126)"
```
### Versions
torch 2.6 binary cu126
cc @seemethere @malfet @osalpekar @atalman @ptrblck @nWEIdia @naoyam
| true
|
2,892,849,094
|
Add api info for torch._C._nn.pyi [1/N]
|
shink
|
open
|
[
"open source",
"Stale",
"topic: not user facing"
] | 2
|
CONTRIBUTOR
|
Part of: #148404
| true
|
2,892,837,132
|
I don't use FSDP, it can train.
|
Vieeo
|
closed
|
[] | 1
|
NONE
|
> This looks more relevant as a flux issue--could you open an issue in their repo? https://github.com/black-forest-labs/flux
>
> I guess it is a FSDP problem,
> when forward:
> weight.shape, mod._weight_mask.shape: torch.Size([6144, 3072]) torch.Size([6144, 3072])
> but backward:
> torch.Size([2360064]) torch.Size([6144, 3072])
>
> Here, weight is not ok.
_Originally posted by @Vieeo in [#148251](https://github.com/pytorch/pytorch/issues/148251#issuecomment-2696088922)_
| true
|
2,892,802,150
|
Enable `_lazy_clone` between CPU and MPS
|
kurtamohler
|
open
|
[
"open source",
"release notes: lazy",
"release notes: mps",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 6
|
COLLABORATOR
|
Adds `device` arg to `_lazy_clone` to enable lazy cloning data from one device to another. At the moment, only the following cases are supported:
* Source is a pinned CPU tensor and destination is MPS.
* Source is an MPS tensor and destination is CPU.
* Source and destination devices are the same.
This PR also adds support for pinned CPU tensors on MPS builds, which was not working properly before.
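A hedged usage sketch (the `device` argument on `_lazy_clone` is what this PR adds; the precise call form of this private API is an assumption):
```python
import torch

src = torch.randn(4, 4).pin_memory()       # pinned CPU tensor (a supported source)
mps_copy = src._lazy_clone(device="mps")   # data is materialized lazily, on first write
```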
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150569
* #150721
* __->__ #148408
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,892,801,347
|
Enable ASAN on inductor CUDA tests
|
cyyever
|
closed
|
[
"module: cpu",
"triaged",
"open source",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 1
|
COLLABORATOR
|
Fixes #ISSUE_NUMBER
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @voznesenskym @penguinwu @EikanWang @Guobing-Chen @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,892,784,097
|
ci: Move s390x builds with the rest
|
seemethere
|
closed
|
[
"topic: not user facing"
] | 2
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #143672
* #148419
* __->__ #148406
Moves the s390x builds, which were in a separate workflow, into the
workflow that builds the rest of the manywheel images. There is no need to
have a completely separate workflow for this.
Signed-off-by: Eli Uriegas <github@terriblecode.com>
| true
|
2,892,750,943
|
Add api info for torch._C._nn.pyi
|
FFFrog
|
open
|
[
"open source",
"topic: not user facing"
] | 5
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148405
The APIs involved are as follows:
- adaptive_avg_pool2d
- adaptive_avg_pool3d
- binary_cross_entropy
- col2im
Related issue:
https://github.com/pytorch/pytorch/issues/148404
| true
|
2,892,745,220
|
The apis in torch._C._nn.pyi is nonexhaustive
|
FFFrog
|
open
|
[
"module: nn",
"triaged"
] | 1
|
COLLABORATOR
|
### 🐛 Describe the bug
The C API provided by the torch._C._nn module is inconsistent with torch._C._nn.pyi; some API stubs are missing. The missing APIs are listed below (an illustrative stub sketch follows the list):
- [x] `adaptive_avg_pool2d` #148405
- [x] `adaptive_avg_pool3d` #148405
- [x] `binary_cross_entropy` #148405
- [x] `col2im` #148405
- [ ] `cross_entropy_loss`
- [ ] `elu`
- [ ] `glu`
- [ ] `hardsigmoid_`
- [ ] `hardswish`
- [ ] `hardswish_`
- [ ] `huber_loss`
- [ ] `im2col`
- [ ] `l1_loss`
- [ ] `max_pool2d_with_indices`
- [ ] `max_pool3d_with_indices`
- [ ] `max_unpool2d`
- [ ] `max_unpool3d`
- [ ] `mish`
- [ ] `mish_`
- [ ] `mse_loss`
- [ ] `multilabel_margin_loss`
- [ ] `multi_margin_loss`
- [ ] `nll_loss_nd`
- [ ] `relu6`
- [ ] `relu6_`
- [ ] `silu`
- [ ] `silu_`
- [ ] `smooth_l1_loss`
- [ ] `soft_margin_loss`
- [ ] `upsample_bicubic2d`
- [ ] `_upsample_bicubic2d_aa`
- [ ] `upsample_bilinear2d`
- [ ] `_upsample_bilinear2d_aa`
- [ ] `upsample_linear1d`
- [ ] `upsample_nearest1d`
- [ ] `upsample_nearest2d`
- [ ] `upsample_nearest3d`
- [ ] `_upsample_nearest_exact1d`
- [ ] `_upsample_nearest_exact2d`
- [ ] `_upsample_nearest_exact3d`
- [ ] `upsample_trilinear3d`
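An illustrative sketch of the kind of stub entries being added to `torch/_C/_nn.pyi` (the parameter lists here are assumptions; the real signatures follow the native schema):
```python
from typing import Optional, Sequence
from torch import Tensor

def adaptive_avg_pool2d(input: Tensor, output_size: Sequence[int]) -> Tensor: ...
def binary_cross_entropy(
    input: Tensor,
    target: Tensor,
    weight: Optional[Tensor] = ...,
    reduction: int = ...,
) -> Tensor: ...
```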
@shink @zeshengzong
### Versions
Collecting environment information...
PyTorch version: 2.7.0a0+gitb3bb73e
Is debug build: True
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.2 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.27.0
Libc version: glibc-2.35
Python version: 3.9.16 (main, May 15 2023, 23:46:34) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-73-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.6.68
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: Tesla T4
GPU 1: Tesla T4
Nvidia driver version: 560.35.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.4.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 6151 CPU @ 3.00GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 1
Stepping: 4
BogoMIPS: 6000.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 arat md_clear flush_l1d arch_capabilities
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 256 KiB (8 instances)
L1i cache: 256 KiB (8 instances)
L2 cache: 8 MiB (8 instances)
L3 cache: 24.8 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-15
Vulnerability Itlb multihit: KVM: Mitigation: VMX unsupported
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Meltdown: Vulnerable
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Retbleed: Vulnerable
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Vulnerable, IBPB: disabled, STIBP: disabled, PBRSB-eIBRS: Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; Clear CPU buffers; SMT Host state unknown
Versions of relevant libraries:
[pip3] botorch==0.8.5
[pip3] flake8==6.1.0
[pip3] flake8-bugbear==23.3.23
[pip3] flake8-coding==1.3.3
[pip3] flake8-comprehensions==3.15.0
[pip3] flake8-executable==2.1.3
[pip3] flake8-logging-format==0.9.0
[pip3] flake8-pyi==23.3.1
[pip3] flake8-simplify==0.19.3
[pip3] gpytorch==1.10
[pip3] mypy==1.13.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] onnx==1.14.0
[pip3] onnxruntime-gpu==1.15.1
[pip3] onnxscript==0.1.0.dev20231109
[pip3] optree==0.13.0
[pip3] pytorch-lightning==2.0.6
[pip3] torch==2.7.0a0+gitb3bb73e
[pip3] torchao==0.7.0+gite41ca4ee
[pip3] torchmetrics==1.0.1
[pip3] torchmultimodal-nightly==2023.7.31
[pip3] torchrl==0.1.1
[pip3] torchvision==0.16.2
[pip3] torchx==0.5.0
[pip3] triton==3.0.0
[conda] botorch 0.8.5 pypi_0 pypi
[conda] gpytorch 1.10 pypi_0 pypi
[conda] numpy 1.26.4 pypi_0 pypi
[conda] optree 0.13.0 pypi_0 pypi
[conda] pytorch-lightning 2.0.6 pypi_0 pypi
[conda] torch 2.7.0a0+gitb3bb73e dev_0 <develop>
[conda] torchao 0.7.0+gite41ca4ee dev_0 <develop>
[conda] torchfix 0.4.0 pypi_0 pypi
[conda] torchmetrics 1.0.1 pypi_0 pypi
[conda] torchmultimodal-nightly 2023.7.31 pypi_0 pypi
[conda] torchrl 0.1.1 pypi_0 pypi
[conda] torchvision 0.16.2 pypi_0 pypi
[conda] torchx 0.5.0 pypi_0 pypi
[conda] triton 3.2.0+git0d4682f0 dev_0 <develop>
(torch)
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki
| true
|
2,892,736,700
|
Use oneDNN v3.7.1 for Intel GPU
|
ZhiweiYan-96
|
closed
|
[
"module: mkldnn",
"open source",
"Merged",
"ciflow/trunk",
"keep-going",
"ciflow/xpu",
"release notes: xpu",
"ciflow/linux-aarch64"
] | 19
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148403
cc @gujinghui @PenghuiCheng @XiaobingSuper @jianyuh @jgong5 @mingfeima @sanchitintel @ashokei @jingxu10 @min-jean-cho @yanbing-j @Guobing-Chen @Xia-Weiwen @snadampal
| true
|
2,892,732,736
|
Generate two reduction loops for vectorization
|
shunting314
|
open
|
[
"feature",
"triaged",
"oncall: pt2",
"module: inductor"
] | 9
|
CONTRIBUTOR
|
### 🐛 Describe the bug
A general reduction reduces each row of an [xnumel, rnumel] 2D tensor (multi-dimensional cases can be treated as a 2D tensor by flattening the reduction and non-reduction dimensions). When rnumel is not well aligned (128-byte aligned), Inductor will pad the strides of the tensor to make the memory access more efficient.
E.g., if rnumel=50257, for a bf16 tensor Inductor pads the strides to the next multiple of 64 elements and we get 50304. The tensor's shape is not changed; only the strides get padded.
There are 2 problems if we do a reduction on such a tensor:
1. There is a Triton bug where such a reduction has un-coalesced memory access due to the non-constant mask for a potentially vectorized load. ( https://github.com/pytorch/pytorch/issues/122840 )
2. The load is not vectorized and can be less efficient.
Here is an optimization to fix it. We can split the reduction loop into 2 loops. The first loop is the main loop; it iterates over all the elements up to `rnumel_rounded = rnumel // RBLOCK * RBLOCK`. The second loop handles the leftover elements not handled by the main loop.
Example code:
```
def triton_red_fused__log_softmax_16_manually_modified_to_two_loops(in_out_ptr0, in_ptr0, out_ptr0, xnumel, r0_numel, XBLOCK : tl.constexpr, R0_BLOCK : tl.constexpr):
xnumel = 32768
r0_numel = 50257
rnumel = r0_numel
RBLOCK: tl.constexpr = R0_BLOCK
xoffset = tl.program_id(0) * XBLOCK
xindex = xoffset + tl.arange(0, XBLOCK)[:, None]
xmask = tl.full([XBLOCK, R0_BLOCK], True, tl.int1)
r0_base = tl.arange(0, R0_BLOCK)[None, :]
rbase = r0_base
x0 = xindex
_tmp3_max = tl.full([XBLOCK, R0_BLOCK], float('-inf'), tl.float32)
_tmp3_sum = tl.zeros([XBLOCK, R0_BLOCK], tl.float32)
# first loop
r0_numel_round = (rnumel // R0_BLOCK) * R0_BLOCK
for r0_offset in range(0, r0_numel_round, R0_BLOCK):
r0_index = r0_offset + r0_base
# r0_mask = r0_index < r0_numel
roffset = r0_offset
rindex = r0_index
r0_1 = r0_index
tmp0 = tl.load(in_ptr0 + (r0_1 + 50304*x0), None, eviction_policy='evict_first').to(tl.float32)
tmp1 = tmp0.to(tl.float32)
tmp2 = tl.broadcast_to(tmp1, [XBLOCK, R0_BLOCK])
_tmp3_max_next, _tmp3_sum_next = triton_helpers.online_softmax_combine(
_tmp3_max, _tmp3_sum, tmp2, True
)
_tmp3_max = _tmp3_max_next
_tmp3_sum = _tmp3_sum_next
# second loop
for r0_offset in range(r0_numel_round, r0_numel, R0_BLOCK):
r0_index = r0_offset + r0_base
r0_mask = r0_index < r0_numel
roffset = r0_offset
rindex = r0_index
r0_1 = r0_index
tmp0 = tl.load(in_ptr0 + (r0_1 + 50304*x0), r0_mask, eviction_policy='evict_first', other=0.0).to(tl.float32)
tmp1 = tmp0.to(tl.float32)
tmp2 = tl.broadcast_to(tmp1, [XBLOCK, R0_BLOCK])
_tmp3_max_next, _tmp3_sum_next = triton_helpers.online_softmax_combine(
_tmp3_max, _tmp3_sum, tmp2, True
)
_tmp3_max = tl.where(r0_mask, _tmp3_max_next, _tmp3_max)
_tmp3_sum = tl.where(r0_mask, _tmp3_sum_next, _tmp3_sum)
tmp5, tmp6 = triton_helpers.online_softmax_reduce(
_tmp3_max, _tmp3_sum, 1, True)
tmp5 = tmp5[:, None]
tmp6 = tmp6[:, None]
tmp3 = tmp5
tmp4 = tmp6
tl.store(out_ptr0 + (x0), tmp3, None)
tmp7 = tl_math.log(tmp4)
tl.debug_barrier()
tl.store(in_out_ptr0 + (x0), tmp7, None)
```
This guarantees that the first loop (which handles the majority of elements) is fully vectorized and has coalesced memory access.
A full example https://gist.github.com/shunting314/2fb1f5381b62b363d1046a2e05741e7b
Perf for softmax:
1-loop version: 3.013ms 3.294GB 1093.18GB/s
2-loop version: 2.076ms 3.294GB 1586.94GB/s
That is about a 1.5x speedup for this kernel.
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @aakhundov @jansel @eellison
### Error logs
.
### Versions
.
cc @chauhang @penguinwu
| true
|
2,892,712,197
|
[dynamo] show stack above dynamo in graph break user tracebacks
|
williamwen42
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor",
"module: compile ux"
] | 7
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #148736
* __->__ #148401
Also show the line of code relevant to a dynamo-compiled frame, instead of just the first line (this was broken for data-dependent jump graph breaks and on 3.11+).
Also collapse resume frames together (use config.verbose to see the full stack trace, for developers).
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,892,706,244
|
[Docs] update bucketize documentaion
|
blaine-rister
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: python_frontend"
] | 9
|
CONTRIBUTOR
|
Fixes #144504
Clarify the documentation for `torch.bucketize` by referencing the existing table. The current version includes a somewhat confusing explanation for the `right` kwarg, whereas the existing table is much clearer.
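For reference, the behavior that table documents (output values taken from the existing `torch.bucketize` docs):
```python
import torch

boundaries = torch.tensor([1, 3, 5, 7, 9])
v = torch.tensor([3, 6, 9])
torch.bucketize(v, boundaries)              # tensor([1, 3, 4]): boundaries[i-1] < v <= boundaries[i]
torch.bucketize(v, boundaries, right=True)  # tensor([2, 3, 5]): boundaries[i-1] <= v < boundaries[i]
```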
| true
|
2,892,702,801
|
[BE] Move `sinc` kernels to the same OP family
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing",
"release notes: mps",
"ciflow/mps"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #148449
* #148448
* __->__ #148399
| true
|
2,892,702,739
|
[BE] Remove stale arg for complex ops
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing",
"release notes: mps",
"ciflow/mps"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #148399
* __->__ #148398
No need to pass DTYPE0 and DTYPE1 if only one DTYPE is used
| true
|
2,892,698,848
|
[inductor][fuzzer] `IndexError` error at `torch.dstack`
|
WLFJ
|
open
|
[
"triaged",
"oncall: pt2",
"module: inductor",
"topic: fuzzer"
] | 2
|
NONE
|
### 🐛 Describe the bug
Reproduce:
```python
import torch
def f():
sym_0 = 964
sym_1 = 806
sym_2 = -9063443332548498471
sym_3 = 2
sym_4 = 1
sym_5 = 0
sym_6 = False
sym_7 = False
sym_8 = -1
sym_9 = (776984,)
var_161 = torch.triu_indices(row=sym_0, col=sym_1, offset=sym_2)
var_315 = torch.randperm(n=sym_3)
var_46 = torch.ops.aten.embedding_backward(var_161, var_315, sym_4, sym_5, sym_6, sym_7)
var_336 = var_46.unflatten(dim=sym_8, sizes=sym_9)
tup_0 = (var_336,)
return torch.dstack(tup_0)
print('eager', f())
print('inductor', torch.compile(f)())
```
### Error logs
running result:
```
eager tensor([[[0],
[0],
[0],
...,
[0],
[0],
[0]]])
Traceback (most recent call last):
File "/home/yvesw/reborn2-expr/250304-bugs/test.py", line 23, in <module>
print('inductor', torch.compile(f)())
^^^^^^^^^^^^^^^^^^
File "/home/yvesw/miniconda3/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py", line 433, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/yvesw/reborn2-expr/250304-bugs/test.py", line 3, in f
def f():
File "/home/yvesw/miniconda3/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py", line 600, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/yvesw/miniconda3/lib/python3.11/site-packages/torch/_functorch/aot_autograd.py", line 987, in forward
return compiled_fn(full_args)
^^^^^^^^^^^^^^^^^^^^^^
File "/home/yvesw/miniconda3/lib/python3.11/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 217, in runtime_wrapper
all_outs = call_func_at_runtime_with_args(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/yvesw/miniconda3/lib/python3.11/site-packages/torch/_functorch/_aot_autograd/utils.py", line 120, in call_func_at_runtime_with_args
out = normalize_as_list(f(args))
^^^^^^^
File "/home/yvesw/miniconda3/lib/python3.11/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 451, in wrapper
return compiled_fn(runtime_args)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/yvesw/miniconda3/lib/python3.11/site-packages/torch/_inductor/codecache.py", line 1131, in __call__
return self.current_callable(inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/tmp/torchinductor_yvesw/kh/ckhhgq5wj43iae25grlt76ub6ivaaorhz3yark2xujqiq273ks24.py", line 109, in call
aten.index_put_(buf5, [buf1], buf6, True)
File "/home/yvesw/miniconda3/lib/python3.11/site-packages/torch/_ops.py", line 1061, in __call__
return self_._op(*args, **(kwargs or {}))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
IndexError: index 1 is out of bounds for dimension 0 with size 1
```
According to the log, here's the generated code from inductor:
```python
# AOT ID: ['0_inference']
from ctypes import c_void_p, c_long
import torch
import math
import random
import os
import tempfile
from math import inf, nan
from torch._inductor.hooks import run_intermediate_hooks
from torch._inductor.utils import maybe_profile
from torch._inductor.codegen.memory_planning import _align as align
from torch import device, empty_strided
from torch._inductor.async_compile import AsyncCompile
from torch._inductor.select_algorithm import extern_kernels
from torch._inductor.codegen.multi_kernel import MultiKernelCall
aten = torch.ops.aten
inductor_ops = torch.ops.inductor
_quantized = torch.ops._quantized
assert_size_stride = torch._C._dynamo.guards.assert_size_stride
empty_strided_cpu = torch._C._dynamo.guards._empty_strided_cpu
empty_strided_cuda = torch._C._dynamo.guards._empty_strided_cuda
reinterpret_tensor = torch._C._dynamo.guards._reinterpret_tensor
alloc_from_pool = torch.ops.inductor._alloc_from_pool
async_compile = AsyncCompile()
cpp_fused_embedding_dense_backward_triu_indices_0 = async_compile.cpp_pybinding(['const int64_t*', 'const int64_t*', 'int64_t*', 'int64_t*', 'int64_t*', 'int64_t*'], '''
#include "/tmp/torchinductor_yvesw/sk/cskh5dx62fglpphcrl6723dnmowdabouerrzy3dmqcngbxwfa7bv.h"
extern "C" void kernel(const int64_t* in_ptr0,
const int64_t* in_ptr1,
int64_t* out_ptr0,
int64_t* out_ptr1,
int64_t* out_ptr2,
int64_t* out_ptr3)
{
#pragma omp parallel num_threads(10)
{
int tid = omp_get_thread_num();
{
#pragma omp for
for(long x0=static_cast<long>(0L); x0<static_cast<long>(776984L); x0+=static_cast<long>(1L))
{
auto tmp0 = c10::div_floor_integer(x0, 806L);
auto tmp1 = c10::convert<int64_t>(tmp0);
out_ptr0[static_cast<long>(x0)] = tmp1;
}
}
{
#pragma omp for
for(long x0=static_cast<long>(0L); x0<static_cast<long>(776984L); x0+=static_cast<long>(1L))
{
auto tmp0 = static_cast<long>(x0) % static_cast<long>(806L);
auto tmp1 = c10::convert<int64_t>(tmp0);
out_ptr1[static_cast<long>(x0)] = tmp1;
}
}
{
#pragma omp for
for(long x0=static_cast<long>(0L); x0<static_cast<long>(776984L); x0+=static_cast<long>(8L))
{
auto tmp0 = static_cast<int64_t>(0);
auto tmp1 = at::vec::VectorizedN<int64_t,2>(tmp0);
tmp1.store(out_ptr2 + static_cast<long>(x0), 8);
}
}
{
#pragma omp for
for(long x0=static_cast<long>(0L); x0<static_cast<long>(2L); x0+=static_cast<long>(1L))
{
for(long x1=static_cast<long>(0L); x1<static_cast<long>(776984L); x1+=static_cast<long>(8L))
{
auto tmp0 = in_ptr0[static_cast<long>(x0)];
auto tmp3 = at::vec::VectorizedN<int64_t,2>::loadu(in_ptr1 + static_cast<long>(x1 + (776984L*x0)), 8);
auto tmp1 = static_cast<int32_t>(0);
auto tmp2 = tmp0 == tmp1;
auto tmp4 = static_cast<int64_t>(0);
auto tmp5 = at::vec::VecMask<float,1>::from(tmp2);
auto tmp6 = at::vec::VectorizedN<int64_t,2>(tmp4);
auto tmp7 = decltype(tmp6)::blendv(tmp3, tmp6, tmp5.template cast<int64_t,2>());
tmp7.store(out_ptr3 + static_cast<long>(x1 + (776984L*x0)), 8);
}
}
}
}
}
''')
async_compile.wait(globals())
del async_compile
def call(args):
# Source Nodes: [var_315], Original ATen: [aten.randperm]
buf0 = aten.randperm.default(2, device=device(type='cpu'), pin_memory=False)
buf1 = buf0
del buf0
buf4 = empty_strided_cpu((1553968, ), (1, ), torch.int64)
buf2 = reinterpret_tensor(buf4, (776984, ), (1, ), 0) # alias
buf3 = reinterpret_tensor(buf4, (776984, ), (1, ), 776984) # alias
buf5 = empty_strided_cpu((1, 776984), (776984, 1), torch.int64)
buf6 = empty_strided_cpu((2, 776984), (776984, 1), torch.int64)
cpp_fused_embedding_dense_backward_triu_indices_0(buf1, buf4, buf2, buf3, buf5, buf6)
del buf2
del buf3
del buf4
aten.index_put_(buf5, [buf1], buf6, True) # IndexError: index 1 is out of bounds for dimension 0 with size 1
del buf1
del buf6
return (reinterpret_tensor(buf5, (1, 776984, 1), (776984, 1, 1), 0), )
def benchmark_compiled_module(times=10, repeat=10):
from torch._dynamo.testing import rand_strided
from torch._inductor.utils import print_performance
fn = lambda: call([])
return print_performance(fn, times=times, repeat=repeat)
if __name__ == "__main__":
from torch._inductor.wrapper_benchmark import compiled_module_main
compiled_module_main('None', benchmark_compiled_module)
```
### Versions
PyTorch 2.7.0.dev20250218+cu124
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @aakhundov
| true
|
2,892,678,507
|
[ONNX] Create VerificationInterpreter
|
justinchuby
|
closed
|
[
"module: onnx",
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"release notes: onnx",
"topic: not user facing"
] | 5
|
COLLABORATOR
|
An FX interpreter for comparing ONNX values with PyTorch ones.
```py
import torch
from torch.onnx._internal.exporter._verification import VerificationInterpreter
class Model(torch.nn.Module):
def forward(self, query, key, value):
res = torch.nn.functional.scaled_dot_product_attention(
query, key, value
)
rest = res.transpose(0, 1)
return rest.view(8, 32, 128 * 64)
model = Model()
query = torch.rand(32, 8, 128, 64, dtype=torch.float16)
key = torch.rand(32, 8, 128, 64, dtype=torch.float16)
value = torch.rand(32, 8, 128, 64, dtype=torch.float16)
onnx_program = torch.onnx.export(model, (query, key, value), dynamo=True)
interpreter = VerificationInterpreter(onnx_program)
interpreter.run(query, key, value)
for info in interpreter.verification_infos:
print(info)
```
| true
|
2,892,676,689
|
[not for landing] build CPU CPP kernels at O3, and all other code at O1
|
desertfire
|
closed
|
[
"release notes: fx",
"fx",
"module: inductor",
"ciflow/inductor"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148395
for ghimport
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,892,675,249
|
Add new GHA workflow to cache ROCm CI docker images on MI300 CI runners periodically
|
jithunnair-amd
|
closed
|
[
"module: rocm",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/rocm",
"ciflow/rocm-mi300"
] | 4
|
COLLABORATOR
|
Refiling https://github.com/pytorch/pytorch/pull/148387 from pytorch repo branch to get AWS login via OIDC working
Successful docker caching run: https://github.com/pytorch/pytorch/actions/runs/13843689908/job/38737095535
Run without cached docker image: https://github.com/pytorch/pytorch/actions/runs/13843692637/job/38746033460

Run with cached docker image:

~6 min vs 3 s :)
Thanks @saienduri for the help on the MI300 infra side
cc @jeffdaily @sunway513 @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,892,661,426
|
[codemod] Fix missing field initializer in caffe2/torch/lib/libshm/manager.cpp +1
|
r-barnes
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 5
|
CONTRIBUTOR
|
Summary:
The LLVM warning `-Wmissing-field-initializers` has found one or more structs in this diff's files which were missing field initializers.
This can be unintended such as:
```
my_struct s1 = {0}; // Initializes *only* the first field to zero; others to default values
my_struct s2 = {}; // Initializes *all* fields to default values (often zero)
```
or it may be because only some of the members of a struct are initialized, perhaps because the items were added to the struct but not every instance of it was updated.
To fix the problem, I've either used `{}` to initialize all fields to default or added appropriate default initializations to the missing fields.
- If you approve of this diff, please use the "Accept & Ship" button :-)
Test Plan: Sandcastle
Reviewed By: dtolnay
Differential Revision: D70472663
| true
|
2,892,652,782
|
DISABLED test_shape_int_inplace_binops (__main__.MiscTests)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 5
|
NONE
|
Platforms: asan, linux, mac, macos, rocm, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_shape_int_inplace_binops&suite=MiscTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/38130714525).
Over the past 3 hours, it has been determined flaky in 9 workflow(s) with 18 failures and 9 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_shape_int_inplace_binops`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `dynamo/test_misc.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,892,652,660
|
DISABLED test_sdpa_rewriter_14_cuda (__main__.SDPAPatternRewriterCudaTests)
|
pytorch-bot[bot]
|
open
|
[
"module: rocm",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 1
|
NONE
|
Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_sdpa_rewriter_14_cuda&suite=SDPAPatternRewriterCudaTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/38131846800).
Over the past 3 hours, it has been determined flaky in 4 workflow(s) with 5 failures and 4 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_sdpa_rewriter_14_cuda`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/inductor/test_fused_attention.py", line 707, in _test_sdpa_rewriter_14
self._check_common(dot_prod_attention)
File "/var/lib/jenkins/pytorch/test/inductor/test_fused_attention.py", line 85, in _check_common
self.assertGreaterEqual(counters["inductor"]["fuse_attention"], 1)
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 1250, in assertGreaterEqual
self.fail(self._formatMessage(msg, standardMsg))
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 675, in fail
raise self.failureException(msg)
AssertionError: 0 not greater than or equal to 1
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_fused_attention.py SDPAPatternRewriterCudaTests.test_sdpa_rewriter_14_cuda
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_fused_attention.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,892,652,659
|
DISABLED test_untracked_inputs_in_constraints_dynamic_shapes (__main__.DynamicShapesExportTests)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 7
|
NONE
|
Platforms: asan, linux, rocm, slow, mac, macos
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_untracked_inputs_in_constraints_dynamic_shapes&suite=DynamicShapesExportTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/38130561511).
Over the past 3 hours, it has been determined flaky in 8 workflow(s) with 16 failures and 8 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_untracked_inputs_in_constraints_dynamic_shapes`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/dynamo/test_export.py", line 2557, in test_untracked_inputs_in_constraints
ep = torch.export.export(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/export/__init__.py", line 360, in export
return _export(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/export/_trace.py", line 1047, in wrapper
raise e
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/export/_trace.py", line 1020, in wrapper
ep = fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/export/exported_program.py", line 121, in wrapper
return fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/export/_trace.py", line 2083, in _export
ep = _export_for_training(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/export/_trace.py", line 1047, in wrapper
raise e
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/export/_trace.py", line 1020, in wrapper
ep = fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/export/exported_program.py", line 121, in wrapper
return fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/export/_trace.py", line 1967, in _export_for_training
range_constraints = _get_range_constraints(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/export/_trace.py", line 1165, in _get_range_constraints
range_constraints = make_constraints(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_export/non_strict_utils.py", line 400, in make_constraints
dim = shape_spec[i] if shape_spec else None
KeyError: '1\n\nTo execute this test, run the following from the base repo dir:\n PYTORCH_TEST_WITH_ROCM=1 python test/dynamo/test_dynamic_shapes.py DynamicShapesExportTests.test_untracked_inputs_in_constraints_dynamic_shapes\n\nThis message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0'
```
</details>
Test file path: `dynamo/test_dynamic_shapes.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,892,647,486
|
[export] Unable to trace ops like min/pow
|
angelayi
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamic shapes",
"export-triaged",
"oncall: export"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
The following code fails to trace with export:
```python
class M(torch.nn.Module):
def forward(self, x, y):
b = x.item()
p = min(b, 10)
p = math.pow(p, 10)
return y * p
ep = torch.export.export(M(), (torch.tensor(5), torch.randn(5)))
print(ep)
```
To get it traceable we need to replace `min` with `torch.sym_min` and `math.pow` with `**`:
```python
class M(torch.nn.Module):
def forward(self, x, y):
b = x.item()
p = torch.sym_min(b, 10)
p = p ** 10
return y * p
```
With `strict=True`, dynamo converts `min` to `torch.sym_min`, but it later throws a `GuardOnDataDependentSymNode` on `math.pow`.
With `strict=False`, we GuardOnDataDependentSymNode on `min` and `math.pow`.
### Versions
main
cc @chauhang @penguinwu @ezyang @bobrenjc93 @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @suo @ydwu4
| true
|
2,892,637,220
|
Bump onnxscript to 0.2.2 in CI
|
justinchuby
|
closed
|
[
"module: ci",
"open source",
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"ci-no-td"
] | 14
|
COLLABORATOR
|
Unblock https://github.com/pytorch/pytorch/pull/148140
cc @seemethere @malfet @pytorch/pytorch-dev-infra
| true
|
2,892,622,177
|
Add new GHA workflow to cache ROCm CI docker images on MI300 CI runners periodically
|
jithunnair-amd
|
closed
|
[
"module: rocm",
"open source",
"topic: not user facing",
"ciflow/rocm"
] | 2
|
COLLABORATOR
|
cc @jeffdaily @sunway513 @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,892,616,047
|
[dynamo] Remove dead code path around `functools.partial` objects
|
StrongerXi
|
closed
|
[
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 11
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148386
This removes the code paths added in #98120, which has since been superseded by #108846.
More importantly, it makes `EQUALS_MATCH`'s `ok_mutable_types` (added in #134016)
easier to reason about, i.e., no need to worry about `dict` types, which
was only needed for #98120.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,892,615,916
|
[dynamo] Account for function id reuse in relevant Dynamo decorators
|
StrongerXi
|
closed
|
[
"Merged",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #148386
* #148007
* __->__ #148385
This fixes a recent series of flaky failure from `nonstrict_trace` unit
tests: #148166, #148056, #148055, #148054, #148034, #148033, #148032, #148031.
For now we don't need to worry about the other decorators because they
are either meant for builtin/numpy functions (which should never
deallocate in practice), or used for polyfills which keeps the function
object in `get_torch_obj_rule_map()`.
Fixes #147777.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,892,613,613
|
[Docs][TunableOp] TunableOp documentation update
|
naromero77amd
|
closed
|
[
"module: docs",
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: docs",
"topic: not user facing"
] | 10
|
COLLABORATOR
|
This PR aligns documentation to what is in the README file:
https://github.com/pytorch/pytorch/blob/main/aten/src/ATen/cuda/tunable/README.md
and removes the prototype NOTE.
cc @svekars @sekyondaMeta @AlannaBurke
| true
|
2,892,589,611
|
Update onnxscript pin
|
yushangdi
|
closed
|
[
"fb-exported",
"ciflow/trunk",
"topic: not user facing"
] | 4
|
CONTRIBUTOR
|
Summary:
Update pin to include https://github.com/microsoft/onnxscript/pull/2085
required to land https://github.com/pytorch/pytorch/pull/148140
Test Plan: CI
Differential Revision: D70526777
| true
|
2,892,559,817
|
[Inductor-CPU] Debug util request: fine-grained mechanism to disable out-of-template epilogues
|
sanchitintel
|
open
|
[
"oncall: pt2",
"oncall: cpu inductor"
] | 10
|
COLLABORATOR
|
### 🚀 The feature, motivation and pitch
There are two types of epilogue nodes for a `CPPGemmTemplate`:
1. Epilogues explicitly added via `epilogue_creator`, and
2. Out of template epilogues added via `epilogue_nodes`.
This request is to allow disabling out-of-template `epilogue_nodes` for a specific `CPPTemplate` subclass, so that it corresponds one-to-one to its ATen counterpart. Such a mechanism may be helpful for debugging. For example, for some input shapes, a codegened GEMM kernel may perform well during autotuning but may not perform as well end-to-end. Can this difference necessarily be attributed solely to different cache behavior during autotuning versus end-to-end model runtime for all input shapes (e.g. M=1, N=4096, K=14336)? Probably, yes, but it'd be great if there were a mechanism to disable `epilogue_nodes` for specific `CPPTemplate` subclasses to explore the answer to such questions.
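For reference, a hedged sketch of the closest existing (but coarse-grained) knob as I understand it, versus the finer-grained behavior being requested:
```python
import torch._inductor.config as inductor_config

# Coarse-grained option available today: drop the C++ GEMM template from the
# autotuning pool entirely and fall back to ATen.
inductor_config.max_autotune_gemm_backends = "ATEN"

# Requested fine-grained behavior: keep CPPGemmTemplate in the pool, but skip
# fusing out-of-template `epilogue_nodes` into it, so the chosen kernel matches
# its ATen counterpart one-to-one. No such per-template option exists yet.
```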
Thanks!
cc @chauhang @penguinwu @jgong5
### Alternatives
_No response_
### Additional context
_No response_
| true
|
2,892,525,505
|
[ca] remove compiled_autograd_tracing
|
xmfan
|
closed
|
[
"Merged",
"topic: not user facing",
"ciflow/inductor"
] | 5
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #148491
* __->__ #148381
| true
|
2,892,523,151
|
[1/n][Optimus][Auto-AC] Support activation quantization without scaling
|
mengluy0125
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: quantization",
"module: inductor",
"module: dynamo",
"ciflow/inductor",
"release notes: inductor"
] | 19
|
CONTRIBUTOR
|
Summary: We enable the activation quantization in the forward pass, and users can customize the dtype they want to quantize.
Test Plan:
# unit test
```
buck2 test 'fbcode//mode/dev-nosan' fbcode//caffe2/test/inductor:quantization -- test_activation_quantization_aten
```
Buck UI: https://www.internalfb.com/buck2/776d3911-bb86-4ac8-a527-540cf1510b9d
Test UI: https://www.internalfb.com/intern/testinfra/testrun/4785074873051017
Network: Up: 4.3MiB Down: 42MiB (reSessionID-fef7e727-68b1-4645-a519-5652854df38d)
Executing actions. Remaining 0/4 6.7s exec time total
Command: test. Finished 2 local
Time elapsed: 3:11.5s
Tests finished: Pass 2. Fail 0. Fatal 0. Skip 0. Build failure 0
# E2E
### how to enable (you can override the dtype; if nothing is given, the default is fp8)
```
post_grad_fusion_options={
"activation_quantization_aten_pass": {"quant_type": "torch.float8_e5m2"}
},
```
Differential Revision: D70522237
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,892,478,138
|
Improve nested jagged tensor select performance on batch dim
|
fleonce
|
open
|
[
"module: performance",
"triaged",
"module: nestedtensor"
] | 1
|
CONTRIBUTOR
|
### 🚀 The feature, motivation and pitch
Currently, unbind is used when selecting an element of a nested tensor with the `torch.jagged` layout:
https://github.com/pytorch/pytorch/blob/a41413829c98377e4c155ff150f250438303f7b2/torch/nested/_internal/ops.py#L1796-L1799
In case of a large batch dim, `unbind` yields an overhead when selecting a single element within the tensor.
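For context, a minimal sketch (my own illustration, assuming the standard jagged-layout constructor) of the batch-dim select path in question:
```python
import torch

# A jagged nested tensor holding a batch of variable-length rows.
values = [torch.randn(3, 8), torch.randn(5, 8), torch.randn(2, 8)]
nt = torch.nested.nested_tensor(values, layout=torch.jagged)

# Selecting one element along the batch dim currently goes through unbind,
# which materializes views for every batch entry even though only one is needed.
row = nt[1]
print(row.shape)  # torch.Size([5, 8])
```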
I would be willing to submit a PR to address this!
### Alternatives
Leave it as is
### Additional context
_No response_
cc @msaroufim @cpuhrsch @jbschlosser @bhosmer @drisspg @soulitzer @davidberard98 @YuqingJ
| true
|
2,892,468,147
|
Throws error when using torch.cuda.MemPool with expandable segments
|
syed-ahmed
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 6
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148378
| true
|
2,892,444,440
|
[dtensor] add aten._scaled_dot_product_cudnn_attention.default op support
|
XilunWu
|
closed
|
[
"oncall: distributed",
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor",
"release notes: distributed (dtensor)",
"ci-no-td",
"module: context parallel",
"release notes: context parallel"
] | 7
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #148537
* __->__ #148377
### Summary
This PR adds `_scaled_dot_product_cudnn_attention` to DTensor ops and tests it with unit test. This should allow Context Parallel and Tensor Parallel to use cudnn SDPA.
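Not part of the PR itself, but for reference this is how the cuDNN SDPA backend is selected in eager mode (the op variant this PR covers for DTensor); it assumes a GPU/cuDNN combination that supports it:
```python
import torch
import torch.nn.functional as F
from torch.nn.attention import SDPBackend, sdpa_kernel

q = k = v = torch.randn(2, 8, 128, 64, device="cuda", dtype=torch.bfloat16)
with sdpa_kernel(SDPBackend.CUDNN_ATTENTION):
    out = F.scaled_dot_product_attention(q, k, v)
```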
### Test
`pytest test/distributed/tensor/test_attention.py`
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,892,427,463
|
[reland][ca] side-effect free inital trace: compiled_args
|
xmfan
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"release notes: distributed (c10d)",
"module: dynamo",
"ciflow/inductor",
"module: compiled autograd",
"ci-no-td"
] | 13
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148376
This reverts commit ea12fc8a9ff7da808e0b661ca07e9d4ce75d04bc.
Reland https://github.com/pytorch/pytorch/pull/147804, there was a bad import inserted by my linter.
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
Differential Revision: [D70582747](https://our.internmc.facebook.com/intern/diff/D70582747)
| true
|
2,892,425,600
|
[Utilization] Add utilization monitor for linux build
|
yangw-dev
|
open
|
[
"topic: not user facing"
] | 1
|
CONTRIBUTOR
|
Fixes #ISSUE_NUMBER
| true
|
2,892,425,218
|
Documents torch.cuda.MemPool API
|
syed-ahmed
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 4
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #148378
* __->__ #148374
| true
|
2,892,412,413
|
[@no-merge] Enable process based async cp + caching
|
MeetVadakkanchery
|
closed
|
[
"oncall: distributed",
"fb-exported",
"release notes: distributed (checkpoint)"
] | 5
|
CONTRIBUTOR
|
Differential Revision: D70516754
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,892,409,758
|
Docker release - pin buildkit to v0.19.0
|
atalman
|
closed
|
[
"Merged",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Fix nightly build failure during arm64 docker build (since 02.21.2025): https://github.com/pytorch/pytorch/actions/runs/13452177170/job/37588508155#step:12:851
Error:
```
#10 73.62 Segmentation fault (core dumped)
#10 73.67 qemu: uncaught target signal 11 (Segmentation fault) - core dumped
#10 73.85 Segmentation fault (core dumped)
#10 73.85 dpkg: error processing package libc-bin (--configure):
#10 73.85 installed libc-bin package post-installation script subprocess returned error exit status 139
```
Looks like we are hitting: https://github.com/moby/buildkit/issues/5783
Update setup-qemu and buildkit actions to v3 and buildkit to v0.19.0
Please note: the CUDA 12.8 error is not related to this failure in the nightly cpu arm64 build. It looks like we are trying to install release torch when running on PR. The CUDA 12.8 build is not released yet, hence the failure. Will send a follow-up to make sure we are using nightly torch when running on PR.
| true
|
2,892,406,343
|
[ROCm][TunableOp] Unit test for offline tuning of GEMM with bias
|
naromero77amd
|
closed
|
[
"module: rocm",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/rocm"
] | 4
|
COLLABORATOR
|
One more unit test for the offline version of TunableOp.
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang
| true
|
2,892,398,758
|
AOTI doesn't account for constant tensors
|
tugsbayasgalan
|
closed
|
[] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
```python
class Foo(torch.nn.Module):
def __init__(self):
super().__init__()
self.a = torch.ones(4, 4)
self.b = torch.ones(4, 4)
def forward(self, x):
return torch.ops.aten.linear.default(x, self.a, self.b)
ep = torch.export.export(Foo(), (torch.ones(4, 4),), strict=False).run_decompositions({})
_ = torch._inductor.aoti_compile_and_package(ep)
```
When exporting with non-strict mode, we preserve tensor constants as constants in the module. This is different from torch.compile/strict export, which turn them into buffers. AOTAutograd is used in AOTI lowering, and it doesn't account for constant tensors. In the long term, AOTI should use the exported_program.run_decompositions() API to do lowering. But for now, I feel this is a pretty high-priority bug that needs to be fixed soon because in practice a lot of exported models have tensor constants.
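A hedged way to inspect the difference described above (reusing `Foo` from the repro; the expected contents are per this description, not verified here):
```python
import torch

# Non-strict keeps self.a / self.b as tensor constants; strict lifts them to buffers.
ep_nonstrict = torch.export.export(Foo(), (torch.ones(4, 4),), strict=False)
ep_strict = torch.export.export(Foo(), (torch.ones(4, 4),), strict=True)
print(list(ep_nonstrict.constants))  # expected: the tensor constants
print(list(ep_strict.state_dict))    # expected: the same tensors as buffers
```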
### Versions
main
| true
|
2,892,381,937
|
DISABLED test_param_shape_binops (__main__.MiscTests)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 4
|
NONE
|
Platforms: asan, linux, mac, macos, rocm, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_param_shape_binops&suite=MiscTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/38119173789).
Over the past 3 hours, it has been determined flaky in 18 workflow(s) with 36 failures and 18 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_param_shape_binops`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/dynamo/test_misc.py", line 758, in test_param_shape_binops
self.assertExpectedInline(counts.op_count, """1""")
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3094, in assertExpectedInline
return super().assertExpectedInline(actual if isinstance(actual, str) else str(actual), expect, skip + 1)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/expecttest/__init__.py", line 413, in assertExpectedInline
assert_expected_inline(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/expecttest/__init__.py", line 378, in assert_expected_inline
assert_eq(expect, actual, msg=help_text)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/expecttest/__init__.py", line 450, in assertMultiLineEqualMaybeCppStack
self.assertMultiLineEqual(expect, actual, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 1226, in assertMultiLineEqual
self.fail(self._formatMessage(msg, standardMsg))
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 675, in fail
raise self.failureException(msg)
AssertionError: '1' != '9'
- 1
+ 9
: To accept the new output, re-run test with envvar EXPECTTEST_ACCEPT=1 (we recommend staging/committing your changes before doing this)
To execute this test, run the following from the base repo dir:
python test/dynamo/test_misc.py MiscTests.test_param_shape_binops
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `dynamo/test_misc.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,892,381,877
|
DISABLED test_export_with_cond_dynamic_shape_pred_dynamic_shapes (__main__.DynamicShapesExportTests)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 5
|
NONE
|
Platforms: asan, linux, rocm, mac, macos
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_export_with_cond_dynamic_shape_pred_dynamic_shapes&suite=DynamicShapesExportTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/38112040917).
Over the past 3 hours, it has been determined flaky in 15 workflow(s) with 30 failures and 15 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_export_with_cond_dynamic_shape_pred_dynamic_shapes`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/dynamo/test_export.py", line 1894, in test_export_with_cond_dynamic_shape_pred
self.assertExpectedInline(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3094, in assertExpectedInline
return super().assertExpectedInline(actual if isinstance(actual, str) else str(actual), expect, skip + 1)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/expecttest/__init__.py", line 413, in assertExpectedInline
assert_expected_inline(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/expecttest/__init__.py", line 378, in assert_expected_inline
assert_eq(expect, actual, msg=help_text)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/expecttest/__init__.py", line 450, in assertMultiLineEqualMaybeCppStack
self.assertMultiLineEqual(expect, actual, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 1226, in assertMultiLineEqual
self.fail(self._formatMessage(msg, standardMsg))
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 675, in fail
raise self.failureException(msg)
AssertionError: 'def [448 chars]_int_1 = torch.ops.aten.sym_size.int(getitem_3[508 chars]pec)' != 'def [448 chars]_int_2 = torch.ops.aten.sym_size.int(getitem_3[508 chars]pec)'
def forward(self, x):
arg0, = fx_pytree.tree_flatten_spec(([x], {}), self._in_spec)
l_x_ = arg0
sym_size_int = torch.ops.aten.sym_size.int(l_x_, 0)
le = sym_size_int <= 2; sym_size_int = None
cond_true_0 = self.cond_true_0
cond_false_0 = self.cond_false_0
cond = torch.ops.higher_order.cond(le, cond_true_0, cond_false_0, [l_x_]); le = cond_true_0 = cond_false_0 = l_x_ = None
getitem_3 = cond[0]
- sym_size_int_1 = torch.ops.aten.sym_size.int(getitem_3, 0); getitem_3 = None
? ^
+ sym_size_int_2 = torch.ops.aten.sym_size.int(getitem_3, 0); getitem_3 = None
? ^
- sym_constrain_range_for_size_default = torch.ops.aten.sym_constrain_range_for_size.default(sym_size_int_1); sym_constrain_range_for_size_default = None
? ^
+ sym_constrain_range_for_size_default = torch.ops.aten.sym_constrain_range_for_size.default(sym_size_int_2); sym_constrain_range_for_size_default = None
? ^
- ge = sym_size_int_1 >= 2; sym_size_int_1 = None
? ^ ^
+ ge = sym_size_int_2 >= 2; sym_size_int_2 = None
? ^ ^
_assert_scalar_default = torch.ops.aten._assert_scalar.default(ge, "Runtime assertion failed for expression u0 >= 2 on node 'ge'"); ge = _assert_scalar_default = None
getitem_2 = cond[0]; cond = None
return pytree.tree_unflatten([getitem_2], self._out_spec) : To accept the new output, re-run test with envvar EXPECTTEST_ACCEPT=1 (we recommend staging/committing your changes before doing this)
To execute this test, run the following from the base repo dir:
python test/dynamo/test_dynamic_shapes.py DynamicShapesExportTests.test_export_with_cond_dynamic_shape_pred_dynamic_shapes
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `dynamo/test_dynamic_shapes.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,892,370,828
|
[inductor] use eager stride for custom op if no tags
|
shunting314
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 11
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148367
Fix https://github.com/pytorch/pytorch/issues/148356
This is a short-term fix to recover the default behavior of applying a layout constraint for custom ops when there are no tags.
A longer term attempt to make sure Inductor always gets correct eager strides is here: https://github.com/pytorch/pytorch/pull/148104
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,892,364,581
|
[AOTI] build CPU CPP kernels at O3, and all other code at O1
|
benjaminglass1
|
closed
|
[
"open source",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ciflow/xpu"
] | 2
|
COLLABORATOR
|
Cancels out some of the performance implications of this move by adding LTO to linking. _Only_ applies to AOT Inductor, not `cpp_wrapper` mode.
Re-implements #148212.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,892,360,930
|
[ROCm] Add rocm-mi300 and inductor-rocm-mi300 to upload-test-stats.yml
|
ethanwee1
|
closed
|
[
"module: rocm",
"open source",
"Merged",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
We currently run MI300X machines on rocm-mi300 and inductor-rocm-mi300 but we don't have artifacts for the results:
e.g.
https://hud.pytorch.org/pytorch/pytorch/commit/6e10471966e22cda8ac0cded8a179267880457e0#rocm-mi300

cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,892,355,228
|
Fix bug in AOTI lowering
|
tugsbayasgalan
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 12
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #148488
* #148485
* #148483
* __->__ #148364
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
Fixes: https://github.com/pytorch/pytorch/issues/148370
Differential Revision: [D70514480](https://our.internmc.facebook.com/intern/diff/D70514480)
| true
|
2,892,323,411
|
[mm_logs][ez] dump tuned mm info at lowering stage
|
YUNQIUGUO
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 12
|
CONTRIBUTOR
|
Summary:
As title. It would be beneficial for judging e2e perf improvements.
Easy first step to dump mm info at lowering stage.
e.g.
```
fbsource/fbcode/caffe2/torch/_inductor/kernel/mm.py:525] [0/0] Tuned aten.addmm: m=16, n=6, k=16, layout=FixedLayout('cuda:0', torch.float32, size=[16, 6], stride=[6, 1])
```
Next step:
Dump overview info at `post_grad_graph` stage such as
overall count of `aten.mm` in the graph & visualize it as a table structure.
Test Plan: by looking very hard in aot inductor bmm and mm UTs.
Differential Revision: D70507880
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,892,297,942
|
Fix condition for `CONVERT_NON_VECTORIZED_INIT` invocation
|
malfet
|
closed
|
[
"module: cpu",
"Merged",
"ciflow/trunk",
"release notes: build",
"topic: bug fixes",
"topic: build"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148362
* #148354
Yet another regression caused by https://github.com/pytorch/pytorch/pull/146596 that breaks builds if PyTorch is compiled for Android or using NVIDIA GraceHopper systems
Not sure why the author was trying to change the condition to begin with.
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| true
|
2,892,258,499
|
Add new hf storage class to torch.distributed package
|
ankitageorge
|
closed
|
[
"oncall: distributed",
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"release notes: distributed (checkpoint)",
"oncall: distributed checkpointing"
] | 7
|
CONTRIBUTOR
|
Summary:
title - Add new hf storage class to torch.distributed package so that it can be imported by customers.
The HF storage reader/writer was added as DCP storage components so that DCP load and save can directly interact with hugging face format and storage.
Test Plan: ensure signals pass
Differential Revision: D70495399
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @LucasLLC @MeetVadakkanchery @mhorowitz @pradeepfn @ekr0
| true
|
2,892,255,704
|
Enabling xpu in OffsetBasedRNGTracker .
|
githubsgi
|
closed
|
[
"oncall: distributed",
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"release notes: xpu"
] | 24
|
CONTRIBUTOR
|
Else torch.distributed breaks on xpu devices.
Fixes #ISSUE_NUMBER
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,892,215,693
|
[AOTI][dashboard] Skip torchbench models not supported by export
|
desertfire
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148359
Summary: Certain models fail in export because of data-dependent ops. Skip them so that oncall can better track the AOTInductor dashboard.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,892,215,167
|
[inductor] Improve type annotations in _inductor/ir.py
|
rec
|
closed
|
[
"module: rocm",
"module: typing",
"open source",
"better-engineering",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"release notes: export"
] | 1
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148358
31 files changed was a lot more than I expected! 😲
My procedure was simple: I removed all the `# type: ignore` comments from `_inductor/ir.py` and then did the least I could to fix all the remaining type failures without `# type: ignore`s.
The one exception was several `#type: ignore[override]`s, which would be impossible to fix without tremendous violence to the existing API.
I tried to avoid adding new `#type: ignore`s in other files, but in a few delicate places I felt that was the best way to avoid changing existing behavior at all.
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @ezyang @malfet @xuzhao9 @gramster @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,892,201,200
|
test index_put
|
XilunWu
|
open
|
[
"oncall: distributed",
"Stale",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148357
* #148204
* #148125
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,892,147,210
|
Inductor layout constraints for custom operators changed from 2.5->2.6, breaking BC
|
zou3519
|
closed
|
[
"module: custom-operators",
"oncall: pt2",
"module: pt2-dispatcher"
] | 0
|
CONTRIBUTOR
|
Repro: the following code behaves differently between PyTorch 2.5 and PyTorch 2.6. It errors in PyTorch 2.6 but succeeds in PyTorch 2.5
```py
import torch
with torch.library._scoped_library("mylib", "DEF") as lib:
lib.define(
"copy_(Tensor(a!) dst, Tensor src) -> ()",
# tags=torch.Tag.needs_fixed_stride_order,
)
@torch.library.impl(lib, "copy_", "Meta")
def _(dst, src):
return None
@torch.library.impl(lib, "copy_", "CompositeExplicitAutograd")
def _(dst, src):
if src.is_contiguous():
dst.copy_(src + 1)
else:
dst.copy_(src)
def f(x):
full_default_3 = torch.full([3, 3], 7.0, device="cpu")
chunk_cat_default_1 = torch.ops.mylib.copy_.default(full_default_3, x)
mul_out = torch.mul(full_default_3, full_default_3)
return mul_out
x = torch.arange(9, dtype=torch.float, device="cpu").view(3, 3).t().contiguous().t()
eager_out = f(x)
compiled_inductor_f = torch.compile(f, backend="inductor", fullgraph=True)
compiled_inductor_out = compiled_inductor_f(x)
assert torch.allclose(compiled_inductor_out, eager_out)
```
cc @chauhang @penguinwu @bdhirsh
| true
|
2,892,134,405
|
[ROCm][CI] Add support for gfx1100 in rocm workflow + test skips
|
amdfaa
|
open
|
[
"module: rocm",
"open source",
"topic: not user facing",
"module: inductor",
"keep-going"
] | 8
|
CONTRIBUTOR
|
This PR adds infrastructure support for gfx1100 in the rocm workflow. Nodes have been allocated for this effort.
@dnikolaev-amd contributed all the test skips.
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,892,124,222
|
[BE] Use `C10_DIAGNOSTIC_PUSH_AND_IGNORED_IF_DEFINED`
|
malfet
|
closed
|
[
"module: cpu",
"Merged",
"release notes: build",
"topic: bug fixes",
"topic: build"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #148362
* __->__ #148354
Instead of `#pragma GCC diagnostic ignored "-Wignored-qualifiers"`
Also limit the scope to just `Vectorized::map` that has to be declared that way due to sleef function signature definitions that return `const __m256` for AVX2 methods
Also delete `#pragma GCC diagnostic pop` from vec256_half and vec256_bfloat16 as it results in an unbalanced pop warning, for push that is defined in vec256_16bit_float, which will be included only once
```
In file included from /Users/malfet/git/pytorch/pytorch/aten/src/ATen/cpu/vec/vec.h:7:
In file included from /Users/malfet/git/pytorch/pytorch/aten/src/ATen/cpu/vec/vec256/vec256.h:15:
/Users/malfet/git/pytorch/pytorch/aten/src/ATen/cpu/vec/vec256/vec256_half.h:232:27: warning: pragma diagnostic pop could not pop, no matching push [-Wunknown-pragmas]
232 | #pragma GCC diagnostic pop
| ^
1 warning generated.
```
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| true
|
2,892,097,887
|
`torch.nn.functional` inconsistent documentation
|
olipinski
|
closed
|
[
"module: docs",
"module: nn",
"module: loss",
"triaged"
] | 1
|
CONTRIBUTOR
|
### 📚 The doc issue
Functional versions of losses have inconsistent documentation. For example, `torch.nn.functional.huber_loss` is well documented, including all parameters, whereas `torch.nn.functional.l1_loss` has almost no documentation and is missing the `weight` parameter in the documentation, which is present in the code. Similarly, `torch.nn.functional.smooth_l1_loss` has very sparse documentation.
### Suggest a potential alternative/fix
Updating the documentation.
cc @svekars @sekyondaMeta @AlannaBurke @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki
| true
|
2,892,048,635
|
Use release notes label for module: distributed_checkpoint
|
janeyx99
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 6
|
CONTRIBUTOR
|
module: distributed_checkpoint is redundant with oncall: distributed checkpointing.
@fduwjj let us know that module: distributed_checkpoint is just used for release notes, so let's use the release notes label for the release notes, which the bot will pick up better.
| true
|
2,892,010,317
|
[test] cutlass
|
clee2000
|
closed
|
[
"topic: not user facing",
"ciflow/periodic"
] | 2
|
CONTRIBUTOR
|
Fixes #ISSUE_NUMBER
| true
|
2,891,987,924
|
[MPS] unary kernels - avoid copying tensors if they have same stride
|
Isalia20
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: performance",
"module: mps",
"release notes: mps",
"ciflow/mps"
] | 10
|
COLLABORATOR
|
I was a bit concerned when I saw in #148272 that the metal unary kernel was at 0.02x the performance of what we had with MPS Graphs for sqrt (for non-contiguous tensors). This change makes it so that copying is only done if we don't have same-strided tensors (for input/output). So if the out tensor is not provided, then we don't do a copy (don't call contiguous) at all and dispatch the kernel as is. After making this change, the script that I listed at the end of the above PR has the same execution time as the non-transposed one.
Times for reference (on a transposed NxN matrix):
| N | time_old | time_new |
|-------|--------------------|--------------------|
| 100 | 0.0002241021 | 0.0001548659 |
| 1000 | 0.0005934822 | 0.0002150342 |
| 10000 | 0.3242016407 | 0.0045755033 |
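The original script isn't reproduced here; the following is only a minimal sketch of this kind of measurement (my assumption about the methodology, not the script from the PR):
```python
import time
import torch

def bench_sqrt(n, iters=100):
    x = torch.rand(n, n, device="mps").t()  # transposed, non-contiguous input
    torch.mps.synchronize()
    start = time.perf_counter()
    for _ in range(iters):
        torch.sqrt(x)
    torch.mps.synchronize()
    return (time.perf_counter() - start) / iters

for n in (100, 1000, 10000):
    print(n, bench_sqrt(n))
```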
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen
| true
|
2,891,976,005
|
torch._check(x > 0) should do something sane when x is a Tensor
|
zou3519
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamo"
] | 0
|
CONTRIBUTOR
|
```py
def f(x):
torch._check(x > 0)
return torch.log(x)
torch.compile(f)(torch.rand(1))
```
gives
```
TorchRuntimeError: Failed running call_function <function _check at 0x7f7cf0322fc0>(*(FakeTensor(..., size=(1,), dtype=torch.bool),), **{}):
cond must be a bool, but got <class 'torch._subclasses.fake_tensor.FakeTensor'>
from user code:
File "/tmp/ipykernel_1454634/3269954531.py", line 2, in f
torch._check(x > 0)
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
```
If this isn't supported then we should make the error message clearer -- the user doesn't necessarily know what a FakeTensor is.
The thread at https://discuss.pytorch.org/t/torch-check-failing-with-torch-compile/215443 implies that there is a workaround
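One possible workaround sketch (an assumption based on that thread, not a confirmed fix): turn the tensor condition into a scalar so `torch._check` receives a (symbolic) bool:
```python
import torch

torch._dynamo.config.capture_scalar_outputs = True  # assumed needed so .item() traces

def f(x):
    torch._check(x.item() > 0)  # SymBool instead of a FakeTensor
    return torch.log(x)

torch.compile(f)(torch.rand(1))
```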
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
| true
|
2,891,968,096
|
[ONNX] Assert capture strategy in tests
|
justinchuby
|
closed
|
[
"module: onnx",
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"release notes: onnx",
"topic: not user facing"
] | 12
|
COLLABORATOR
|
Previously the strategy used for obtaining the exported program was not asserted. This leads to silent errors if torch.export breaks something and a fallback strategy is used. This change adds a _capture_strategy field to ONNXProgram and enables unit tests to assert which strategy was used, so that silent fallbacks are caught.
Fixes #147674
| true
|
2,891,933,946
|
[cutlass backend] Benchmark compared to aten and triton
|
henrylhtsang
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 7
|
CONTRIBUTOR
|
Benchmark for cutlass backend.
```
python benchmarks/inductor_backends/cutlass.py
```
Test Plan:
```
Experiment group: mm (1024x1024, 1024x1024) torch.float16
+-----------------------+--------------------+----------------------+---------------------+
| name | forward_time (us) | compilation_time (s) | perf_over_aten (%) |
+-----------------------+--------------------+----------------------+---------------------+
| aten | 12.759539298713207 | 2.7271360370796174 | NA |
| triton | 10.573655366897583 | 1.8661278090439737 | -17.131370346859384 |
| triton_persistent_tma | 10.884030722081661 | 0.5315794269554317 | -14.698873781600327 |
| cutlass_lvl_default | 13.09632882475853 | 0.5520401500398293 | 2.6395116481931873 |
| cutlass_lvl_1111 | 11.05172373354435 | 0.569593315012753 | -13.384617776451302 |
| cutlass_lvl_2222 | 11.371277272701263 | 133.58984916994814 | -10.880189272601317 |
+-----------------------+--------------------+----------------------+---------------------+
Experiment group: mm (1024x1024, 1024x1024) torch.bfloat16
+-----------------------+--------------------+----------------------+---------------------+
| name | forward_time (us) | compilation_time (s) | perf_over_aten (%) |
+-----------------------+--------------------+----------------------+---------------------+
| aten | 14.472318813204765 | 1.5445372510002926 | NA |
| triton | 10.568295605480671 | 16.583424195996486 | -26.975796056689987 |
| triton_persistent_tma | 10.45411266386509 | 5.830657540936954 | -27.764770809729562 |
| cutlass_lvl_default | 12.742593884468079 | 28.994930602959357 | -11.951954286402668 |
| cutlass_lvl_1111 | 11.522261425852776 | 79.85037935699802 | -20.38413764531163 |
| cutlass_lvl_2222 | 10.993581265211105 | 132.86601971101481 | -24.037181552548486 |
+-----------------------+--------------------+----------------------+---------------------+
Experiment group: mm (2048x2048, 2048x2048) torch.float16
+-----------------------+--------------------+----------------------+---------------------+
| name | forward_time (us) | compilation_time (s) | perf_over_aten (%) |
+-----------------------+--------------------+----------------------+---------------------+
| aten | 30.700622126460075 | 2.225986961973831 | NA |
| triton | 29.17378954589367 | 38.571991189033724 | -4.97329524553989 |
| triton_persistent_tma | 29.642896726727486 | 7.2848734309664 | -3.4452897904663744 |
| cutlass_lvl_default | 29.514770954847336 | 29.819900761009194 | -3.8626291243482167 |
| cutlass_lvl_1111 | 29.411429539322853 | 23.82907024596352 | -4.19923929172139 |
| cutlass_lvl_2222 | 29.57325428724289 | 134.31008586101234 | -3.672133530628152 |
+-----------------------+--------------------+----------------------+---------------------+
Experiment group: mm (2048x2048, 2048x2048) torch.bfloat16
+-----------------------+--------------------+----------------------+--------------------+
| name | forward_time (us) | compilation_time (s) | perf_over_aten (%) |
+-----------------------+--------------------+----------------------+--------------------+
| aten | 30.858177691698074 | 1.181898436974734 | NA |
| triton | 28.630023822188377 | 39.24473957403097 | -7.220626868414034 |
| triton_persistent_tma | 28.641965240240097 | 5.275042273919098 | -7.181929126210897 |
| cutlass_lvl_default | 29.16003204882145 | 29.934022572939284 | -5.503065216107967 |
| cutlass_lvl_1111 | 28.79570797085762 | 23.948012012057006 | -6.683705504085324 |
| cutlass_lvl_2222 | 29.02756631374359 | 136.25560767308343 | -5.932337924306467 |
+-----------------------+--------------------+----------------------+--------------------+
Experiment group: mm (8192x8192, 8192x8192) torch.float16
+-----------------------+--------------------+----------------------+--------------------+
| name | forward_time (us) | compilation_time (s) | perf_over_aten (%) |
+-----------------------+--------------------+----------------------+--------------------+
| aten | 1456.143856048584 | 1.020197194069624 | NA |
| triton | 1708.2737684249878 | 5.766509635956027 | 17.31490410985819 |
| triton_persistent_tma | 1476.485013961792 | 7.455113030038774 | 1.3969195302177155 |
| cutlass_lvl_default | 1583.3594799041748 | 50.408804678940214 | 8.736473620182366 |
| cutlass_lvl_1111 | 1636.4418268203735 | 82.82403108896688 | 12.381879030898025 |
| cutlass_lvl_2222 | 1507.5665712356567 | 260.03901409788523 | 3.531430975962381 |
+-----------------------+--------------------+----------------------+--------------------+
Experiment group: mm (8192x8192, 8192x8192) torch.bfloat16
+-----------------------+--------------------+----------------------+--------------------+
| name | forward_time (us) | compilation_time (s) | perf_over_aten (%) |
+-----------------------+--------------------+----------------------+--------------------+
| aten | 1382.230520248413 | 1.2586536260787398 | NA |
| triton | 1646.9683647155762 | 5.442052865982987 | 19.15294450447995 |
| triton_persistent_tma | 1423.9195585250854 | 6.515797697938979 | 3.016069871556595 |
| cutlass_lvl_default | 1500.9030103683472 | 51.36402789200656 | 8.58557877152115 |
| cutlass_lvl_1111 | 1446.9740390777588 | 30.65435610699933 | 4.683988515729638 |
| cutlass_lvl_2222 | 1419.661521911621 | 205.1948991640238 | 2.7080144096717635 |
+-----------------------+--------------------+----------------------+--------------------+
```
Differential Revision: D70147589
| true
|
2,891,922,814
|
Symmetrization of Cholesky backward gradient
|
ayghri
|
closed
|
[
"oncall: distributed",
"module: cpu",
"triaged",
"module: mkldnn",
"open source",
"module: amp (automated mixed precision)",
"release notes: quantization",
"release notes: releng",
"module: inductor",
"module: dynamo",
"release notes: distributed (checkpoint)"
] | 3
|
NONE
|
Fixes #137284
The previous "symmetrization" of the backward sensitivities assumed real matrices, this PR uses a more general formulation to account for complex matrices.
The previous approach, that assumes real matrices, uses:
$$S+ S^\top -diag(S)$$
this doesn't account for complex S, which might yield complex diagonal elements.
Instead, I should use:
$$A = S + S^\top$$, then scale the diagonal elements of A by 1/2
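In code terms, a small sketch of the two formulations (my own illustration, not the PR diff):
```python
import torch

def sym_old(S):
    # previous approach: S + S^T - diag(S), implicitly assuming real S
    return S + S.mT - torch.diag_embed(torch.diagonal(S, dim1=-2, dim2=-1))

def sym_new(S):
    # proposed approach: A = S + S^T, then scale the diagonal of A by 1/2
    A = S + S.mT
    torch.diagonal(A, dim1=-2, dim2=-1).mul_(0.5)
    return A
```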
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @gujinghui @PenghuiCheng @jianyuh @min-jean-cho @yanbing-j @Guobing-Chen @Xia-Weiwen @snadampal @mcarilli @ptrblck @leslie-fang-intel @voznesenskym @penguinwu @EikanWang @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,891,901,630
|
Installation of `pytorch==2.6.0+cu124` doesn't install `triton` and `nvidia` libraries
|
rithwik-db
|
open
|
[
"module: binaries",
"triaged",
"needs design"
] | 2
|
NONE
|
### 🐛 Describe the bug
On ubuntu 22.04, if we run the following command:
```
pip3.11 install --no-cache-dir --find-links https://download.pytorch.org/whl/torch/ torch==2.6.0+cu124
```
This installs PyTorch from:
```
https://download.pytorch.org/whl/cu124_full/torch-2.6.0%2Bcu124-cp311-cp311-linux_x86_64.whl.metadata
```
and this doesn't install the `nvidia` libraries and `triton`, which should be dependencies. Doing the same with `torch==2.5.1+cu124` does install the correct dependencies, so this seems to be a regression.
### Versions
```
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.11.0rc1 (main, Aug 12 2022, 10:02:14) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-101-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-80GB
GPU 1: NVIDIA A100-SXM4-80GB
GPU 2: NVIDIA A100-SXM4-80GB
GPU 3: NVIDIA A100-SXM4-80GB
GPU 4: NVIDIA A100-SXM4-80GB
GPU 5: NVIDIA A100-SXM4-80GB
GPU 6: NVIDIA A100-SXM4-80GB
GPU 7: NVIDIA A100-SXM4-80GB
Nvidia driver version: 535.129.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.1.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 0-63
Vendor ID: AuthenticAMD
Model name: AMD EPYC 7513 32-Core Processor
CPU family: 25
Model: 1
Thread(s) per core: 1
Core(s) per socket: 32
Socket(s): 2
Stepping: 1
Frequency boost: enabled
CPU max MHz: 3681.6399
CPU min MHz: 1500.0000
BogoMIPS: 5190.20
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca sme sev sev_es
Virtualization: AMD-V
L1d cache: 2 MiB (64 instances)
L1i cache: 2 MiB (64 instances)
L2 cache: 32 MiB (64 instances)
L3 cache: 256 MiB (8 instances)
NUMA node(s): 8
NUMA node0 CPU(s): 0-7
NUMA node1 CPU(s): 8-15
NUMA node2 CPU(s): 16-23
NUMA node3 CPU(s): 24-31
NUMA node4 CPU(s): 32-39
NUMA node5 CPU(s): 40-47
NUMA node6 CPU(s): 48-55
NUMA node7 CPU(s): 56-63
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP disabled, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] torch==2.6.0+cu124
[conda] Could not collect
```
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @seemethere @malfet @osalpekar @atalman
| true
|
2,891,884,187
|
[RFC][PGNCCL] Add Float8 support
|
kwen2501
|
closed
|
[
"oncall: distributed",
"triaged",
"module: c10d"
] | 0
|
CONTRIBUTOR
|
### 🚀 The feature, motivation and pitch
NCCL added float8 support in 2.24. We can thus enable the same in ProcessGroupNCCL, removing the following restriction:
https://github.com/pytorch/pytorch/blob/57addfcd580e8fae70ebb8ac0364b272af65ac8e/torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp#L4065-L4067
### Alternatives
_No response_
### Additional context
_No response_
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,891,792,783
|
Update CURL url for manywheel images
|
AlekseiNikiforovIBM
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/binaries",
"ciflow/trunk",
"topic: not user facing"
] | 12
|
COLLABORATOR
|
It looks like the file was moved on the site it was downloaded from.
Switch to the official site while updating the URL.
| true
|
2,891,775,202
|
[CI] [anaconda] CI Perf Tests
|
atalman
|
closed
|
[
"module: ci",
"triaged",
"better-engineering"
] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Related to https://github.com/pytorch/pytorch/issues/138506
CI Perf Tests:
.ci/pytorch/perf_test/test_cpu_speed_mnist.sh
.ci/pytorch/perf_test/test_gpu_speed_mnist.sh
We would like to remove Anaconda install dependency
### Versions
2.7.0 nightly
cc @seemethere @malfet @pytorch/pytorch-dev-infra
| true
|
2,891,760,822
|
[CI] [anaconda] Review Devcontainer anaconda usage
|
atalman
|
closed
|
[
"module: ci",
"triaged",
"better-engineering"
] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Related to https://github.com/pytorch/pytorch/issues/138506
Review anaconda usage in Devcontainer - legacy software:
.devcontainer/Dockerfile
.devcontainer/scripts/install-dev-tools.sh
DevContainer is not used in PyTorch CI/CD system, hence either remove the usage of anaconda or provide some documentation about anaconda usage in DevContainer
### Versions
2.7.0 nightly
cc @seemethere @malfet @pytorch/pytorch-dev-infra
| true
|
2,891,709,218
|
[CI] [anaconda] CI Build and Test scripts MacOS
|
atalman
|
open
|
[
"module: ci",
"triaged",
"better-engineering"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Related to https://github.com/pytorch/pytorch/issues/138506
CI Build and Test scripts to replace:
.ci/pytorch/macos-test.sh - used for torchbench
astunparse numpy scipy ninja pyyaml setuptools cmake typing-extensions requests protobuf numba cython scikit-learn librosa
.ci/pytorch/run_tests.sh
future hypothesis numpy protobuf pytest setuptools six typing_extensions pyyaml
.github/workflows/_mac-build.yml
.github/workflows/_mac-test.yml
.github/workflows/_mac-test-mps.yml
We would like to remove Anaconda install dependency
### Versions
2.7.0 nightly
cc @seemethere @malfet @pytorch/pytorch-dev-infra
| true
|
2,891,687,851
|
[Docs] [anaconda] Review and update
|
atalman
|
open
|
[
"triaged",
"better-engineering",
"topic: docs"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Related to https://github.com/pytorch/pytorch/issues/138506
Review Anaconda in documentation:
.github/requirements/README.md
CONTRIBUTING.md
README.md
benchmarks/README.md
docs/cpp/source/installing.rst
docs/source/conf.py
docs/source/notes/windows.rst
functorch/dim/README.md
### Versions
2.7.0 nightly
| true
|
2,891,670,449
|
[CI] [anaconda] CI Build and Test scripts Windows
|
atalman
|
open
|
[
"module: ci",
"triaged",
"better-engineering"
] | 1
|
CONTRIBUTOR
|
Related to https://github.com/pytorch/pytorch/issues/138506
CI Build and Test scripts to replace:
.ci/pytorch/win-test-helpers/setup_pytorch_env.bat
.ci/pytorch/win-test-helpers/build_pytorch.bat
Github Actions :
.github/actions/setup-win/action.yml
We would like to remove Anaconda install dependency
### Versions
2.7.0 nightly
cc @seemethere @malfet @pytorch/pytorch-dev-infra
| true
|
2,891,666,962
|
[ONNX] aten_unfold needs to support symint
|
justinchuby
|
closed
|
[
"module: onnx",
"triaged"
] | 2
|
COLLABORATOR
|
See https://github.com/pytorch/pytorch/issues/113067#issuecomment-2693015882 for error.
| true
|
2,891,663,174
|
[CI] [anaconda] CI Build and Test scripts Linux
|
atalman
|
open
|
[
"module: ci",
"triaged",
"better-engineering"
] | 2
|
CONTRIBUTOR
|
Related to https://github.com/pytorch/pytorch/issues/138506
CI Build and Test scripts to replace:
.ci/pytorch/build.sh
.ci/pytorch/test.sh
.ci/pytorch/run_tests.sh
future hypothesis numpy protobuf pytest setuptools six typing_extensions pyyaml
We would like to remove Anaconda install dependency
### Versions
2.7.0 nightly
cc @seemethere @malfet @pytorch/pytorch-dev-infra
| true
|
2,891,644,498
|
[CI] [anaconda] Docker files have conda environment installed
|
atalman
|
open
|
[
"module: ci",
"triaged"
] | 0
|
CONTRIBUTOR
|
Related to https://github.com/pytorch/pytorch/issues/138506
All CI Docker files have conda environment installed by default:
.ci/docker/build.sh#L97
.ci/docker/common/install_conda.sh
.ci/docker/common/install* scripts
We would like to remove Anaconda install dependency
### Versions
2.7.0 nightly
cc @seemethere @malfet @pytorch/pytorch-dev-infra
| true
|
2,891,619,276
|
[FSDP2] Issues with model not running on all ranks - Grads not matching fairscale implementation
|
JosselinSomervilleRoberts
|
open
|
[
"oncall: distributed",
"triaged",
"module: fsdp"
] | 8
|
NONE
|
### 🐛 Describe the bug
Hi, I am running into some issues because I have a model that does not have to run on all ranks.
Here is a minimal example:
```python
model = CombinedModel()
model = fully_shard(model)
for (x0, x1), y in dataloader:
if x1 is not None:
x0 += model.encoder(x1)
y_pred = model(x0)
loss = criterion(y, y_pred)
loss.backward()
optimizer.step()
```
This is a bit tricky because, written like this, the code will hang when trying to gather the shard for a layer in the encoder, since some ranks will not run the encoder. Is there a good way to do this?
Right now, to solve this, I do a dummy pass:
```python
if x1 is not None:
x0 += model.encoder(x1)
else:
_ = model.encoder(dummy)
```
This solves the forward hanging. However, the backward will have the same hanging issue. To solve this, I do this trick, but please let me know if there is a better way to do it:
```python
if x1 is not None:
x0 += model.encoder(x1)
else:
x0 += 0.0 * model.encoder(dummy)
```
Now the issue is that with this code I get different gradients compared to my fairscale implementation (which does not need all this dummy code). As this may be an important detail, my encoder is fully sharded but I do not shard individual layers of the encoder.
My theory is that since I do not have dummy passes on the encoder with fairscale, in the `all_gather`, the reduce op being average, we will only average ranks that do have gradients. So if only 2/8 ranks ran the encoder, the divisor will be 2.
With FSDP2, all ranks will have gradients, most of them being 0 because it was a dummy pass. In that case the sum of the gradients would be the same but the divisor would be 8.
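A tiny numeric sketch of the divisor mismatch I mean (assuming mean reduction across ranks):
```python
import torch

grads = [torch.tensor(1.0), torch.tensor(3.0)]  # encoder grads on the 2 ranks that ran it

fairscale_avg = sum(grads) / len(grads)  # only participating ranks: (1 + 3) / 2 = 2.0
fsdp2_avg = (sum(grads) + 0.0 * 6) / 8   # six dummy-pass zeros included: (1 + 3) / 8 = 0.5
print(fairscale_avg, fsdp2_avg)
```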
How can I solve this?
Is there a better way to solve the hang as well? (One annoying thing is that technically some batches could need no encoder pass on any rank, but here we will always do a dummy pass.)
Thanks!
### Versions
```
python: Python 3.10.13
torch: 2.4.1
GPUs: H100
```
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @zhaojuanmao @mrshenli @rohan-varma @chauhang @mori360 @kwen2501 @c-p-i-o
| true
|
2,891,618,918
|
[export][torchbench] moco fails
|
desertfire
|
closed
|
[
"triaged",
"oncall: pt2",
"oncall: export"
] | 3
|
CONTRIBUTOR
|
Repro:
```
python benchmarks/dynamo/torchbench.py --accuracy --inference --bfloat16 --export --disable-cudagraphs --device cuda --only moco
```
Error:
```
Traceback (most recent call last):
File "/data/users/binbao/pytorch/torch/export/dynamic_shapes.py", line 509, in _tree_map_with_path
return tree_map_with_path(f, tree, *dynamic_shapes, is_leaf=is_leaf)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/users/binbao/pytorch/torch/utils/_pytree.py", line 1794, in tree_map_with_path
all_keypath_leaves = keypath_leaves + [treespec.flatten_up_to(r) for r in rests]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/users/binbao/pytorch/torch/utils/_pytree.py", line 1794, in <listcomp>
all_keypath_leaves = keypath_leaves + [treespec.flatten_up_to(r) for r in rests]
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/users/binbao/pytorch/torch/utils/_pytree.py", line 942, in flatten_up_to
helper(self, tree, subtrees)
File "/data/users/binbao/pytorch/torch/utils/_pytree.py", line 900, in helper
raise ValueError(
ValueError: Node arity mismatch; expected 1, but got 2.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/data/users/binbao/pytorch/benchmarks/dynamo/common.py", line 2227, in check_accuracy
optimized_model_iter_fn = optimize_ctx(
^^^^^^^^^^^^^
File "/data/users/binbao/pytorch/benchmarks/dynamo/common.py", line 1463, in export
ep = torch.export.export(
^^^^^^^^^^^^^^^^^^^^
File "/data/users/binbao/pytorch/torch/export/__init__.py", line 360, in export
return _export(
^^^^^^^^
File "/data/users/binbao/pytorch/torch/export/_trace.py", line 1047, in wrapper
raise e
File "/data/users/binbao/pytorch/torch/export/_trace.py", line 1020, in wrapper
ep = fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/data/users/binbao/pytorch/torch/export/exported_program.py", line 121, in wrapper
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/data/users/binbao/pytorch/torch/export/_trace.py", line 2083, in _export
ep = _export_for_training(
^^^^^^^^^^^^^^^^^^^^^
File "/data/users/binbao/pytorch/torch/export/_trace.py", line 1047, in wrapper
raise e
File "/data/users/binbao/pytorch/torch/export/_trace.py", line 1020, in wrapper
ep = fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/data/users/binbao/pytorch/torch/export/exported_program.py", line 121, in wrapper
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/data/users/binbao/pytorch/torch/export/_trace.py", line 1946, in _export_for_training
export_artifact = export_func( # type: ignore[operator]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/users/binbao/pytorch/torch/export/_trace.py", line 1299, in _strict_export_lower_to_aten_ir
gm_torch_level = _export_to_torch_ir(
^^^^^^^^^^^^^^^^^^^^
File "/data/users/binbao/pytorch/torch/export/_trace.py", line 684, in _export_to_torch_ir
_check_dynamic_shapes(combined_args, dynamic_shapes)
File "/data/users/binbao/pytorch/torch/export/dynamic_shapes.py", line 797, in _check_dynamic_shapes
_tree_map_with_path(check_shape, combined_args, dynamic_shapes, tree_name="inputs")
File "/data/users/binbao/pytorch/torch/export/dynamic_shapes.py", line 581, in _tree_map_with_path
_compare(tree_spec, other_tree_spec, [])
File "/data/users/binbao/pytorch/torch/export/dynamic_shapes.py", line 552, in _compare
raise_mismatch_error(
File "/data/users/binbao/pytorch/torch/export/dynamic_shapes.py", line 529, in raise_mismatch_error
raise UserError(
torch._dynamo.exc.UserError: Detected mismatch between the structure of `inputs` and `dynamic_shapes`: `inputs` has 1 elements, but `dynamic_shapes` has 2 elements
```
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4
| true
|