| id (int64, 2.74B-3.05B) | title (string, 1-255 chars) | user (string, 2-26 chars) | state (string, 2 classes) | labels (list, 0-24 items) | comments (int64, 0-206) | author_association (string, 4 classes) | body (string, 7-62.5k chars, nullable) | is_title (bool, 1 class) |
|---|---|---|---|---|---|---|---|---|
2,849,505,642
|
Added code to use the safe loader
|
TimAtGoogle
|
open
|
[
"triaged",
"open source",
"Stale",
"release notes: export"
] | 2
|
NONE
|
Fixes #ISSUE_NUMBER
| true
|
2,849,480,471
|
[trymerge] Post initial starting merge comment on stacked PRs
|
clee2000
|
closed
|
[
"Merged",
"Reverted",
"topic: not user facing",
"ci-no-td"
] | 10
|
CONTRIBUTOR
|
Post a small comment stating if a PR is being merged as part of a stack
| true
|
2,849,472,763
|
`make pdflatex` Sphinx error: Builder name pdflatex not registered or available through entry point
|
Geremia
|
closed
|
[
"module: build",
"module: docs",
"triaged"
] | 2
|
NONE
|
### 🐛 Describe the bug
On `master`, running
```bash
cd docs
pip install -r requirements.txt
make pdflatex
```
gives this error:
<details><summary>`make pdflatex` output</summary>
<p>
```log
/usr/lib64/python3.12/site-packages/torch/_dynamo/variables/higher_order_ops.py:811: UserWarning: Pred is a Python constant. When used with torch.cond, it specializes on one of the branches. If you want torch.cond to preserve two branches, please make the predicate a boolean tensor or a SymBool.
warnings.warn(
/usr/lib64/python3.12/site-packages/torch/_dynamo/variables/higher_order_ops.py:811: UserWarning: Pred is a Python constant. When used with torch.cond, it specializes on one of the branches. If you want torch.cond to preserve two branches, please make the predicate a boolean tensor or a SymBool.
warnings.warn(
/usr/lib64/python3.12/site-packages/torch/_dynamo/variables/higher_order_ops.py:811: UserWarning: Pred is a Python constant. When used with torch.cond, it specializes on one of the branches. If you want torch.cond to preserve two branches, please make the predicate a boolean tensor or a SymBool.
warnings.warn(
/usr/lib64/python3.12/site-packages/torch/_dynamo/variables/higher_order_ops.py:811: UserWarning: Pred is a Python constant. When used with torch.cond, it specializes on one of the branches. If you want torch.cond to preserve two branches, please make the predicate a boolean tensor or a SymBool.
warnings.warn(
E0212 15:07:32.496000 19033 site-packages/torch/_guards.py:295] [17/0] Error while creating guard:
E0212 15:07:32.496000 19033 site-packages/torch/_guards.py:295] [17/0] Name: ''
E0212 15:07:32.496000 19033 site-packages/torch/_guards.py:295] [17/0] Source: shape_env
E0212 15:07:32.496000 19033 site-packages/torch/_guards.py:295] [17/0] Create Function: SHAPE_ENV
E0212 15:07:32.496000 19033 site-packages/torch/_guards.py:295] [17/0] Guard Types: None
E0212 15:07:32.496000 19033 site-packages/torch/_guards.py:295] [17/0] Code List: None
E0212 15:07:32.496000 19033 site-packages/torch/_guards.py:295] [17/0] Object Weakref: None
E0212 15:07:32.496000 19033 site-packages/torch/_guards.py:295] [17/0] Guarded Class Weakref: None
E0212 15:07:32.496000 19033 site-packages/torch/_guards.py:295] [17/0] Traceback (most recent call last):
E0212 15:07:32.496000 19033 site-packages/torch/_guards.py:295] [17/0] File "/usr/lib64/python3.12/site-packages/torch/_guards.py", line 293, in create
E0212 15:07:32.496000 19033 site-packages/torch/_guards.py:295] [17/0] return self.create_fn(builder, self)
E0212 15:07:32.496000 19033 site-packages/torch/_guards.py:295] [17/0] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
E0212 15:07:32.496000 19033 site-packages/torch/_guards.py:295] [17/0] File "/usr/lib64/python3.12/site-packages/torch/_dynamo/guards.py", line 1868, in SHAPE_ENV
E0212 15:07:32.496000 19033 site-packages/torch/_guards.py:295] [17/0] code_parts, verbose_code_parts = output_graph.shape_env.produce_guards_verbose(
E0212 15:07:32.496000 19033 site-packages/torch/_guards.py:295] [17/0] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
E0212 15:07:32.496000 19033 site-packages/torch/_guards.py:295] [17/0] File "/usr/lib64/python3.12/site-packages/torch/fx/experimental/symbolic_shapes.py", line 5188, in produce_guards_verbose
E0212 15:07:32.496000 19033 site-packages/torch/_guards.py:295] [17/0] raise ConstraintViolationError(
E0212 15:07:32.496000 19033 site-packages/torch/_guards.py:295] [17/0] torch.fx.experimental.symbolic_shapes.ConstraintViolationError: Constraints violated (dim0_x)! For more information, run with TORCH_LOGS="+dynamic".
E0212 15:07:32.496000 19033 site-packages/torch/_guards.py:295] [17/0] - Not all values of dim0_x = L['x'].size()[0] in the specified range satisfy the generated guard round(L['x'].size()[0] / 2) <= L['x'].size()[0].
E0212 15:07:32.499000 19033 site-packages/torch/_guards.py:297] [17/0] Created at:
E0212 15:07:32.499000 19033 site-packages/torch/_guards.py:297] [17/0] File "/usr/lib64/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 642, in transform
E0212 15:07:32.499000 19033 site-packages/torch/_guards.py:297] [17/0] tracer = InstructionTranslator(
E0212 15:07:32.499000 19033 site-packages/torch/_guards.py:297] [17/0] File "/usr/lib64/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 2711, in __init__
E0212 15:07:32.499000 19033 site-packages/torch/_guards.py:297] [17/0] output=OutputGraph(
E0212 15:07:32.499000 19033 site-packages/torch/_guards.py:297] [17/0] File "/usr/lib64/python3.12/site-packages/torch/_dynamo/output_graph.py", line 336, in __init__
E0212 15:07:32.499000 19033 site-packages/torch/_guards.py:297] [17/0] self.init_ambient_guards()
E0212 15:07:32.499000 19033 site-packages/torch/_guards.py:297] [17/0] File "/usr/lib64/python3.12/site-packages/torch/_dynamo/output_graph.py", line 485, in init_ambient_guards
E0212 15:07:32.499000 19033 site-packages/torch/_guards.py:297] [17/0] self.guards.add(ShapeEnvSource().make_guard(GuardBuilder.SHAPE_ENV))
E0212 15:07:32.969000 19033 site-packages/torch/_dynamo/eval_frame.py:1213] Parameter y is optional with a default value of tensor([[-0.7549, -1.6256, 0.8431],
E0212 15:07:32.969000 19033 site-packages/torch/_dynamo/eval_frame.py:1213] [ 0.7641, -1.2511, -1.0317]])
E0212 15:07:32.970000 19033 site-packages/torch/export/_trace.py:1021] See optional_input in exportdb for unsupported case. https://pytorch.org/docs/main/generated/exportdb/index.html#optional-input
E0212 15:07:32.971000 19033 site-packages/torch/export/_trace.py:1021] See optional_input in exportdb for unsupported case. https://pytorch.org/docs/main/generated/exportdb/index.html#optional-input
E0212 15:07:33.477000 19033 site-packages/torch/export/_trace.py:1021] See unsupported_operator in exportdb for unsupported case. https://pytorch.org/docs/main/generated/exportdb/index.html#unsupported-operator
E0212 15:07:33.478000 19033 site-packages/torch/export/_trace.py:1021] See unsupported_operator in exportdb for unsupported case. https://pytorch.org/docs/main/generated/exportdb/index.html#unsupported-operator
Running Sphinx v5.0.0
/home/geremia/Downloads/pytorch/docs/source/conf.py:37: UserWarning: unable to load "torchvision" package
warnings.warn('unable to load "torchvision" package')
Traceback (most recent call last):
File "/usr/lib/python3.12/site-packages/importlib_metadata/__init__.py", line 289, in __getitem__
return next(iter(self.select(name=name)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
StopIteration
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/geremia/.local/lib/python3.12/site-packages/sphinx/registry.py", line 149, in preload_builder
entry_point = builder_entry_points[name]
~~~~~~~~~~~~~~~~~~~~^^^^^^
File "/usr/lib/python3.12/site-packages/importlib_metadata/__init__.py", line 291, in __getitem__
raise KeyError(name)
KeyError: 'pdflatex'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/geremia/.local/lib/python3.12/site-packages/sphinx/cmd/build.py", line 272, in build_main
app = Sphinx(args.sourcedir, args.confdir, args.outputdir,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/geremia/.local/lib/python3.12/site-packages/sphinx/application.py", line 226, in __init__
self.preload_builder(buildername)
File "/home/geremia/.local/lib/python3.12/site-packages/sphinx/application.py", line 302, in preload_builder
self.registry.preload_builder(self, name)
File "/home/geremia/.local/lib/python3.12/site-packages/sphinx/registry.py", line 151, in preload_builder
raise SphinxError(__('Builder name %s not registered or available'
sphinx.errors.SphinxError: Builder name pdflatex not registered or available through entry point
Sphinx error:
Builder name pdflatex not registered or available through entry point
make: *** [Makefile:51: pdflatex] Error 2
```
</p>
</details>
Is `torchvision` a requirement? If so, it should be in `requirements.txt`. If not, that warning is unrelated to Sphinx's inability to run `pdflatex`.
I didn't have this issue on PyTorch 2.6.0. It cropped up since then. (Maybe I can't compile `master`'s docs against a 2.6.0 build?)
### Versions
<details><summary>collect_env.py output</summary>
<p>
```
Collecting environment information...
PyTorch version: 2.6.0a0
Is debug build: False
CUDA used to build PyTorch: 12.8
ROCM used to build PyTorch: N/A
OS: Slackware Linux (x86_64)
GCC version: (GCC) 14.2.0
Clang version: 19.1.7
CMake version: version 3.31.5
Libc version: glibc-2.40
Python version: 3.12.9 (main, Feb 5 2025, 13:12:07) [GCC 14.2.0] (64-bit runtime)
Python platform: Linux-6.13.1-x86_64-AMD_Ryzen_Threadripper_2990WX_32-Core_Processor-with-glibc2.40
Is CUDA available: True
CUDA runtime version: 12.8.61
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: Quadro RTX 4000
Nvidia driver version: 570.86.16
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 0-63
Vendor ID: AuthenticAMD
Model name: AMD Ryzen Threadripper 2990WX 32-Core Processor
CPU family: 23
Model: 8
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 1
Stepping: 2
Frequency boost: enabled
CPU(s) scaling MHz: 73%
CPU max MHz: 3000.0000
CPU min MHz: 2200.0000
BogoMIPS: 5988.02
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid amd_dcm aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb hw_pstate ssbd ibpb vmmcall fsgsbase bmi1 avx2 smep bmi2 rdseed adx smap clflushopt sha_ni xsaveopt xsavec xgetbv1 clzero irperf xsaveerptr arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif overflow_recov succor smca sev sev_es
Virtualization: AMD-V
L1d cache: 1 MiB (32 instances)
L1i cache: 2 MiB (32 instances)
L2 cache: 16 MiB (32 instances)
L3 cache: 64 MiB (8 instances)
NUMA node(s): 4
NUMA node0 CPU(s): 0-7,32-39
NUMA node1 CPU(s): 16-23,48-55
NUMA node2 CPU(s): 8-15,40-47
NUMA node3 CPU(s): 24-31,56-63
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT vulnerable
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] flake8==7.0.0
[pip3] numpy==1.26.3
[pip3] pytorch_sphinx_theme==0.0.24
[pip3] torch==2.6.0a0+gitunknown
[pip3] torchviz==0.0.3
[conda] Could not collect
```
</p>
</details>
cc @malfet @seemethere @svekars @brycebortree @sekyondaMeta @AlannaBurke
| true
|
2,849,472,326
|
cpp_wrapper: compile main function without optimization
|
benjaminglass1
|
closed
|
[
"open source",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 2
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147026
* #144349
* #144293
* #146928
This seems like a bad idea, but testing via the benchmark HUD shows that we don't actually lose any performance from this move, while gaining _significant_ compile time improvements.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,849,468,711
|
[DTensor][random] defer DTensor RNG state sync until first random op call or manual_seed call; support more flexible OffsetBasedRNGTracker init
|
XilunWu
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"topic: bug fixes",
"ciflow/periodic",
"ciflow/inductor",
"release notes: distributed (dtensor)"
] | 5
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147025
Resolves https://github.com/pytorch/pytorch/issues/146767.
May also resolve https://github.com/pytorch/pytorch/issues/147584.
### Summary
This PR removes the RNG tracker init from the `distribute_tensor` call for the following reasons:
1. if the user does not use random ops on DTensor, there's no need to init DTensor RNG which currently requires CUDA device to be present.
2. this complies with the 0-communication semantic of `src_data_rank=None` shard distribution.
Besides, `OffsetBasedRNGTracker` only accepts `DeviceMesh` argument to its constructor method.
### Consequence
DTensor RNG initialization is delayed till the first DTensor random ops call or `torch.distributed.tensor.random.manual_seed`.
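For readers unfamiliar with the pattern, here is a generic lazy-initialization sketch of the behavior described above; the class and method names are illustrative and do not match the actual DTensor internals.
```python
class _LazyRNGTracker:
    """Hypothetical stand-in for the deferred RNG tracker construction."""

    _instance = None

    @classmethod
    def get_or_init(cls, device_mesh):
        # Constructed only on the first random op or manual_seed call, so
        # distribute_tensor() itself never needs a CUDA device to be present.
        if cls._instance is None:
            cls._instance = cls(device_mesh)
        return cls._instance

    def __init__(self, device_mesh):
        self.device_mesh = device_mesh  # assumed constructor argument
```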
### Test
`pytest test/distributed/tensor/test_random_ops.py`
`pytest test/distributed/tensor/parallel/test_tp_random_state.py`
`pytest test/distributed/tensor/parallel/test_tp_style.py`
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
Differential Revision: [D70201856](https://our.internmc.facebook.com/intern/diff/D70201856)
| true
|
2,849,441,376
|
[export] Add initial export -> distributed tests
|
angelayi
|
closed
|
[
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
For a set of models we want to:
1. call run_export_workflow which matches what APS does
2. apply a distributed technique: DDP, FSDP, PP (maybe?)
3. check that the forward pass is accurate, and optionally that the loss is the same after running backward
Links to some example models:
* https://github.com/pytorch/pytorch/blob/995f607c743d27a4109451e68782fecedebeb934/test/distributed/test_dynamo_distributed.py#L64
* https://github.com/pytorch/pytorch/blob/main/test/distributed/pipelining/model_registry.py
| true
|
2,849,407,444
|
[BE] Toward Metal Iterator (step 2)
|
malfet
|
closed
|
[
"Merged",
"topic: improvements",
"release notes: mps",
"ciflow/mps"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #147018
* __->__ #147023
Add a dense flavor of the binary ops: if the iterator is contiguous, do not build indices but rather run a different flavor using the same functor. This results in an almost 100% perf gain for a binary `torch.fmax` over 1M elements, as one can see from the table below, collected on an M4 Pro Mini using the following benchmarking script
```python
import torch
from timeit import default_timer
from itertools import product
from torch.utils.benchmark import Measurement, Timer
def bench_binary(
n,
binary_func,
dtype=torch.float32,
) -> Measurement:
t = Timer(
stmt=f"f(x, y);f(x, y); f(x, y); torch.mps.synchronize()",
setup=f"x, y=torch.rand((2, {n}), dtype={dtype}, device='mps').unbind(0)",
globals = {'f': binary_func},
language="python", timer=default_timer
)
return t.blocked_autorange()
if __name__ == "__main__":
n = 1024**2
for dtype in [torch.float32, torch.float16, torch.bfloat16]:
eager_t = bench_binary(n, torch.fmax, dtype)
use_msec = eager_t.mean > 1e-4
multiplier = 1e3 if use_msec else 1e6
uname = "msec" if use_msec else "usec"
print(f"torch.fmax()x3 {str(dtype):>14} {eager_t.mean*multiplier:>7.2f} {uname}")
```
| Dtype | Time before | Time After |
| ------|------------ | ---------- |
| float32 | 0.84 msec | 0.66 msec |
| float16 | 0.49 msec | 0.23 msec |
| bfloat16 | 0.48 msec | 0.22 msec |
| true
|
2,849,380,099
|
[logging] Log individual Triton kernel compilation times to dynamo_compile
|
masnesral
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"module: dynamo",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147022
Summary: Gather the compilation time of individual triton kernels and log them to dynamo_compile:
* Time compilation in `_worker_compile_triton`, pass it back to the main process, and log it from `get_result()`.
* Added a way to track the "top N" (i.e., the N most expensive compiles) in the metrics_context; see the sketch after this list. I did this because I doubt we really care to capture potentially thousands of kernel compile times. That would be problematic for scuba logging anyway, so let's limit the number we track from the beginning. Arbitrarily chose 25 for now.
* Format the list of compile times as a JSON string before logging.
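A minimal sketch of the bounded "top N" idea from the second bullet, assuming hypothetical class and method names rather than the actual metrics_context API:
```python
import heapq


class TopNCompileTimes:
    """Hypothetical bounded tracker: keep only the N most expensive compiles."""

    def __init__(self, n: int = 25) -> None:
        self.n = n
        self._heap: list[tuple[float, str]] = []  # min-heap of (seconds, kernel)

    def add(self, kernel_name: str, seconds: float) -> None:
        if len(self._heap) < self.n:
            heapq.heappush(self._heap, (seconds, kernel_name))
        elif seconds > self._heap[0][0]:
            heapq.heapreplace(self._heap, (seconds, kernel_name))

    def as_sorted_dict(self) -> dict[str, float]:
        # Most expensive first, ready to be json.dumps()-ed before logging.
        return {name: secs for secs, name in sorted(self._heap, reverse=True)}
```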
Test Plan:
`python benchmarks/dynamo/torchbench.py --performance --training --amp --backend inductor --device cuda --print-compilation-time --repeat 5 --cold-start-latency --only nanogpt`
Scuba: https://fburl.com/scuba/dynamo_compile/sandbox/nc4dzm3r
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,849,365,214
|
wip [ca] support DDP w/ c++ reducer via graph breaks
|
xmfan
|
open
|
[
"oncall: distributed",
"Stale",
"release notes: distributed (c10d)",
"module: dynamo",
"ciflow/inductor",
"module: compiled autograd"
] | 2
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147021
* #146875
* #146735
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames @yf225
| true
|
2,849,362,304
|
[aoti_debug_printer][BE] explicitly dumping float32, bfloat16, float16 data type
|
YUNQIUGUO
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 7
|
CONTRIBUTOR
|
Summary:
Per request, explicitly dump the float dtypes for ATen tensors in the debug printing summary info.
This can be useful in identifying issues such as "wrong AOTI lowering precisions".
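As a purely illustrative example of the kind of per-tensor summary line this could produce once the floating-point dtype is included (not the actual AOTI debug printer code):
```python
import torch


def summarize(name: str, t: torch.Tensor) -> str:
    # Hypothetical helper: report shape, dtype, and a quick value statistic.
    return (f"{name}: shape={tuple(t.shape)} dtype={t.dtype} "
            f"mean={t.float().mean().item():.6f}")


print(summarize("addmm_out", torch.randn(4, 4, dtype=torch.bfloat16)))
```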
Test Plan:
```
AOT_INDUCTOR_DEBUG_INTERMEDIATE_VALUE_PRINTER=2 TORCH_LOGS="+inductor, output_code" buck2 run -c fbcode.enable_gpu_sections=true -c fbcode.nvcc_arch=h100 @//mode/opt fbcode//caffe2/test/inductor:test_aot_inductor -- -r test_addmm
```
Differential Revision: D69547344
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,849,305,381
|
[Inductor] Record Triton’s Base32 Cache Key in `.best_config` for Debugging
|
fulvius31
|
closed
|
[
"triaged",
"open source",
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ci-no-td"
] | 37
|
CONTRIBUTOR
|
Modified TorchInductor’s autotuning flow so that each `best_config` JSON file also includes the Triton “base32” (or base64) cache key.
**Motivation**
Debugging & Analysis: With this change, we can quickly identify which compiled binary and IRs belong to a given best config.
The impact is minimal since it is only an extra field in .best_config. It can help advanced performance tuning or kernel-level debugging.
Also, since Triton already stores the cubin/hsaco in its cache, developers and researchers can avoid setting `store_cubin = True`: they can get the cubin/hsaco from the Triton cache and, with the code provided in this PR, easily match the best_config with the Triton cache directory of the "best" kernel.
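A hypothetical illustration of how a recorded cache key could be used to locate the matching Triton cache directory; the JSON field name and cache layout below are assumptions, not the exact ones added in this PR.
```python
import json
import os
from typing import Optional


def find_triton_cache_dir(best_config_path: str, triton_cache_root: str) -> Optional[str]:
    """Map a .best_config file to its Triton cache directory (sketch only)."""
    with open(best_config_path) as f:
        cfg = json.load(f)
    key = cfg.get("triton_cache_key")  # assumed field name for the base32 key
    if key is None:
        return None
    candidate = os.path.join(triton_cache_root, key)
    return candidate if os.path.isdir(candidate) else None
```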
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov @davidberard98
| true
|
2,849,288,687
|
[BE] Turn nextafter into functor
|
malfet
|
closed
|
[
"Merged",
"topic: improvements",
"release notes: mps",
"ciflow/mps"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147018
* #147023
This functor is a bit more involved, as nextafter is missing on macOS 13
| true
|
2,849,258,486
|
Fix meta impl for topk
|
tugsbayasgalan
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 5
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147017
Topk in this context is always size-like, so we should use torch._check_is_size. Fixes an issue in https://github.com/pytorch/pytorch/issues/146990
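A minimal sketch of the size-like pattern, using a hypothetical meta function rather than the actual topk meta kernel: marking a data-dependent integer as a size lets the symbolic shape machinery assume it is non-negative.
```python
import torch


def topk_like_meta(x: torch.Tensor, k_tensor: torch.Tensor):
    # Hypothetical example, not the real topk meta implementation.
    k = int(k_tensor.item())        # data-dependent (unbacked) under fake tensors
    torch._check_is_size(k)         # k behaves like a size, i.e. k >= 0
    torch._check(k <= x.shape[-1])  # and is bounded by the reduced dimension
    values = x.new_empty((*x.shape[:-1], k))
    indices = x.new_empty((*x.shape[:-1], k), dtype=torch.long)
    return values, indices
```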
Differential Revision: [D69545983](https://our.internmc.facebook.com/intern/diff/D69545983)
| true
|
2,849,248,328
|
Add some more docs to trace_rules.py
|
zou3519
|
closed
|
[
"Merged",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #147031
* #147013
* #147012
* __->__ #147016
After discussing with Yanbo, we wanted to write the behavior down so we
don't need to rederive it in the future.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames @StrongerXi
| true
|
2,849,240,932
|
update kineto submodule
|
briancoutinho
|
closed
|
[
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"ci-no-td"
] | 10
|
CONTRIBUTOR
|
Fix https://github.com/pytorch/kineto/issues/1032.
See https://github.com/pytorch/kineto/pull/1035 for the test plan.
| true
|
2,849,231,354
|
Support subclass constructor capturing in export
|
tugsbayasgalan
|
closed
|
[
"oncall: distributed",
"fb-exported",
"Merged",
"with-ssh",
"ciflow/trunk",
"fx",
"ciflow/inductor",
"release notes: export",
"no-runner-experiments"
] | 37
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147014
Notable TODOs:
1. Need to implement AutogradHOP to get rid of subclasses before serializing
2. Need to implement mechanism to figure out what subclasses will be used in export when they are not expressed in the inputs
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
Differential Revision: [D69640673](https://our.internmc.facebook.com/intern/diff/D69640673)
| true
|
2,849,224,502
|
[SkipFiles] Some more cleanup
|
zou3519
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #147031
* __->__ #147013
* #147012
* #147016
This isn't a no-op, but I think it's fine. It changes the case where a
function f1 in a module in MOD_SKIPFILES calls a function f2 in one of
the deleted modules. Previously f2 would have been skipped; now f2 gets
inlined.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames @StrongerXi
| true
|
2,849,224,412
|
[SkipFiles] Some more cleanup
|
zou3519
|
closed
|
[
"Merged",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #147031
* #147013
* __->__ #147012
* #147016
I think these are all no-ops.
Test Plan:
- tests
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames @StrongerXi
| true
|
2,849,210,068
|
Deprecation of NVTX 2 (`nvToolsExt`): Recommended to move to NVTX 3
|
jakirkham
|
open
|
[
"module: cuda",
"triaged",
"better-engineering",
"oncall: profiler",
"topic: build"
] | 6
|
NONE
|
Currently PyTorch contains references to NVTX 2 (like `nvToolsExt`). For example:
https://github.com/pytorch/pytorch/blob/8a975cb247d6ef901c4d4da4fea25d21de6648c7/cmake/public/cuda.cmake#L186
However [NVIDIA has deprecated NVTX 2]( https://docs.nvidia.com/cuda/cuda-toolkit-release-notes/index.html#deprecated-or-dropped-operating-systems ). Similarly CMake has [deprecated the `CUDA::nvToolsExt` target]( https://cmake.org/cmake/help/v3.31/release/3.25.html#modules )
The current recommendation is to move to NVTX 3 by changing `#include`s
```diff
-#include <nvtoolsext.h>
+#include "nvtx3/nvtoolsext.h"
```
And by using the CMake target `CUDA::nvtx3`.
Note: NVTX 3 has been part of the CUDA Toolkit since CUDA 10.0.
cc @ptrblck @msaroufim @eqy @robieta @chaekit @guotuofeng @guyang3532 @dzhulgakov @davidberard98 @briancoutinho @sraikund16 @sanrise
| true
|
2,849,196,446
|
[FlexAttention] Make zero_length sequence handling better
|
drisspg
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"module: flex attention"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147010
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov @Chillee @yanboliang @BoyuanFeng
| true
|
2,849,161,176
|
Add some more docs to trace_rules.py
|
zou3519
|
closed
|
[
"module: dynamo",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147009
After discussing with Yanbo, we wanted to write the behavior down so we
don't need to rederive it in the future.
Test Plan:
- comment reading
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,849,132,747
|
Turn on prologue fusion
|
eellison
|
closed
|
[
"Merged",
"ciflow/trunk",
"module: inductor",
"ciflow/inductor",
"release notes: inductor"
] | 15
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #147151
* __->__ #147008
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,849,124,049
|
[Optimus][Inductor] Add select view cat aten pattern
|
mengluy0125
|
open
|
[
"fb-exported",
"Stale",
"module: inductor",
"ciflow/inductor",
"release notes: inductor",
"inductor_pattern_match"
] | 5
|
CONTRIBUTOR
|
Summary: We found another set of inefficient Triton kernels generated by PT2 in Wukong CMF, so we added an Inductor pattern to optimize it.
Test Plan:
# how to add config
```
"post_grad_fusion_options": {
"normalization_aten_pass": {},
"select_view_cat_aten_pass": {},
},
```
# unit test
```
buck2 test 'fbcode//mode/dev-nosan' fbcode//caffe2/test/inductor:split_cat_fx_aten_passes -- test_select_view_cat_post_grad
```
Buck UI: https://www.internalfb.com/buck2/7b34f835-d74e-4142-ad6b-09d49d46bbe2
Test UI: https://www.internalfb.com/intern/testinfra/testrun/4222124916789264
Network: Up: 80KiB Down: 1.0KiB (reSessionID-47d145dd-a06a-4625-a48a-d959f9a972ef)
Executing actions. Remaining 0/6 5.2s exec time total
Command: test. Finished 4 local
Time elapsed: 54.2s
Tests finished: Pass 2. Fail 0. Fatal 0. Skip 0. Build failure 0
# local reproduce
```
CUDA_VISIBLE_DEVICES=5 buck2 run mode/opt scripts/shuaiyang:test -- --optimus --flow_id 685212996 --use_synthetic_data 2>&1 | tee ~/wukong_685212996.txt
```
Differential Revision: D69495415
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,849,095,429
|
[TP] Add warning when module is distributed twice
|
kwen2501
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"ciflow/inductor",
"release notes: distributed (dtensor)",
"keep-going"
] | 9
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147006
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,849,091,912
|
[ONNX] Deprecation message follow up
|
justinchuby
|
closed
|
[
"module: onnx",
"open source",
"Merged",
"ciflow/trunk",
"release notes: onnx",
"topic: docs"
] | 3
|
COLLABORATOR
|
Follow up on https://github.com/pytorch/pytorch/pull/146923 to address comments.
This pull request includes updates to the `torch/onnx` module, focusing on deprecations and documentation improvements. The most important changes involve moving version change notes within the `export` function, updating deprecation messages, and removing example code in the `dynamo_export` function.
Documentation and Deprecation Updates:
* [`torch/onnx/__init__.py`](diffhunk://#diff-c3c8c09b65c1235ca4494633c6a0aab2761a11a7653ddaf9f874bbcd91e15553L172-L184): Moved version change notes to the correct location within the `export` function's docstring. Updated the deprecation note for the `dynamo_export` function to version 2.7 and removed example code from its docstring. [[1]](diffhunk://#diff-c3c8c09b65c1235ca4494633c6a0aab2761a11a7653ddaf9f874bbcd91e15553L172-L184) [[2]](diffhunk://#diff-c3c8c09b65c1235ca4494633c6a0aab2761a11a7653ddaf9f874bbcd91e15553R349-R357) [[3]](diffhunk://#diff-c3c8c09b65c1235ca4494633c6a0aab2761a11a7653ddaf9f874bbcd91e15553L434-R430) [[4]](diffhunk://#diff-c3c8c09b65c1235ca4494633c6a0aab2761a11a7653ddaf9f874bbcd91e15553L445-L475)
* [`torch/onnx/utils.py`](diffhunk://#diff-849a5778e2dcf7f36587967273cee0bf20642e35bf4c79405111ea3417c3fb3cL111-R114): Enhanced deprecation messages for several functions (`select_model_mode_for_export`, `disable_apex_o2_state_dict_hook`, `setup_onnx_logging`, `unconvertible_ops`) to provide clearer guidance on their removal and suggest copying logic if needed. [[1]](diffhunk://#diff-849a5778e2dcf7f36587967273cee0bf20642e35bf4c79405111ea3417c3fb3cL111-R114) [[2]](diffhunk://#diff-849a5778e2dcf7f36587967273cee0bf20642e35bf4c79405111ea3417c3fb3cL148-R151) [[3]](diffhunk://#diff-849a5778e2dcf7f36587967273cee0bf20642e35bf4c79405111ea3417c3fb3cL166-R173) [[4]](diffhunk://#diff-849a5778e2dcf7f36587967273cee0bf20642e35bf4c79405111ea3417c3fb3cL1180-R1189) [[5]](diffhunk://#diff-849a5778e2dcf7f36587967273cee0bf20642e35bf4c79405111ea3417c3fb3cL1190-R1199)
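A sketch of the kind of clearer deprecation message described in the second bullet above; this is a hypothetical wrapper, not the actual torch/onnx/utils.py code.
```python
import warnings


def select_model_mode_for_export(*args, **kwargs):
    # Hypothetical body: only the shape of the message is illustrated here.
    warnings.warn(
        "select_model_mode_for_export is deprecated and will be removed in a "
        "future release. Copy the logic into your project if you still need it.",
        DeprecationWarning,
        stacklevel=2,
    )
```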
| true
|
2,849,063,274
|
[ONNX] Remove dort
|
justinchuby
|
closed
|
[
"open source",
"Stale",
"release notes: onnx"
] | 3
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147004
* #147003
Using ORT for training is unmaintained and the DORT implementation uses legacy logic, so remove it. We can use this as a reference if we need to add the functionality back.
| true
|
2,849,063,195
|
[ONNX][dort] Remove reference to onnxscript rewriter
|
justinchuby
|
closed
|
[
"module: onnx",
"open source",
"Merged",
"ciflow/trunk",
"release notes: onnx",
"topic: not user facing"
] | 4
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #147004
* __->__ #147003
| true
|
2,849,057,380
|
[Inductor] Add Torch Logs for ir_pre_fusion and ir_post_fusion
|
eellison
|
closed
|
[
"triaged",
"oncall: pt2",
"module: inductor"
] | 0
|
CONTRIBUTOR
|
### 🚀 The feature, motivation and pitch
When you run with `TORCH_COMPILE_DEBUG=1`, we serialize the pre-fusion and post-fusion IR; see https://github.com/pytorch/pytorch/blob/7f62616a585f91a6cccb9672c42bc8210044b1bf/torch/_inductor/debug.py#L524-L528.
TORCH_COMPILE_DEBUG is an earlier mechanism that has mostly been replaced by a combination of TORCH_LOGS and tlparse. We should add similar TORCH_LOGS="ir_pre_fusion" and TORCH_LOGS="ir_post_fusion" artifacts to make debugging these states of the IR more accessible.
Check out the registration of logging artifacts [here](https://github.com/pytorch/pytorch/blob/main/torch/_logging/_registrations.py). If you click around the blame, there should be a PR that shows how to add a logging artifact.
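For reference, this is how the proposed artifacts would be switched on from a script once they exist; the artifact names come from the request above, and `train.py` is just a placeholder.
```python
import os
import subprocess

# Enable only the proposed IR artifacts for a child process.
env = dict(os.environ, TORCH_LOGS="ir_pre_fusion,ir_post_fusion")
subprocess.run(["python", "train.py"], env=env, check=True)
```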
### Alternatives
_No response_
### Additional context
_No response_
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @aakhundov
| true
|
2,849,013,464
|
Preload CUDA fails if CUDA libs in different PYTHONPATH
|
aowenson-imm
|
open
|
[
"module: build",
"module: cuda",
"triaged",
"module: third_party"
] | 9
|
NONE
|
### 🐛 Describe the bug
This is subtly different from the other related issues (linked to in PR #144311).
Suppose my `PYTHONPATH` is `A:B`. PyTorch is installed in `A`, and the NVIDIA libraries are installed in `B`. The NVIDIA libs are not where `libtorch_cuda.so` expects them, so `__init__.py` uses its backup method: search for the pattern `'libcudart.so.*[0-9]'` in `PYTHONPATH`.
The problem is that `'libcudart.so.*[0-9]'` is too broad: `_preload_cuda_deps` matches `libcudart.so.11` instead of `libcudart.so.12` (I have both installed), but `libtorch_cuda.so` needs 12. I have a solution I can submit, which is simply to make the patterns more specific.
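A sketch of the proposed tightening, assuming the usual NVIDIA wheel layout; the pattern and path below are assumptions, not the exact patch.
```python
import glob
import os


def find_cudart(pythonpath_entries, cuda_major="12"):
    # Only accept the CUDA major version libtorch_cuda.so was built against,
    # instead of any libcudart.so.*[0-9] found on PYTHONPATH.
    for entry in pythonpath_entries:
        pattern = os.path.join(entry, "nvidia", "cuda_runtime", "lib",
                               f"libcudart.so.{cuda_major}*")
        hits = sorted(glob.glob(pattern))
        if hits:
            return hits[0]
    return None
```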
### Versions
PyTorch version 2.4.1
...
[pip3] nvidia-cuda-runtime-cu11==11.7.99
[pip3] nvidia-cuda-runtime-cu12==12.1.105
...
cc @malfet @seemethere @ptrblck @msaroufim @eqy
| true
|
2,849,004,571
|
Fix shape_inference for V-schedules
|
H-Huang
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"release notes: distributed (pipeline)"
] | 3
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147000
I was hitting a hang in shape_inference when testing v-shaped schedules with >2 ranks in titan.
`self.next_rank` and `self.prev_rank` are used in shape inference but are not accurate for v-shaped schedules:
https://github.com/pytorch/pytorch/blob/bfcce6984b033640e01a647c44a8a13f86d64f5a/torch/distributed/pipelining/stage.py#L1325-L1326
Will clean up / delete the use of next_rank / prev rank in follow up PRs
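To see why rank +/- 1 arithmetic breaks down, here is a toy stage-to-rank mapping for a V-shaped schedule (illustrative only, not the actual stage code): with two stages per rank, the second half of the stages folds back up through the ranks.
```python
num_ranks = 4
# Stages 0..3 go "down" the ranks, stages 4..7 come back "up".
stage_to_rank = {
    s: (s if s < num_ranks else 2 * num_ranks - 1 - s)
    for s in range(2 * num_ranks)
}
assert stage_to_rank[3] == 3 and stage_to_rank[4] == 3  # stage 4 is on the same rank
assert stage_to_rank[5] == 2                            # and stage 5 is on rank 2, not rank 4
```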
cc @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,848,993,906
|
experimental proposal DCP v2
|
teja-rao
|
open
|
[
"oncall: distributed",
"Stale",
"release notes: distributed (checkpoint)",
"oncall: distributed checkpointing"
] | 2
|
CONTRIBUTOR
|
Just for fun, hacking up a different way to do checkpointing. Don't take this too seriously; it may never see the light of day!
This reimagines distributed checkpoints: it preserves the torch serialization format and per-rank files, and goes for a heavy simplification of the implementation.
The following concepts are eliminated -
- the stateful protocol, which is a source of issues like https://github.com/pytorch/pytorch/issues/146157, a cause of stuck jobs (due to the complexity of handling collective failures in state_dict() calls alongside the calls in DCP), complexities of managing the current CUDA device, and backward-compatibility issues.
- planners, which are a confusing concept for mapping a state_dict to the internal representation of storage. Planners also build metadata and allow advanced write strategies, all of which can be done much more simply and independently.
- internal structures like WriteItem and ReadItem, and all the code to translate a param to a storage chunk (which is redundant), to improve debuggability.
- the need to construct an additional PG, which is a critical problem for large jobs due to Gloo initialization times.
The following are introduced/modified -
- Standardized on the torch serialization format, which I know will be appreciated by ML engineers and will hopefully increase DCP adoption in research.
- Introduced a stateful checkpointer class to manage resources properly for async checkpointing.
- Eliminated collectives in the save and load paths (assuming metadata can be cached).
- Decoupled async checkpointing, specifically staging, from the StorageWriter abstraction, enabling more code reuse.
- An abstraction to control layouts and the serialization format, with common use cases implemented.
- Introduced an optional barrier to allow waiting for all ranks to finish checkpointing, which is useful for async checkpointing.
TBD
- support streaming serialization/deserialization (and streaming support in torch serialization?)
- storage/APIs need a second look.
- Support in torch serialization to load a specific storage without loading entire model file (This can be done without format changes).
- Additional components that can help with easier integration, like a checkpoint scheduler, factory methods for creation, and config management.
- need to think through validations to catch common pitfalls when using the APIs.
- implement support for filtering replicated tensors.
To review: start with _base.py and _checkpointer.py, then move on to _checkpoint_loader.py.
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @LucasLLC @MeetVadakkanchery @mhorowitz @pradeepfn @ekr0
| true
|
2,848,966,589
|
ROCM / HIP: Conv3d forward is very slow for some shapes.
|
IMbackK
|
closed
|
[
"module: performance",
"module: rocm",
"triaged"
] | 9
|
CONTRIBUTOR
|
### 🐛 Describe the bug
With some conv3d shapes, such as those used by video models like hunyuan and moch1, forward passes are very slow on MIOpen-backed platforms.
Below is a benchmark reproducer:
```
import torch
import time
configs = [
[128, 128, 3, 1],
[256, 256, 3, 1],
[512, 512, 3, 1],
[128, 256, 1, 1],
[512, 512, 3, (2, 2, 2)],
[256, 256, 3, (2, 2, 2)],
[128, 3, 3, 1]
]
inputs = [
[1, 128, 67, 258, 258],
[1, 256, 35, 130, 130],
[1, 512, 35, 130, 130],
[1, 128, 67, 258, 258],
[1, 512, 35, 130, 130],
[1, 256, 27, 258, 258],
[1, 128, 67, 258, 258],
]
def conv3dbenchmark(configs: list[list[int]], inputs: list[list[int]], repeat: int, dtype: torch.dtype, device: torch.device):
modules = list()
assert len(inputs) == len(configs)
for config in configs:
modules.append(torch.nn.Conv3d(config[0], config[1], config[2], stride=config[3]).to(device, dtype))
for i in range(len(modules)):
x = torch.randn(inputs[i]).to(device, dtype)
print(f"Running Conv3d config: {configs[i]} input: {inputs[i]} type: {dtype}")
start = time.perf_counter()
for n in range(repeat):
modules[i].forward(x)
torch.cuda.synchronize(device)
print(f"Time {(time.perf_counter() - start) / repeat} seconds\n")
if __name__ == "__main__":
device = torch.device(0)
conv3dbenchmark(configs, inputs, 5, torch.bfloat16, device)
conv3dbenchmark(configs, inputs, 5, torch.float16, device)
```
The benchmark was always run twice to allow MIOpen to cache its solutions; however, using MIOPEN_FIND_MODE=2 provides equivalent performance without the need to cache any solutions.
[mi100.txt](https://github.com/user-attachments/files/18771827/mi100.txt)
[3090.txt](https://github.com/user-attachments/files/18771828/3090.txt)
[rx6800xt.txt](https://github.com/user-attachments/files/18771829/rx6800xt.txt)
[cpu.txt](https://github.com/user-attachments/files/18771857/cpu.txt)
The comparison was done against the 3090, which was chosen as a device with roughly similar raw compute and memory bandwidth to the MI100.
Additionally, an EPYC 7552 was used as another point of comparison.
It can be seen that, when comparing against the MI100 on most configurations, such as `config: [512, 512, 3, 1] input: [1, 512, 35, 130, 130] type: torch.float16`, the cuDNN device holds a 10x advantage.
The performance on the RX 6800 XT can only be described as broken, with it often performing much worse than the CPU.
In addition to the above micro benchmark a benchmark executing a VAE decode is provided here: https://uvos.xyz/git/uvos/HyDecodeRepo
On 3090 this takes about 25 Seconds, while on MI100 120 seconds are required.
PyTorch Profiler traces showing the selected kernels are given below:
[hunyuan_vae_decode_rtx_3090.zip](https://github.com/user-attachments/files/18772005/hunyuan_vae_decode_rtx_3090.zip)
[hunyuan_vae_decode_rtx_MI100.zip](https://github.com/user-attachments/files/18772027/hunyuan_vae_decode_rtx_MI100.zip)
### Versions
MIOpen 6.2.4
Pytorch 2.6.0
cc @msaroufim @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,848,956,004
|
[BE][Ez]: Apply FURB188: use str remove(pre|suf)fix
|
Skylion007
|
closed
|
[
"oncall: distributed",
"open source",
"better-engineering",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"fx",
"module: dynamo",
"ciflow/inductor"
] | 6
|
COLLABORATOR
|
Since we are on 3.9, we can use this nice str builtin which is more readable and more efficient.
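For anyone unfamiliar with the builtins, a small self-contained example of what FURB188 rewrites code to (purely illustrative):
```python
# str.removeprefix / str.removesuffix are available since Python 3.9.
name = "module: inductor"
assert name.removeprefix("module: ") == "inductor"
assert "test_foo.py".removesuffix(".py") == "test_foo"

# The older slicing idiom the lint rule replaces:
prefix = "module: "
old = name[len(prefix):] if name.startswith(prefix) else name
assert old == "inductor"
```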
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @chauhang @amjames @StrongerXi
| true
|
2,848,947,612
|
Turn on autograd local caches in fbcode
|
oulgen
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146996
| true
|
2,848,945,813
|
[dynamo] Fix tensordict regression
|
anijain2305
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #147046
* __->__ #146995
* #146819
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames @StrongerXi
| true
|
2,848,926,382
|
subclasses + HOPs fail with `Attempting to use FunctionalTensor on its own`
|
IvanKobzarev
|
open
|
[
"triaged",
"tensor subclass",
"oncall: pt2",
"module: aotdispatch",
"module: pt2-dispatcher"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Repro:
```
def test_sc_hop(self):
class M(torch.nn.Module):
def __init__(self, weight):
super().__init__()
self.weight = weight
def forward(self, x):
return out_dtype(
torch.ops.aten.mm.default, torch.int32, x, self.weight
)
weight = torch.randint(-128, 127, (5, 5), dtype=torch.int8)
m = M(weight)
x = torch.randint(-128, 127, (5, 5), dtype=torch.int8)
x = WrapperSubclass(x)
y = torch.compile(m, backend="aot_eager")(x)
```
FunctionalTensorMode somehow is not on in the case of subclasses + HOPs.
### Error logs
Error:
```
104 File "/data/users/ivankobzarev/a/pytorch/torch/_ops.py", line 363, in dispatch
105 result = handler(mode, *args, **kwargs)
106 File "/data/users/ivankobzarev/a/pytorch/torch/_higher_order_ops/out_dtype.py", line 156, in out_dtype_fake_tensor_mode
107 return out_dtype_dense(op, output_dtype, *args)
108 File "/data/users/ivankobzarev/a/pytorch/torch/_higher_order_ops/out_dtype.py", line 104, in out_dtype_dense
109 return out_dtype_fallback(op, output_dtype, *args)
110 File "/data/users/ivankobzarev/a/pytorch/torch/_higher_order_ops/out_dtype.py", line 126, in out_dtype_fallback
111 casted_args = pytree.tree_map_only(
112 File "/data/users/ivankobzarev/a/pytorch/torch/utils/_pytree.py", line 1274, in tree_map_only
113 return tree_map(map_only(type_or_types_or_pred)(func), tree, is_leaf=is_leaf)
114 File "/data/users/ivankobzarev/a/pytorch/torch/utils/_pytree.py", line 1097, in tree_map
115 return treespec.unflatten(map(func, *flat_args))
116 File "/data/users/ivankobzarev/a/pytorch/torch/utils/_pytree.py", line 943, in unflatten
117 leaves = list(leaves)
118 File "/data/users/ivankobzarev/a/pytorch/torch/utils/_pytree.py", line 1215, in wrapped
119 return func(x)
120 File "/data/users/ivankobzarev/a/pytorch/torch/_higher_order_ops/out_dtype.py", line 127, in <lambda>
121 torch.Tensor, lambda arg: arg.to(dtype=promote_dtype), args
122 File "/data/users/ivankobzarev/a/pytorch/torch/testing/_internal/subclasses.py", line 56, in __torch_dispatch__
123 out_a = func(*args_a, **kwargs_a)
124 File "/data/users/ivankobzarev/a/pytorch/torch/_ops.py", line 756, in __call__
125 return self._op(*args, **kwargs)
126 File "/data/users/ivankobzarev/a/pytorch/torch/_subclasses/functional_tensor.py", line 201, in __torch_dispatch__
127 raise RuntimeError(
128torch._dynamo.exc.BackendCompilerFailed: backend='aot_eager' raised:
129RuntimeError: Attempting to use FunctionalTensor on its own. Instead, please use it with a corresponding FunctionalTensorMode()
```
### Versions
pytorch main 02/12/2025
cc @ezyang @albanD @chauhang @penguinwu @zou3519 @bdhirsh @yf225
| true
|
2,848,926,046
|
[BE] Towards MetalTensorIterator
|
malfet
|
closed
|
[
"Merged",
"release notes: mps",
"ciflow/mps"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #147023
* #147018
* __->__ #146993
Further refactor binary kernels to replace individual implementation with a binary_indexing_kernel template that takes functors that implement the logic.
According to godbolt, such refactoring should have no impact on performance: through dead code elimination, the compiler should simply replace the functor with a direct call to the underlying function, as one can see for the clang CPU compiler here: https://godbolt.org/z/8dxv5jvz7. But to be on the safe side, run the following benchmark
```python
import torch
from timeit import default_timer
from itertools import product
from torch.utils.benchmark import Measurement, Timer
def bench_binary(
n,
binary_func,
dtype=torch.float32,
) -> Measurement:
t = Timer(
stmt=f"f(x, y);f(x, y); f(x, y); torch.mps.synchronize()",
setup=f"x, y=torch.rand((2, {n}), dtype={dtype}, device='mps').unbind(0)",
globals = {'f': binary_func},
language="python", timer=default_timer
)
return t.blocked_autorange()
if __name__ == "__main__":
n = 1024**2
for dtype in [torch.float32, torch.float16, torch.bfloat16]:
eager_t = bench_binary(n, torch.fmax, dtype)
use_msec = eager_t.mean > 1e-4
multiplier = 1e3 if use_msec else 1e6
uname = "msec" if use_msec else "usec"
print(f"torch.fmax()x3 {str(dtype):>14} {eager_t.mean*multiplier:>7.2f} {uname}")
```
That reports roughly identical before and after times (1 msec for float32 and .5 msec for float16).
Another interesting quirk: functors can not be in an anonymous namespace, otherwise they will not be visible from the library, as one can see by running the following Swift sample (filed FB16490467 to clarify whether this is supported)
```swift
let shader_source = """
struct add_functor {
template <typename T>
inline T operator()(const T a, const T b) {
return static_cast<T>(a + b);
}
};
namespace {
struct sub_functor {
template <typename T>
inline T operator()(const T a, const T b) {
return static_cast<T>(a - b);
}
};
} // anonymous namespace
template <typename T, typename F>
kernel void binary_executor(
constant T* input [[buffer(0)]],
constant T* other [[buffer(1)]],
device T* out [[buffer(2)]],
uint tid [[thread_position_in_grid]]) {
F f;
out[tid] = f(input[tid], other[tid]);
}
template
[[host_name("add_float")]] kernel void binary_executor<float, add_functor>(constant float*, constant float *, device float*, uint);
template
[[host_name("sub_float")]] kernel void binary_executor<float, sub_functor>(constant float*, constant float *, device float*, uint);
"""
import Metal
guard let device = MTLCopyAllDevices().first else { fatalError("Not Metal device found") }
let library = try! device.makeLibrary(source:shader_source, options:MTLCompileOptions())
// Expect two kernels to be printed, but see only one, with functor in global namespace
for kernel_name in library.functionNames {
print(kernel_name)
}
```
| true
|
2,848,751,418
|
Make Inductor scheduler aware of _scaled_mm
|
lw
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 8
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146992
This is used for example to estimate runtime when doing comms overlap
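As a rough illustration of what such an estimate looks like, here is a FLOP-count-based sketch with hypothetical numbers; it is not the scheduler's actual cost model.
```python
def estimated_scaled_mm_runtime_s(m: int, n: int, k: int,
                                  device_tflops: float = 400.0) -> float:
    # One multiply-add per (m, n) output element per k step.
    flops = 2 * m * n * k
    return flops / (device_tflops * 1e12)


# e.g. an 8192 x 8192 x 8192 scaled matmul on a ~400 TFLOP/s device
print(f"{estimated_scaled_mm_runtime_s(8192, 8192, 8192) * 1e3:.2f} ms")
```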
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,848,706,217
|
cpp_wrapper: use largeTensorTest for test memory checks
|
benjaminglass1
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #147225
* #146706
* #147403
* __->__ #146991
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,848,696,365
|
How to export a model using topk with a variable number of neighbour?
|
xadupre
|
closed
|
[
"triaged",
"oncall: pt2",
"oncall: export"
] | 2
|
COLLABORATOR
|
### 🐛 Describe the bug
The export error is the following, but it may not be the only one; it is just the first one raised.
``torch._dynamo.exc.UserError: Could not guard on data-dependent expression u7 >= 0 (unhinted: u7 >= 0). (Size-like symbols: none)``
```python
import contextlib
import io
import logging
import warnings
from typing import Any, Dict, List, Optional
import numpy as np
import sklearn
import torch
def flatnonzero(x):
"Similar to :func:`numpy.flatnonzero`"
return torch.nonzero(torch.reshape(x, (-1,)), as_tuple=True)[0]
def _get_weights(dist, weights):
"""Get the weights from an array of distances and a parameter ``weights``.
Assume weights have already been validated.
Parameters
----------
dist : ndarray
The input distances.
weights : {'uniform', 'distance'}, callable or None
The kind of weighting used.
Returns
-------
weights_arr : array of the same shape as ``dist``
If ``weights == 'uniform'``, then returns None.
"""
if weights in (None, "uniform"):
return None
if weights == "distance":
# if user attempts to classify a point that was zero distance from one
# or more training points, those training points are weighted as 1.0
# and the other points as 0.0
dist = 1.0 / dist
inf_mask = torch.isinf(dist)
inf_row = torch.any(inf_mask, axis=1)
dist[inf_row] = inf_mask[inf_row]
return dist
if callable(weights):
return weights(dist)
class NanEuclidean(torch.nn.Module):
"""Implements :func:`sklearn.metrics.nan_euclidean`."""
def __init__(self, squared=False, copy=True):
super().__init__()
self.squared = squared
self.copy = copy
def forward(self, X, Y):
X = X.clone()
Y = Y.to(X.dtype).clone()
missing_X = torch.isnan(X)
missing_Y = torch.isnan(Y)
# set missing values to zero
X[missing_X] = 0
Y[missing_Y] = 0
# Adjust distances for missing values
XX = X * X
YY = Y * Y
distances = -2 * X @ Y.T + XX.sum(1, keepdim=True) + YY.sum(1, keepdim=True).T
distances -= XX @ missing_Y.to(X.dtype).T
distances -= missing_X.to(X.dtype) @ YY.T
distances = torch.clip(distances, 0, None)
present_X = 1 - missing_X.to(X.dtype)
present_Y = ~missing_Y
present_count = present_X @ present_Y.to(X.dtype).T
distances[present_count == 0] = torch.nan
# avoid divide by zero
present_count = torch.maximum(
torch.tensor([1], dtype=present_count.dtype), present_count
)
distances /= present_count
distances *= X.shape[1]
if not self.squared:
distances = distances.sqrt()
return distances
# %%
# Validation
# ++++++++++
model = NanEuclidean()
X = torch.randn((5, 2))
Y = torch.randn((5, 2))
for i in range(5):
X[i, i % 2] = torch.nan
for i in range(4):
Y[i + 1, i % 2] = torch.nan
d1 = sklearn.metrics.nan_euclidean_distances(X.numpy(), Y.numpy())
d2 = model(X, Y)
# print(f"discrepancies: {max_diff(d1, d2)}")
# %%
# torch implementation of KNNImputer
# ==================================
#
# See :class:`sklearn.impute.KNNImputer`.
# The code is split into several :class:`torch.nn.Module`
# and refactored to avoid control flow.
def _get_mask(X, value_to_mask):
return torch.isnan(X)
class SubTopKIndices(torch.nn.Module):
def forward(self, x, k):
# torch does not like nans
xn = torch.nan_to_num(x, nan=1.0e10)
return torch.topk(xn, k, dim=1, largest=False, sorted=True).indices
class SubWeightMatrix(torch.nn.Module):
def __init__(self, weights):
super().__init__()
self.weights = weights
def forward(self, donors_dist):
weight_matrix = _get_weights(donors_dist, self.weights)
if weight_matrix is not None:
weight_matrix = weight_matrix.clone()
weight_matrix[torch.isnan(weight_matrix)] = 0.0
else:
weight_matrix = torch.ones_like(donors_dist)
weight_matrix[torch.isnan(donors_dist)] = 0.0
return weight_matrix
class SubDonorsIdx(torch.nn.Module):
def __init__(self):
super().__init__()
self._topk = SubTopKIndices()
def forward(self, dist_pot_donors, n_neighbors):
donors_idx = self._topk(dist_pot_donors, n_neighbors)
donors_dist = dist_pot_donors[torch.arange(donors_idx.shape[0])[:, None], donors_idx]
return donors_idx, donors_dist
class MakeNewWeights(torch.nn.Module):
def forward(self, donors_mask, donors, weight_matrix):
return donors_mask.to(donors.dtype) * weight_matrix.to(donors.dtype)
class CalcImpute(torch.nn.Module):
"""Implements :meth:`sklearn.impute.KNNImputer._calc_impute`."""
def __init__(self, weights):
super().__init__()
self._weights = SubWeightMatrix(weights)
self._donors_idx = SubDonorsIdx()
self._make_new_neights = MakeNewWeights()
def _calc_impute(self, dist_pot_donors, n_neighbors, fit_X_col, mask_fit_X_col):
donors_idx, donors_dist = self._donors_idx(dist_pot_donors, n_neighbors)
weight_matrix = self._weights(donors_dist)
# Retrieve donor values and calculate kNN average
donors = fit_X_col.take(donors_idx)
donors_mask = torch.tensor([1], dtype=donors_idx.dtype) - (
mask_fit_X_col.take(donors_idx)
).to(donors_idx.dtype)
new_weights = self._make_new_neights(donors_mask, donors, weight_matrix)
weights_sum = new_weights.sum(axis=1, keepdim=True)
div = torch.where(
weights_sum == 0, torch.tensor([1], dtype=weights_sum.dtype), weights_sum
)
res = (donors * new_weights).sum(axis=1, keepdim=True) / div
return res.squeeze(dim=1).to(dist_pot_donors.dtype)
def forward(self, dist_pot_donors, n_neighbors, fit_X_col, mask_fit_X_col):
return self._calc_impute(dist_pot_donors, n_neighbors, fit_X_col, mask_fit_X_col)
class ColProcessor(torch.nn.Module):
"""Processes one column (= one feature)."""
def __init__(self, col, n_neighbors, weights):
super().__init__()
self._calc_impute = CalcImpute(weights)
self.col = col
self.n_neighbors = n_neighbors
def process_one_col(
self,
X,
dist_chunk,
non_missing_fix_X,
mask_fit_X,
dist_idx_map,
mask,
row_missing_idx,
_fit_X,
):
col = self.col
X = X.clone()
row_missing_chunk = row_missing_idx
col_mask = mask[row_missing_chunk, col]
potential_donors_idx = torch.nonzero(non_missing_fix_X[:, col], as_tuple=True)[0]
# receivers_idx are indices in X
receivers_idx = row_missing_chunk[flatnonzero(col_mask)]
# distances for samples that needed imputation for column
dist_subset = dist_chunk[dist_idx_map[receivers_idx]][:, potential_donors_idx]
# receivers with all nan distances impute with mean
all_nan_dist_mask = torch.isnan(dist_subset).all(axis=1)
all_nan_receivers_idx = receivers_idx[all_nan_dist_mask]
# when all_nan_receivers_idx is not empty (training set is small)
mask_ = (~mask_fit_X[:, col]).to(_fit_X.dtype)
mask_sum = mask_.to(X.dtype).sum()
col_sum = (_fit_X[mask_ == 1, col]).sum().to(X.dtype)
div = torch.where(mask_sum > 0, mask_sum, torch.tensor([1], dtype=mask_sum.dtype))
X[all_nan_receivers_idx, col] = col_sum / div
# receivers with at least one defined distance
receivers_idx = receivers_idx[~all_nan_dist_mask]
dist_subset = dist_chunk[dist_idx_map[receivers_idx]][:, potential_donors_idx]
# when all_nan_receivers_idx is not empty (training set is big)
tn = torch.tensor(self.n_neighbors)
n_neighbors = torch.where(
tn < potential_donors_idx.shape[0], tn, potential_donors_idx.shape[0]
)
# to make sure n_neighbors > 0
n_neighbors = torch.where(
n_neighbors <= 0, torch.tensor([1], dtype=n_neighbors.dtype), n_neighbors
)
value = self._calc_impute(
dist_subset,
n_neighbors,
_fit_X[potential_donors_idx, col],
mask_fit_X[potential_donors_idx, col],
)
X[receivers_idx, col] = value.to(X.dtype)
return X
def forward(
self,
X,
dist_chunk,
non_missing_fix_X,
mask_fit_X,
dist_idx_map,
mask,
row_missing_idx,
_fit_X,
):
return self.process_one_col(
X,
dist_chunk,
non_missing_fix_X,
mask_fit_X,
dist_idx_map,
mask,
row_missing_idx,
_fit_X,
)
class MakeDictIdxMap(torch.nn.Module):
def forward(self, X, row_missing_idx):
dist_idx_map = torch.zeros(X.shape[0], dtype=int)
dist_idx_map[row_missing_idx] = torch.arange(row_missing_idx.shape[0])
return dist_idx_map
class TorchKNNImputer(torch.nn.Module):
def __init__(self, knn_imputer):
super().__init__()
assert (
knn_imputer.metric == "nan_euclidean"
), f"Not implemented for metric={knn_imputer.metric!r}"
self.dist = NanEuclidean()
cols = []
for col in range(knn_imputer._fit_X.shape[1]):
cols.append(ColProcessor(col, knn_imputer.n_neighbors, knn_imputer.weights))
self.columns = torch.nn.ModuleList(cols)
# refactoring
self._make_dict_idx_map = MakeDictIdxMap()
# knn imputer
self.missing_values = knn_imputer.missing_values
self.n_neighbors = knn_imputer.n_neighbors
self.weights = knn_imputer.weights
self.metric = knn_imputer.metric
self.keep_empty_features = knn_imputer.keep_empty_features
self.add_indicator = knn_imputer.add_indicator
# results of fitting
self.indicator_ = knn_imputer.indicator_
# The training results.
# self._fit_X = torch.from_numpy(knn_imputer._fit_X)
# self._mask_fit_X = torch.from_numpy(knn_imputer._mask_fit_X)
# self._valid_mask = torch.from_numpy(knn_imputer._valid_mask)
def _transform_indicator(self, X):
if self.add_indicator:
if not hasattr(self, "indicator_"):
raise ValueError(
"Make sure to call _fit_indicator before _transform_indicator"
)
raise NotImplementedError(type(self.indicator_))
# return self.indicator_.transform(X)
return None
def _concatenate_indicator(self, X_imputed, X_indicator):
if not self.add_indicator:
return X_imputed
if X_indicator is None:
raise ValueError(
"Data from the missing indicator are not provided. Call "
"_fit_indicator and _transform_indicator in the imputer "
"implementation."
)
return torch.cat([X_imputed, X_indicator], dim=0)
def transform(self, mask_fit_X, _valid_mask, _fit_X, X):
X = X.clone()
mask = _get_mask(X, self.missing_values)
X_indicator = self._transform_indicator(mask)
row_missing_idx = flatnonzero(mask[:, _valid_mask].any(axis=1))
non_missing_fix_X = torch.logical_not(mask_fit_X)
# Maps from indices from X to indices in dist matrix
dist_idx_map = self._make_dict_idx_map(X, row_missing_idx)
# process in fixed-memory chunks
pairwise_distances = self.dist(X[row_missing_idx, :], _fit_X)
# The export unrolls the loop because it depends on the number of features,
# which is fixed in this case.
for col_processor in self.columns:
X = col_processor(
X,
pairwise_distances,
non_missing_fix_X,
mask_fit_X,
dist_idx_map,
mask,
row_missing_idx,
_fit_X,
)
if self.keep_empty_features:
Xc = X.clone()
Xc[:, ~_valid_mask] = 0
else:
Xc = X[:, _valid_mask]
return self._concatenate_indicator(Xc, X_indicator)
def forward(self, _mask_fit_X, _valid_mask, _fit_X, X):
return self.transform(_mask_fit_X, _valid_mask, _fit_X, X)
# %%
# Validation
# ++++++++++
#
# We need to do this with different training-set sizes.
def validate(size, sizey):
X = torch.randn((size, 2))
Y = torch.randn((sizey, 2))
for i in range(5):
X[i, i % 2] = torch.nan
for i in range(4):
Y[i + 1, i % 2] = torch.nan
knn_imputer = sklearn.impute.KNNImputer(n_neighbors=3)
knn_imputer.fit(X)
model = TorchKNNImputer(knn_imputer)
p1 = knn_imputer.transform(Y)
p2 = model.transform(
torch.from_numpy(knn_imputer._mask_fit_X),
torch.from_numpy(knn_imputer._valid_mask),
torch.from_numpy(knn_imputer._fit_X),
Y,
)
# d = max_diff(p1, p2)
# assert d["abs"] < 1e-5, f"Discrepancies for size={size} and sizey={sizey}, d={d}"
# print(f"knn discrepancies for size={size}: {d}")
p1 = knn_imputer.transform(Y[1:2])
p2 = model.transform(
torch.from_numpy(knn_imputer._mask_fit_X),
torch.from_numpy(knn_imputer._valid_mask),
torch.from_numpy(knn_imputer._fit_X),
Y[1:2],
)
# d = max_diff(p1, p2)
# assert d["abs"] < 1e-5, f"Discrepancies for size={size} and sizey={sizey}, d={d}"
# print(f"knn discrepancies for size={size}: {d}")
return knn_imputer, Y
knn5, Y10 = validate(5, 10)
knn50, Y40 = validate(50, 40)
inputs = [
(
(
torch.from_numpy(knn50._mask_fit_X),
torch.from_numpy(knn50._valid_mask),
torch.from_numpy(knn50._fit_X),
Y40,
),
{},
),
(
(
torch.from_numpy(knn5._mask_fit_X),
torch.from_numpy(knn5._valid_mask),
torch.from_numpy(knn5._fit_X),
Y10,
),
{},
),
]
DYNAMIC = torch.export.Dim.DYNAMIC
dynamic_shapes = ({0: DYNAMIC}, {}, {0: DYNAMIC}, {0: DYNAMIC})
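# Note (added comment): dynamic_shapes mirrors the forward signature
# (_mask_fit_X, _valid_mask, _fit_X, X). Dim 0 of the fitted mask, the
# training data and the input batch are marked DYNAMIC, while _valid_mask
# keeps a static shape.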
ep = torch.export.export(TorchKNNImputer(knn5), inputs[0][0], dynamic_shapes=dynamic_shapes)
print(ep)
```
### Versions
```
Collecting environment information...
PyTorch version: 2.7.0.dev20250207+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.31.5
Libc version: glibc-2.35
Python version: 3.12.8 (main, Dec 4 2024, 08:54:12) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.6.68
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4060 Laptop GPU
Nvidia driver version: 538.92
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.3.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 20
On-line CPU(s) list: 0-19
Vendor ID: GenuineIntel
Model name: 13th Gen Intel(R) Core(TM) i7-13800H
CPU family: 6
Model: 186
Thread(s) per core: 2
Core(s) per socket: 10
Socket(s): 1
Stepping: 2
BogoMIPS: 5836.82
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology tsc_reliable nonstop_tsc cpuid pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves avx_vnni umip waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize flush_l1d arch_capabilities
Virtualization: VT-x
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 480 KiB (10 instances)
L1i cache: 320 KiB (10 instances)
L2 cache: 12.5 MiB (10 instances)
L3 cache: 24 MiB (1 instance)
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==2.2.2
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] onnx==1.18.0
[pip3] onnx-extended==0.3.0
[pip3] onnxruntime_extensions==0.13.0
[pip3] onnxruntime-training==1.21.0+cu126
[pip3] optree==0.14.0
[pip3] pytorch-triton==3.2.0+git4b3bb1f8
[pip3] torch==2.7.0.dev20250207+cu126
[pip3] torch_geometric==2.4.0
[pip3] torchaudio==2.6.0.dev20250208+cu126
[pip3] torchvision==0.22.0.dev20250208+cu126
[conda] Could not collect
```
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4
| true
|
2,848,655,783
|
[BE]: Try to remove unused type ignores - attempt 1
|
Skylion007
|
open
|
[
"oncall: distributed",
"oncall: jit",
"module: rocm",
"module: cpu",
"open source",
"module: amp (automated mixed precision)",
"Stale",
"release notes: quantization",
"release notes: distributed (c10d)",
"fx",
"ciflow/mps",
"module: inductor",
"module: dynamo",
"ciflow/inductor",
"release notes: distributed (checkpoint)",
"module: compiled autograd",
"oncall: distributed checkpointing",
"release notes: inductor (aoti)"
] | 2
|
COLLABORATOR
|
Fixes #ISSUE_NUMBER - generated using mypy_clean_slate
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @mingfeima @XiaobingSuper @ashokei @jingxu10 @jerryzh168 @mcarilli @ptrblck @leslie-fang-intel @ezyang @SherlockNoMad @voznesenskym @penguinwu @Guobing-Chen @zhuhaozhe @blzheng @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov @xmfan @LucasLLC @pradeepfn @kwen2501 @c-p-i-o @yf225 @MeetVadakkanchery @mhorowitz @ekr0
| true
|
2,848,640,853
|
DISABLED test_avoid_register_spilling_cuda (__main__.BenchmarkFusionCudaTest)
|
pytorch-bot[bot]
|
closed
|
[
"module: rocm",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 6
|
NONE
|
Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_avoid_register_spilling_cuda&suite=BenchmarkFusionCudaTest&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/37053115710).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 5 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_avoid_register_spilling_cuda`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/inductor/test_benchmark_fusion.py", line 168, in test_avoid_register_spilling
_, out_code2 = run_and_get_code(foo_c, m, inp)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/utils.py", line 1486, in run_and_get_code
result = fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 570, in _fn
return fn(*args, **kwargs)
File "/var/lib/jenkins/pytorch/test/inductor/test_benchmark_fusion.py", line 133, in foo
def foo(m, inp):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 749, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1199, in forward
return compiled_fn(full_args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 325, in runtime_wrapper
all_outs = call_func_at_runtime_with_args(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/utils.py", line 126, in call_func_at_runtime_with_args
out = normalize_as_list(f(args))
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 686, in inner_fn
outs = compiled_fn(args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 492, in wrapper
return compiled_fn(runtime_args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/output_code.py", line 460, in __call__
return self.current_callable(inputs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/utils.py", line 2239, in run
return model(new_inputs)
File "/tmp/tmppa0oeeb9/3l/c3l75isqnsqheyyrfr3gltsefouscov2bnjxoqepsl5o45lxtsqr.py", line 356, in call
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 906, in run
if launcher.store_cubin and (not benchmark_run or not self.cuda_kernel_saved):
AttributeError: 'NoneType' object has no attribute 'store_cubin'
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_benchmark_fusion.py BenchmarkFusionCudaTest.test_avoid_register_spilling_cuda
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_benchmark_fusion.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @wdvr @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,848,566,175
|
torch.from_numpy() raises TypeError: “expected np.ndarray (got numpy.ndarray)” in PyTorch 2.6.0
|
xiaoran007
|
closed
|
[
"needs reproduction",
"triaged",
"module: numpy"
] | 2
|
NONE
|
### 🐛 Describe the bug
In PyTorch 2.6.0, the torch.from_numpy() function raises the following error when passing a NumPy array:
> TypeError: expected np.ndarray (got numpy.ndarray)
**To reproduce**, run the following minimal example (which can be found in [document](https://pytorch.org/docs/stable/generated/torch.from_numpy.html)):
```python
import numpy
import torch
a = numpy.array([1, 2, 3])
t = torch.from_numpy(a)
```
**Environment**
• PyTorch version: 2.6.0+cu124
• NumPy version: 1.26.4
• Python version: 3.9
• OS: Linux (Ubuntu 22.04.5 LTS)
• Installation method: pip
Here is an example:
<img width="558" alt="Image" src="https://github.com/user-attachments/assets/080a8047-a4f9-4852-8968-c2e2ed17cc8a" />
### Versions
Collecting environment information...
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.9.21 (main, Dec 11 2024, 16:24:11) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-130-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA TITAN Xp
Nvidia driver version: 550.120
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.6.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 56
On-line CPU(s) list: 0-55
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) CPU E5-2680 v4 @ 2.40GHz
CPU family: 6
Model: 79
Thread(s) per core: 2
Core(s) per socket: 14
Socket(s): 2
Stepping: 1
CPU max MHz: 3300.0000
CPU min MHz: 1200.0000
BogoMIPS: 4799.96
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single pti intel_ppin ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm rdt_a rdseed adx smap intel_pt xsaveopt cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts md_clear flush_l1d
Virtualization: VT-x
L1d cache: 896 KiB (28 instances)
L1i cache: 896 KiB (28 instances)
L2 cache: 7 MiB (28 instances)
L3 cache: 70 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-13,28-41
NUMA node1 CPU(s): 14-27,42-55
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; IBRS_FW; STIBP conditional; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; Clear CPU buffers; SMT vulnerable
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.6.0
[pip3] torchaudio==2.6.0
[pip3] torchvision==0.21.0
[pip3] triton==3.2.0
[conda] blas 1.0 mkl https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
[conda] mkl 2023.1.0 h213fc3f_46344 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
[conda] mkl-service 2.4.0 py39h5eee18b_2 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
[conda] mkl_fft 1.3.11 py39h5eee18b_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
[conda] mkl_random 1.2.8 py39h1128e8f_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
[conda] numpy 2.0.2 pypi_0 pypi
[conda] numpy-base 1.26.4 py39hb5e798b_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] torch 2.6.0 pypi_0 pypi
[conda] torchaudio 2.6.0 pypi_0 pypi
[conda] torchvision 0.21.0 pypi_0 pypi
[conda] triton 3.2.0 pypi_0 pypi
cc @mruberry @rgommers
| true
|
2,848,543,654
|
torch.argsort() outputs wrongly
|
atinary-lbrey
|
closed
|
[] | 2
|
NONE
|
### 🐛 Describe the bug
torch.argsort() returns the wrong indices. Here's the code I am using:
```python
import torch
torch.manual_seed(0)
x = torch.randn(5,2)
print(x)
print(torch.argsort(x, dim=0))
```
and the returns of this are
```
tensor([[ 1.5410, -0.2934],
[-2.1788, 0.5684],
[-1.0845, -1.3986],
[ 0.4033, 0.8380],
[-0.7193, -0.4033]])
tensor([[1, 2],
[2, 4],
[4, 0],
[3, 1],
[0, 3]])
```
My expected return for `torch.argsort(x, dim=0)` would be
```
tensor([[0, 2],
[4, 1],
[3, 4],
[1, 0],
[2, 3]])
```
NOTE: Using gather afterwards seems to work just fine but the actual displayed values are completely off. Also, adding the kwarg `stable=True` doesn't help.
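For completeness, a minimal sketch (my addition, not part of the original report) of the gather usage mentioned in the note above:

```python
import torch

torch.manual_seed(0)
x = torch.randn(5, 2)
idx = torch.argsort(x, dim=0)
# Gathering with the returned indices yields each column in ascending order:
# idx[k, j] is the row index of the k-th smallest element of column j.
print(torch.gather(x, 0, idx))
```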
### Versions
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: Could not collect
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.10.12 (main, Jan 17 2025, 14:35:34) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-131-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 36
On-line CPU(s) list: 0-35
Vendor ID: AuthenticAMD
Model name: AMD EPYC 9354 32-Core Processor
CPU family: 25
Model: 17
Thread(s) per core: 1
Core(s) per socket: 36
Socket(s): 1
Stepping: 1
BogoMIPS: 6490.32
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cr8_legacy abm sse4a misalignsse 3dnowprefetch bpext invpcid_single ssbd ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx512_bf16 clzero xsaveerptr arat avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid flush_l1d
Hypervisor vendor: Xen
Virtualization type: full
L1d cache: 1.1 MiB (36 instances)
L1i cache: 1.1 MiB (36 instances)
L2 cache: 36 MiB (36 instances)
L3 cache: 9 GiB (36 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-35
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; IBRS_FW; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] botorch==0.13.0
[pip3] gpytorch==1.14
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.6.0
[pip3] triton==3.2.0
[conda] Could not collect
| true
|
2,848,480,711
|
[BE][Ez]: Update fmtlib submodule to 11.1.3
|
Skylion007
|
closed
|
[
"open source",
"better-engineering",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3
|
COLLABORATOR
|
This submodule update fixes a number of miscellaneous issues: ABI compatibility, compiler warnings, workarounds for older compilers, performance, and edge cases in formatting.
| true
|
2,848,469,200
|
[Dynamo] support `isinstance(...)` check for type tuple
|
XuehaiPan
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #146921
* __->__ #146984
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,848,447,880
|
Porting Pytorch to AIX Operating System.
|
KamathForAIX
|
open
|
[
"oncall: jit",
"triaged",
"open source",
"release notes: jit",
"module: dynamo",
"ciflow/inductor"
] | 10
|
NONE
|
Closes #146982
Fixes #146982
PyTorch compiles on AIX OS level 7.3 and above only, and currently only with GCC version >= 12. PyTorch runs in CPU-only mode on AIX.
To port PyTorch in AIX, we need the below changes:
1: Change RAW -> RAWDATA. This is done to avoid header-file collisions on AIX, where RAW is defined similarly to the link [here](https://chromium.googlesource.com/native_client/nacl-glibc.git/+/glibc-2.9/sysdeps/unix/bsd/bsd4.4/bits/ioctls.h#200). Renaming it to RAWDATA keeps things clean, rather than undefining RAW in the PyTorch code.
To be more specific:
we need the include file termio.h on AIX, and line 36 of this header includes #include <sys/ioctl.h>.
The ioctl.h header file in AIX defines the macro
`#define RAW 0x00000020 /* no i/o processing */`
2: extern thread_local variables are marked as weak and hidden on AIX, which makes them unavailable while linking the library and hence leads to undefined-symbol issues. We take the encapsulated route used for Microsoft and iPhone builds, as [here](https://github.com/pytorch/pytorch/pull/146983/files#diff-b3651b15177d065d4a02b0bd03703b6df569fc7a53ce88c4c6dbd6145adf35f6R16).
4: AIX does not use glibc but has its own libc, in which __assert_fail() is present; hence I declared it [here](https://github.com/pytorch/pytorch/pull/146983/files#diff-8b8e2531c9927f406bcab344a60870250199dfd4909315296b7de13f3cb5d281R409).
5: The change from SHARED to MODULE [here](https://github.com/pytorch/pytorch/pull/146983/files#diff-c5ee05f1e918772792ff6f2a3f579fc2f182e57b1709fd786ef6dc711fd68b27R1644) is for the following reason: on AIX, we archive shared libraries so that multiple versions of a library can coexist in the same archive. So when we say SHARED in CMake, we create a ".a", i.e. an archived shared library, but dlopen() understands only ".so". Shared modules in CMake on AIX are ".so". This should not affect our Linux mates :)
6: Lastly, we would like to give our users the flexibility to use blibpath and set their install_rpath, hence the change [here](https://github.com/pytorch/pytorch/pull/146983/files#diff-60f61ab7a8d1910d86d9fda2261620314edcae5894d5aaa236b821c7256badd7R1036).
If a user sets LDFLAGS, for example export LDFLAGS="-lexpat -lncurses -Wl,-blibpath:/opt/freeware/lib/pthread:/opt/freeware/lib64:/opt/freeware/lib:/usr/lib:/lib ", then we need to pick the install_rpath from blibpath; otherwise we use whatever setup.py calculates.
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @chauhang @amjames @StrongerXi
| true
|
2,848,435,788
|
Porting Pytorch to AIX Operating System.
|
KamathForAIX
|
open
|
[
"module: build",
"triaged",
"enhancement",
"module: POWER"
] | 1
|
NONE
|
Hi everyone,
AIX is a UNIX-based operating system widely used on PowerPC enterprise hardware. Recently, we have modernized AIX with AI/ML packages such as NumPy, SciPy, Pandas, and OpenBLAS, and have also ported ONNX Runtime, which AIX users rely on. We also have build tools such as CMake and Meson working on AIX.
PyTorch being a popular deep learning framework, we would like to make it work on AIX so that AIX users can use it to explore deep learning.
I have the code changes required to port PyTorch to AIX and will raise a pull request for review/documentation purposes.
Kindly let us know whether AIX support can become part of the PyTorch codebase, as we would like to contribute :).
cc @malfet @seemethere
| true
|
2,848,420,310
|
[ROCm][Windows] Fix clang-cl error related to -Wmissing prototypes enabled
|
m-gallus
|
closed
|
[
"module: rocm",
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 9
|
CONTRIBUTOR
|
Some of the Windows files (fused_kernels.cpp or temp_file.h) contain code that fails to compile when this flag is enabled with clang-cl.
This PR resolves the issue by ensuring that, even when building with clang-cl, those flags are not included on Windows.
Alternatively, if needed, I can fix the files mentioned so they pass under this flag.
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,848,377,220
|
Feature Request: Add dlfloat16 support as a new dtype
|
rebel-seungchul
|
open
|
[
"triaged",
"enhancement",
"needs research"
] | 2
|
NONE
|
### 🚀 The feature, motivation and pitch
I propose adding support for dlfloat16 as a new dtype in PyTorch. This feature would allow users to utilize the dlfloat16 data type natively within the PyTorch framework.
- Hardware Compatibility: Some specialized hardware architectures are optimized for dlfloat16, and native support would enable seamless integration with these systems.
- Performance Optimization: dlfloat16 can offer a balance between computational efficiency and numerical precision, potentially improving performance in certain deep learning tasks.
- Flexibility: Adding dlfloat16 would provide users with more options for fine-tuning their models' precision and memory usage.
### Alternatives
If adding dlfloat16 as a native dtype is not feasible, we kindly request guidance on alternative approaches.
- Custom Extensions: Is it possible to implement dlfloat16 support through PyTorch's extension mechanisms?
- Emulation Layer: Would it be feasible to create an emulation layer that simulates dlfloat16 behavior using existing data types?
This addition would greatly enhance flexibility for users working with emerging hardware and precision formats. I appreciate your consideration and look forward to your feedback.
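Regarding the "Emulation Layer" alternative, here is a minimal sketch (my addition, under the assumption that dlfloat16 uses the commonly cited 1 sign / 6 exponent / 9 mantissa bit layout). It only truncates the fp32 mantissa to 9 bits; the narrower exponent range and rounding mode are not modelled, so this is an illustration rather than a faithful dlfloat16 implementation:

```python
import torch

def emulate_dlfloat16(x: torch.Tensor) -> torch.Tensor:
    # Reinterpret the fp32 bits as int32 and clear the low mantissa bits,
    # keeping only 9 mantissa bits (assumed dlfloat16 layout).
    assert x.dtype == torch.float32
    bits = x.view(torch.int32)
    keep = 9                      # mantissa bits to keep
    drop = 23 - keep              # low mantissa bits to clear
    mask = torch.tensor(-(1 << drop), dtype=torch.int32)  # ...111 000...0
    return (bits & mask).view(torch.float32)

x = torch.randn(4)
print(x)
print(emulate_dlfloat16(x))
```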
| true
|
2,848,277,679
|
[ROCm] Update meta_registration for efficient attention
|
AmdSampsa
|
closed
|
[
"module: rocm",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ciflow/rocm"
] | 12
|
COLLABORATOR
|
Fixes a series of failing and skipped unit tests.
For NVIDIA hardware, the last dimension of logsumexp is required to be a multiple of 32; this is not the case for ROCm.
A related issue: https://github.com/pytorch/pytorch/issues/146848
The unit tests in question:
```bash
inductor.test_fused_attention SDPAPatternRewriterCudaDynamicTests test_sdpa_prev_13_cuda
inductor.test_fused_attention SDPAPatternRewriterCudaDynamicTests test_sdpa_prev_14_cuda
inductor.test_fused_attention SDPAPatternRewriterCudaDynamicTests test_sdpa_prev_15_cuda
inductor.test_fused_attention SDPAPatternRewriterCudaDynamicTests test_sdpa_rewriter_11_cuda
inductor.test_fused_attention SDPAPatternRewriterCudaDynamicTests test_sdpa_rewriter_14_cuda
inductor.test_fused_attention SDPAPatternRewriterCudaDynamicTests test_sdpa_rewriter_15_cuda
inductor.test_fused_attention SDPAPatternRewriterCudaDynamicTests test_sdpa_rewriter_17_cuda
inductor.test_fused_attention SDPAPatternRewriterCudaDynamicTests test_sdpa_rewriter_1_cuda
inductor.test_fused_attention SDPAPatternRewriterCudaDynamicTests test_sdpa_rewriter_1_freezing
inductor.test_fused_attention SDPAPatternRewriterCudaDynamicTests test_sdpa_rewriter_2_cuda
inductor.test_fused_attention SDPAPatternRewriterCudaDynamicTests test_sdpa_rewriter_3_cuda
inductor.test_fused_attention SDPAPatternRewriterCudaDynamicTests test_sdpa_rewriter_4_cuda
inductor.test_fused_attention SDPAPatternRewriterCudaDynamicTests test_sdpa_rewriter_6_cuda
inductor.test_fused_attention SDPAPatternRewriterCudaTests test_sdpa_prev_13_cuda
inductor.test_fused_attention SDPAPatternRewriterCudaTests test_sdpa_prev_14_cuda
inductor.test_fused_attention SDPAPatternRewriterCudaTests test_sdpa_prev_15_cuda
inductor.test_fused_attention SDPAPatternRewriterCudaTests test_sdpa_rewriter_11_cuda
inductor.test_fused_attention SDPAPatternRewriterCudaTests test_sdpa_rewriter_14_cuda
inductor.test_fused_attention SDPAPatternRewriterCudaTests test_sdpa_rewriter_15_cuda
inductor.test_fused_attention SDPAPatternRewriterCudaTests test_sdpa_rewriter_17_cuda
inductor.test_fused_attention SDPAPatternRewriterCudaTests test_sdpa_rewriter_1_cuda
inductor.test_fused_attention SDPAPatternRewriterCudaTests test_sdpa_rewriter_1_freezing
inductor.test_fused_attention SDPAPatternRewriterCudaTests test_sdpa_rewriter_2_cuda
inductor.test_fused_attention SDPAPatternRewriterCudaTests test_sdpa_rewriter_3_cuda
inductor.test_fused_attention SDPAPatternRewriterCudaTests test_sdpa_rewriter_4_cuda
inductor.test_fused_attention SDPAPatternRewriterCudaTests test_sdpa_rewriter_6_cuda
```
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,848,148,298
|
DISABLED test_comprehensive_stft_cuda_float32 (__main__.TestInductorOpInfoCUDA)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 3
|
NONE
|
Platforms: inductor
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_comprehensive_stft_cuda_float32&suite=TestInductorOpInfoCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/37047798754).
Over the past 3 hours, it has been determined flaky in 4 workflow(s) with 8 failures and 4 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_comprehensive_stft_cuda_float32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `inductor/test_torchinductor_opinfo.py`
cc @clee2000 @wdvr @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @aakhundov
| true
|
2,848,130,788
|
How to install Torch version that supports RTX 5090 on Windows? - CUDA kernel errors might be asynchronously reported at some other API call
|
FurkanGozukara
|
closed
|
[
"high priority",
"needs reproduction",
"module: build",
"module: windows",
"module: cuda",
"triaged"
] | 11
|
NONE
|
I have purchased an RTX 5090 just to test AI apps.
Currently I am getting this error in every app.
I need torch for a Python 3.10 venv on Windows.
I am OK with installing a nightly version, etc.; I just need the install command, please.
```
Traceback (most recent call last):
File "E:\trellis_v5\TRELLIS\app.py", line 401, in <module>
pipeline = TrellisImageTo3DPipeline.from_pretrained("JeffreyXiang/TRELLIS-image-large")
File "E:\trellis_v5\TRELLIS\trellis\pipelines\trellis_image_to_3d.py", line 56, in from_pretrained
pipeline = super(TrellisImageTo3DPipeline, TrellisImageTo3DPipeline).from_pretrained(path)
File "E:\trellis_v5\TRELLIS\trellis\pipelines\base.py", line 39, in from_pretrained
_models = {
File "E:\trellis_v5\TRELLIS\trellis\pipelines\base.py", line 40, in <dictcomp>
k: models.from_pretrained(f"{path}/{v}")
File "E:\trellis_v5\TRELLIS\trellis\models\__init__.py", line 59, in from_pretrained
model = __getattr__(config['name'])(**config['args'], **kwargs)
File "E:\trellis_v5\TRELLIS\trellis\models\structured_latent_vae\decoder_mesh.py", line 105, in __init__
self.mesh_extractor = SparseFeatures2Mesh(res=self.resolution*4, use_color=self.rep_config.get('use_color', False))
File "E:\trellis_v5\TRELLIS\trellis\representations\mesh\cube2mesh.py", line 68, in __init__
verts, cube = construct_dense_grid(self.res, self.device)
File "E:\trellis_v5\TRELLIS\trellis\representations\mesh\utils_cube.py", line 11, in construct_dense_grid
vertsid = torch.arange(res_v ** 3, device=device)
RuntimeError: CUDA error: no kernel image is available for execution on the device
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
```
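A minimal diagnostic sketch (my addition, to be run inside the same venv): "no kernel image is available" typically means the installed wheel was not built with kernels for the GPU's compute capability, which can be checked like this:

```python
import torch

print(torch.__version__, torch.version.cuda)
# Architectures the wheel was built for, e.g. ['sm_80', 'sm_90', ...];
# the RTX 5090 (Blackwell, compute capability 12.0) needs sm_120 in this list.
print(torch.cuda.get_arch_list())
# Compute capability of the installed GPU, expected to be (12, 0) here.
print(torch.cuda.get_device_capability(0))
```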
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @malfet @seemethere @peterjc123 @mszhanyi @skyline75489 @nbcsm @iremyux @Blackhex @ptrblck @eqy
| true
|
2,848,126,027
|
Adding sparse_coo_tensor objects together behaves differently with CPU and CUDA
|
aeverallpx
|
open
|
[
"module: sparse",
"triaged"
] | 1
|
NONE
|
If you add two torch.sparse_coo_tensor objects together, the addition behaves differently depending on whether they are on CPU or CUDA.
CPU behaviour: duplicated elements are summed.
CUDA behaviour: all elements are appended, so an extra coalesce() is needed to sum elements.
Example code:
```python
import torch
for device in ['cpu', 'cuda:0']:
indices = torch.tensor([[0,1,1,2],[1,0,0,3]], device=device)
values = torch.tensor([1.,2.,3.,4.], device=device)
print(device)
print(torch.sparse_coo_tensor(indices, values) + torch.sparse_coo_tensor(indices, values), end='\n')
```
Returns:
```
cpu
tensor(indices=tensor([[0, 1, 1, 2],
[1, 0, 0, 3]]),
values=tensor([2., 4., 6., 8.]),
size=(3, 4), nnz=4, layout=torch.sparse_coo)
cuda:0
tensor(indices=tensor([[0, 1, 1, 2, 0, 1, 1, 2],
[1, 0, 0, 3, 1, 0, 0, 3]]),
values=tensor([1., 2., 3., 4., 1., 2., 3., 4.]),
device='cuda:0', size=(3, 4), nnz=8, layout=torch.sparse_coo)
```
I'm not sure which is the preferred behaviour, since I deliberately gave an example with duplicated indices and neither device coalesced the existing tensors. Summing elements (as in the CPU behaviour) is probably what is usually desired; the append version caused a problem in our code.
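A minimal sketch (my addition) of the extra coalesce() call mentioned above, which collapses the appended CUDA entries back to the summed, nnz=4 result seen on CPU:

```python
import torch

device = 'cuda:0' if torch.cuda.is_available() else 'cpu'
indices = torch.tensor([[0, 1, 1, 2], [1, 0, 0, 3]], device=device)
values = torch.tensor([1., 2., 3., 4.], device=device)
s = torch.sparse_coo_tensor(indices, values) + torch.sparse_coo_tensor(indices, values)
# coalesce() sums values that share the same index, so duplicates are merged.
print(s.coalesce())
```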
### Versions
Collecting environment information...
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.1 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.39
Python version: 3.13.1 | packaged by conda-forge | (main, Jan 13 2025, 09:53:10) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-5.10.233-223.887.amzn2.x86_64-x86_64-with-glibc2.39
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: Tesla T4
Nvidia driver version: 550.144.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 8
On-line CPU(s) list: 0-7
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 4
Socket(s): 1
Stepping: 7
BogoMIPS: 4999.99
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves ida arat pku ospke avx512_vnni
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 128 KiB (4 instances)
L1i cache: 128 KiB (4 instances)
L2 cache: 4 MiB (4 instances)
L3 cache: 35.8 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-7
Vulnerability Gather data sampling: Unknown: Dependent on hypervisor status
Vulnerability Itlb multihit: KVM: Mitigation: VMX unsupported
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Vulnerable
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, STIBP disabled, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.6.0
[pip3] triton==3.2.0
[conda] cudatoolkit 11.8.0 h4ba93d1_13 conda-forge
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] torch 2.6.0 pypi_0 pypi
[conda] triton 3.2.0 pypi_0 pypi
cc @alexsamardzic @nikitaved @pearu @cpuhrsch @amjames @bhosmer @jcaip
| true
|
2,847,804,820
|
[inductor][cpu]pyhpc_equation_of_state multiple thread performance failure in 2025-02-10 nightly release
|
zxd1997066
|
closed
|
[
"oncall: cpu inductor"
] | 3
|
CONTRIBUTOR
|
### 🐛 Describe the bug
pyhpc_equation_of_state multiple thread performance failure
the bad commit: 68cf36d5ab6165372160f65eb84e13d0f8dbc5dc
```
/workspace/pytorch# bash inductor_single_run.sh multiple inference performance torchbench pyhpc_equation_of_state amp first static cpp
loading model: 0it [00:00, ?it/s]
loading model: 0it [00:00, ?it/s]
cpu eval pyhpc_equation_of_state
ERROR:common:Backend dynamo failed in warmup()
Traceback (most recent call last):
File "/workspace/pytorch/benchmarks/dynamo/common.py", line 3413, in warmup
fn(model, example_inputs)
File "/workspace/pytorch/torch/_dynamo/eval_frame.py", line 574, in _fn
raise e.remove_dynamo_frames() from None # see TORCHDYNAMO_VERBOSE=1
File "/workspace/pytorch/torch/_inductor/compile_fx.py", line 752, in _compile_fx_inner
raise InductorError(e, currentframe()).with_traceback(
File "/workspace/pytorch/torch/_inductor/compile_fx.py", line 737, in _compile_fx_inner
mb_compiled_graph = fx_codegen_and_compile(
File "/workspace/pytorch/torch/_inductor/compile_fx.py", line 1402, in fx_codegen_and_compile
return scheme.codegen_and_compile(gm, example_inputs, inputs_to_check, graph_kwargs)
File "/workspace/pytorch/torch/_inductor/compile_fx.py", line 1122, in codegen_and_compile
compiled_fn = graph.compile_to_module().call
File "/workspace/pytorch/torch/_inductor/graph.py", line 2018, in compile_to_module
return self._compile_to_module()
File "/workspace/pytorch/torch/_inductor/graph.py", line 2060, in _compile_to_module
mod = PyCodeCache.load_by_key_path(
File "/workspace/pytorch/torch/_inductor/codecache.py", line 2757, in load_by_key_path
mod = _reload_python_module(key, path)
File "/workspace/pytorch/torch/_inductor/runtime/compile_tasks.py", line 51, in _reload_python_module
exec(code, mod.__dict__, mod.__dict__)
File "/tmp/torchinductor_root/3r/c3r47djmrqz3gfwdvxp5sr6btmbqoald24yucqtoqdo6qxzzsudk.py", line 31, in <module>
cpp_fused_add_div_log_mul_pow_reciprocal_sqrt_sub_0 = async_compile.cpp_pybinding(['double*', 'double*', 'double*', 'const double*', 'const double*', 'const double*', 'double*', 'double*', 'double*', 'double*', 'double*', 'double*', 'double*', 'double*', 'double*'], '''
File "/workspace/pytorch/torch/_inductor/async_compile.py", line 274, in cpp_pybinding
return CppPythonBindingsCodeCache.load_pybinding(argtypes, source_code)
File "/workspace/pytorch/torch/_inductor/codecache.py", line 2259, in load_pybinding
return cls.load_pybinding_async(*args, **kwargs)()
File "/workspace/pytorch/torch/_inductor/codecache.py", line 2251, in future
result = get_result()
File "/workspace/pytorch/torch/_inductor/codecache.py", line 2042, in load_fn
result = worker_fn()
File "/workspace/pytorch/torch/_inductor/codecache.py", line 2082, in _worker_compile_cpp
cpp_builder.build()
File "/workspace/pytorch/torch/_inductor/cpp_builder.py", line 1524, in build
run_compile_cmd(build_cmd, cwd=_build_tmp_dir)
File "/workspace/pytorch/torch/_inductor/cpp_builder.py", line 347, in run_compile_cmd
_run_compile_cmd(cmd_line, cwd)
File "/workspace/pytorch/torch/_inductor/cpp_builder.py", line 342, in _run_compile_cmd
raise exc.CppCompileError(cmd, output) from e
torch._inductor.exc.InductorError: CppCompileError: C++ compile error
```
the last good commit: 8e56d713c98da9587440c708f86aaef5a3a73dc3
```
/workspace/pytorch# bash inductor_single_run.sh multiple inference performance torchbench pyhpc_equation_of_state amp first static cpp
loading model: 0it [00:00, ?it/s]
cpu eval pyhpc_equation_of_state
running benchmark: 100%|██████████████████████████████████████████████████████████████| 50/50 [00:01<00:00, 27.79it/s]
14.957x
WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
dev,name,batch_size,speedup,abs_latency,compilation_latency,compression_ratio,eager_peak_mem,dynamo_peak_mem,calls_captured,unique_graphs,graph_breaks,unique_graph_breaks,autograd_captures,autograd_compiles,cudagraph_skips
cpu,pyhpc_equation_of_state,0,0.000000,0.000000,0,0,0,0,0,0,0,0,0,0,0
cpu,pyhpc_equation_of_state,1048576,14.957284,2.031060,39.207838,0.932945,145.388749,155.838464,368,1,0,0,0,0,0
```
### Versions
<p>SW info</p><table border="1" class="dataframe table">
<thead>
<tr style="text-align: right;">
<th>name</th>
<th>target_branch</th>
<th>target_commit</th>
<th>refer_branch</th>
<th>refer_commit</th>
</tr>
</thead>
<tbody>
<tr>
<td>torchbench</td>
<td>main</td>
<td>373ffb19</td>
<td>main</td>
<td>766a5e3a</td>
</tr>
<tr>
<td>torch</td>
<td>main</td>
<td>6a9a02acbe34a9d810c8bf56c865b9d0687a3051</td>
<td>main</td>
<td>8cc415774f47b5a50077f72ea493b71b8101e48d</td>
</tr>
<tr>
<td>torchvision</td>
<td>main</td>
<td>0.19.0a0+d23a6e1</td>
<td>main</td>
<td>0.19.0a0+d23a6e1</td>
</tr>
<tr>
<td>torchtext</td>
<td>main</td>
<td>0.16.0a0+b0ebddc</td>
<td>main</td>
<td>0.16.0a0+b0ebddc</td>
</tr>
<tr>
<td>torchaudio</td>
<td>main</td>
<td>2.6.0a0+2709b65</td>
<td>main</td>
<td>2.6.0a0+b6d4675</td>
</tr>
<tr>
<td>torchdata</td>
<td>main</td>
<td>0.7.1a0+0790338</td>
<td>main</td>
<td>0.7.1a0+0790338</td>
</tr>
<tr>
<td>dynamo_benchmarks</td>
<td>main</td>
<td>nightly</td>
<td>main</td>
<td>nightly</td>
</tr>
</tbody>
</table>
Repro:
[inductor_single_run.sh](https://github.com/chuanqi129/inductor-tools/blob//main/scripts/modelbench/inductor_single_run.sh)
bash inductor_single_run.sh multiple inference performance torchbench pyhpc_equation_of_state amp first static cpp
Suspected guilty commit: https://github.com/pytorch/pytorch/commit/68cf36d5ab6165372160f65eb84e13d0f8dbc5dc
[torchbench-pyhpc_equation_of_state-inference-amp-dynamic-default-multiple-performance-crash_guilty_commit.log](https://github.com/user-attachments/files/18766189/torchbench-pyhpc_equation_of_state-inference-amp-dynamic-default-multiple-performance-crash_guilty_commit.log)
cc @chuanqi129
| true
|
2,847,746,562
|
[inductor][cpu]detectron2_fcos_r_50_fpn multiple thread accuracy failure in 2025-02-10 nightly release
|
zxd1997066
|
open
|
[
"oncall: pt2",
"oncall: cpu inductor"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
detectron2_fcos_r_50_fpn accuracy failure
the bad commit: d1f82de2bf4ce4d4461791a9c9b2e759202db0bb
```
/workspace/pytorch# bash inductor_single_run.sh multiple inference accuracy torchbench detectron2_fcos_r_50_fpn amp first static cpp
Testing with cpp wrapper.
Testing with inductor.
multi-threads testing....
loading model: 0it [00:03, ?it/s]
cpu eval detectron2_fcos_r_50_fpn
WARNING:common:fp64 golden ref were not generated for detectron2_fcos_r_50_fpn. Setting accuracy check to cosine
WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
fail_accuracy
WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
dev,name,batch_size,accuracy,calls_captured,unique_graphs,graph_breaks,unique_graph_breaks,autograd_captures,autograd_compiles,cudagraph_skips,compilation_latency
cpu,detectron2_fcos_r_50_fpn,4,fail_accuracy,947,31,22,4,0,0,0,117.363322
```
the last good commit: 3e135993bd0fa08cbff565ae76bb15cb08e1d6d0
```
/workspace/pytorch# bash inductor_single_run.sh multiple inference accuracy torchbench detectron2_fcos_r_50_fpn amp first static cpp
Testing with cpp wrapper.
Testing with inductor.
multi-threads testing....
loading model: 0it [00:03, ?it/s]
cpu eval detectron2_fcos_r_50_fpn
WARNING:common:fp64 golden ref were not generated for detectron2_fcos_r_50_fpn. Setting accuracy check to cosine
WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
W0212 09:29:00.900000 16902 torch/_dynamo/convert_frame.py:917] [10/8] torch._dynamo hit config.recompile_limit (8)
W0212 09:29:00.900000 16902 torch/_dynamo/convert_frame.py:917] [10/8] function: 'forward' (/opt/conda/lib/python3.10/site-packages/detectron2/modeling/backbone/resnet.py:194)
W0212 09:29:00.900000 16902 torch/_dynamo/convert_frame.py:917] [10/8] last reason: 10/0: tensor 'L['x']' size mismatch at index 1. expected 64, actual 1024
W0212 09:29:00.900000 16902 torch/_dynamo/convert_frame.py:917] [10/8] To log all recompilation reasons, use TORCH_LOGS="recompiles".
W0212 09:29:00.900000 16902 torch/_dynamo/convert_frame.py:917] [10/8] To diagnose recompilation issues, see https://pytorch.org/docs/main/torch.compiler_troubleshooting.html.
pass
WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
dev,name,batch_size,accuracy,calls_captured,unique_graphs,graph_breaks,unique_graph_breaks,autograd_captures,autograd_compiles,cudagraph_skips,compilation_latency
cpu,detectron2_fcos_r_50_fpn,4,pass,864,40,24,5,0,0,0,122.079369
```
### Versions
<p>SW info</p><table border="1" class="dataframe table">
<thead>
<tr style="text-align: right;">
<th>name</th>
<th>target_branch</th>
<th>target_commit</th>
<th>refer_branch</th>
<th>refer_commit</th>
</tr>
</thead>
<tbody>
<tr>
<td>torchbench</td>
<td>main</td>
<td>373ffb19</td>
<td>main</td>
<td>766a5e3a</td>
</tr>
<tr>
<td>torch</td>
<td>main</td>
<td>6a9a02acbe34a9d810c8bf56c865b9d0687a3051</td>
<td>main</td>
<td>8cc415774f47b5a50077f72ea493b71b8101e48d</td>
</tr>
<tr>
<td>torchvision</td>
<td>main</td>
<td>0.19.0a0+d23a6e1</td>
<td>main</td>
<td>0.19.0a0+d23a6e1</td>
</tr>
<tr>
<td>torchtext</td>
<td>main</td>
<td>0.16.0a0+b0ebddc</td>
<td>main</td>
<td>0.16.0a0+b0ebddc</td>
</tr>
<tr>
<td>torchaudio</td>
<td>main</td>
<td>2.6.0a0+2709b65</td>
<td>main</td>
<td>2.6.0a0+b6d4675</td>
</tr>
<tr>
<td>torchdata</td>
<td>main</td>
<td>0.7.1a0+0790338</td>
<td>main</td>
<td>0.7.1a0+0790338</td>
</tr>
<tr>
<td>dynamo_benchmarks</td>
<td>main</td>
<td>nightly</td>
<td>main</td>
<td>nightly</td>
</tr>
</tbody>
</table>
Repro:
[inductor_single_run.sh](https://github.com/chuanqi129/inductor-tools/blob//main/scripts/modelbench/inductor_single_run.sh)
bash inductor_single_run.sh multiple inference accuracy torchbench detectron2_fcos_r_50_fpn amp first static cpp
Suspected guilty commit: https://github.com/pytorch/pytorch/commit/d1f82de2bf4ce4d4461791a9c9b2e759202db0bb
[torchbench-detectron2_fcos_r_50_fpn-inference-amp-dynamic-default-multiple-accuracy-crash_guilty_commit.log](https://github.com/user-attachments/files/18765821/torchbench-detectron2_fcos_r_50_fpn-inference-amp-dynamic-default-multiple-accuracy-crash_guilty_commit.log)
cc @chauhang @penguinwu @chuanqi129
| true
|
2,847,732,973
|
[associative_scan] compile backend change to "eager"
|
bohnstingl
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo"
] | 16
|
COLLABORATOR
|
This PR fixes some issues with torch export discussed here: https://github.com/pytorch/pytorch/pull/140043#discussion_r1941932960
However, this backend change does still not resolve the failure for specific shapes mentioned here: https://github.com/pytorch/pytorch/issues/137943#issuecomment-2649564994
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames @ydwu4
| true
|
2,847,689,567
|
DISABLED test_per_sample_api_compute_batch_size_not_pytreeable_cpu (__main__.TestExpandedWeightModuleCPU)
|
pytorch-bot[bot]
|
open
|
[
"module: nn",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2"
] | 11
|
NONE
|
Platforms: dynamo
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_per_sample_api_compute_batch_size_not_pytreeable_cpu&suite=TestExpandedWeightModuleCPU&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/37072110412).
Over the past 3 hours, it has been determined flaky in 4 workflow(s) with 4 failures and 4 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_per_sample_api_compute_batch_size_not_pytreeable_cpu`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/test_expanded_weights.py", line 928, in test_per_sample_api_compute_batch_size_not_pytreeable
class NonPytreeableTuple:
File "/opt/conda/envs/py_3.9/lib/python3.9/dataclasses.py", line 1021, in dataclass
return wrap(cls)
File "/opt/conda/envs/py_3.9/lib/python3.9/dataclasses.py", line 1013, in wrap
return _process_class(cls, init, repr, eq, order, unsafe_hash, frozen)
File "/opt/conda/envs/py_3.9/lib/python3.9/dataclasses.py", line 927, in _process_class
_init_fn(flds,
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 1372, in __call__
return self._torchdynamo_orig_callable(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 1156, in __call__
result = self._inner_convert(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 564, in __call__
return _compile(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 1051, in _compile
raise InternalTorchDynamoError(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 1000, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 725, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 759, in _compile_inner
out_code = transform_code_object(code, transform)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/bytecode_transformation.py", line 1404, in transform_code_object
transformations(instructions, code_options)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 235, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 679, in transform
tracer.run()
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 2984, in run
super().run()
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1118, in run
while self.step():
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1028, in step
self.dispatch_table[inst.opcode](self, inst)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 712, in wrapper
return handle_graph_break(self, inst, speculation.reason)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 753, in handle_graph_break
self.output.compile_subgraph(self, reason=reason)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/output_graph.py", line 1012, in compile_subgraph
value.realize()
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/lazy.py", line 67, in realize
self._cache.realize()
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/lazy.py", line 33, in realize
self.vt = VariableTracker.build(tx, self.value, source)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/base.py", line 479, in build
return builder.VariableBuilder(tx, source)(value)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/builder.py", line 383, in __call__
vt = self._wrap(value)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/builder.py", line 610, in _wrap
result = dict(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/builder.py", line 610, in <genexpr>
result = dict(
torch._dynamo.exc.InternalTorchDynamoError: RuntimeError: dictionary changed size during iteration
from user code:
File "/opt/conda/envs/py_3.9/lib/python3.9/dataclasses.py", line 531, in _init_fn
return _create_fn('__init__',
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_DYNAMO=1 python test/test_expanded_weights.py TestExpandedWeightModuleCPU.test_per_sample_api_compute_batch_size_not_pytreeable_cpu
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_expanded_weights.py`
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki @clee2000 @wdvr @chauhang @penguinwu
| true
|
2,847,689,505
|
DISABLED test_cat_max_autotune_extern (__main__.TestMaxAutotune)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 2
|
NONE
|
Platforms: rocm, inductor
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_cat_max_autotune_extern&suite=TestMaxAutotune&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/37002417061).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_cat_max_autotune_extern`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/inductor/test_max_autotune.py", line 822, in test_cat_max_autotune_extern
self._test_cat_max_autotune_impl(using_triton_mm=False)
File "/var/lib/jenkins/pytorch/test/inductor/test_max_autotune.py", line 818, in _test_cat_max_autotune_impl
self.assertEqual(f_c(*inps), f(*inps), atol=0.03, rtol=0.25)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 570, in _fn
return fn(*args, **kwargs)
File "/var/lib/jenkins/pytorch/test/inductor/test_max_autotune.py", line 812, in f
def f(x, y):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 749, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1199, in forward
return compiled_fn(full_args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 325, in runtime_wrapper
all_outs = call_func_at_runtime_with_args(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/utils.py", line 126, in call_func_at_runtime_with_args
out = normalize_as_list(f(args))
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 686, in inner_fn
outs = compiled_fn(args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 492, in wrapper
return compiled_fn(runtime_args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/output_code.py", line 460, in __call__
return self.current_callable(inputs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/utils.py", line 2239, in run
return model(new_inputs)
File "/tmp/tmpy38woj25/ny/cnyfsuczp7mxpc2o4cymqimec43xi3qgbe5mwptv363y4dbonoln.py", line 146, in call
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 906, in run
if launcher.store_cubin and (not benchmark_run or not self.cuda_kernel_saved):
AttributeError: 'NoneType' object has no attribute 'store_cubin'
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_max_autotune.py TestMaxAutotune.test_cat_max_autotune_extern
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_max_autotune.py`
cc @clee2000 @wdvr @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,847,640,368
|
[Inductor] Unify the data type propagation between Triton and CPP Backend
|
DDEle
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"oncall: pt2",
"module: inductor",
"module: dynamo"
] | 10
|
CONTRIBUTOR
|
Fixes #144246
Use `DtypePropagationOpsHandler` for CSE variables of the CPP backend. In addition, add static type checking for the generated CPP code, similar to `config.test_configs.runtime_triton_dtype_assert`.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,847,524,084
|
Symbol problem about static variable in inline function
|
dilililiwhy
|
open
|
[
"module: build",
"module: cpp-extensions",
"triaged"
] | 6
|
CONTRIBUTOR
|
### 🐛 Describe the bug
It seems that a symbol problem (static variable in an inline function, https://github.com/pytorch/pytorch/issues/125465) is exposed, since the nightly build hides torch_python symbols by default starting from 20241216.
> The variable BackendMetaSerialization in function GetBackendMetaSerialization will have different addresses in torch and other third-party modules.
20241215
```
8229: 000000000139f040 1512 OBJECT <OS specific>: 10 DEFAULT 28 _ZZN5torch3jit27GetBackendMetaSerializationEvE24BackendMetaSerialization
9718: 000000000139f030 8 OBJECT <OS specific>: 10 DEFAULT 28 _ZGVZN5torch3jit27GetBackendMetaSerializationEvE24BackendMetaSerialization
41851: 000000000139f030 8 OBJECT <OS specific>: 10 DEFAULT 28 _ZGVZN5torch3jit27GetBackendMetaSerializationEvE24BackendMetaSerialization
41852: 000000000139f040 1512 OBJECT <OS specific>: 10 DEFAULT 28 _ZZN5torch3jit27GetBackendMetaSerializationEvE24BackendMetaSerialization
```
20241216
```
29402: 00000000012784f0 8 OBJECT LOCAL HIDDEN 28 _ZGVZN5torch3jit27GetBackendMetaSerializationEvE24BackendMetaSerialization
29403: 0000000001278500 1512 OBJECT LOCAL HIDDEN 28 _ZZN5torch3jit27GetBackendMetaSerializationEvE24BackendMetaSerialization
```
After registering an additional serialization function in a third-party extension:
```
torch::jit::TensorBackendMetaRegistry(c10::DeviceType::PrivateUse1, &torch_npu::npu_info_serialization, &torch_npu::npu_info_deserialization);
```
The custom fptr will not be called, because the _has_value_ check inside torch/csrc/jit/serialization/pickler.h still returns 0.
```
if (BackendMetaSerialization[device_type].has_value()) {
  // Pass the tensor and metadata map references as parameters to the custom
  // deserialization function.
  BackendMetaPtr fptr = BackendMetaSerialization[device_type].value().second;
  fptr(t, metadata);
}
```
This can be reproduced by testcase _test_open_device_serialization_ in test_cpp_extensions_open_device_registration.py.
```
======================================================================
FAIL: test_open_device_serialization (__main__.TestCppExtensionOpenRgistration)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/opt/_internal/cpython-3.9.21/lib/python3.9/site-packages/torch/testing/_internal/common_utils.py", line 3108, in wrapper
method(*args, **kwargs)
File "/home/wuhy/github/pytorch/test/test_cpp_extensions_open_device_registration.py", line 391, in test_open_device_serialization
self.assertTrue(self.module.check_backend_meta(z1))
AssertionError: False is not true
To execute this test, run the following from the base repo dir:
python test/test_cpp_extensions_open_device_registration.py TestCppExtensionOpenRgistration.test_open_device_serialization
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
----------------------------------------------------------------------
Ran 1 test in 29.115s
FAILED (failures=1)
```
Is there any good solution to this problem?
### Versions
PyTorch version: 2.6.0.dev20241216+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: AlmaLinux 8.10 (Cerulean Leopard) (x86_64)
GCC version: (GCC) 11.2.1 20220127 (Red Hat 11.2.1-9)
Clang version: Could not collect
CMake version: version 3.18.4
Libc version: glibc-2.28
Python version: 3.9.21 (main, Dec 17 2024, 07:34:47) [GCC 14.2.1 20240801 (Red Hat 14.2.1-1)] (64-bit runtime)
Python platform: Linux-3.10.0-1160.119.1.el7.x86_64-x86_64-with-glibc2.28
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Gold 6266C CPU @ 3.00GHz
Stepping: 7
CPU MHz: 3000.000
BogoMIPS: 6000.00
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 32K
L1i cache: 32K
L2 cache: 1024K
L3 cache: 30976K
NUMA node0 CPU(s): 0-31
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc eagerfpu pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 arat avx512_vnni md_clear spec_ctrl intel_stibp flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.24.4
[pip3] torch==2.6.0.dev20241216+cpu
[conda] No relevant packages
cc @malfet @seemethere @zou3519 @xmfan
| true
|
2,847,429,553
|
fix doc string
|
probli
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"release notes: distributed (torchelastic)"
] | 5
|
CONTRIBUTOR
|
Fixes a wrong function name in doc string
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,847,379,763
|
[prototype][not for review] How to use sources for input signature rewriting in export
|
anijain2305
|
closed
|
[
"module: dynamo",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146967
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,847,313,226
|
remove unnecessary xpu availability check when retrieving aot flags
|
jingxu10
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/xpu"
] | 19
|
COLLABORATOR
|
As title
Retrieving the XPU AOT flags that the PyTorch binary was compiled against is not the same as running the binary itself, so it does not seem necessary to check whether an XPU environment is available.
| true
|
2,847,282,769
|
[BE] Unify kernel templates instantiation
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing",
"release notes: mps",
"ciflow/mps"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #146993
* __->__ #146965
By defining a `REGISTER_BINARY_OP` template that can be used to register fmix, fmax, etc.
| true
|
2,847,242,745
|
Feature Request: Interface to Check Compilation Status Inside Compiled Module
|
zhiyuan1i
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamo"
] | 2
|
NONE
|
### 🚀 The feature, motivation and pitch
Dear PyTorch Community,
I hope this message finds you all in good spirits. I'm writing to seek guidance on a couple of aspects related to PyTorch's compilation features, and I also have a feature request.
### Current Situation and Problem
I'm interested in using the `torch.compiler.is_compiling()` API. After delving into the source code, I've discovered that it checks the `torch.compiler._is_compiling_flag`, which is only employed during the compilation process of `torch.export`. This means that the current behavior of `torch.compiler.is_compiling()` doesn't align with my expectation.
Another issue I've encountered is that when I use `model = torch.compile(model)`, it frequently fails to recursively compile all the internal components of the model. This lack of recursive compilation can limit the optimization potential of the overall model.
### Desired Functionality
My ultimate goal is to be able to perform custom operations within the model's internal code based on whether the model or its sub-modules are being compiled. Specifically, I would like to know if there's a way to make `torch.compiler.is_compiling()` return `True` when queried inside the model's code after `torch.compile(model)` has been called. If this were achievable, I could manually utilize the compiled functions within the model, potentially enhancing its performance.
### Feature Request
I would like to propose the addition of an interface that allows us to query whether a module has been compiled from within the module itself. This new interface could be a simple function, for example, `torch.module.is_compiled()`, which would return a boolean indicating the compilation status of the module. Such an interface would provide more flexibility for advanced users to fine-tune their models during the compilation process.
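To make the request concrete, here is a hypothetical usage sketch. Neither `torch.module.is_compiled()` nor `torch.compiler.is_module_compiled()` exists today; the name, signature, and fallback below are purely illustrative of the behavior being asked for.
```python
import torch
import torch.nn as nn

def module_is_compiled(mod: nn.Module) -> bool:
    # Hypothetical query; falls back to False because no such API exists yet.
    probe = getattr(torch.compiler, "is_module_compiled", None)
    return probe(mod) if probe is not None else False

class Block(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(16, 16)

    def forward(self, x):
        if module_is_compiled(self):
            # Pick a code path that only pays off under torch.compile.
            return self.linear(x).relu_()
        return torch.relu(self.linear(x))

model = torch.compile(Block())
print(model(torch.randn(2, 16)).shape)
```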
I would be extremely grateful if the community could offer insights, suggestions, or point me in the right direction on how to achieve the current goal or if there are any plans to implement a feature similar to the one I've requested.
Thank you so much for your time and consideration.
Best regards
### Alternatives
_No response_
### Additional context
_No response_
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames @StrongerXi
| true
|
2,847,227,527
|
Fix clang-tidy warnings in torch/jit
|
cyyever
|
closed
|
[
"oncall: jit",
"triaged",
"open source",
"Merged",
"NNC",
"ciflow/trunk",
"release notes: jit"
] | 6
|
COLLABORATOR
|
Fixes #ISSUE_NUMBER
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| true
|
2,847,117,434
|
Improve typing of torch/_guards.py
|
cyyever
|
open
|
[
"open source",
"Stale",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3
|
COLLABORATOR
|
Fixes #ISSUE_NUMBER
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,847,115,601
|
Port distributed backend tests to Pytest
|
fangchenli
|
open
|
[
"oncall: distributed",
"triaged",
"open source",
"topic: not user facing"
] | 3
|
NONE
|
xref #11578
Output of `python test/distributed/test_backends.py`
```shell
(venv) venvfangchenli@Fangchens-MacBook-Pro-2 pytorch-fangchenli % python test/distributed/test_backends.py
================================================ test session starts ================================================
platform darwin -- Python 3.13.1, pytest-7.3.2, pluggy-1.5.0
rootdir: /Users/fangchenli/Workspace/pytorch-fangchenli
configfile: pytest.ini
plugins: xdoctest-1.1.0, cpp-2.3.0, flakefinder-1.1.0, rerunfailures-14.0, hypothesis-5.35.1, xdist-3.3.1, subtests-0.13.1, typeguard-4.3.0
collected 5 items
Running 5 items in this shard
test/distributed/test_backends.py ..... [100%]
================================================= 5 passed in 0.03s =================================================
```
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,847,072,649
|
Optimize `virtualized.py` typing
|
zeshengzong
|
open
|
[
"open source",
"topic: not user facing",
"module: inductor"
] | 2
|
CONTRIBUTOR
|
Fixes part of #146167
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,847,045,374
|
DISABLED test_insignificant_strides (__main__.SDPAPatternRewriterCudaDynamicTests)
|
jithunnair-amd
|
closed
|
[
"module: rocm",
"triaged",
"skipped",
"module: sdpa"
] | 3
|
COLLABORATOR
|
Platforms: rocm
This test was disabled because it is failing on main branch ([recent examples](https://torch-ci.com/failure?failureCaptures=%5B%22inductor%2Ftest_fused_attention.py%3A%3ASDPAPatternRewriterCudaDynamicTests%3A%3Atest_insignificant_strides%22%5D)).
cc @jeffdaily @sunway513 @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,847,018,474
|
[Inductor][CPP] Fix a CPP GEMM Template output data type issue
|
leslie-fang-intel
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146958
**Summary**
Issue found when fixing https://github.com/pytorch/ao/issues/1662. An FP32 GEMM with a `to_fp16` epilogue node resulted in [generated code](https://gist.github.com/leslie-fang-intel/464fb112abdb105818ae09b057350e84) which failed to compile. The root cause is that we used a slice of the global buffer `Y` as the output of the micro GEMM instead of a local buffer. However, due to the `to_fp16` epilogue node, the global buffer `Y` has a float16 data type, leading to the failure. This fix ensures the use of a local buffer in such cases.
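For context, a rough sketch of the kind of pattern that triggers the issue (an FP32 linear followed by a `to(float16)` epilogue). This is only an approximation; the actual failure additionally requires the CPP GEMM template selected via max-autotune, as exercised by the test below.
```python
import torch

linear = torch.nn.Linear(64, 64)  # FP32 weights

def fn(x):
    # FP32 GEMM whose epilogue casts the result to float16: the global output
    # buffer is fp16 while the micro GEMM accumulates in fp32.
    return linear(x).to(torch.float16)

compiled = torch.compile(fn)
print(compiled(torch.randn(8, 64)).dtype)  # torch.float16
```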
**Test Plan**
```
python -u -m pytest -s -v test/inductor/test_cpu_select_algorithm.py -k test_linear_to_lowp_fp
```
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,846,961,731
|
[cuDNN] cuDNN to 9.7.1.26 for CUDA 12.8
|
tinglvv
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 4
|
COLLABORATOR
|
rebasing for https://github.com/pytorch/pytorch/pull/146717
cc @atalman @malfet @eqy @ptrblck @nWEIdia
| true
|
2,846,945,959
|
[c10d] Consolidate watchdog threads
|
kwen2501
|
open
|
[
"oncall: distributed",
"triaged",
"module: c10d"
] | 0
|
CONTRIBUTOR
|
### 🚀 The feature, motivation and pitch
A watchdog thread is spawned by each ProcessGroupNCCL object today. In addition, there are a heartbeat monitor thread, an on-completion hook thread, etc. As the number of parallel dimensions increases, the thread count quickly goes up, since it is multiplied by 3 per process group.
We should try to consolidate the watchdog threads into a single one shared by all PGs, and do the same for the other two thread types. This will help increase thread utilization and reduce context switching.
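A minimal Python sketch of the consolidation pattern being proposed (the real implementation lives in C++ inside ProcessGroupNCCL; the class and the `check_for_timed_out_work` hook below are illustrative assumptions, not existing APIs):
```python
import threading
import time
import weakref

class SharedWatchdog:
    """One watchdog thread shared by all process groups (illustrative sketch)."""

    def __init__(self, poll_interval_s: float = 1.0):
        self._pgs = weakref.WeakSet()  # registered process-group-like objects
        self._lock = threading.Lock()
        self._poll_interval_s = poll_interval_s
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def register(self, pg):
        # Each process group registers itself instead of spawning its own thread.
        with self._lock:
            self._pgs.add(pg)

    def _run(self):
        while True:
            with self._lock:
                pgs = list(self._pgs)
            for pg in pgs:
                pg.check_for_timed_out_work()  # assumed per-PG hook
            time.sleep(self._poll_interval_s)

# A single process-wide instance replaces one watchdog thread per ProcessGroupNCCL.
_WATCHDOG = SharedWatchdog()
```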
### Alternatives
_No response_
### Additional context
_No response_
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,846,924,502
|
[export] Minor fix to locals
|
angelayi
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: fx",
"fx",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #146939
* __->__ #146955
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
2,846,916,318
|
[cond] make cond re-dispatch in proxy mode
|
ydwu4
|
closed
|
[
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"ci-no-td"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #147130
* #147045
* __->__ #146954
| true
|
2,846,866,335
|
Fix function name in doc string
|
probli
|
closed
|
[
"oncall: distributed",
"release notes: distributed (torchelastic)"
] | 3
|
CONTRIBUTOR
|
The correct function is `record_exception`, not `record`
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,846,841,173
|
[BE] Unskip some tensor creation tests on Mac
|
malfet
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Followup after https://github.com/pytorch/pytorch/pull/145367
One should never use skip, but rather xfail, otherwise one never knows when the test is finally fixed.
`test_float_to_int_conversion_finite` was fixed on MacOS a while back (presumably since the time Intel builds were disabled), while `test_float_to_int_conversion_nonfinite` is fixed by https://github.com/pytorch/pytorch/pull/145367, which selects architecture-appropriate reference values for the Arm ISA.
Note that the result of casting a floating-point value to an integral type is undefined if the floating-point value is outside the integral type's dynamic range.
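For illustration, a tiny example of the undefined-behaviour case mentioned above; the concrete integers produced for non-finite or out-of-range inputs are architecture-dependent (e.g. x86 vs Arm), so no single expected value is shown.
```python
import torch

x = torch.tensor([float("inf"), float("-inf"), float("nan"), 1e20])
# The values printed here may differ between x86 and Arm machines.
print(x.to(torch.int32))
```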
"Fixes" https://github.com/pytorch/pytorch/issues/38752
| true
|
2,846,834,187
|
[dynamo] `x is x` gets incorrectly interpreted to `False`
|
StrongerXi
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamo"
] | 2
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Repro:
```python
import torch
@torch.compile(fullgraph=True, backend="eager")
def fn(x):
    l = []
    if l is l:
        return x + 1
    return x + 2
print(fn(torch.zeros(1))) # prints `tensor([2.])`, but should be `tensor([1.])`
```
We could fix it by adding another case here (same VT instance ==> return True): https://github.com/pytorch/pytorch/blob/7aa629f1268f6944eee6e49e43071b4342bf1669/torch/_dynamo/variables/builtin.py#L658-L662
I don't know if the converse also holds; I fear there _might_ be cases where the same object gets modelled with 2 distinct VT instances... e.g., after fake-tensor-prop through `allow_in_graph`?
Anyway, a bigger problem is that Dynamo constant folding should acknowledge that the alias relationship isn't always preserved after `VariableTracker.as_python_constant`. So we either fix the latter or restrict constant folding further: https://github.com/pytorch/pytorch/blob/7aa629f1268f6944eee6e49e43071b4342bf1669/torch/_dynamo/variables/builtin.py#L875-L887
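To spell out the constant-folding hazard in plain Python: if each VT is lowered to a fresh Python constant before evaluating `is`, the aliasing information is lost.
```python
# Both VTs model the same list `l`, but as_python_constant() (conceptually)
# rebuilds a fresh object for each operand, so identity no longer holds.
a = []          # reconstructed constant for the first operand
b = []          # reconstructed constant for the second operand
print(a is b)   # False, consistent with the `tensor([2.])` result in the repro above
```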
### Error logs
_No response_
### Versions
Python 3.12.5, main fc5913b6bf7
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
| true
|
2,846,806,153
|
[dynamo] Use the new `get_unique_name_wrt` helper when applicable
|
StrongerXi
|
closed
|
[
"Merged",
"topic: not user facing",
"module: inductor",
"module: dynamo",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #147572
* #147571
* __->__ #146950
* #146367
* #146714
This patch removes some duplicated name generation logic in Dynamo.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,846,789,544
|
Fix broken test stat upload jobs
|
benjaminglass1
|
closed
|
[
"module: ci",
"open source",
"topic: not user facing"
] | 4
|
COLLABORATOR
|
Fixes currently-broken upload jobs for test stat and inductor benchmark stat uploads.
Example job: https://github.com/pytorch/pytorch/actions/runs/13274540948/job/37061406258
cc @seemethere @malfet @pytorch/pytorch-dev-infra
| true
|
2,846,785,072
|
[Sigmoid] Fix issues with constant folding and fba_ops
|
trieuat
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
Summary:
There are 2 issues:
- `skip_folding_node_fn` isn't considered when propagating constant values. So given a skipped node with constant inputs, it outputs a constant and its users can output constant values and then be included in the constant graph. However, the skipped node itself is not included when extracting the constant graph. This issue is fixed by checking for skipped nodes when propagating constant values and making a skipped node output an unknown value (not a constant), so that its users cannot output constants.
- The `fba_linear` op can be included in the constant graph, but it is not implemented for CPU, so the constant graph cannot be executed. This issue is fixed by converting `fba_linear` to `aten.addmm`.
- A refactor to allow more fba_ops to be included in the constant graph (via mapping fba_ops to aten ops); see the illustrative sketch below.
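A purely illustrative sketch of the mapping approach described above (the op name string and helper are hypothetical; the real change lives in the internal constant-folding code):
```python
import torch

# Hypothetical mapping from fba ops (no CPU implementation) to aten equivalents
# that the constant-folding interpreter can actually execute.
FBA_TO_ATEN = {
    "fba.linear": torch.ops.aten.addmm.default,
}

def resolve_for_constant_folding(op_name: str):
    # Fall back to the original op when no aten equivalent is registered.
    return FBA_TO_ATEN.get(op_name, op_name)

print(resolve_for_constant_folding("fba.linear"))
```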
Reviewed By: StellarrZ
Differential Revision: D68716393
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,846,778,769
|
[DO NOT MERGE][cuDNN][SDPA] Testing sm90/sm100 priority for cuDNN SDPA
|
eqy
|
open
|
[
"module: cudnn",
"open source",
"Stale",
"topic: not user facing",
"module: sdpa"
] | 5
|
COLLABORATOR
|
trying things out
cc @csarofeen @ptrblck @xwang233
| true
|
2,846,721,532
|
patch for block-wise quantization + pt2e
|
cccclai
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: quantization",
"ciflow/inductor",
"release notes: export"
] | 24
|
CONTRIBUTOR
|
Summary: https://github.com/pytorch/pytorch/pull/144492 was reverted due to duplicate kernel registration. This PR will re-introduce the patch
Differential Revision: D69488779
| true
|
2,846,711,726
|
New CachingAutotuner pickling logic may be brittle to triton upgrades
|
jamesjwu
|
open
|
[
"triaged",
"actionable",
"bug"
] | 2
|
CONTRIBUTOR
|
Repro using triton's bleeding edge main:
```python3
#!/usr/bin/env python3
import torch
import torch.nn.attention.flex_attention
torch.set_default_device("cuda")
N_CTX = 4096
SLIDING_WINDOW = 128
def sliding_window_causal(b, h, q_idx, kv_idx):
    causal_mask = q_idx >= kv_idx
    window_mask = q_idx - kv_idx < SLIDING_WINDOW
    return causal_mask & window_mask

def rand_qkv(n_batch: int, n_head: int, n_ctx: int, d_qk: int, d_v: int):
    qk_shape = (n_batch, n_head, n_ctx, d_qk)
    v_shape = (n_batch, n_head, n_ctx, d_v)  # use d_v for the value tensor
    return (torch.randn(qk_shape), torch.randn(qk_shape), torch.randn(v_shape))

n_batch = 1
n_head = 1
local_bm = torch.nn.attention.flex_attention.create_block_mask(
    sliding_window_causal, B=None, H=None, Q_LEN=N_CTX, KV_LEN=N_CTX
)
flex_attention = torch.compile(torch.nn.attention.flex_attention.flex_attention)
flex_attention(*rand_qkv(n_batch, n_head, N_CTX, d_qk=16, d_v=16), return_lse=True, block_mask=local_bm)
```
Here is the error we get:
```
E0211 21:13:34.994000 1581518 subproc_pool.py:321] Error in subprocess
E0211 21:13:34.994000 1581518 subproc_pool.py:321] concurrent.futures.process._RemoteTraceback:
E0211 21:13:34.994000 1581518 subproc_pool.py:321] """
E0211 21:13:34.994000 1581518 subproc_pool.py:321] Traceback (most recent call last):
E0211 21:13:34.994000 1581518 subproc_pool.py:321] File "/usr/lib/python3.10/concurrent/futures/process.py", line 246, in _process_worker
E0211 21:13:34.994000 1581518 subproc_pool.py:321] r = call_item.fn(*call_item.args, **call_item.kwargs)
E0211 21:13:34.994000 1581518 subproc_pool.py:321] File "/home/ubuntu/pytorch/torch/_inductor/compile_worker/subproc_pool.py", line 340, in do_job
E0211 21:13:34.994000 1581518 subproc_pool.py:321] return pickler.dumps(result)
E0211 21:13:34.994000 1581518 subproc_pool.py:321] File "/home/ubuntu/pytorch/torch/_inductor/compile_worker/subproc_pool.py", line 100, in dumps
E0211 21:13:34.994000 1581518 subproc_pool.py:321] return pickle.dumps(obj, pickle.HIGHEST_PROTOCOL)
E0211 21:13:34.994000 1581518 subproc_pool.py:321] AttributeError: Can't pickle local object 'JITFunction.__init__.<locals>.<lambda>'
E0211 21:13:34.994000 1581518 subproc_pool.py:321] """
E0211 21:13:34.994000 1581518 subproc_pool.py:321]
E0211 21:13:34.994000 1581518 subproc_pool.py:321] The above exception was the direct cause of the following exception:
E0211 21:13:34.994000 1581518 subproc_pool.py:321]
E0211 21:13:34.994000 1581518 subproc_pool.py:321] Traceback (most recent call last):
E0211 21:13:34.994000 1581518 subproc_pool.py:321] File "/home/ubuntu/pytorch/torch/_inductor/compile_worker/subproc_pool.py", line 319, in callback
E0211 21:13:34.994000 1581518 subproc_pool.py:321] result = future.result()
E0211 21:13:34.994000 1581518 subproc_pool.py:321] File "/usr/lib/python3.10/concurrent/futures/_base.py", line 451, in result
E0211 21:13:34.994000 1581518 subproc_pool.py:321] return self.__get_result()
E0211 21:13:34.994000 1581518 subproc_pool.py:321] File "/usr/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
E0211 21:13:34.994000 1581518 subproc_pool.py:321] raise self._exception
E0211 21:13:34.994000 1581518 subproc_pool.py:321] AttributeError: Can't pickle local object 'JITFunction.__init__.<locals>.<lambda>'
W0211 21:13:34.996000 1581373 pytorch/torch/_inductor/utils.py:875] [0/0] on error, temporary cache dir kept at /tmp/torchinductor_ubuntu/tmpkwuio_wu
Traceback (most recent call last):
File "/home/ubuntu/./test.py", line 28, in <module>
flex_attention(*rand_qkv(n_batch, n_head, N_CTX, d_qk=16, d_v=16), return_lse=True, block_mask=local_bm)
File "/home/ubuntu/pytorch/torch/_dynamo/eval_frame.py", line 574, in _fn
raise e.remove_dynamo_frames() from None # see TORCHDYNAMO_VERBOSE=1
File "/home/ubuntu/pytorch/torch/_dynamo/output_graph.py", line 1487, in _call_user_compiler
raise BackendCompilerFailed(
File "/home/ubuntu/pytorch/torch/_dynamo/output_graph.py", line 1466, in _call_user_compiler
compiled_fn = compiler_fn(gm, self.example_inputs())
File "/home/ubuntu/pytorch/torch/_dynamo/repro/after_dynamo.py", line 131, in __call__
compiled_gm = compiler_fn(gm, example_inputs)
File "/home/ubuntu/pytorch/torch/__init__.py", line 2339, in __call__
return compile_fx(model_, inputs_, config_patches=self.config)
File "/home/ubuntu/pytorch/torch/_inductor/compile_fx.py", line 2163, in compile_fx
return aot_autograd(
File "/home/ubuntu/pytorch/torch/_dynamo/backends/common.py", line 83, in __call__
cg = aot_module_simplified(gm, example_inputs, **self.kwargs)
File "/home/ubuntu/pytorch/torch/_functorch/aot_autograd.py", line 1168, in aot_module_simplified
compiled_fn = dispatch_and_compile()
File "/home/ubuntu/pytorch/torch/_functorch/aot_autograd.py", line 1143, in dispatch_and_compile
compiled_fn, _ = create_aot_dispatcher_function(
File "/home/ubuntu/pytorch/torch/_functorch/aot_autograd.py", line 570, in create_aot_dispatcher_function
return _create_aot_dispatcher_function(
File "/home/ubuntu/pytorch/torch/_functorch/aot_autograd.py", line 820, in _create_aot_dispatcher_function
compiled_fn, fw_metadata = compiler_fn(
File "/home/ubuntu/pytorch/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py", line 205, in aot_dispatch_base
compiled_fw = compiler(fw_module, updated_flat_args)
File "/home/ubuntu/pytorch/torch/_functorch/aot_autograd.py", line 479, in __call__
return self.compiler_fn(gm, example_inputs)
File "/home/ubuntu/pytorch/torch/_inductor/compile_fx.py", line 2038, in fw_compiler_base
return inner_compile(
File "/home/ubuntu/pytorch/torch/_inductor/compile_fx.py", line 623, in compile_fx_inner
return wrap_compiler_debug(_compile_fx_inner, compiler_name="inductor")(
File "/home/ubuntu/pytorch/torch/_dynamo/repro/after_aot.py", line 104, in debug_wrapper
inner_compiled_fn = compiler_fn(gm, example_inputs)
File "/home/ubuntu/pytorch/torch/_inductor/compile_fx.py", line 727, in _compile_fx_inner
mb_compiled_graph = fx_codegen_and_compile(
File "/home/ubuntu/pytorch/torch/_inductor/compile_fx.py", line 1402, in fx_codegen_and_compile
return scheme.codegen_and_compile(gm, example_inputs, inputs_to_check, graph_kwargs)
File "/home/ubuntu/pytorch/torch/_inductor/compile_fx.py", line 1122, in codegen_and_compile
compiled_fn = graph.compile_to_module().call
File "/home/ubuntu/pytorch/torch/_inductor/graph.py", line 1990, in compile_to_module
return self._compile_to_module()
File "/home/ubuntu/pytorch/torch/_inductor/graph.py", line 2032, in _compile_to_module
mod = PyCodeCache.load_by_key_path(
File "/home/ubuntu/pytorch/torch/_inductor/codecache.py", line 2758, in load_by_key_path
mod = _reload_python_module(key, path)
File "/home/ubuntu/pytorch/torch/_inductor/runtime/compile_tasks.py", line 51, in _reload_python_module
exec(code, mod.__dict__, mod.__dict__)
File "/tmp/torchinductor_ubuntu/tmpkwuio_wu/2c/c2cwsb3k4rlb6akooercw4u4bjrnkofn6xx5cavzkj2swf2iyiii.py", line 552, in <module>
async_compile.wait(globals())
File "/home/ubuntu/pytorch/torch/_inductor/async_compile.py", line 421, in wait
scope[key] = result.result()
File "/home/ubuntu/pytorch/torch/_inductor/codecache.py", line 3237, in result
return self.result_fn()
File "/home/ubuntu/pytorch/torch/_inductor/async_compile.py", line 311, in get_result
kernel = task.result()
File "/usr/lib/python3.10/concurrent/futures/_base.py", line 458, in result
return self.__get_result()
File "/usr/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
raise self._exception
torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised:
AttributeError: Can't pickle local object 'JITFunction.__init__.<locals>.<lambda>'
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
```
We did find that sometimes the function does get cached and after that we don't see the bug, so you might want to run the reproducer with `TORCHINDUCTOR_FORCE_DISABLE_CACHES=1`.
_Originally posted by @saagarjha in https://github.com/pytorch/pytorch/issues/146417#issuecomment-2652084363_
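As a side note, the `Can't pickle local object` failure is a generic Python limitation rather than anything Triton-specific; a minimal standalone illustration (unrelated to the actual `JITFunction` code):
```python
import pickle

class Holder:
    def __init__(self):
        # A lambda created inside __init__ has no importable qualified name,
        # so the standard pickle module refuses to serialize it.
        self.fn = lambda x: x + 1

try:
    pickle.dumps(Holder())
except (AttributeError, pickle.PicklingError) as e:
    print(e)  # e.g. Can't pickle local object 'Holder.__init__.<locals>.<lambda>'
```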
| true
|
2,846,699,388
|
win-vs2022-cpu-py3 test failures in test-default-2-3-lf.windows.4xlarge.nonephemeral_37054642004
|
Camyll
|
open
|
[
"module: windows",
"triaged"
] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
test-default-2-3-lf.windows.4xlarge.nonephemeral_37054642004 is currently failing due to a missing DLL. We believe the issue was introduced between two PRs while Windows tests were disabled (https://github.com/pytorch/pytorch/pull/145863, https://github.com/pytorch/pytorch/pull/146920)
```
(base) C:\actions-runner\_work\pytorch\pytorch\test>python run_test.py --exclude-jit-executor --exclude-distributed-tests --shard "2" "3" --verbose
C:\actions-runner\_work\pytorch\pytorch\test\run_test.py:24: DeprecationWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html
import pkg_resources
Traceback (most recent call last):
File "C:\actions-runner\_work\pytorch\pytorch\test\run_test.py", line 26, in <module>
import torch
File "C:\actions-runner\_work\pytorch\pytorch\build\win_tmp\build\torch\__init__.py", line 270, in <module>
_load_dll_libraries()
File "C:\actions-runner\_work\pytorch\pytorch\build\win_tmp\build\torch\__init__.py", line 266, in _load_dll_libraries
raise err
OSError: [WinError 126] The specified module could not be found. Error loading "C:\actions-runner\_work\pytorch\pytorch\build\win_tmp\build\torch\lib\aoti_custom_ops.dll" or one of its dependencies.
(base) C:\actions-runner\_work\pytorch\pytorch\test>if ERRORLEVEL 1 goto fail
(base) C:\actions-runner\_work\pytorch\pytorch\test>exit /b 1
```
### Versions
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: macOS 15.3 (arm64)
GCC version: Could not collect
Clang version: 16.0.0 (clang-1600.0.26.6)
CMake version: Could not collect
Libc version: N/A
Python version: 3.12.8 | packaged by Anaconda, Inc. | (main, Dec 11 2024, 10:37:40) [Clang 14.0.6 ] (64-bit runtime)
Python platform: macOS-15.3-arm64-arm-64bit
Is CUDA available: N/A
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
CPU:
Apple M1 Pro
Versions of relevant libraries:
[pip3] No relevant packages
[conda] No relevant packages
cc @peterjc123 @mszhanyi @skyline75489 @nbcsm @iremyux @Blackhex
| true
|
2,846,685,230
|
[BE] Delete NCCL slimming
|
malfet
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: releng",
"topic: improvements"
] | 6
|
CONTRIBUTOR
|
It was added by https://github.com/pytorch/pytorch/pull/35843 and served its purpose when everything was linked statically into libtorch_cuda.so, but for all our releases it is no longer relevant, as NCCL is now a dynamic dependency of libtorch_cuda.so.
Besides, it does not work with the CXX11 ABI anyway, and creates problems with newer versions of NCCL, when two `collectives.o` are packaged into the library archive.
| true
|
2,846,682,229
|
[Inductor] FX backend via Wrapper IR
|
blaine-rister
|
closed
|
[
"module: cpu",
"Merged",
"Reverted",
"ciflow/trunk",
"module: inductor",
"ciflow/inductor",
"release notes: inductor",
"ci-no-td",
"release notes: inductor (aoti)"
] | 27
|
CONTRIBUTOR
|
# Sub-PRs
These PRs contain refactors from the main one. They should be reviewed and merged first.
- https://github.com/pytorch/pytorch/pull/150458
- https://github.com/pytorch/pytorch/pull/152391
- https://github.com/pytorch/pytorch/pull/152587
# Feature
The goals of this PR are twofold.
## Goal 1: Introduce Wrapper IR as an intermediate step in wrapper codegen.
In addition to Triton/C++/Halide kernels, Inductor also generates "wrapper" code which allocates memory and calls the kernels. Originally, this wrapper code was fairly standard Python which resembled a user-written PyTorch program. Over time, various wrapper code generators have been added to accommodate things like AOTInductor, which prefers C++ code for static compilation. This complexity has bled into other parts of the codebase, as we now need if/else statements to choose between Python and C++ macros. (See an example [here](https://github.com/pytorch/pytorch/blob/main/torch/_inductor/ir.py#L5515-L5522).) Since most of these code generation steps are conceptually identical across target languages, it seems reasonable to refactor them into some kind of intermediate representation which can be shared between the various backends. This might also make it easier to develop out-of-tree backends which cannot put their own macros in core Inductor components.
This PR takes some initial steps to formalize Inductor's wrapper codegen by generalizing the existing Memory Planning IR into a fully fledged Wrapper IR. This is pretty much identical to the existing Memory Planning IR, but it supports a richer set of ops for things like kernel definitions and calls. This refactor could help encapsulate wrapper codegen. Ideally, we don't need to worry about direct Python/C++ codegen in the main compiler files such as `ir.py`, and can instead defer these to classes like `PythonWrapperCodegen` and `CppWrapperCpu`, which operate on the Wrapper IR.
## Goal 2: Convert Wrapper IR into FX IR.
One of the main benefits of Wrapper IR is to enable more diverse Inductor backends. This PR introduces a converter from Wrapper IR into [FX IR](https://pytorch.org/docs/stable/fx.html), which is the intermediate representation most commonly used in PyTorch graph compilers. The purpose of this is to enable out-of-tree backends to consume Inductor's output in FX IR, which would hopefully make Inductor easier to leverage in novel compilers, hardware accelerators, etc.
It's not trivial to generate Python or C++ code which Inductor can compile and run, and doing so may require changes to other core Inductor files, for the reasons outlined in the previous section. The goal of supporting FX output is to enable something like `torch.compile`'s [custom backend](https://pytorch.org/docs/stable/torch.compiler_custom_backends.html) system, in which an out-of-tree backend can receive an optimized FX graph from Inductor, and compile and run it however it likes.
The typical users of this feature would likely not be part of PyTorch, and may or may not support running a kernel in eager mode. However, they can understand what `torch.empty_strided` means, compile and run Triton kernels, etc. So we just need to present them with an FX graph saying what code Inductor wants to run, which should be easier to analyze and transform in a third party system than Python or C++ source.
Since FX IR is fairly stable, this mechanism should hopefully isolate third-party backends, hardware accelerators, etc. from the implementation details of Inductor, and vice versa.
# Current status
Things that seem to work:
- Converted a lot of the most common Python codegen lines to Wrapper IR lines.
- Handled the following cases, in addition to what was already in the Memory Planning IR:
- Comments
- Triton kernels
- Extern/fallback kernels
- Freeing tensors (`del buf0`)
- MultiOutput
- Graph outputs
- ReinterpretView / StorageBox, for both call args and outputs.
- FX conversion asserts that the program only contains Wrapper IR lines, and not strings of Python/C++ code.
- Prototype FX converter which can handle some of the most common use cases.
- Defining Triton kernels, and putting them in a side table using TorchDynamo's existing [utilities](https://dev-discuss.pytorch.org/t/higher-order-operators-2023-10/1565).
- Calling wrapped Triton kernels.
- Calling extern kernels and certain types of fallback kernels.
- Support both `extern_kernels.*` and `aten.*`.
- Support multi-output kernels like `torch.topk`.
- Graphs with multiple inputs/outputs.
- Training i.e. calling `Tensor.backward()` in a compiled function.
- Graph breaks (training).
- Run the `torch.fx.GraphModule` on GPU using the standard `__call__` method. This makes it easy to test the correctness of FX codegen.
Things that don't work:
- Both Wrapper IR and Wrapper -> FX coverage are currently best effort. There are still features which aren't captured as Wrapper IR lines, and fall back to plain strings. This representation is functionally correct but probably not rich enough to achieve the goals outlined in the previous sections.
- Fallback kernels seem like the most difficult thing to fully cover, since they each define their own Python/C++ macros that would need to be converted to FX.
- Size/alignment asserts are currently disabled via the config file. It's possible to generate FX IR for these, but it seems reasonable to defer these sanity checks to a later PR.
- CommBuffer's and distributed communication are not yet supported. An earlier version of this PR attempted to implement this by calling `empty_strided_p2p`. However, building and testing distributed support seems non-trivial, so it's probably better to defer this.
# Out-of-tree compilers
With this PR, out of tree backends will be able to do further compilation on the FX graphs by subclassing `WrapperFxCodegen` and overriding the `compile_graph` function. This follows the same API as torch.compile's [custom backends](https://pytorch.org/docs/stable/torch.compiler_custom_backends.html), where the user simply returns a callable running the graph. The callable need not be a method of `GraphModule` or any other PyTorch class. See an example below.
```
from torch._inductor.codegen.wrapper_fxir import WrapperFxCodegen

class MyCustomBackend(WrapperFxCodegen):
    def compile_graph(self, gm):
        # Add 1 to the graph's outputs
        def compiled_fn(*args):
            return [x + 1 for x in gm.graph.forward(*args)]

        return compiled_fn
```
# Example FX graphs
This section contains some example FX graphs generated by Inductor. The correctness of these graphs was verified against eager mode by calling the corresponding `GraphModule`.
Here's an FX graph calling a basic Triton kernel. Notice how outputs are allocated with `torch.empty_strided`, and the Triton kernel is called by reference to Dynamo's triton side table.
```
graph():
%arg0_1 : [num_users=1] = placeholder[target=arg0_1]
%arg1_1 : [num_users=1] = placeholder[target=arg1_1]
%buf0 : [num_users=2] = call_function[target=torch.empty_strided](args = ((8,), (1,)), kwargs = {dtype: torch.float32, device: cuda:0})
%triton_kernel_wrapper_mutation : [num_users=0] = call_function[target=torch.ops.higher_order.triton_kernel_wrapper_mutation](args = (), kwargs = {kernel_idx: 0, constant_args_idx: 0, grid: [(8,)], tma_descriptor_metadata: {}, kwargs: {in_ptr0: %arg1_1, in_ptr1: %arg0_1, out_ptr0: %buf0, xnumel: 8, XBLOCK: 8}})
return (buf0,)
```
Here's a more complicated graph that calls a `torch.addmm` extern kernel.
```
graph():
%arg0_1 : [num_users=1] = placeholder[target=arg0_1]
%arg1_1 : [num_users=2] = placeholder[target=arg1_1]
%buf0 : [num_users=3] = call_function[target=torch.empty_strided](args = ((), ()), kwargs = {dtype: torch.float32, device: cuda:0})
%triton_kernel_wrapper_mutation : [num_users=0] = call_function[target=torch.ops.higher_order.triton_kernel_wrapper_mutation](args = (), kwargs = {kernel_idx: 0, constant_args_idx: 0, grid: [(1,)], tma_descriptor_metadata: {}, kwargs: {in_ptr0: %arg1_1, out_ptr0: %buf0, xnumel: 1, r0_numel: 129, XBLOCK: 1}})
%buf2 : [num_users=2] = call_function[target=torch.empty_strided](args = ((129, 1), (1, 1)), kwargs = {dtype: torch.float32, device: cuda:0})
%addmm : [num_users=0] = call_function[target=torch.addmm](args = (%buf0, %arg0_1, %arg1_1), kwargs = {alpha: 1, beta: 1, out: %buf2})
%delete : [num_users=0] = call_function[target=torch._inductor.codegen.wrapper_fxir.delete](args = (%buf0,), kwargs = {})
return (buf2,)
```
Here's a graph which indexes into a tuple using `operator.getitem`. This is necessary to use the output of the `torch.topk` operation.
```
graph():
%arg0_1 : [num_users=1] = placeholder[target=arg0_1]
%buf0 : [num_users=3] = call_function[target=torch.ops.aten.topk.default](args = (%arg0_1, 2), kwargs = {})
%buf1 : [num_users=2] = call_function[target=operator.getitem](args = (%buf0, 0), kwargs = {})
%buf2 : [num_users=2] = call_function[target=operator.getitem](args = (%buf0, 1), kwargs = {})
%delete : [num_users=0] = call_function[target=torch._inductor.codegen.wrapper_fxir.delete](args = (%buf0,), kwargs = {})
%triton_kernel_wrapper_mutation : [num_users=0] = call_function[target=torch.ops.higher_order.triton_kernel_wrapper_mutation](args = (), kwargs = {kernel_idx: 0, constant_args_idx: 0, grid: [(2,)], tma_descriptor_metadata: {}, kwargs: {in_out_ptr0: %buf1, xnumel: 2, XBLOCK: 2}})
%triton_kernel_wrapper_mutation_1 : [num_users=0] = call_function[target=torch.ops.higher_order.triton_kernel_wrapper_mutation](args = (), kwargs = {kernel_idx: 1, constant_args_idx: 1, grid: [(2,)], tma_descriptor_metadata: {}, kwargs: {in_out_ptr0: %buf2, xnumel: 2, XBLOCK: 2}})
return (buf1, buf2)
```
Here's a graph that reinterprets an output tensor using `torch.as_strided`. This is one way to handle Inductor's `ReinterpretView` op.
```
graph():
%arg0_1 : [num_users=1] = placeholder[target=arg0_1]
%arg1_1 : [num_users=1] = placeholder[target=arg1_1]
%buf0 : [num_users=2] = call_function[target=torch.empty_strided](args = ((2, 4), (4, 1)), kwargs = {dtype: torch.float32, device: cuda:0})
%triton_kernel_wrapper_mutation : [num_users=0] = call_function[target=torch.ops.higher_order.triton_kernel_wrapper_mutation](args = (), kwargs = {kernel_idx: 0, constant_args_idx: 0, grid: [(8,)], tma_descriptor_metadata: {}, kwargs: {in_ptr0: %arg0_1, in_ptr1: %arg1_1, out_ptr0: %buf0, xnumel: 8, XBLOCK: 8}})
%buf0_view_buf0_0 : [num_users=1] = call_function[target=torch.as_strided](args = (%buf0, (8,), (1,), 0), kwargs = {})
return (buf0_view_buf0_0,)
```
Here's a graph with dynamic shapes. This one is a little bit funky. Inductor provides a graph input for each shape symbol, which we map to a placeholder, in this example `s6`. Then, shape expressions in the generated code can refer to the symbol `s6`. The size hint for `s6` is stored in `node.meta["val"]` where `node` is the placeholder defining it. This works out in the generated python code because the placeholder defines a Python variable with the name `s6`.
```
graph():
%s6 : [num_users=0] = placeholder[target=s6]
%arg1_1 : [num_users=1] = placeholder[target=arg1_1]
%arg2_1 : [num_users=1] = placeholder[target=arg2_1]
%buf0 : [num_users=2] = call_function[target=torch.empty_strided](args = ((s6,), (1,)), kwargs = {dtype: torch.float32, device: cuda:0})
%triton_kernel_wrapper_mutation : [num_users=0] = call_function[target=torch.ops.higher_order.triton_kernel_wrapper_mutation](args = (), kwargs = {kernel_idx: 0, constant_args_idx: 0, grid: [[-(((-s6)//8)), 1, 1]], tma_descriptor_metadata: {}, kwargs: {in_ptr0: %arg2_1, in_ptr1: %arg1_1, out_ptr0: %buf0, xnumel: s6, XBLOCK: 8}})
return buf0
```
Here's another graph, this time with dynamic shapes and strides. The grid expression is more complex since the numel is a product of dimensions.
```
graph():
%s10 : [num_users=0] = placeholder[target=s10]
%arg1_1 : [num_users=1] = placeholder[target=arg1_1]
%arg2_1 : [num_users=1] = placeholder[target=arg2_1]
%buf0 : [num_users=2] = call_function[target=torch.empty_strided](args = ([s10, s10], [s10, 1]), kwargs = {dtype: torch.float32, device: cuda:0})
%triton_kernel_wrapper_mutation : [num_users=0] = call_function[target=torch.ops.higher_order.triton_kernel_wrapper_mutation](args = (), kwargs = {kernel_idx: 0, constant_args_idx: 0, grid: [[-(((s10**2)//(-64))), 1, 1]], tma_descriptor_metadata: {}, kwargs: {in_ptr0: %arg2_1, in_ptr1: %arg1_1, out_ptr0: %buf0, xnumel: s10**2, XBLOCK: 64}})
return buf0
```
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @jerryzh168 @voznesenskym @penguinwu @EikanWang @Guobing-Chen @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov @yf225
| true
|
2,846,681,791
|
Make HOPs more debuggable
|
zou3519
|
open
|
[
"module: tests",
"triaged",
"oncall: pt2",
"module: higher order operators",
"module: pt2-dispatcher"
] | 1
|
CONTRIBUTOR
|
discussed at HOP Sync with @drisspg @bdhirsh @yanboliang @ydwu4
things we can do to make HOPs more debuggable (from easiest to hardest):
- add an envvar to turn off torch.compile, with the caveat that... this only works for inference and no subsystems.
- add torch.print() and torch.breakpoint() operators, which are effectively print statements
- we have a torch.distributed.breakpoint() (similar!)
- Dynamo could rewrite breakpoint() to torch.breakpoint() for HOPs?
- fancy autograd thing. We would rewrite the eager implementation of HOPs to look like the following. (https://gist.github.com/zou3519/dd1750e2969779794ef8a931b940a836#file-inner-py-L21)
cc @mruberry @ZainRizvi @chauhang @penguinwu @ydwu4 @bdhirsh
| true
|
2,846,681,697
|
Update octokit/request-action to 2.4.0
|
huydhn
|
closed
|
[
"Merged",
"Reverted",
"topic: not user facing",
"test-config/default",
"ci-no-td"
] | 9
|
CONTRIBUTOR
|
The current version 2.1.0 has disappeared since yesterday:
* https://github.com/pytorch/pytorch/actions/workflows/upload-torch-dynamo-perf-stats.yml
* https://github.com/pytorch/pytorch/actions/workflows/upload-test-stats.yml
The latest version is 2.4.0 https://github.com/octokit/request-action
| true
|
2,846,658,333
|
[export] Log evaluate_expr
|
angelayi
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: fx",
"fx",
"module: dynamo",
"ciflow/inductor"
] | 8
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #147240
* __->__ #146939
We want to log each symnode created so that we can do provenance tracking in the tlparse report generated for draft export. To do this, we want to assign a unique id to every symnode, which python's `id` function already does, and then for every expression created, we can find the provenance by tracing back through its arguments ids. This logging only happens when dtrace_structured is enabled, which is only when running draft export.
An example output is as follows:
<img width="799" alt="image" src="https://github.com/user-attachments/assets/88bb31b4-8c31-43fb-aa88-08b573b9f71d" />
For the increase in the compile_time_instruction_count benchmark, this seems unavoidable because I need to call `id` to get the unique identifier for each symnode. But I believe `id` is an inexpensive operation, so hopefully it should be ok? I tried doing the following:
* Originally I was passing around `self`, which is a SymNode, which caused the compile time to be ~6.36M
* I changed it to pass around `id(self)` instead, which reduced the compile time to ~6.33M
* Then I changed it to be passed as a positional arg instead of a kwarg, which reduced the compile time to ~6.22M, but this doesn't seem to be a super worthwhile fix?
#suppress-bc-linter
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @chauhang @amjames @StrongerXi
| true
|
2,846,653,684
|
ModuleDict has an incomplete (or wrong) typing, producing type errors when accessed with attributes
|
wookayin
|
open
|
[
"module: nn",
"module: typing",
"triaged"
] | 1
|
NONE
|
### 🐛 Describe the bug
Since torch 2.6, `nn.ModuleDict` (and `nn.Module`) has gained some typing support (https://github.com/pytorch/pytorch/pull/141240 https://github.com/pytorch/pytorch/pull/115074):
> ```
> # It is crucial that the return type is not annotated as `Any`, otherwise type checking
> # on `torch.nn.Module` and all its subclasses is largely disabled as a result. See:
> # https://github.com/pytorch/pytorch/pull/115074
> def __getattr__(self, name: str) -> Union[Tensor, "Module"]:
> ```
but it may actually produce more errors than before due to partially wrong or incomplete type definitions.
Specifically, `nn.ModuleDict` does not override the `__getattr__` method, so `mods.<attr>` resolves to `Union[Tensor, Module]` rather than `Module`. Example:
```python
import torch
from torch import nn
mods = nn.ModuleDict(
dict(
foo=nn.Linear(4, 8),
)
)
x = torch.zeros([10, 4])
y = mods.foo(x) # <----- HERE
print(y)
```
Here a type checker would infer the type of `mods.foo` as `Tensor | Module`, giving a type error at the line: `Object of type "Tensor" is not callable`.
Also, `ModuleDict` does not explicitly implement the `MutableMapping[str, Module]` interface. But this seems to be a duplicate of: https://github.com/pytorch/pytorch/issues/80821
### Possible/Suggested solutions
- Add an override of `__getattr__` in `ModuleDict` that narrows the return type to `Module`, provided that `parameters` and `buffers` are not allowed in `ModuleDict` (a minimal sketch follows below).
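A minimal sketch of what that override could look like, shown here on a subclass purely for illustration (the `cast` only narrows the static type; it assumes no parameters or buffers are registered directly on the dict):
```python
from typing import cast

import torch
from torch import nn

class ModuleDictWithTypedGetattr(nn.ModuleDict):
    # Narrow nn.Module.__getattr__'s Union[Tensor, Module] return type:
    # a ModuleDict only stores sub-modules, so attribute access yields a Module.
    def __getattr__(self, name: str) -> nn.Module:
        return cast(nn.Module, super().__getattr__(name))

mods = ModuleDictWithTypedGetattr(dict(foo=nn.Linear(4, 8)))
y = mods.foo(torch.zeros([10, 4]))  # type checkers now see `foo` as a Module
```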
### Affected versions
torch 2.6+
### Versions
```
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
Python version: 3.12.8
```
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki @ezyang @malfet @xuzhao9 @gramster
| true
|
2,846,607,868
|
[draft] ROCm MX-FP8 Scale_mm() Support
|
petrex
|
closed
|
[
"module: rocm",
"module: cpu",
"open source",
"release notes: quantization"
] | 2
|
CONTRIBUTOR
|
TLDR: This PR is a follow-up to, and based on, https://github.com/pytorch/pytorch/pull/146655. The goal is to enable MX-FP8 capability on gfx950 through hipBLASLt. Ideally, https://github.com/pytorch/pytorch/pull/146655 should land first.
------------
This pull request introduces support for the new `Float8_e8m0fnu` and `Float4_e2m1fn_x2` data types in various parts of the codebase, including CUDA and CPU kernels, and updates several utility functions to handle these new types. It also includes changes to improve handling of device properties and scaling operations.
### Support for new data types:
* Added `Float8_e8m0fnu` and `Float4_e2m1fn_x2` to the list of unsupported types in `DLDataType getDLDataType` function.
* Included `Float8_e8m0fnu` in `AT_FLOAT8_TYPES` macro definition.
* Updated `fill_kernel` to support `Float8_e8m0fnu`.
### Changes in CUDA operations:
* Added new headers for `Allocator` and `ScalarType` in `CUDABlas.cpp`.
* Modified `scaled_gemm` function to accept `mat1_scale_dtype` and `mat2_scale_dtype` parameters and to handle `Float8_e8m0fnu` specific logic. [[1]](diffhunk://#diff-74fcb26047c1df4024105d36ce22a36b77cf8cc93c28631d743e639b3d6066aeR1427-R1432) [[2]](diffhunk://#diff-74fcb26047c1df4024105d36ce22a36b77cf8cc93c28631d743e639b3d6066aeL1456-R1465) [[3]](diffhunk://#diff-74fcb26047c1df4024105d36ce22a36b77cf8cc93c28631d743e639b3d6066aeR1490-R1499) [[4]](diffhunk://#diff-16c40d88e3572e56e0a5c49bbe539d6acb572f586c93d940255a447aecd03c0aR133-R138)
* Added device property caching helper function `IsGfx950Device` in `GemmHipblaslt.h` and `Blas.cpp`. [[1]](diffhunk://#diff-bfa1a3b5d4bef1892bf50338775f3b0fd8cd31fc1868148f3968b98aefb68e3fR420-R434) [[2]](diffhunk://#diff-e8a569efee1e650172f120a0fdcda024fe3e4703a4ee3336425c8f685af6b3abR986-R1002)
* Updated `_scaled_mm_out_cuda` to handle block-wise scaling for `Float8_e8m0fnu` type. [[1]](diffhunk://#diff-e8a569efee1e650172f120a0fdcda024fe3e4703a4ee3336425c8f685af6b3abR1044-R1046) [[2]](diffhunk://#diff-e8a569efee1e650172f120a0fdcda024fe3e4703a4ee3336425c8f685af6b3abL1098-R1153) [[3]](diffhunk://#diff-e8a569efee1e650172f120a0fdcda024fe3e4703a4ee3336425c8f685af6b3abR1250-R1255)
### Utility function updates:
* Modified `_AT_DISPATCH_CP_TYPES` and `_AT_DISPATCH_ALL_TYPES` macros to include `AT_FLOAT8_TYPES`. [[1]](diffhunk://#diff-5920abc01985a724ffb7a8f57b02a373a2e816615b344f0bda8a7a80bee833a0L62-R63) [[2]](diffhunk://#diff-be8c8eae841fa46b76f5d9ea4ad60f1e582698564a20651f68bc452b4bd41be1L207-R212)
* Updated `isfinite` function to handle `Float8_e8m0fnu` type.
### Indexing and copying operations:
* Updated `copy_kernel` and `direct_copy_kernel` to support `Float8_e8m0fnu`. [[1]](diffhunk://#diff-68b879fa8426e2c8c3fefbaf5d7ddc33aadae7369a5ff98621921b7eb7888cc5R147-R167) [[2]](diffhunk://#diff-68b879fa8426e2c8c3fefbaf5d7ddc33aadae7369a5ff98621921b7eb7888cc5L160-R181)
* Temporarily disabled support for `AT_FLOAT8_TYPES` in `index_put_kernel` and `index_put_with_sort_kernel` due to accumulation behavior issues. [[1]](diffhunk://#diff-54b494a4dd0af2160d716378bd5a40e1e4a98c94414901d85adbb6a9ae6dbed2L187-R193) [[2]](diffhunk://#diff-d2648908951bf5aba50d679575f8b1310926ff3211913075c2e602c047fcf504L585-R591) [[3]](diffhunk://#diff-d2648908951bf5aba50d679575f8b1310926ff3211913075c2e602c047fcf504L609-R621)
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| true
|
2,846,574,434
|
flexible custom operators: custom operators that accept arbitrary input/output type
|
zou3519
|
open
|
[
"triaged",
"module: custom-operators",
"oncall: pt2",
"module: pt2-dispatcher"
] | 0
|
CONTRIBUTOR
|
A big pain point users have with custom operators today is that they do not accept arbitrary input/output types. Users have asked for enums, namedtuples, and arbitrary user-defined types as inputs. This has traditionally been very difficult to support, because all types in custom operators need to make a round trip through the PyTorch C++ Dispatcher. Adding a new input/output type also usually involves updating all subsystems (e.g. autograd, vmap, Functionalization, etc.) to support said input/output type.
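As a concrete illustration of the limitation (all names below are hypothetical), a user-defined config object currently has to be flattened into schema-representable primitives before crossing the custom-op boundary:
```python
import torch
from dataclasses import dataclass

@dataclass
class ScaleConfig:  # hypothetical user-defined type users would like to pass directly
    strength: float
    normalize: bool

@torch.library.custom_op("mylib::scale", mutates_args=())
def scale(x: torch.Tensor, strength: float, normalize: bool) -> torch.Tensor:
    # The dataclass cannot round-trip through the dispatcher today, so its fields
    # are passed as separate primitive arguments instead.
    out = x * strength
    return out / out.norm() if normalize else out

cfg = ScaleConfig(strength=2.0, normalize=True)
y = scale(torch.randn(4), cfg.strength, cfg.normalize)  # manual flattening
```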
This is the tracking issue to track said work. For more details, please see the design doc over [here](https://docs.google.com/document/d/1YHl5nPTJvYeCPE5TO9uA18DPWNgUYGE4gCn6bFvXcBM/edit?tab=t.0#heading=h.qqe9krt9jotv).
cc @chauhang @penguinwu @bdhirsh @yf225
| true
|
2,846,561,592
|
Yguo/repro segfault triton aoti cpp wrapper
|
YUNQIUGUO
|
open
|
[
"Stale",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
TEST PLAN:
```
TORCH_LOGS="output_code" CPLUS_INCLUDE_PATH=/usr/local/cuda-12.0/include:$CPLUS_INCLUDE_PATH python test/inductor/test_aot_inductor.py -k test_addmm_cuda
```
output paste: [P1730560592](https://www.internalfb.com/phabricator/paste/view/P1730560592)
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,846,546,587
|
[oncall] Change error message to be more readable
|
jingsh
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: fx",
"fx"
] | 7
|
MEMBER
|
Summary:
During oncall, I got a debugging request where the error message was ambiguous due to multiple colons and the full line being cut off:
```
AssertionError: Expected order: 1 for the component: remote_request_only to be >= 2, the max order for all its
```
Update the error message to something like
```
AssertionError: Component remote_request_only order must be >= max order of its upstream components, got component order=1 and max=2
```
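For reference, a sketch of the assertion shape that would produce the new wording (the helper and variable names here are assumptions, not the actual code):
```python
def check_component_order(component_name: str, order: int, max_upstream_order: int) -> None:
    assert order >= max_upstream_order, (
        f"Component {component_name} order must be >= max order of its upstream "
        f"components, got component order={order} and max={max_upstream_order}"
    )

check_component_order("remote_request_only", 1, 2)  # raises with the new wording
```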
Test Plan: CI
Differential Revision: D69482789
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
2,846,537,685
|
[draft] nccl - Use checkout rather than submodule
|
atalman
|
open
|
[
"Stale",
"topic: not user facing"
] | 2
|
CONTRIBUTOR
|
Fixes #ISSUE_NUMBER
| true
|
2,846,501,765
|
Use of @property on in-graph constructed NJT fails Dynamo tracing
|
jbschlosser
|
open
|
[
"triaged",
"module: nestedtensor",
"oncall: pt2",
"module: dynamic shapes",
"module: dynamo",
"dynamo-triage-jan2025"
] | 0
|
CONTRIBUTOR
|
Repro:
```python
import torch
@torch.compile(fullgraph=True, dynamic=True)
def f(values, offsets, max_seqlen):
t = torch.nested.nested_tensor_from_jagged(values, offsets, max_seqlen=max_seqlen)
return torch.nested.nested_tensor_from_jagged(
torch.randn_like(values), t.offsets(), max_seqlen=t._maybe_max_seqlen
# NB: function version of max seqlen query doesn't trigger error
# torch.randn_like(values), t.offsets(), max_seqlen=t._get_max_seqlen()
)
values = torch.randn(10, 5, device="cuda")
offsets = torch.tensor([0, 2, 4, 7, 10], device="cuda")
output = f(values, offsets, 5)
```
Error:
```
Traceback (most recent call last):
File "repro.py", line 14, in <module>
output = f(values, offsets, 5)
^^^^^^^^^^^^^^^^^^^^^
File ".../torch/_dynamo/eval_frame.py", line 570, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File ".../torch/_dynamo/convert_frame.py", line 1365, in __call__
return self._torchdynamo_orig_callable(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".../torch/_dynamo/convert_frame.py", line 564, in __call__
return _compile(
^^^^^^^^^
File ".../torch/_dynamo/convert_frame.py", line 993, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".../torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File ".../torch/_dynamo/convert_frame.py", line 725, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".../torch/_dynamo/convert_frame.py", line 759, in _compile_inner
out_code = transform_code_object(code, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".../torch/_dynamo/bytecode_transformation.py", line 1404, in transform_code_object
transformations(instructions, code_options)
File ".../torch/_dynamo/convert_frame.py", line 235, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File ".../torch/_dynamo/convert_frame.py", line 679, in transform
tracer.run()
File ".../torch/_dynamo/symbolic_convert.py", line 2935, in run
super().run()
File ".../torch/_dynamo/symbolic_convert.py", line 1078, in run
while self.step():
^^^^^^^^^^^
File ".../torch/_dynamo/symbolic_convert.py", line 988, in step
self.dispatch_table[inst.opcode](self, inst)
File ".../torch/_dynamo/symbolic_convert.py", line 1856, in LOAD_ATTR
self._load_attr(inst)
File ".../torch/_dynamo/symbolic_convert.py", line 1846, in _load_attr
result = BuiltinVariable(getattr).call_function(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".../torch/_dynamo/variables/builtin.py", line 1070, in call_function
return handler(tx, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File ".../torch/_dynamo/variables/builtin.py", line 907, in builtin_dispatch
rv = fn(tx, args, kwargs)
^^^^^^^^^^^^^^^^^^^^
File ".../torch/_dynamo/variables/builtin.py", line 827, in call_self_handler
result = self_handler(tx, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".../torch/_dynamo/variables/builtin.py", line 1771, in call_getattr
return obj.var_getattr(tx, name)
^^^^^^^^^^^^^^^^^^^^^^^^^
File ".../torch/_dynamo/variables/tensor.py", line 463, in var_getattr
result = self.dynamic_getattr(tx, name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".../torch/_dynamo/variables/tensor.py", line 256, in dynamic_getattr
return VariableTracker.build(tx, example_value)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".../torch/_dynamo/variables/base.py", line 477, in build
return builder.SourcelessBuilder.create(tx, value)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".../torch/_dynamo/variables/builder.py", line 3012, in create
unimplemented(
File ".../torch/_dynamo/exc.py", line 380, in unimplemented
raise Unsupported(msg, case_name=case_name)
torch._dynamo.exc.Unsupported: Unexpected type in sourceless builder torch.SymInt
from user code:
File "repro.py", line 7, in f
torch.randn_like(values), t.offsets(), max_seqlen=t._maybe_max_seqlen
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
```
cc @cpuhrsch @bhosmer @drisspg @soulitzer @davidberard98 @YuqingJ @ezyang @penguinwu @bobrenjc93 @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,846,492,981
|
[StaticRuntime] Support a new pattern for ClipRangesToGatherToOffsets
|
coufon
|
closed
|
[
"oncall: jit",
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: jit"
] | 11
|
CONTRIBUTOR
|
Summary:
Support the following new pattern for ClipRangesToGatherToOffsets:
Before optimization:
```
%18267 : Tensor, %18268 : Tensor = fb::clip_ranges_gather(%int_77.1, %getitem_2484.1, %493)
%getattr_368.1 : int = prim::dtype(%18267)
%to_443.1 : Tensor = aten::to(%18268, %getattr_368.1, %self._maybe_compute_kjt_to_jt_dict.is_weighted, %self._maybe_compute_kjt_to_jt_dict.is_weighted)
%lengths_to_offsets_490.1 : Tensor = fb::lengths_to_offsets(%to_443.1, %8)
```
After optimization:
```
%18297 : int = prim::dtype(%int_77.1)
%18298 : Tensor, %18299 : Tensor = fb::clip_ranges_gather_to_offsets(%int_77.1, %getitem_2484.1, %493, %8, %18297)
```
Reviewed By: garroud
Differential Revision: D69373835
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| true
|
2,846,434,409
|
pytest test/dynamo fails
|
zou3519
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamo"
] | 0
|
CONTRIBUTOR
|
probably not a good thing
<img width="663" alt="Image" src="https://github.com/user-attachments/assets/f2a657f2-aa70-4734-9472-c6102402cf80" />
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
| true
|