| id (int64) | title (string) | user (string) | state (string) | labels (list) | comments (int64) | author_association (string) | body (string) | is_title (bool) |
|---|---|---|---|---|---|---|---|---|
2,854,745,573
|
Iterate over dense dim first in split reduction reindexing
|
eellison
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 19
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147229
Fix for https://github.com/pytorch/pytorch/issues/144431.
Improves perf from 0.29963893827160504 -> 0.0396331632970453.
In split reductions, we view an input tensor as a single dimension, then reduce over it. When we are reducing over a tensor which has a dimension other than the last dimension as the dense dimension, we should iterate over the dense dimension first in our re-indexing.
This PR also gives evidence for the general need for reduction tiling, e.g. for cooperative-reduction handling of this case.
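As a rough illustration of the access-order point above (a sketch of the flattening idea only, not this PR's re-indexing code; the example tensor is hypothetical):
```python
# Illustrative sketch: when the dense (stride-1) dimension is not the last one,
# iterating it in the inner loop keeps the flattened reads contiguous in memory.
import torch

x = torch.arange(12).reshape(4, 3).t()   # shape (3, 4), strides (1, 3): dim 0 is dense

# Last-dim-innermost order: storage offset i*1 + j*3, so reads jump by 3.
last_dim_inner = [x[i, j].item() for i in range(3) for j in range(4)]

# Dense-dim-innermost order: storage offsets 0, 1, 2, ... -- fully contiguous.
dense_dim_inner = [x[i, j].item() for j in range(4) for i in range(3)]

print(last_dim_inner)    # [0, 3, 6, 9, 1, 4, ...]
print(dense_dim_inner)   # [0, 1, 2, 3, 4, 5, ...]
```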
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,854,732,605
|
Don't use '-e' when installing Triton
|
jayfurmanek
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor",
"ciflow/inductor-rocm"
] | 11
|
CONTRIBUTOR
|
Currently the install_triton.sh script uses "pip install -e ." to install Triton.
Using -e is sometimes appropriate for development work but is less appropriate for delivery.
To make matters worse, the behavior of -e seems to vary depending on the version of pip involved.
This PR removes the -e and installs Triton normally.
| true
|
2,854,685,391
|
[dynamo] fix error message when logging graph that contains hops
|
ydwu4
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147227
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,854,683,292
|
Unable to use XPU device on PyTorch 2.6
|
iori2333
|
closed
|
[
"triaged",
"module: regression",
"module: xpu"
] | 13
|
NONE
|
### 🐛 Describe the bug
After installing PyTorch 2.6.0-XPU according to [documentation](https://pytorch.org/docs/stable/notes/get_start_xpu.html#Binaries), PyTorch could not detect any XPU devices:
```
>>> import torch
>>> torch.__version__
'2.6.0+xpu'
>>> torch.xpu.is_available()
/home/iori/.conda/envs/xpu/lib/python3.10/site-packages/torch/xpu/__init__.py:60: UserWarning: Failed to initialize XPU devices. The driver may not be installed, installed incorrectly, or incompatible with the current setup. Please refer to the guideline (https://github.com/pytorch/pytorch?tab=readme-ov-file#intel-gpu-support) for proper installation and configuration. (Triggered internally at /build/pytorch/c10/xpu/XPUFunctions.cpp:54.)
return torch._C._xpu_getDeviceCount()
False
```
It works fine after downgrading PyTorch to 2.5.1:
```
>>> import torch
>>> torch.__version__
'2.5.1+xpu'
>>> torch.xpu.is_available()
True
```
### Versions
Collecting environment information...
PyTorch version: 2.6.0+xpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.2 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: Could not collect
CMake version: version 3.28.3
Libc version: glibc-2.39
Python version: 3.10.16 (main, Dec 11 2024, 16:24:50) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.39
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Vendor ID: AuthenticAMD
Model name: AMD Ryzen 9 7940HX with Radeon Graphics
CPU family: 25
Model: 97
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 1
Stepping: 2
BogoMIPS: 4790.83
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl tsc_reliable nonstop_tsc cpuid extd_apicid pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx512_bf16 clzero xsaveerptr arat avx512vbmi umip avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid fsrm
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 512 KiB (16 instances)
L1i cache: 512 KiB (16 instances)
L2 cache: 16 MiB (16 instances)
L3 cache: 32 MiB (1 instance)
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; safe RET
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; IBRS_FW; STIBP conditional; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.1.2
[pip3] pytorch-triton-xpu==3.2.0
[pip3] torch==2.6.0+xpu
[conda] numpy 2.1.2 pypi_0 pypi
[conda] pytorch-triton-xpu 3.2.0 pypi_0 pypi
[conda] torch 2.6.0+xpu pypi_0 pypi
cc @gujinghui @EikanWang @fengyuan14 @guangyey
| true
|
2,854,675,581
|
cpp_wrapper: Fix even more tests
|
benjaminglass1
|
closed
|
[
"open source",
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ciflow/rocm",
"ci-no-td",
"ciflow/rocm-mi300"
] | 20
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150673
* __->__ #147225
* #150672
* #150671
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,854,673,561
|
[Cutlass] Restore search space for swizzle
|
mlazos
|
closed
|
[
"Merged",
"ciflow/trunk",
"module: inductor",
"ciflow/inductor",
"release notes: inductor"
] | 9
|
CONTRIBUTOR
|
This restores the previous search space. Since swizzle is now a runtime parameter, there shouldn't be extra compile-time overhead from searching it now.
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147224
* #147223
* #147222
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,854,673,496
|
[Cutlass] Add support for runtime param choices, starting with swizzle
|
mlazos
|
closed
|
[
"Merged",
"ciflow/trunk",
"module: inductor",
"ciflow/inductor",
"release notes: inductor"
] | 2
|
CONTRIBUTOR
|
This PR adds support for swizzle as a runtime parameter choice. Future runtime parameter choices can be added to the [get_runtime_arg_info](https://github.com/pytorch/pytorch/blob/2d40f9fb525350ac55486714e5620548f53b2958/torch/_inductor/codegen/cuda/cuda_template.py#L282) list method and then possible choices can be [looped over similarly to swizzle](https://github.com/pytorch/pytorch/blob/933f921b366f4c926ae1e653e81066e541f7d11b/torch/_inductor/codegen/cuda/gemm_template.py#L532). For precompile, we now filter choices by hash to only compile each distinct kernel source once.
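As an illustration of the de-duplication idea (a minimal sketch only; the dict layout and helper name below are assumptions, not Inductor's actual data structures):
```python
import hashlib

def dedup_sources_for_precompile(choices):
    # Compile each distinct kernel source once, even if several runtime-parameter
    # choices (e.g. swizzle values) share the same source.
    seen, to_compile = set(), []
    for choice in choices:
        key = hashlib.sha256(choice["source"].encode()).hexdigest()
        if key not in seen:
            seen.add(key)
            to_compile.append(choice)
    return to_compile

choices = [
    {"source": "gemm_kernel_src", "swizzle": 1},
    {"source": "gemm_kernel_src", "swizzle": 2},   # same source, different swizzle
    {"source": "other_kernel_src", "swizzle": 1},
]
print(len(dedup_sources_for_precompile(choices)))  # 2 distinct sources to compile
```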
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #147224
* __->__ #147223
* #147222
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,854,673,436
|
[Inductor] Add autotuning artifact logging
|
mlazos
|
closed
|
[
"Merged",
"ciflow/trunk",
"module: inductor",
"module: dynamo",
"ciflow/inductor",
"release notes: inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #147224
* #147223
* __->__ #147222
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,854,672,872
|
[ROCm] Update inductor-perf-test-nightly-rocm.yml to use the correct labels & frequency
|
amdfaa
|
closed
|
[
"module: rocm",
"open source",
"Merged",
"topic: not user facing"
] | 4
|
CONTRIBUTOR
|
This workflow takes around 75-80 hours on ROCm, so we are scaling down the frequency to once per week until we get more CI capacity.
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,854,672,682
|
Build TorchVision with USE_SYSTEM_NVTX=0 Flag Would Encounter Failure Due to the use of PROJECT_SRC_DIR
|
nWEIdia
|
open
|
[
"module: cpp-extensions",
"module: cuda",
"triaged"
] | 2
|
COLLABORATOR
|
### 🐛 Describe the bug
During internal testing, we encountered a failure related to the following piece of code:
https://github.com/pytorch/pytorch/blob/6f035d8462e43b1c678e5f334d52d9df0e00e6bf/cmake/public/cuda.cmake#L176
We were trying to build torchvision using the following command, but we had to use -DUSE_SYSTEM_NVTX=0 because we could not figure out how to build with -DUSE_SYSTEM_NVTX=1 (update: this part has been resolved, so the issue below may no longer be needed; posting just as an FYI):
`` cmake -Bbuild -H. -GNinja -DWITH_CUDA=1 -DUSE_SYSTEM_NVTX=0 -DCMAKE_PREFIX_PATH=`python -c 'import torch;print(torch.utils.cmake_prefix_path)'` ``
The failure was that torchvision could not find nvtx3, so it tried to use nvtx2, but it could not find the torch::nvtoolext namespace, since by default torch had detected nvtx3 and used torch::nvtx3.
The workaround was to change the PROJECT_SRC_DIR to the pytorch installation location.
Does torchvision/torch support this flag? It seems to be a known limitation.
cc @malfet @zou3519 @xmfan @ptrblck @msaroufim @eqy @atalman @tinglvv @NicolasHug
### Versions
nightly
| true
|
2,854,611,570
|
Allow mark_dynamic to mark parameters as dynamic instead of silence failing.
|
laithsakka
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamic shapes"
] | 0
|
CONTRIBUTOR
|
One caveat is that parameters are forced to be static even when mark_dynamic is used on them, unless a [global flag](https://www.internalfb.com/code/fbsource/[8e8caf410d62dea512c2fe20b7008d1e72543cbf]/fbcode/caffe2/torch/_dynamo/config.py?lines=122) is switched. Switching that flag enables automatic dynamic shapes globally on params by default, which is not always desired. This should be easy to fix; I will work on landing a fix for that soon.
Will add a repro once I start working on it.
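A minimal sketch of the behavior described above, assuming a toy module; the commented-out config attribute is my guess at the linked global flag, not a confirmed name:
```python
import torch

class M(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.w = torch.nn.Parameter(torch.randn(16, 8))
    def forward(self, x):
        return x @ self.w

m = M()
# Per the issue, marking a parameter dynamic is currently ignored (it stays static)...
torch._dynamo.mark_dynamic(m.w, 0)
# ...unless a global flag is flipped, which makes *all* params automatically dynamic
# (assumed attribute name, see the linked config):
# torch._dynamo.config.force_parameter_static_shapes = False
out = torch.compile(m)(torch.randn(4, 16))
```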
cc @chauhang @penguinwu @ezyang @bobrenjc93
| true
|
2,854,610,156
|
Periodic Activations Module
|
GulkoA
|
open
|
[
"triaged",
"open source",
"release notes: nn",
"topic: improvements"
] | 6
|
NONE
|
Fixes #146708
| true
|
2,854,602,335
|
[dynamo][mappingproxy][inspect] Support existing types.MappingProxyType
|
anijain2305
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147217
Fixes https://github.com/pytorch/pytorch/issues/147162
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,854,549,638
|
[sigmoid] Test OSS model runner with test_export.py
|
zhxchen17
|
closed
|
[
"fb-exported",
"release notes: export"
] | 5
|
CONTRIBUTOR
|
Summary: There are ~260 tests covering the corner cases of export in test_export.py; utilize them to test sigmoid in the OSS setting.
Test Plan: buck test mode/opt caffe2/test:test_export -- -r _sigmoid
Reviewed By: SherlockNoMad
Differential Revision: D69060784
| true
|
2,854,517,617
|
cpp_wrapper: Fixup output code indentation
|
benjaminglass1
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 9
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #147225
* #146706
* #147403
* #146991
* __->__ #147215
* #146424
* #146109
Closes #142165.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,854,482,232
|
DISABLED test_triton_kernel_multiple_out (__main__.AutogradFunctionTests)
|
pytorch-bot[bot]
|
closed
|
[
"module: rocm",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 3
|
NONE
|
Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_triton_kernel_multiple_out&suite=AutogradFunctionTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/37209672197).
Over the past 3 hours, it has been determined flaky in 6 workflow(s) with 6 failures and 6 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_triton_kernel_multiple_out`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/dynamo/test_autograd_function.py", line 1537, in test_triton_kernel_multiple_out
z, _ = f(x, y)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 585, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 1392, in __call__
return self._torchdynamo_orig_callable(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 584, in __call__
return _compile(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 1020, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 745, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 779, in _compile_inner
out_code = transform_code_object(code, transform)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/bytecode_transformation.py", line 1420, in transform_code_object
transformations(instructions, code_options)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 255, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 699, in transform
tracer.run()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3018, in run
super().run()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1152, in run
while self.step():
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1062, in step
self.dispatch_table[inst.opcode](self, inst)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 748, in wrapper
return inner_fn(self, inst)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1797, in CALL_FUNCTION
self.call_function(fn, args, {})
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 986, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/variables/misc.py", line 807, in call_function
return self.obj.call_method(tx, self.name, args, kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/variables/misc.py", line 535, in call_method
return wrap_fx_proxy(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/variables/builder.py", line 2220, in wrap_fx_proxy
return wrap_fx_proxy_cls(target_cls=TensorVariable, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/variables/builder.py", line 2286, in wrap_fx_proxy_cls
return _wrap_fx_proxy(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/variables/builder.py", line 2382, in _wrap_fx_proxy
example_value = get_fake_value(proxy.node, tx, allow_non_graph_fake=True)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 3112, in get_fake_value
raise TorchRuntimeError(str(e)).with_traceback(e.__traceback__) from None
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 3047, in get_fake_value
ret_val = wrap_fake_exception(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 2580, in wrap_fake_exception
return fn()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 3048, in <lambda>
lambda: run_node(tx.output, node, args, kwargs, nnmodule)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 3188, in run_node
raise RuntimeError(make_error_message(e)).with_traceback(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 3164, in run_node
return node.target(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/variables/misc.py", line 392, in trampoline_autograd_apply
return fn_cls.apply(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/function.py", line 575, in apply
return super().apply(*args, **kwargs) # type: ignore[misc]
File "/var/lib/jenkins/pytorch/test/dynamo/test_autograd_function.py", line 1520, in forward
add_kernel[grid](x, y, output, n_elements, BLOCK_SIZE=16)
File "/var/lib/jenkins/triton/python/triton/runtime/jit.py", line 330, in <lambda>
return lambda *args, **kwargs: self.run(grid=grid, warmup=False, *args, **kwargs)
File "/var/lib/jenkins/triton/python/triton/runtime/jit.py", line 580, in run
bound_args, sig_and_spec, constexpr_vals, non_constexpr_vals, excess_kwargs = self.binder(*args, **kwargs)
File "<string>", line 2, in dynamic_func
File "/var/lib/jenkins/triton/python/triton/backends/amd/compiler.py", line 168, in compute_spec_key
return HIPAttrsDescriptor.get_property_key(arg, align)
File "/var/lib/jenkins/triton/python/triton/backends/amd/compiler.py", line 111, in get_property_key
generic_key = AttrsDescriptor.get_property_key(val, align)
File "/var/lib/jenkins/triton/python/triton/backends/compiler.py", line 207, in get_property_key
if align and AttrsDescriptor.is_divisible_by_16(val):
File "/var/lib/jenkins/triton/python/triton/backends/compiler.py", line 193, in is_divisible_by_16
return x.data_ptr() % 16 == 0
torch._dynamo.exc.TorchRuntimeError: Failed running call_function <function produce_trampoline_autograd_apply.<locals>.trampoline_autograd_apply at 0x7f40c7f5df30>(*(FakeTensor(..., device='cuda:0', size=(10,), requires_grad=True), FakeTensor(..., device='cuda:0', size=(10,), requires_grad=True)), **{}):
Cannot access data pointer of Tensor (e.g. FakeTensor, FunctionalTensor). If you're using torch.compile/export/fx, it is likely that we are erroneously tracing into a custom kernel. To fix this, please wrap the custom kernel into an opaque custom op. Please see the following for details: https://pytorch.org/tutorials/advanced/custom_ops_landing_page.html
from user code:
File "/var/lib/jenkins/pytorch/test/dynamo/test_autograd_function.py", line 1532, in f
z = Add.apply(x, y)
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/dynamo/test_autograd_function.py AutogradFunctionTests.test_triton_kernel_multiple_out
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `dynamo/test_autograd_function.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @wdvr @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,854,465,772
|
export input dict mutation in strict mode
|
ydwu4
|
open
|
[
"oncall: pt2",
"oncall: export"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
The following code fails to export in strict mode due to input mutation:
```python
import torch
inps2 = (
torch.ones([])*1,
{
"a": torch.ones([])*10,
"b": torch.ones([])*10,
},
# pyre-ignore
torch.Tensor([True])
)
class Foo2(torch.nn.Module):
def __init__(self, shape=None, pose=None):
super().__init__()
self._preds_shape_regressor = shape
self._preds_pose_regressor = pose
def forward(self, x, y, z):
# clone y to avoid mutating y, which is not properly supported in strict mode yet
# mport torch.utils._pytree as pytree
# y = pytree.tree_map_only(torch.Tensor, lambda v: v.clone(), y)
y.update({"c": x.sin(), "d": x.cos()})
return y
ep = torch.export.export(Foo2(shape=True), inps2, strict=True)
print(ep)
print(ep.module()(*inps2))
```
With error msg:
```
File "/data/users/yidi/pytorch/test.py", line 26, in <module>
ep = torch.export.export(Foo2(shape=True), inps2, strict=True)
File "/data/users/yidi/pytorch/torch/export/__init__.py", line 370, in export
return _export(
File "/data/users/yidi/pytorch/torch/export/_trace.py", line 1048, in wrapper
raise e
File "/data/users/yidi/pytorch/torch/export/_trace.py", line 1021, in wrapper
ep = fn(*args, **kwargs)
File "/data/users/yidi/pytorch/torch/export/exported_program.py", line 121, in wrapper
return fn(*args, **kwargs)
File "/data/users/yidi/pytorch/torch/export/_trace.py", line 2084, in _export
ep = _export_for_training(
File "/data/users/yidi/pytorch/torch/export/_trace.py", line 1048, in wrapper
raise e
File "/data/users/yidi/pytorch/torch/export/_trace.py", line 1021, in wrapper
ep = fn(*args, **kwargs)
File "/data/users/yidi/pytorch/torch/export/exported_program.py", line 121, in wrapper
return fn(*args, **kwargs)
File "/data/users/yidi/pytorch/torch/export/_trace.py", line 1947, in _export_for_training
export_artifact = export_func( # type: ignore[operator]
File "/data/users/yidi/pytorch/torch/export/_trace.py", line 1316, in _strict_export_lower_to_aten_ir
) = _extract_fake_inputs(gm_torch_level, args, kwargs)
File "/data/users/yidi/pytorch/torch/export/_trace.py", line 252, in _extract_fake_inputs
fake_args = pytree.tree_map_only(torch.Tensor, lookup_fake, args)
File "/data/users/yidi/pytorch/torch/utils/_pytree.py", line 1274, in tree_map_only
return tree_map(map_only(type_or_types_or_pred)(func), tree, is_leaf=is_leaf)
File "/data/users/yidi/pytorch/torch/utils/_pytree.py", line 1097, in tree_map
return treespec.unflatten(map(func, *flat_args))
File "/data/users/yidi/pytorch/torch/utils/_pytree.py", line 943, in unflatten
leaves = list(leaves)
File "/data/users/yidi/pytorch/torch/utils/_pytree.py", line 1215, in wrapped
return func(x)
File "/data/users/yidi/pytorch/torch/export/_trace.py", line 247, in lookup_fake
val = fake_inps[count]
```
In this case, the dynamo run updates the input args with "c" and "d", thus causing the input to be mutated. We should either improve the error message or support input dict mutation somehow.
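For reference, a minimal sketch of the clone workaround already hinted at in the snippet's comments, building on the `Foo2`/`inps2` definitions above (the subclass name is hypothetical):
```python
import torch
import torch.utils._pytree as pytree

class Foo2Cloned(Foo2):
    def forward(self, x, y, z):
        # Clone the dict's tensors into a fresh dict so the traced graph
        # never mutates its input.
        y = pytree.tree_map_only(torch.Tensor, lambda v: v.clone(), y)
        y.update({"c": x.sin(), "d": x.cos()})
        return y

ep = torch.export.export(Foo2Cloned(shape=True), inps2, strict=True)
```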
### Versions
on master
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo
| true
|
2,854,417,784
|
dict_tag optimization leads to wrong results with relational guards
|
isuruf
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamo"
] | 1
|
COLLABORATOR
|
### 🐛 Describe the bug
The `dict_tag` optimization checks whether a dictionary that we guard on has changed; if the tag is the same and all the values in it are immutable, no further processing is done on the dictionary. This avoids running a lot of guards on the dictionary and its values and makes the C++ guards faster.
However, this assumes that running a guard has no side effects. This is true in almost all cases except for RelationalGuards, which do keep state.
A RelationalGuard needs two values: it returns True on the first run, storing the first value in the guard, and only makes the real True/False decision on the last run. When a guard is not run because of the dict_tag optimization, this leads to false positives.
Eg:
```python
import torch
a = torch.randn(5).cuda()
d = {"a": a}
def fn(a, d):
return a is d["a"], torch.sum(a)
cfn = torch.compile(fn)
print(cfn(a, d)[0])
print(cfn(a.clone(), d)[0])
```
Above prints `True` in both cases because of the dict_tag optimization. (`a is d["a"]` is an object-aliasing guard, which is a relational guard.)
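A toy model of that statefulness (for illustration only, not dynamo's actual guard code):
```python
class ObjectAliasingGuard:
    """Relational guard: passes on the first call (recording the value) and only
    performs the real identity check on the second call."""
    def __init__(self):
        self._first = None
    def __call__(self, obj):
        if self._first is None:
            self._first = obj
            return True
        return self._first is obj

guard = ObjectAliasingGuard()
a, b = object(), object()
print(guard(a))  # True -- just records `a`
print(guard(b))  # False -- the real check; if this call is skipped by the
                 # dict_tag fast path, the stale True stands: a false positive
```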
In https://github.com/pytorch/pytorch/pull/139899, I disabled the dict_tag optimization when RelationalGuards are present, but this led to slowdowns in guard checking. Therefore it was reverted in https://github.com/pytorch/pytorch/pull/146232
What's the best way to move forward on this?
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames @anijain2305
### Versions
PyTorch version: 2.7.0a0+git6f69859
| true
|
2,854,350,225
|
Skip unsupported types by MPS in `test_torchinductor.py`
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #147266
* #147205
* __->__ #147211
- Skip unsupported dtypes in `test_split_cumsum` (and manually skip int64 for MacOS13)
- Adapt `test_cat` to use `torch.half` instead of `torch.double` on MPS
- Skip `test_adaptive_avg_pool1d_argmax` if avgpool is not implemented for all sizes
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,854,323,582
|
Consider relaxing the in-place mutation restriction for torch.cond and torch.while_loop
|
tingyangk
|
open
|
[
"triaged",
"oncall: pt2",
"module: higher order operators",
"module: pt2-dispatcher"
] | 2
|
NONE
|
### 🐛 Describe the bug
`torch.while_loop` can be very beneficial for use cases such as LLM decoding. However, given the current in-place mutation limitations of `torch.cond` and `torch.while_loop`, their real-world applicability is restricted.
Use the implementation of [EMMA](https://arxiv.org/pdf/2312.04515) for example:
- Ideally, we'd like to write code like in the following snippet, which mutates the "result" matrix every step, but unfortunately, we cannot due to the `Tensor.index_copy_()` call which induces in-place mutation.
```
def monotonic_alignment_while_index_copy(p):
bsz, tgt_len, src_len = p.size()
p_ext = p.roll(1, [-1]).unsqueeze(-2).expand(-1, -1, src_len, -1).triu(1)
T = (1 - p_ext).cumprod(-1).triu()
alpha = p[:, [0]] * T[:, 0]
i = torch.tensor(1, device="cuda")
result = torch.zeros((tgt_len - 1, bsz, src_len, src_len), device="cuda")
def cond_fn(i, alpha):
return i < tgt_len
def body_fn(i, alpha):
new_alpha = p.index_select(1, i) * torch.bmm(alpha, T.index_select(1, i).squeeze(1))
result.index_copy_(0, i - 1, new_alpha) # where in-place mutation occurs by Tensor.index_copy_()
return i + 1, new_alpha
_, _ = torch.while_loop(cond_fn, body_fn, (i, alpha))
return result.reshape(bsz, src_len * (tgt_len - 1), src_len)
```
- We can use `Tensor.clone()` as a workaround, but there will be too many unnecessary copies. The extra copies get even worse as the target length grows.
```
def monotonic_alignment_while_clone(p):
bsz, tgt_len, src_len = p.size()
p_ext = p.roll(1, [-1]).unsqueeze(-2).expand(-1, -1, src_len, -1).triu(1)
T = (1 - p_ext).cumprod(-1).triu()
alpha = p[:, [0]] * T[:, 0]
i = torch.tensor(1, device="cuda")
result = torch.zeros((tgt_len - 1, bsz, src_len, src_len), device="cuda")
def cond_fn(i, alpha, result):
return i < tgt_len
def body_fn(i, alpha, result):
new_alpha = p.index_select(1, i) * torch.bmm(alpha, T.index_select(1, i).squeeze(1))
new_result = result.clone().index_copy_(0, i - 1, new_alpha.unsqueeze(0)) # use Tensor.clone() as a work around
return i + 1, new_alpha, new_result
_, _, final_result = torch.while_loop(cond_fn, body_fn, (i, alpha, result))
return final_result.reshape(bsz, src_len * (tgt_len - 1), src_len)
```
In JAX, however, though we cannot do in-place mutation directly either, there's a chance that `jax.jit` can eliminate the extra copy if we use the array's `.at` property.
Quote from [JAX's docs](https://jax.readthedocs.io/en/latest/notebooks/Common_Gotchas_in_JAX.html#array-updates-x-at-idx-set-y):
> However, inside jit-compiled code, if the input value x of x.at[idx].set(y) is not reused, the compiler will optimize the array update to occur in-place.
It will be great if PyTorch can figure out a way to achieve similar optimizations!
@galv @ydwu4 @eellison @aakhundov
### Versions
```
Collecting environment information...
PyTorch version: 2.7.0a0+git4434376
Is debug build: True
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.31.4
Libc version: glibc-2.35
Python version: 3.10.16 (main, Dec 11 2024, 16:24:50) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-131-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.6.85
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA L40S
Nvidia driver version: 570.86.15
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: False
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 45 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 25
On-line CPU(s) list: 0-24
Vendor ID: AuthenticAMD
Model name: AMD EPYC 9454 48-Core Processor
CPU family: 25
Model: 17
Thread(s) per core: 1
Core(s) per socket: 1
Socket(s): 25
Stepping: 1
BogoMIPS: 5491.74
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl tsc_reliable nonstop_tsc cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext invpcid_single ibpb vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx512_bf16 clzero wbnoinvd arat avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid overflow_recov succor fsrm flush_l1d
Hypervisor vendor: VMware
Virtualization type: full
L1d cache: 800 KiB (25 instances)
L1i cache: 800 KiB (25 instances)
L2 cache: 25 MiB (25 instances)
L3 cache: 800 MiB (25 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-24
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; safe RET
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy==1.13.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.24.4
[pip3] nvidia-cublas-cu12==12.8.3.14
[pip3] nvidia-cuda-cupti-cu12==12.8.57
[pip3] nvidia-cuda-runtime-cu12==12.8.57
[pip3] nvidia-cudnn-cu12==9.7.0.66
[pip3] nvidia-cufft-cu12==11.3.3.41
[pip3] nvidia-cusolver-cu12==11.7.2.55
[pip3] nvidia-cusparse-cu12==12.5.7.53
[pip3] nvidia-nccl-cu12==2.25.1
[pip3] nvidia-nvjitlink-cu12==12.8.61
[pip3] onnx==1.17.0
[pip3] onnxscript==0.1.0.dev20240817
[pip3] optree==0.13.0
[pip3] pytorch-lightning==2.5.0.post0
[pip3] torch==2.7.0a0+git4434376
[pip3] torchmetrics==1.6.1
[pip3] triton==3.2.0
[conda] numpy 1.24.4 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.8.3.14 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.8.57 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.8.57 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.7.0.66 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.3.3.41 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.7.2.55 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.5.7.53 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.25.1 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.8.61 pypi_0 pypi
[conda] optree 0.13.0 pypi_0 pypi
[conda] pytorch-lightning 2.5.0.post0 pypi_0 pypi
[conda] torch 2.7.0a0+git4434376 dev_0 <develop>
[conda] torchmetrics 1.6.1 pypi_0 pypi
[conda] triton 3.2.0 pypi_0 pypi
```
cc @chauhang @penguinwu @zou3519 @ydwu4 @bdhirsh
| true
|
2,854,278,470
|
Passing `src_key_padding_mask` as `bool` vs `float` causes different outputs from `nn.TransformerEncoderLayer`
|
petercall
|
closed
|
[
"module: nn",
"triaged"
] | 3
|
NONE
|
### 🐛 Describe the bug
The module `nn.TransformerEncoderLayer` is outputting different values from the `forward` function based on the data type of the `src_key_padding_mask` argument. The documentation says that either boolean data type or float data type should be accepted, so the difference in output is puzzling.
```python
import torch
import torch.nn as nn
batch_size = 1
context_length = 4
embed_dim = 4
nheads=1
model = nn.TransformerEncoderLayer(embed_dim, nheads, dropout = 0, batch_first = True, norm_first = True)
input_tensor = torch.randn(batch_size, context_length, embed_dim)
padding_mask = torch.randint(low=0, high = 2, size = (batch_size, context_length))
float_output = model(input_tensor, src_key_padding_mask = padding_mask.to(torch.float32))
bool_output = model(input_tensor, src_key_padding_mask = padding_mask.to(torch.bool))
print(torch.allclose(bool_output, float_output, equal_nan = True, atol=1e-05))
```
```
output: False
```
I coded up a manual version of `nn.TransformerEncoderLayer` by hand, and mine matches the output when the mask is of dtype `bool`. However, I also noted that when I include causal masking, the `float` implementation is able to handle an entire row of the attention matrix equal to `-inf`, while the `bool` implementation (as well as my manual implementation) returns `nan`s for those rows. I believe the `float` implementation has had some fix applied to handle an entire row being set to `-inf`, but the `bool` implementation has not; the `bool` implementation needs to be updated to also handle this case. As to why they output different values, I think it could be related to that fix. Ultimately we don't want the model to return rows of `nan`s, so I favor the `float` implementation, but it should be fixed so that the output of `nn.TransformerEncoderLayer` is the same regardless of the dtype of `src_key_padding_mask`.
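For comparison, a hedged sketch of how a float key-padding mask is typically built to be equivalent to a bool one, i.e. additively with `-inf` at masked positions and `0` elsewhere, rather than passing raw 0/1 values (illustrative only, reusing the names from the snippet above):
```python
bool_mask = padding_mask.to(torch.bool)                 # True = masked position
additive_mask = torch.zeros(padding_mask.shape, dtype=torch.float32)
additive_mask = additive_mask.masked_fill(bool_mask, float("-inf"))
additive_output = model(input_tensor, src_key_padding_mask=additive_mask)
```
Comparing `additive_output` against the bool-mask output may help separate the additive-interpretation question from the all-masked-row handling discussed above.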
### Versions
Collecting environment information...
PyTorch version: 2.4.0
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.12.4 | packaged by Anaconda, Inc. | (main, Jun 18 2024, 15:12:24) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.8.0-52-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.7.64
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA RTX A5000
Nvidia driver version: 550.120
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 36
On-line CPU(s) list: 0-35
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) W-2295 CPU @ 3.00GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 18
Socket(s): 1
Stepping: 7
CPU max MHz: 4800.0000
CPU min MHz: 1200.0000
BogoMIPS: 6000.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req vnmi avx512_vnni md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 576 KiB (18 instances)
L1i cache: 576 KiB (18 instances)
L2 cache: 18 MiB (18 instances)
L3 cache: 24.8 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-35
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] torch==2.4.0
[pip3] torchaudio==2.4.0
[pip3] torchvision==0.19.0
[pip3] triton==3.0.0
[conda] blas 1.0 mkl
[conda] cuda-cudart 12.4.127 0 nvidia
[conda] cuda-cupti 12.4.127 0 nvidia
[conda] cuda-libraries 12.4.0 0 nvidia
[conda] cuda-nvrtc 12.4.127 0 nvidia
[conda] cuda-nvtx 12.4.127 0 nvidia
[conda] cuda-opencl 12.6.37 0 nvidia
[conda] cuda-runtime 12.4.0 0 nvidia
[conda] cudatoolkit-dev 11.7.0 h1de0b5d_6 conda-forge
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] libcublas 12.4.2.65 0 nvidia
[conda] libcufft 11.2.0.44 0 nvidia
[conda] libcurand 10.3.7.37 0 nvidia
[conda] libcusolver 11.6.0.99 0 nvidia
[conda] libcusparse 12.3.0.142 0 nvidia
[conda] libjpeg-turbo 2.0.0 h9bf148f_0 pytorch
[conda] libnvjitlink 12.4.99 0 nvidia
[conda] mkl 2023.1.0 h213fc3f_46344
[conda] mkl-service 2.4.0 py312h5eee18b_1
[conda] mkl_fft 1.3.8 py312h5eee18b_0
[conda] mkl_random 1.2.4 py312hdb19cb5_0
[conda] numpy 1.26.4 py312hc5e2394_0
[conda] numpy-base 1.26.4 py312h0da6c21_0
[conda] pytorch 2.4.0 py3.12_cuda12.4_cudnn9.1.0_0 pytorch
[conda] pytorch-cuda 12.4 hc786d27_6 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchaudio 2.4.0 py312_cu124 pytorch
[conda] torchtriton 3.0.0 py312 pytorch
[conda] torchvision 0.19.0 py312_cu124 pytorch
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki
| true
|
2,854,186,170
|
[CPU][Quantization] `torch.flip` on `torch.quint4x2` quantized tensor causes memory corruption (invalid free/malloc)
|
WLFJ
|
closed
|
[
"module: crash",
"oncall: quantization",
"bug",
"topic: fuzzer"
] | 9
|
NONE
|
### 🐛 Describe the bug
When executing the following test case on CPU, applying `torch.flip` to a quantized tensor can result in memory corruption errors. The issue is non-deterministic and can produce different errors across multiple runs.
example:
```python
import torch
def f(*args):
sym_5, sym_6, sym_7 = args
var_279 = torch.quantize_per_tensor(torch.randn((100,)), scale=sym_5, zero_point=sym_6, dtype=sym_7)
var_374 = torch.flip(var_279, dims=(0,))
return var_374
res = f(3., 10, torch.quint2x4)
print(res)
```
Running the script multiple times produces different memory corruption errors, such as:
1. free(): invalid size
2. malloc(): unsorted double linked list corrupted
3. munmap_chunk(): invalid pointer
### Versions
PyTorch 2.7.0.dev20250209+cu124
cc @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @jgong5 @Xia-Weiwen @leslie-fang-intel @msaroufim
| true
|
2,854,181,597
|
[torch] Make amdsmi cdll hook private
|
danzimm
|
closed
|
[
"module: cuda",
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: cuda"
] | 6
|
CONTRIBUTOR
|
Summary: https://github.com/pytorch/pytorch/actions/runs/13314282597/job/37186177974 yelled at me for landing a seemingly public API that's not exported. It's a private API, so let's prepend `_` to make that clear.
Test Plan: CI
Differential Revision: D69665234
cc @ptrblck @msaroufim @eqy
| true
|
2,854,127,295
|
[Inductor] SIGILL instead of `ZeroDivisionError` in `torch.remainder` when using `@torch.compile` (Nightly Regression)
|
WLFJ
|
open
|
[
"triaged",
"oncall: pt2",
"module: inductor",
"module: empty tensor",
"topic: fuzzer"
] | 0
|
NONE
|
### 🐛 Describe the bug
# Issue Description:
When executing the following test case with `@torch.compile` using Inductor on the PyTorch Nightly build, the process crashes with a SIGILL signal instead of raising the expected `ZeroDivisionError`.
```python
import torch
@torch.compile
def f(*args):
sym_0, sym_1, sym_2, sym_3 = args
var_808 = torch.eye(n=sym_0, m=sym_1)
var_380 = torch.less_equal(var_808, other=sym_2)
var_458 = torch.ops.aten.resize(self=var_380, size=sym_3)
return torch.remainder(self=1, other=var_458)
res = f(1024, 0, 1, (3300,))
print(res)
```
# Expected Behavior:
Without Inductor, the script raises the expected `ZeroDivisionError`:
```
Traceback (most recent call last):
File "test.py", line 11, in <module>
res = f(1024, 0, 1, (3300,))
^^^^^^^^^^^^^^^^^^^^^^
File "test.py", line 9, in f
return torch.remainder(self=1, other=var_458)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: ZeroDivisionError
```
When using Inductor in **PyTorch 2.4.0+cu121**, the expected ZeroDivisionError is also correctly raised inside Inductor:
```
torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised:
RuntimeError: ZeroDivisionError
While executing %remainder : [num_users=1] = call_function[target=torch.ops.aten.remainder.Scalar_Tensor](args = (1, %resize), kwargs = {})
Original traceback:
File "test.py", line 10, in f
return torch.remainder(self=1, other=var_458)
```
# Actual Behavior (Nightly Build):
Instead of raising `ZeroDivisionError`, the process crashes with a SIGILL signal when using Inductor on the `2.7.0.dev20250209+cu124` build.
### Versions
PyTorch 2.7.0.dev20250209+cu124
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @aakhundov
| true
|
2,854,120,847
|
[MPSInductor] Adjust check_bounds
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #147266
* __->__ #147205
* #147211
This makes the upper bound inclusive, which fixes `test_vectorized_ops_masked` and results in the following code:
```python
mps_lib_0 = compile_mps_shader("""
#include <c10/metal/random.h>
#include <c10/metal/special_math.h>
#include <c10/metal/utils.h>
kernel void generated_kernel(
device float* out_ptr0,
constant float* in_ptr0,
uint xindex [[thread_position_in_grid]]
) {
int x0 = (xindex) % (64);
int x1 = (xindex) / (64);
auto tmp5 = in_ptr0[x0 + 63*x1];
int x2 = xindex;
auto tmp0 = x0;
auto tmp1 = static_cast<long>(tmp0);
auto tmp2 = 63;
auto tmp3 = tmp1 < tmp2;
if (x0 > 63) return;
auto tmp6 = tmp3 ? tmp5 : 7;
out_ptr0[x2] = static_cast<float>(tmp6);
}
""")
```
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,854,053,905
|
Fix the AOTI compile failure with ARM CPU for Meta internal
|
hl475
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
Summary: Fix the AOTI compile failure with ARM CPU for Meta internal
Differential Revision: D69642211
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,854,011,968
|
Fix rms_norm in fp16/bf16
|
riccardofelluga
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 22
|
CONTRIBUTOR
|
Fixes #134106. This PR moves the `upcasted_result` down-casting to after all computation is done.
Since the multiplication with the weight_opt input is not done in half precision, the current code path does the following: fp16 -> fp32 -> fp16 -> fp32 -> fp16. What we want, though, is to avoid the intermediate down-casting, so this PR proposes: fp16 -> fp32 -> fp16. This results in better accuracy as it avoids truncation.
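A minimal reference sketch of the intended cast ordering (illustrative only; the function name and eps value are assumptions, not the kernel's actual code):
```python
import torch

def rms_norm_ref(x_f16, weight_f16, eps=1e-6):
    x = x_f16.float()                                    # fp16 -> fp32
    inv_rms = torch.rsqrt(x.pow(2).mean(-1, keepdim=True) + eps)
    y = x * inv_rms * weight_f16.float()                 # keep the weight multiply in fp32
    return y.to(x_f16.dtype)                             # single fp32 -> fp16 cast at the end

x = torch.randn(2, 8, dtype=torch.float16)
w = torch.ones(8, dtype=torch.float16)
print(rms_norm_ref(x, w).dtype)                          # torch.float16
```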
| true
|
2,853,870,900
|
[inductor][refactor] Move _compile_file to cpp_builder
|
desertfire
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 4
|
CONTRIBUTOR
|
Summary: To further consolidate cpp build logic into cpp_builder
Test Plan: CI
Differential Revision: D69595327
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,853,866,494
|
torch._dynamo.exc.Unsupported: 'skip function getfullargspec
|
bhack
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamo",
"dynamo-triage-jan2025"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Exporting a model that is using popular mmengine crashed.
That root cause seems this line in the engine:
https://github.com/open-mmlab/mmengine/blob/main/mmengine/utils/misc.py#L362
### Error logs
Relevant part of the error
```python
with `torch._dynamo.exc.Unsupported: 'skip function getfullargspec in file /opt/conda/lib/python3.11/inspect.py'`.
File "/opt/conda/lib/python3.11/site-packages/mmengine/utils/misc.py", line 362, in new_func
args_info = getfullargspec(old_func)
```
### Versions
nightly
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
| true
|
2,853,636,359
|
DISABLED test_max_autotune (__main__.TestFlexAttention)
|
pytorch-bot[bot]
|
closed
|
[
"module: rocm",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 4
|
NONE
|
Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_max_autotune&suite=TestFlexAttention&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/37196534734).
Over the past 3 hours, it has been determined flaky in 4 workflow(s) with 4 failures and 4 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_max_autotune`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/inductor/test_flex_attention.py", line 2035, in test_max_autotune
self.run_test_with_paged_attention(score_mod)
File "/var/lib/jenkins/pytorch/test/inductor/test_flex_attention.py", line 711, in run_test_with_paged_attention
self._check_out(
File "/var/lib/jenkins/pytorch/test/inductor/test_flex_attention.py", line 371, in _check_out
self._check_equal(golden_out, ref_out, compiled_out, fudge_factor, "Out")
File "/var/lib/jenkins/pytorch/test/inductor/test_flex_attention.py", line 344, in _check_equal
self.assertTrue(False, "Output/Grad with NaN")
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 687, in assertTrue
raise self.failureException(msg)
AssertionError: False is not true : Output/Grad with NaN
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_flex_attention.py TestFlexAttention.test_max_autotune
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_flex_attention.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @wdvr @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,853,213,795
|
[Inductor] Fix Inplace Buffer inner name conflict
|
leslie-fang-intel
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147199
**Summary**
Fix issue: https://github.com/pytorch/pytorch/issues/146975. When creating the `InplacedBuffer` inner name, we only count the number of unique `InplacedBuffer` or `RemovedArg` values, so the name may conflict, as reported in this issue:
```
---- make inplace create, input_name is: buf22; output_name is: buf27; buf.inner_name is: in_out_ptr2
dict_values([
InplacedBuffer(inner_name='in_out_ptr0', other_names=['buf6', 'buf11']),
InplacedBuffer(inner_name='in_out_ptr0', other_names=['buf6', 'buf11']),
InplacedBuffer(inner_name='in_out_ptr1', other_names=['buf24', 'buf26']),
InplacedBuffer(inner_name='in_out_ptr1', other_names=['buf24', 'buf26'])])
---- make inplace create, input_name is: buf0; output_name is: buf3; buf.inner_name is: in_out_ptr2
dict_values([
<torch._inductor.codegen.common.RemovedArg object at 0x7fbf75516350>,
<torch._inductor.codegen.common.RemovedArg object at 0x7fbf75516350>,
<torch._inductor.codegen.common.RemovedArg object at 0x7fbf75516350>,
<torch._inductor.codegen.common.RemovedArg object at 0x7fbf75516350>,
InplacedBuffer(inner_name='in_out_ptr2', other_names=['buf22', 'buf27', 'buf31', 'buf33']),
InplacedBuffer(inner_name='in_out_ptr2', other_names=['buf22', 'buf27', 'buf31', 'buf33'])
<torch._inductor.codegen.common.RemovedArg object at 0x7fbf75516350>,
InplacedBuffer(inner_name='in_out_ptr2', other_names=['buf22', 'buf27', 'buf31', 'buf33']),
InplacedBuffer(inner_name='in_out_ptr2', other_names=['buf22', 'buf27', 'buf31', 'buf33'])
])
```
- The first time `in_out_ptr2` is created, there are 2 unique `InplacedBuffer`s
- The second time `in_out_ptr2` is created, there is 1 `RemovedArg` and 1 unique `InplacedBuffer`
They are 2 different `InplacedBuffer`s, but with the same name `in_out_ptr2`. In this PR, we fix this regression by counting the number of `RemovedArg`s.
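A toy reproduction of the counting scheme described above (a sketch, not Inductor's actual code; `REMOVED` stands in for the single shared `RemovedArg` instance):
```python
class InplacedBuffer:
    def __init__(self, inner_name, other_names):
        self.inner_name = inner_name
        self.other_names = other_names

REMOVED = object()  # single shared sentinel, like the one RemovedArg object in the log

def next_inner_name(values):
    # buggy scheme: the index is the number of *unique* values, where the shared
    # REMOVED sentinel counts as one value just like a live buffer
    return f"in_out_ptr{len({id(v) for v in values})}"

args = {}
a = InplacedBuffer("in_out_ptr0", ["buf6", "buf11"])
args["buf6"] = args["buf11"] = a
b = InplacedBuffer("in_out_ptr1", ["buf24", "buf26"])
args["buf24"] = args["buf26"] = b

# first creation: {a, b} -> 2 unique values -> "in_out_ptr2"
c = InplacedBuffer(next_inner_name(args.values()), ["buf22", "buf27"])
args["buf22"] = args["buf27"] = c

# the earlier buffers are later removed and replaced by the sentinel
for name in ("buf6", "buf11", "buf24", "buf26"):
    args[name] = REMOVED

# second creation: {REMOVED, c} -> 2 unique values -> "in_out_ptr2" again,
# colliding with c even though c is still live
d = InplacedBuffer(next_inner_name(args.values()), ["buf0", "buf3"])
print(c.inner_name, d.inner_name)  # in_out_ptr2 in_out_ptr2
```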
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,853,116,943
|
add PrivateUse1 backend in fsdp collectives
|
zqwenn
|
closed
|
[
"oncall: distributed",
"open source",
"release notes: distributed (fsdp)"
] | 2
|
CONTRIBUTOR
|
add PrivateUse1 backend in fsdp collectives
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,853,082,604
|
Unify all sympy versions to avoid conflicts within PyTorch
|
FFFrog
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 11
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147197
As the title states.
There are some tiny differences between 1.13.1 and 1.13.3:
1.13.1:
https://github.com/sympy/sympy/blob/2e489cf4b1438ae134ba98a44a80cc9add1306b0/sympy/core/numbers.py#L1591
1.13.3:
https://github.com/sympy/sympy/blob/b4ce69ad5d40e4e545614b6c76ca9b0be0b98f0b/sympy/core/numbers.py#L1591
**Previous PR:**
https://github.com/pytorch/pytorch/pull/143908
**ISSUE Related:**
https://github.com/pytorch/pytorch/issues/147144
| true
|
2,853,026,211
|
[Draft][Inductor][CPP] Adopt block sparse for FlexAttention CPU
|
jianan-gu
|
open
|
[
"open source",
"module: inductor"
] | 2
|
CONTRIBUTOR
|
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,852,986,127
|
update kineto submodule to include fix for windows build
|
briancoutinho
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 8
|
CONTRIBUTOR
|
Fixes an issue causing Windows builds to fail.
https://github.com/pytorch/kineto/pull/1039
| true
|
2,852,971,060
|
Add ppc64le wheel build support
|
sandeepgupta12
|
open
|
[
"triaged",
"open source",
"topic: not user facing"
] | 6
|
NONE
|
This PR adds support for building ppc64le wheels as part of the CI/CD pipeline. The goal is to enable ppc64le architecture compatibility for wheel builds, ensuring that TensorFlow/PyArrow (or any related package) can be distributed for Power architecture users.
**Changes Introduced**
✅ Enabled ppc64le architecture in CI/CD pipeline
✅ Added Dockerfile and build script to support ppc64le builds
✅ Modified .github\workflows\_linux-build.yml and added ppc64le.yml configuration to include ppc64le
✅ Added files to configure ephemeral self-hosted runner similar to s390x under .github\scripts\ppc64le-ci
**Motivation & Context**
Currently, ppc64le wheels are not built in the official release pipeline, requiring users to build them manually. By adding native support in CI, we ensure that Power users can install the package seamlessly via pip.
Additional Notes
- This is the first PR introducing ppc64le wheel builds; further optimizations can be made based on feedback.
- Open to suggestions on improving the build process, dependencies, or CI/CD efficiency.
- We can assist in setting up and maintaining the ephemeral self-hosted runner on an OSU VM if needed.
**Creation of OSU VM**: To facilitate further testing and CI integration, we request the creation of an OSU VM configured for PPC64LE. Below are the details where you can create the OSU VM-
URL- https://osuosl.org/services/powerdev/request_hosting/
IBM Advocate- [gerrit@us.ibm.com](mailto:gerrit@us.ibm.com)
**Details:**
The Open Source Lab (OSL) at Oregon State University (OSU), in partnership with IBM, provides access to IBM Power processor-based servers for developing and testing open source projects. The OSL offers following clusters:
OpenStack (non-GPU) Cluster:
• Architecture: Power little endian (LE) instances
• Virtualization: Kernel-based virtual machine (KVM)
• Access: Via Secure Shell (SSH) and/or through OpenStack's API and GUI interface
• Capabilities: Ideal for functional development and continuous integration (CI) work. It supports a managed Jenkins service hosted on the cluster or as a node incorporated into an external CI/CD pipeline.
**Additional Information:**
• We are prepared to provide any further details or assistance needed to support the PPC64LE architecture.
Please let us know if there are any specific requirements or steps needed to move forward with this request.
| true
|
2,852,942,436
|
[inductor][triton] Ignore block ptr advances for removed buffers
|
kundaMwiza
|
closed
|
[
"triaged",
"open source",
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ci-no-td"
] | 13
|
CONTRIBUTOR
|
Block ptr advancements should also be deferred, conditional on the associated buffer not being removed. For example, if `FusedSchedulerNode(op0-op1)` has a store in `SchedulerNode` `op0` that is read in `op1`, the store and the associated block ptr that would be created for `op0` in isolation are no longer needed.
Fixes #ISSUE_NUMBER
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,852,914,379
|
Add Torch Logs for ir_pre_fusion and ir_post_fusion
|
zeshengzong
|
closed
|
[
"open source",
"module: inductor"
] | 2
|
CONTRIBUTOR
|
Fixes #ISSUE_NUMBER
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,852,906,115
|
[Inductor] Add input value checking to randint meta function
|
DDEle
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 10
|
CONTRIBUTOR
|
Fixes #147070
This adds value checking for the range to the meta function, similar to the check in the CPU/CUDA aten op.
Test with
```
PYTORCH_TEST_WITH_DYNAMO=1 pytest test/test_tensor_creation_ops.py -k test_randint_inference
```
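As a rough illustration, the check added to the meta function has the following shape (a hedged sketch with a hypothetical helper name; the exact message and call site differ in the real kernel):
```python
import torch

def check_randint_range(low: int, high: int) -> None:
    # Mirror the eager CPU/CUDA kernels: the sampling interval [low, high) must be non-empty.
    torch._check(
        low < high,
        lambda: f"random_ expects 'from' to be less than 'to', but got from={low} >= to={high}",
    )

check_randint_range(0, 10)  # ok
# check_randint_range(5, 5) would raise, matching eager behavior
```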
| true
|
2,852,859,513
|
Wheel v1
|
sandeepgupta12
|
closed
|
[
"topic: not user facing"
] | 3
|
NONE
|
Fixes #ISSUE_NUMBER
| true
|
2,852,819,568
|
[StaticRuntime] Support a new pattern (aten::to with 5 inputs) for ClipRangesToGatherToOffsets
|
coufon
|
closed
|
[
"oncall: jit",
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: jit"
] | 7
|
CONTRIBUTOR
|
Summary:
Support the following new pattern for ClipRangesToGatherToOffsets:
Before optimization:
```
%11175 : Tensor, %11176 : Tensor = fb::clip_ranges_gather(%int_66.1, %getitem_1784.1, %347)
%getattr_256.1 : int = prim::dtype(%11175)
%to_298.1 : Tensor = aten::to(%11176, %getattr_256.1, %13, %13, %12)
%lengths_to_offsets_333.1 : Tensor = fb::lengths_to_offsets(%to_298.1, %8)
```
After optimization:
```
%11199 : int = prim::dtype(%int_66.1)
%11200 : Tensor, %11201 : Tensor = fb::clip_ranges_gather_to_offsets(%int_66.1, %getitem_1784.1, %347, %8, %11199)
```
It is similar to https://github.com/pytorch/pytorch/pull/146931, but aten::to has 5 inputs instead of 4.
Differential Revision: D69627793
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| true
|
2,852,802,071
|
Fix torch.mean out dtype check
|
FFFrog
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 10
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147188
**For CPU**:
Type promotion is supported for torch.mean
**For Meta**:
Not supported for torch.mean
ISSUE related:
https://github.com/pytorch/pytorch/issues/138399
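A hedged probe sketch of the mismatch (the dtype combination is an assumption chosen for illustration; see the linked issue for the exact failing case):
```python
import torch

def probe(device: str) -> None:
    # Ask for an out tensor with a promoted dtype and report whether the backend accepts it.
    x = torch.randn(4, dtype=torch.float16, device=device)
    out = torch.empty((), dtype=torch.float32, device=device)
    try:
        torch.mean(x, out=out)
        print(f"{device}: accepted promoted out dtype")
    except Exception as e:  # broad catch: this is only a behavior probe
        print(f"{device}: rejected -> {e}")

probe("cpu")
probe("meta")
```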
| true
|
2,852,775,600
|
[torch.export] How to export a model with kv cache
|
exeex
|
open
|
[
"oncall: pt2",
"oncall: export"
] | 6
|
NONE
|
### 🐛 Describe the bug
In an attention layer, the kv cache needs a variable "start_pos" provided from outside.
(may related to https://github.com/pytorch/pytorch/issues/146990)
Here is a simplified model for reproducing the issue:
```python
import torch
from torch import nn
class Cache(nn.Module):
def __init__(self, head_dim):
super().__init__()
max_token = 128
self.register_buffer("cache_k", torch.zeros(
(1, max_token, head_dim,)), persistent=False)
def forward(
self,
x: torch.Tensor,
start_pos: torch.Tensor
):
_, seqlen, _ = x.size()
end_pos = start_pos+seqlen
self.cache_k[:, start_pos:end_pos, :] = x
return self.cache_k[:, :end_pos, :]
if __name__ == "__main__":
from torch.export import Dim
with torch.no_grad():
# Prepare for input
start_pos = torch.scalar_tensor(8, dtype=torch.int32)
seqlen = 8
hidden_size = 32
h = torch.randn(1, seqlen, hidden_size)
# Prepare for model
model = Cache(hidden_size)
dynamic_shapes = {"x": {1: Dim.DYNAMIC},"start_pos": None}
torch.export.export(model, args=(h, start_pos), dynamic_shapes=dynamic_shapes)
```
```Error message
Exception has occurred: Unsupported (note: full exception trace is shown but execution is paused at: _run_module_as_main)
Dynamic slicing on data-dependent value is not supported
from user code:
File "/home/tim/nvpu_uno/nnc/tests/test_cache.py", line 18, in forward
self.cache_k[:, start_pos:end_pos, :] = x
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
File "/home/tim/miniconda3/envs/torch2.5/lib/python3.10/site-packages/torch/_dynamo/exc.py", line 317, in unimplemented
raise Unsupported(msg, case_name=case_name)
File "/home/tim/miniconda3/envs/torch2.5/lib/python3.10/site-packages/torch/_dynamo/variables/lists.py", line 923, in __init__
unimplemented("Dynamic slicing on data-dependent value is not supported")
File "/home/tim/miniconda3/envs/torch2.5/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1873, in BUILD_SLICE
self.push(SliceVariable(items))
File "/home/tim/miniconda3/envs/torch2.5/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 962, in step
self.dispatch_table[inst.opcode](self, inst)
File "/home/tim/miniconda3/envs/torch2.5/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1052, in run
while self.step():
File "/home/tim/miniconda3/envs/torch2.5/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2868, in run
super().run()
File "/home/tim/miniconda3/envs/torch2.5/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 662, in transform
tracer.run()
File "/home/tim/miniconda3/envs/torch2.5/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 231, in _fn
return fn(*args, **kwargs)
File "/home/tim/miniconda3/envs/torch2.5/lib/python3.10/site-packages/torch/_dynamo/bytecode_transformation.py", line 1361, in transform_code_object
transformations(instructions, code_options)
File "/home/tim/miniconda3/envs/torch2.5/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 750, in _compile_inner
out_code = transform_code_object(code, transform)
File "/home/tim/miniconda3/envs/torch2.5/lib/python3.10/site-packages/torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
File "/home/tim/miniconda3/envs/torch2.5/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 715, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/home/tim/miniconda3/envs/torch2.5/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 986, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/home/tim/miniconda3/envs/torch2.5/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 547, in __call__
return _compile(
File "/home/tim/miniconda3/envs/torch2.5/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 1380, in __call__
return self._torchdynamo_orig_callable(
File "/home/tim/miniconda3/envs/torch2.5/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
File "/home/tim/miniconda3/envs/torch2.5/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/tim/miniconda3/envs/torch2.5/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 574, in _fn
return fn(*args, **kwargs)
File "/home/tim/miniconda3/envs/torch2.5/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
File "/home/tim/miniconda3/envs/torch2.5/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/tim/miniconda3/envs/torch2.5/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 1569, in inner
result_traced = opt_f(*args, **kwargs)
File "/home/tim/miniconda3/envs/torch2.5/lib/python3.10/site-packages/torch/export/_trace.py", line 662, in _export_to_torch_ir
gm_torch_level, _ = torch._dynamo.export(
File "/home/tim/miniconda3/envs/torch2.5/lib/python3.10/site-packages/torch/export/_trace.py", line 1283, in _strict_export_lower_to_aten_ir
gm_torch_level = _export_to_torch_ir(
File "/home/tim/miniconda3/envs/torch2.5/lib/python3.10/site-packages/torch/export/_trace.py", line 1834, in _export_for_training
export_artifact = export_func( # type: ignore[operator]
File "/home/tim/miniconda3/envs/torch2.5/lib/python3.10/site-packages/torch/export/exported_program.py", line 128, in wrapper
return fn(*args, **kwargs)
File "/home/tim/miniconda3/envs/torch2.5/lib/python3.10/site-packages/torch/export/_trace.py", line 1035, in wrapper
raise e
File "/home/tim/miniconda3/envs/torch2.5/lib/python3.10/site-packages/torch/export/_trace.py", line 1035, in wrapper
raise e
File "/home/tim/miniconda3/envs/torch2.5/lib/python3.10/site-packages/torch/export/_trace.py", line 1970, in _export
return _export_for_training(
File "/home/tim/miniconda3/envs/torch2.5/lib/python3.10/site-packages/torch/export/exported_program.py", line 128, in wrapper
return fn(*args, **kwargs)
File "/home/tim/miniconda3/envs/torch2.5/lib/python3.10/site-packages/torch/export/_trace.py", line 1035, in wrapper
raise e
File "/home/tim/miniconda3/envs/torch2.5/lib/python3.10/site-packages/torch/export/_trace.py", line 1035, in wrapper
raise e
File "/home/tim/miniconda3/envs/torch2.5/lib/python3.10/site-packages/torch/export/__init__.py", line 368, in export
return _export(
File "/home/tim/nvpu_uno/nnc/tests/test_cache.py", line 32, in <module>
torch.export.export(model, args=(h, start_pos), dynamic_shapes=dynamic_shapes)
File "/home/tim/miniconda3/envs/torch2.5/lib/python3.10/runpy.py", line 86, in _run_code
exec(code, run_globals)
File "/home/tim/miniconda3/envs/torch2.5/lib/python3.10/runpy.py", line 196, in _run_module_as_main (Current frame)
return _run_code(code, main_globals, None,
torch._dynamo.exc.Unsupported: Dynamic slicing on data-dependent value is not supported
from user code:
File "/home/tim/nvpu_uno/nnc/tests/test_cache.py", line 18, in forward
self.cache_k[:, start_pos:end_pos, :] = x
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
```
### Versions
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.1 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: Could not collect
CMake version: version 3.28.3
Libc version: glibc-2.39
Python version: 3.10.16 (main, Dec 11 2024, 16:24:50) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.8.0-49-generic-x86_64-with-glibc2.39
Is CUDA available: True
CUDA runtime version: 12.0.140
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3060
Nvidia driver version: 535.183.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Vendor ID: AuthenticAMD
Model name: AMD Ryzen Threadripper PRO 3955WX 16-Cores
CPU family: 23
Model: 49
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 1
Stepping: 0
Frequency boost: enabled
CPU(s) scaling MHz: 79%
CPU max MHz: 4402.7339
CPU min MHz: 2200.0000
BogoMIPS: 7785.85
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sev sev_es
Virtualization: AMD-V
L1d cache: 512 KiB (16 instances)
L1i cache: 512 KiB (16 instances)
L2 cache: 8 MiB (16 instances)
L3 cache: 64 MiB (4 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-31
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.6.0
[pip3] torchaudio==2.6.0
[pip3] torchvision==0.21.0
[pip3] triton==3.2.0
[conda] numpy 1.26.4 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] torch 2.6.0 pypi_0 pypi
[conda] torchaudio 2.6.0 pypi_0 pypi
[conda] torchvision 0.21.0 pypi_0 pypi
[conda] triton 3.2.0 pypi_0 pypi
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4
| true
|
2,852,759,908
|
ROCm: Remove static specifier for allow_tf32 variable.
|
jagadish-amd
|
closed
|
[
"module: rocm",
"triaged",
"open source",
"Merged",
"topic: not user facing",
"ciflow/rocm"
] | 6
|
CONTRIBUTOR
|
Since the env variable HIPBLASLT_ALLOW_TF32 can change at runtime, remove the static specifier from the allow_tf32 variable so that it captures the current value of HIPBLASLT_ALLOW_TF32.
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,852,696,898
|
[cutlass backend] forward fix of #146877
|
henrylhtsang
|
closed
|
[
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147185
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,852,679,749
|
[MPS][BE] Migrate polar to use functor
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing",
"release notes: mps",
"ciflow/mps"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147184
* #147183
* #147182
| true
|
2,852,679,684
|
[MPS][BE] Add copysign integral flavors as functor
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing",
"release notes: mps",
"ciflow/mps"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #147184
* __->__ #147183
* #147182
| true
|
2,852,679,619
|
[BE][MPS] Infer results of functor
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing",
"release notes: mps",
"ciflow/mps"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #147184
* #147183
* __->__ #147182
Do not assume that a functor returns the same type as its arguments; instead, infer the result type dynamically using `decltype` and `::metal::declval`.
This is a no-op that prepares for the migration of `copysign` with integral arguments, which returns a float.
| true
|
2,852,662,973
|
Remove code for Python < 3.9
|
cyyever
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 9
|
COLLABORATOR
|
Fixes #ISSUE_NUMBER
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,852,657,421
|
[DTensor] `Partial(sum)` reductions are wrongly cached (?)
|
main-horse
|
open
|
[
"oncall: distributed",
"triaged",
"module: dtensor"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
First of all, a very simple motivating example:
```python
# OMP_NUM_THREADS=1 torchrun --nproc-per-node 2 what.py
import os
import torch
from torch.distributed.tensor import DTensor, Partial, init_device_mesh
# Create mesh
mesh = init_device_mesh('cuda', (int(os.environ.get("WORLD_SIZE", "1")),))
# Create random local tensor (different seed on each rank)
randn_local_tensor = torch.randn(4096, 4096, device='cuda')/64
# Create Partial(sum) DTensor from local tensors
dt = DTensor.from_local(randn_local_tensor, mesh, placements=[Partial()])
# Expected: -5*dt != 2*dt (because dt is just random)
assert not (-5*dt.full_tensor() == 2*dt.full_tensor()).all()
# Not expected: when dt is Partial, -5*dt == 2*dt ???
assert (-5*dt == 2*dt).all() # <-- What?
# Exit
torch.distributed.destroy_process_group()
```
In the above code, we
1. create a `Partial()` DTensor from different local randn tensors.
2. check that `-5*dt` and `2*dt` are not the same when computed on the replicated full tensor (`full_tensor()`)
3. learn that `-5*dt` and `2*dt` return the same result (???) when the `Partial()` `dt` is used.
If we print out the values involved, the issue becomes more clear:
```
dt=DTensor(local_tensor=tensor[4096, 4096] n=16777216 (64Mb) x∈[-0.083, 0.081] μ=-5.313e-07 σ=0.016 cuda:0, device_mesh=DeviceMesh('cuda', [0, 1]), placements=(Partial(sum),))
-5 * dt.full_tensor()=tensor[4096, 4096] n=16777216 (64Mb) x∈[-0.568, 0.607] μ=3.048e-05 σ=0.111 cuda:0
2 * dt.full_tensor()=tensor[4096, 4096] n=16777216 (64Mb) x∈[-0.243, 0.227] μ=-1.219e-05 σ=0.044 cuda:0
-5 * dt=DTensor(local_tensor=tensor[4096, 4096] n=16777216 (64Mb) x∈[-0.568, 0.607] μ=3.048e-05 σ=0.111 cuda:0, device_mesh=DeviceMesh('cuda', [0, 1]), placements=(Replicate(),))
2 * dt=DTensor(local_tensor=tensor[4096, 4096] n=16777216 (64Mb) x∈[-0.568, 0.607] μ=3.048e-05 σ=0.111 cuda:0, device_mesh=DeviceMesh('cuda', [0, 1]), placements=(Replicate(),))
```
Somehow, the result of `-5*dt` is cached and reused as the return value for `2*dt`...
The following also returns true:
```python
assert (-5*dt).to_local().data_ptr() == (2*dt).to_local().data_ptr()
```
I do not know how to debug what is happening further.
### Versions
```bash
Collecting environment information...
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.10.12 (main, Nov 6 2024, 20:22:13) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.8.0-49-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100 80GB HBM3
GPU 1: NVIDIA H100 80GB HBM3
GPU 2: NVIDIA H100 80GB HBM3
GPU 3: NVIDIA H100 80GB HBM3
GPU 4: NVIDIA H100 80GB HBM3
GPU 5: NVIDIA H100 80GB HBM3
GPU 6: NVIDIA H100 80GB HBM3
GPU 7: NVIDIA H100 80GB HBM3
Nvidia driver version: 550.127.05
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 208
On-line CPU(s) list: 0-207
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8480+
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 52
Socket(s): 2
Stepping: 8
BogoMIPS: 4000.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq vmx ssse3 fma cx16 pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx_vnni avx512_bf16 wbnoinvd arat vnmi avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b fsrm md_clear serialize tsxldtrk avx512_fp16 arch_capabilities
Virtualization: VT-x
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 6.5 MiB (208 instances)
L1i cache: 6.5 MiB (208 instances)
L2 cache: 416 MiB (104 instances)
L3 cache: 32 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-103
NUMA node1 CPU(s): 104-207
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Unknown: No mitigations
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] lovely-numpy==0.2.13
[pip3] numpy==2.2.2
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.23.4
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.6.0
[pip3] triton==3.2.0
[conda] Could not collect
```
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @tianyu-l @XilunWu
| true
|
2,852,644,010
|
[FSDP2] OOM when use integer `reshard_after_forward` that smaller than DP size
|
FindDefinition
|
open
|
[
"oncall: distributed",
"triaged",
"module: fsdp"
] | 4
|
NONE
|
### 🐛 Describe the bug
When we use an FSDP2 module for inference only with `reshard_after_forward` set, we found that `reshard_after_forward=True` or `reshard_after_forward=False` works fine, but with an integer `reshard_after_forward=4` and `world_size=8`, an OOM happens in the second inference step. `torch.cuda.memory_*` also reports incorrect memory stats during the second inference step.
Code:
```Python
import torch
from torch.distributed.device_mesh import init_device_mesh
from torch.nn import functional as F
import os
from torch.distributed.fsdp import (
fully_shard,
MixedPrecisionPolicy,
)
from torch.distributed.device_mesh import DeviceMesh
import torch.distributed as dist
_world_size = int(os.environ["WORLD_SIZE"])
assert _world_size == 8, "you must run this script with world size 8 to reproduce the bug"
device_mesh = init_device_mesh(device_type="cuda", mesh_shape=(_world_size,))
class FFN(torch.nn.Module):
def __init__(self, dim, inter_dim):
super().__init__()
self.w1 = torch.nn.Linear(dim, inter_dim)
self.w2 = torch.nn.Linear(inter_dim, dim)
self.w3 = torch.nn.Linear(dim, inter_dim)
def forward(self, x) -> torch.Tensor:
return self.w2(F.silu(self.w1(x)) * self.w3(x))
class VeryLargeFFN(torch.nn.Module):
def __init__(self, num_layers, dim, inter_dim):
super().__init__()
ffns = {}
for i in range(num_layers):
ffns[str(i)] = FFN(dim, inter_dim)
self.ffns = torch.nn.ModuleDict(ffns)
def forward(self, x, show_wrong_memory_stats: bool = False) -> torch.Tensor:
for block in self.ffns.values():
if dist.get_rank() == 0 and show_wrong_memory_stats:
stat = torch.cuda.memory_stats()
active_peak = stat.get("active_bytes.all.current", 0) / (1024 * 1024 * 1024)
alloc_peak = stat.get("allocated_bytes.all.current", 0) / (1024 * 1024 * 1024)
reserved_peak = stat.get("reserved_bytes.all.current", 0) / (1024 * 1024 * 1024)
print(f"active_peak: {active_peak:.2f}GB, alloc_peak: {alloc_peak:.2f}GB, reserved_peak: {reserved_peak:.2f}GB")
# print(cur_alloc)
x = block(x)
return x
def fsdp_mod( net: VeryLargeFFN, mesh: DeviceMesh, reshard: int):
full_shard: bool | int = reshard == -1
if reshard > 0:
full_shard = reshard
mixed_fsdp2 = MixedPrecisionPolicy(reduce_dtype=torch.float32, param_dtype=torch.bfloat16, cast_forward_inputs=False)
for block in net.ffns.values():
fully_shard(block, mesh=mesh, reshard_after_forward=full_shard, mp_policy=mixed_fsdp2)
fully_shard(net, mesh=mesh, reshard_after_forward=full_shard, mp_policy=mixed_fsdp2)
mod = VeryLargeFFN(32, 2048, 8192).cuda().eval().to(torch.bfloat16)
# fsdp_mod(mod, device_mesh, 0) # if we use 8GPUs with no reshard, no problem
fsdp_mod(mod, device_mesh, 4) # if we use 8GPUs with 4 reshard, OOM happens
for i in range(2):
sample_inp = torch.randn(64, 16384, 2048).cuda().to(torch.bfloat16)
if dist.get_rank() == 0:
print(f"-----i={i}-----")
with torch.no_grad():
mod(sample_inp, show_wrong_memory_stats=True)
torch.cuda.synchronize()
# print(torch.cuda.memory_summary())
dist.barrier()
dist.destroy_process_group()
```
Message:
You need to use `watch nvidia-smi` to check memory usage; `torch.cuda.memory_*` does not report it correctly, so no traceback is needed.
### Versions
both `2.6.0` and `2.7.0.dev20250212+cu124`
```
Versions of relevant libraries:
[pip3] numpy==2.1.2
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] pytorch-triton==3.2.0+git4b3bb1f8
[pip3] torch==2.7.0.dev20250212+cu124
[pip3] torchaudio==2.6.0.dev20250212+cu124
[pip3] torchvision==0.22.0.dev20250212+cu124
[pip3] triton==3.1.0
[conda] numpy 2.1.2 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] pytorch-triton 3.2.0+git4b3bb1f8 pypi_0 pypi
[conda] torch 2.7.0.dev20250212+cu124 pypi_0 pypi
[conda] torchaudio 2.6.0.dev20250212+cu124 pypi_0 pypi
[conda] torchvision 0.22.0.dev20250212+cu124 pypi_0 pypi
[conda] triton 3.1.0 pypi_0 pypi
```
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @zhaojuanmao @mrshenli @rohan-varma @chauhang @mori360 @kwen2501 @c-p-i-o
| true
|
2,852,580,305
|
Add numerical tests for speciality ops
|
henrylhtsang
|
closed
|
[
"Stale",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147178
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,852,543,882
|
Fix `torch.max` optional args `dim`, `keepdim` description
|
zeshengzong
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"release notes: python_frontend"
] | 9
|
CONTRIBUTOR
|
The optional args `dim` and `keepdim` of [`torch.max`](https://pytorch.org/docs/stable/generated/torch.max.html#torch.max) are not described as optional in the document, even though users can omit them.
```python
>>> import torch
>>> a = torch.randn(3,1,3)
>>> a.max()
tensor(1.9145)
>>> a.max(dim=1)
torch.return_types.max(
values=tensor([[ 1.1436, -0.0728, 1.3312],
[-0.4049, 0.1792, -1.2247],
[ 0.8767, -0.7888, 1.9145]]),
indices=tensor([[0, 0, 0],
[0, 0, 0],
[0, 0, 0]]))
```
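For reference, passing `keepdim=True` retains the reduced dimension with size 1 (shapes shown for the `(3, 1, 3)` tensor above):
```python
>>> values, indices = a.max(dim=1, keepdim=True)
>>> values.shape, indices.shape
(torch.Size([3, 1, 3]), torch.Size([3, 1, 3]))
```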
## Changes
- Add `optional` description for `dim`, `keepdim`
- Add example of using `dim`, `keepdim`
## Test Result
### Before

### After

| true
|
2,852,528,699
|
Fix failing export of DTensor toy model
|
tousif-anwar
|
open
|
[
"triaged",
"open source",
"Stale",
"release notes: export"
] | 3
|
NONE
|
Fixes #147172
Address the issue of strict-mode export failing at AOTAutograd when exporting a model with DTensors.
* **torch/_export/__init__.py**
- Modify `aot_compile` function to handle DTensors correctly.
- Add `_handle_dtensor` function to process DTensors during the export process.
* **torch/_export/converter.py**
- Update `_trace_and_get_graph_from_model` to handle DTensors.
- Add logic to process DTensors in the export process.
* **test/test_export.py**
- Add test cases to verify the export of models with DTensors.
- Create a test class `TestExportDTensor` to validate the export process with DTensors.
---
For more details, open the [Copilot Workspace session](https://copilot-workspace.githubnext.com/pytorch/pytorch/pull/147176?shareId=e21c31ec-2ef1-475c-b5e4-fee9343bc370).
| true
|
2,852,506,663
|
UserWarning with Compiled Autograd
|
cora-codes
|
closed
|
[
"triaged",
"oncall: pt2",
"module: compiled autograd"
] | 1
|
NONE
|
### 🐛 Describe the bug
I've come across the following when using compiled autograd: `UserWarning: Trying to prepend a node to itself. This behavior has no effect on the graph`. I don't see any behavior indicating that this is causing issues, but it is a bit annoying to see the warning once per worker. I'd be happy to provide a better reproduction once I have one.
### Versions
nightly
cc @chauhang @penguinwu @xmfan
| true
|
2,852,505,962
|
Release/2.5: [ROCm] TopK optimizations for AMD GPUs
|
apakbin
|
closed
|
[
"oncall: distributed",
"module: rocm",
"open source",
"release notes: nn",
"fx",
"module: inductor"
] | 2
|
CONTRIBUTOR
|
Mirroring the PR: https://github.com/pytorch/pytorch/pull/146387 for the release/2.5 branch
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,852,501,652
|
[cutlass backend] add subproc tests
|
henrylhtsang
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #147485
* __->__ #147173
* #147169
I want to separate subproc autotuning from the main tests. And I observed that for addmm, it can work without subproc.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,852,496,821
|
[export] failing to export DTensor toy model
|
pianpwk
|
closed
|
[
"oncall: pt2",
"module: dtensor",
"oncall: export"
] | 7
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Testing out https://github.com/kwen2501/export-playground/blob/main/dist_pre_export.py
Strict-mode export is failing at AOTAutograd, when exporting a model with DTensors (`parallelize_module` has been called). Not sure what's going on, `_dynamo.export` is able to produce a graph without collectives, with DTensors as example values. The failing node/op seems to be a `torch._C.nn.linear` call on a DTensor (maybe it's suspect that dynamo produces the `_C` variant?).
The model works with torch.compile.
Command:
```
torchrun script.py
```
script.py:
```
import os
import torch
import torch.distributed as dist
from torch.distributed.device_mesh import DeviceMesh
from torch.distributed.tensor.parallel import (
ColwiseParallel,
parallelize_module,
RowwiseParallel,
)
from torch.export import export
# MLP Layer
class MLPModule(torch.nn.Module):
def __init__(self, d_hid: int):
super().__init__()
self.net1 = torch.nn.Linear(d_hid, d_hid)
self.relu = torch.nn.ReLU()
self.net2 = torch.nn.Linear(d_hid, d_hid)
def forward(self, x):
x = self.net1(x)
x = self.relu(x)
x = self.net2(x)
return x
def apply_tp(model, mesh):
parallelize_module(model.net1, mesh, ColwiseParallel(), src_data_rank=None)
parallelize_module(model.net2, mesh, RowwiseParallel(), src_data_rank=None)
def main():
# Initialize distributed environment
os.environ['MASTER_ADDR'] = '127.0.0.1'
os.environ['MASTER_PORT'] = '29400'
dist.init_process_group(backend="nccl", init_method="env://")
rank = dist.get_rank()
world_size = dist.get_world_size()
device = torch.device(f"cuda:{rank}")
torch.cuda.set_device(device)
# Create distributed model
d_hid = 1024
model = MLPModule(d_hid)
model = model.to(device)
mesh = DeviceMesh("cuda", list(range(world_size)))
apply_tp(model, mesh)
bs = 2
x = torch.rand(bs, d_hid, device=device)
# **************************************
# We would export model here and hope it
# would capture the model's collectives
# **************************************
# Try export
ep = export(model, (x,), strict=True)
print(ep)
# Real run
y = model(x)
y.wait()
print(y.shape)
# Cleanup
dist.destroy_process_group()
if __name__ == "__main__":
main()
```
Error:
```
(pytorch-3.10) [pianpwk@devvm2305.cco0 /data/users/pianpwk/pytorch (main)]$ with-proxy torchrun --rdzv_endpoint=localhost:29400 test_dist_pre_export.py
NCCL version 2.21.5+cuda12.0
[rank0]: Traceback (most recent call last):
[rank0]: File "/data/users/pianpwk/pytorch/test_dist_pre_export.py", line 72, in <module>
[rank0]: main()
[rank0]: File "/data/users/pianpwk/pytorch/test_dist_pre_export.py", line 59, in main
[rank0]: ep = export(model, (x,), strict=True)
[rank0]: File "/data/users/pianpwk/pytorch/torch/export/__init__.py", line 370, in export
[rank0]: return _export(
[rank0]: File "/data/users/pianpwk/pytorch/torch/export/_trace.py", line 1047, in wrapper
[rank0]: raise e
[rank0]: File "/data/users/pianpwk/pytorch/torch/export/_trace.py", line 1020, in wrapper
[rank0]: ep = fn(*args, **kwargs)
[rank0]: File "/data/users/pianpwk/pytorch/torch/export/exported_program.py", line 121, in wrapper
[rank0]: return fn(*args, **kwargs)
[rank0]: File "/data/users/pianpwk/pytorch/torch/export/_trace.py", line 2083, in _export
[rank0]: ep = _export_for_training(
[rank0]: File "/data/users/pianpwk/pytorch/torch/export/_trace.py", line 1047, in wrapper
[rank0]: raise e
[rank0]: File "/data/users/pianpwk/pytorch/torch/export/_trace.py", line 1020, in wrapper
[rank0]: ep = fn(*args, **kwargs)
[rank0]: File "/data/users/pianpwk/pytorch/torch/export/exported_program.py", line 121, in wrapper
[rank0]: return fn(*args, **kwargs)
[rank0]: File "/data/users/pianpwk/pytorch/torch/export/_trace.py", line 1946, in _export_for_training
[rank0]: export_artifact = export_func( # type: ignore[operator]
[rank0]: File "/data/users/pianpwk/pytorch/torch/export/_trace.py", line 1387, in _strict_export_lower_to_aten_ir
[rank0]: aten_export_artifact = lower_to_aten_callback(
[rank0]: File "/data/users/pianpwk/pytorch/torch/export/_trace.py", line 1667, in _export_to_aten_ir_make_fx
[rank0]: gm, graph_signature = transform(_make_fx_helper)(
[rank0]: File "/data/users/pianpwk/pytorch/torch/export/_trace.py", line 1587, in _make_fx_helper
[rank0]: gm = make_fx(
[rank0]: File "/data/users/pianpwk/pytorch/torch/fx/experimental/proxy_tensor.py", line 2194, in wrapped
[rank0]: return make_fx_tracer.trace(f, *args)
[rank0]: File "/data/users/pianpwk/pytorch/torch/fx/experimental/proxy_tensor.py", line 2132, in trace
[rank0]: return self._trace_inner(f, *args)
[rank0]: File "/data/users/pianpwk/pytorch/torch/fx/experimental/proxy_tensor.py", line 2103, in _trace_inner
[rank0]: t = dispatch_trace(
[rank0]: File "/data/users/pianpwk/pytorch/torch/_compile.py", line 51, in inner
[rank0]: return disable_fn(*args, **kwargs)
[rank0]: File "/data/users/pianpwk/pytorch/torch/_dynamo/eval_frame.py", line 764, in _fn
[rank0]: return fn(*args, **kwargs)
[rank0]: File "/data/users/pianpwk/pytorch/torch/fx/experimental/proxy_tensor.py", line 1136, in dispatch_trace
[rank0]: graph = tracer.trace(root, concrete_args) # type: ignore[arg-type]
[rank0]: File "/data/users/pianpwk/pytorch/torch/fx/experimental/proxy_tensor.py", line 1692, in trace
[rank0]: res = super().trace(root, concrete_args)
[rank0]: File "/data/users/pianpwk/pytorch/torch/_dynamo/eval_frame.py", line 764, in _fn
[rank0]: return fn(*args, **kwargs)
[rank0]: File "/data/users/pianpwk/pytorch/torch/fx/_symbolic_trace.py", line 836, in trace
[rank0]: (self.create_arg(fn(*args)),),
[rank0]: File "/data/users/pianpwk/pytorch/torch/fx/experimental/proxy_tensor.py", line 1191, in wrapped
[rank0]: out = f(*tensors) # type:ignore[call-arg]
[rank0]: File "<string>", line 1, in <lambda>
[rank0]: File "/data/users/pianpwk/pytorch/torch/export/_trace.py", line 1491, in wrapped_fn
[rank0]: return tuple(flat_fn(*args))
[rank0]: File "/data/users/pianpwk/pytorch/torch/_functorch/_aot_autograd/utils.py", line 184, in flat_fn
[rank0]: tree_out = fn(*args, **kwargs)
[rank0]: File "/data/users/pianpwk/pytorch/torch/_functorch/_aot_autograd/traced_function_transforms.py", line 875, in functional_call
[rank0]: out = PropagateUnbackedSymInts(mod).run(
[rank0]: File "/data/users/pianpwk/pytorch/torch/fx/interpreter.py", line 171, in run
[rank0]: self.env[node] = self.run_node(node)
[rank0]: File "/data/users/pianpwk/pytorch/torch/fx/experimental/symbolic_shapes.py", line 7087, in run_node
[rank0]: result = super().run_node(n)
[rank0]: File "/data/users/pianpwk/pytorch/torch/fx/interpreter.py", line 236, in run_node
[rank0]: return getattr(self, n.op)(n.target, args, kwargs)
[rank0]: File "/data/users/pianpwk/pytorch/torch/fx/interpreter.py", line 316, in call_function
[rank0]: return target(*args, **kwargs)
[rank0]: File "/data/users/pianpwk/pytorch/torch/fx/experimental/proxy_tensor.py", line 1239, in __torch_function__
[rank0]: return func(*args, **kwargs)
[rank0]: File "/data/users/pianpwk/pytorch/torch/fx/experimental/proxy_tensor.py", line 1286, in __torch_function__
[rank0]: return func(*args, **kwargs)
[rank0]: RuntimeError: Unable to cast NotImplemented to Tensor
[rank0]: While executing %linear : [num_users=1] = call_function[target=torch._C._nn.linear](args = (%input_tensor, %self___net1__parameters__weight, %self___net1__parameters__bias), kwargs = {})
[rank0]: GraphModule: class GraphModule(torch.nn.Module):
[rank0]: def forward(self, x):
[rank0]: arg0: "f32[2, 1024][1024, 1]";
[rank0]:
[rank0]: arg0, = fx_pytree.tree_flatten_spec(([x], {}), self._in_spec)
[rank0]: l_x_ = arg0
[rank0]:
[rank0]: # File: /data/users/pianpwk/pytorch/torch/distributed/tensor/parallel/style.py:107 in _prepare_input_fn, code: input_tensor = DTensor.from_local(
[rank0]: input_tensor: "f32[2, 1024][1024, 1]" = torch__dynamo_variables_torch_prim_from_local(l_x_); l_x_ = None
[rank0]:
[rank0]: # File: /data/users/pianpwk/pytorch/test_dist_pre_export.py:22 in forward, code: x = self.net1(x)
[rank0]: self___net1__parameters__weight: "f32[1024, 1024][1024, 1]" = self.self___net1__parameters__weight
[rank0]: self___net1__parameters__bias: "f32[1024][1]" = self.self___net1__parameters__bias
[rank0]: linear: "f32[2, 1024][1024, 1]" = torch._C._nn.linear(input_tensor, self___net1__parameters__weight, self___net1__parameters__bias); input_tensor = self___net1__parameters__weight = self___net1__parameters__bias = None
[rank0]:
[rank0]: # File: /data/users/pianpwk/pytorch/torch/distributed/tensor/parallel/style.py:144 in _prepare_output_fn, code: outputs = outputs.redistribute(placements=output_layouts, async_op=True)
[rank0]: outputs: "f32[2, 1024][1024, 1]" = torch__dynamo_variables_tensor_prim_redistribute(linear); linear = None
[rank0]:
[rank0]: # File: /data/users/pianpwk/pytorch/torch/distributed/tensor/parallel/style.py:146 in _prepare_output_fn, code: return outputs.to_local() if use_local_output else outputs
[rank0]: hook_result: "f32[2, 1024][1024, 1]" = torch__dynamo_variables_tensor_prim_to_local(outputs); outputs = None
[rank0]:
[rank0]: # File: /data/users/pianpwk/pytorch/test_dist_pre_export.py:23 in forward, code: x = self.relu(x)
[rank0]: x: "f32[2, 1024][1024, 1]" = self.L__self___relu(hook_result); hook_result = None
[rank0]:
[rank0]: # File: /data/users/pianpwk/pytorch/torch/distributed/tensor/parallel/style.py:222 in _prepare_input_fn, code: input_tensor = DTensor.from_local(
[rank0]: input_tensor_1: "f32[2, 1024][1024, 1]" = torch__dynamo_variables_torch_prim_from_local_1(x); x = None
[rank0]:
[rank0]: # File: /data/users/pianpwk/pytorch/test_dist_pre_export.py:24 in forward, code: x = self.net2(x)
[rank0]: self___net2__parameters__weight: "f32[1024, 1024][1024, 1]" = self.self___net2__parameters__weight
[rank0]: self___net2__parameters__bias: "f32[1024][1]" = self.self___net2__parameters__bias
[rank0]: linear_1: "f32[2, 1024][1024, 1]" = torch._C._nn.linear(input_tensor_1, self___net2__parameters__weight, self___net2__parameters__bias); input_tensor_1 = self___net2__parameters__weight = self___net2__parameters__bias = None
[rank0]:
[rank0]: # File: /data/users/pianpwk/pytorch/torch/distributed/tensor/parallel/style.py:277 in _prepare_output_fn, code: outputs = outputs.redistribute(placements=output_layouts, async_op=True)
[rank0]: outputs_1: "f32[2, 1024][1024, 1]" = torch__dynamo_variables_tensor_prim_redistribute_1(linear_1); linear_1 = None
[rank0]:
[rank0]: # File: /data/users/pianpwk/pytorch/torch/distributed/tensor/parallel/style.py:279 in _prepare_output_fn, code: return outputs.to_local() if use_local_output else outputs
[rank0]: hook_result_1: "f32[2, 1024][1024, 1]" = torch__dynamo_variables_tensor_prim_to_local_1(outputs_1); outputs_1 = None
[rank0]: return pytree.tree_unflatten([hook_result_1], self._out_spec)
[rank0]:
[rank0]: Original traceback:
[rank0]: File "/data/users/pianpwk/pytorch/test_dist_pre_export.py", line 22, in forward
[rank0]: x = self.net1(x)
[rank0]:[W213 18:03:57.452018016 ProcessGroupNCCL.cpp:1505] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
E0213 18:03:58.924000 2561960 torch/distributed/elastic/multiprocessing/api.py:870] failed (exitcode: 1) local_rank: 0 (pid: 2562056) of binary: /home/pianpwk/.conda/envs/pytorch-3.10/bin/python
Traceback (most recent call last):
File "/home/pianpwk/.conda/envs/pytorch-3.10/bin/torchrun", line 33, in <module>
sys.exit(load_entry_point('torch', 'console_scripts', 'torchrun')())
File "/data/users/pianpwk/pytorch/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 354, in wrapper
return f(*args, **kwargs)
File "/data/users/pianpwk/pytorch/torch/distributed/run.py", line 889, in main
run(args)
File "/data/users/pianpwk/pytorch/torch/distributed/run.py", line 880, in run
elastic_launch(
File "/data/users/pianpwk/pytorch/torch/distributed/launcher/api.py", line 139, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/data/users/pianpwk/pytorch/torch/distributed/launcher/api.py", line 270, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
test_dist_pre_export.py FAILED
------------------------------------------------------------
Failures:
<NO_OTHER_FAILURES>
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
time : 2025-02-13_18:03:58
host : devvm2305.cco0.facebook.com
rank : 0 (local_rank: 0)
exitcode : 1 (pid: 2562056)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================
```
### Versions
Collecting environment information...
PyTorch version: 2.7.0a0+gitb9a22b3
Is debug build: False
CUDA used to build PyTorch: 12.0
ROCM used to build PyTorch: N/A
OS: CentOS Stream 9 (x86_64)
GCC version: (GCC) 11.5.0 20240719 (Red Hat 11.5.0-2)
Clang version: Could not collect
CMake version: version 3.30.2
Libc version: glibc-2.34
Python version: 3.10.14 (main, May 6 2024, 19:42:50) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.19.0-0_fbk12_hardened_11583_g0bef9520ca2b-x86_64-with-glibc2.34
Is CUDA available: True
CUDA runtime version: 12.0.140
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100
GPU 1: NVIDIA H100
Nvidia driver version: 525.105.17
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 92
On-line CPU(s) list: 0-91
Vendor ID: AuthenticAMD
Model name: AMD EPYC 9654 96-Core Processor
CPU family: 25
Model: 17
Thread(s) per core: 1
Core(s) per socket: 92
Socket(s): 1
Stepping: 1
BogoMIPS: 4792.79
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw perfctr_core invpcid_single ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx512_bf16 clzero xsaveerptr wbnoinvd arat npt lbrv nrip_save tsc_scale vmcb_clean pausefilter pfthreshold v_vmsave_vmload vgif avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid fsrm arch_capabilities
Virtualization: AMD-V
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 5.8 MiB (92 instances)
L1i cache: 5.8 MiB (92 instances)
L2 cache: 46 MiB (92 instances)
L3 cache: 1.4 GiB (92 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-91
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable, IBPB: disabled, STIBP: disabled
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] adam-atan2-pytorch==0.1.1
[pip3] alphafold3-pytorch==0.6.6
[pip3] bert_pytorch==0.0.1a4
[pip3] ema-pytorch==0.7.3
[pip3] executorch==0.4.0.dev20240807+cpu
[pip3] flake8==7.1.1
[pip3] frame-averaging-pytorch==0.1.2
[pip3] lion-pytorch==0.2.2
[pip3] mypy==1.9.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] onnx==1.16.1
[pip3] onnxruntime==1.18.0
[pip3] onnxscript==0.1.0.dev20250122
[pip3] open-clip-torch==2.24.0
[pip3] optree==0.13.1
[pip3] pytorch-labs-segment-anything-fast==0.2
[pip3] pytorch-lightning==2.0.7
[pip3] pytorch_sphinx_theme==0.0.24
[pip3] pytorch-triton==3.0.0+45fff310c8
[pip3] rotary-embedding-torch==0.8.5
[pip3] torch==2.7.0a0+gitb9a22b3
[pip3] torch_geometric==2.4.0
[pip3] torch-mlir==20241017.255
[pip3] torch-stoi==0.2.1
[pip3] torch_tensorrt==2.6.0.dev20241007+cu124
[pip3] torchao==0.5.0
[pip3] torchaudio==2.6.0a0+36815ef
[pip3] torchdiffeq==0.2.4
[pip3] torchmetrics==1.0.3
[pip3] torchrec==0.9.0a0+5e30669
[pip3] torchsde==0.2.6
[pip3] torchsr==1.0.4
[pip3] torchtext==0.18.0
[pip3] torchtune==0.0.0
[pip3] torchtyping==0.1.5
[pip3] torchvision==0.20.0a0+6279faa
[pip3] torchx==0.7.0
[pip3] triton==3.1.0
[conda] adam-atan2-pytorch 0.1.1 pypi_0 pypi
[conda] alphafold3-pytorch 0.6.6 pypi_0 pypi
[conda] bert-pytorch 0.0.1a4 dev_0 <develop>
[conda] blas 1.0 mkl
[conda] ema-pytorch 0.7.3 pypi_0 pypi
[conda] executorch 0.4.0.dev20240809+cpu pypi_0 pypi
[conda] frame-averaging-pytorch 0.1.2 pypi_0 pypi
[conda] lion-pytorch 0.2.2 pypi_0 pypi
[conda] magma-cuda121 2.6.1 1 pytorch
[conda] mkl 2023.1.0 h213fc3f_46344
[conda] mkl-include 2023.1.0 h06a4308_46344
[conda] mkl-service 2.4.0 py310h5eee18b_1
[conda] mkl_fft 1.3.8 py310h5eee18b_0
[conda] mkl_random 1.2.4 py310hdb19cb5_0
[conda] numpy 1.26.4 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] open-clip-torch 2.24.0 pypi_0 pypi
[conda] optree 0.13.1 pypi_0 pypi
[conda] pytorch-labs-segment-anything-fast 0.2 pypi_0 pypi
[conda] pytorch-lightning 2.0.7 pypi_0 pypi
[conda] pytorch-sphinx-theme 0.0.24 dev_0 <develop>
[conda] pytorch-triton 3.0.0+45fff310c8 pypi_0 pypi
[conda] pytorch3d 0.7.7 dev_0 <develop>
[conda] rotary-embedding-torch 0.8.5 pypi_0 pypi
[conda] torch 2.7.0a0+gitb9a22b3 dev_0 <develop>
[conda] torch-geometric 2.4.0 pypi_0 pypi
[conda] torch-mlir 20241017.255 pypi_0 pypi
[conda] torch-stoi 0.2.1 pypi_0 pypi
[conda] torch-tensorrt 2.6.0.dev20241007+cu124 pypi_0 pypi
[conda] torchao 0.6.0+git745085fb dev_0 <develop>
[conda] torchaudio 2.6.0a0+36815ef dev_0 <develop>
[conda] torchbench 0.1 dev_0 <develop>
[conda] torchdiffeq 0.2.4 pypi_0 pypi
[conda] torchmetrics 1.0.3 pypi_0 pypi
[conda] torchrec 0.9.0a0+5e30669 dev_0 <develop>
[conda] torchsde 0.2.6 pypi_0 pypi
[conda] torchsr 1.0.4 pypi_0 pypi
[conda] torchtext 0.17.0a0+1d4ce73 dev_0 <develop>
[conda] torchtune 0.0.0 pypi_0 pypi
[conda] torchtyping 0.1.5 pypi_0 pypi
[conda] torchvision 0.16.2 pypi_0 pypi
[conda] torchx 0.7.0 pypi_0 pypi
[conda] triton 3.1.0 pypi_0 pypi
cc @chauhang @penguinwu @wanchaol @tianyu-l @wz337 @XilunWu @d4l3k @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4
| true
|
2,852,495,363
|
torch.compile not DCEing unused rand calls
|
eellison
|
closed
|
[
"triaged",
"oncall: pt2",
"module: inductor",
"internal ramp-up task"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Running the following with `TORCH_LOGS="post_grad_graphs" `
```
import torch
@torch.compile()
def foo(y):
x = torch.rand([10])
return y + 2
foo(torch.rand([4], device="cuda"))
```
Gives:
```
def forward(self, arg0_1: "f32[4][1]cuda:0"):
# No stacktrace found for following nodes
inductor_seeds_default: "i64[1][1]cpu" = torch.ops.prims.inductor_seeds.default(1, device(type='cpu'))
inductor_lookup_seed_default: "i64[][]cpu" = torch.ops.prims.inductor_lookup_seed.default(inductor_seeds_default, 0); inductor_seeds_default = None
inductor_random_default: "f32[10][1]cpu" = torch.ops.prims.inductor_random.default([10], inductor_lookup_seed_default, 'rand'); inductor_lookup_seed_default = inductor_random_default = None
# File: /data/users/eellison/pytorch/work_dir/test_hi5.py:7 in foo, code: return y + 2
add: "f32[4][1]cuda:0" = torch.ops.aten.add.Tensor(arg0_1, 2); arg0_1 = None
return (add,)
```
`inductor_random_default` is constructed and then immediately deleted; it is unused.
The root cause of this issue is that we consider `nondeterministic_seeded` operators to be impure. https://github.com/pytorch/pytorch/blob/05001f0459923177e4c3b4467f47b42285a512c2/torch/fx/node.py#L776-L778
This should only be the case when `torch._inductor.config.fallback_random == True`, where we must match eager numerics so the RNG state stays equal to eager. But when `fallback_random = False`, unused rand operators should get removed from the graph.
Deleting the above check in `node.py` speeds up `with-proxy python benchmarks/dynamo/huggingface.py --performance --inductor --device cuda --inference --bfloat16 --print-compilation-time --cold-start-latency --only M2M100ForConditionalGeneration`
from 1.632x -> 2.228x for me locally because we no longer skip cudagraphs due to the cpu rng op.
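A hedged sketch of the gating suggested above (a hypothetical standalone helper, not the actual `torch.fx.node.Node.is_impure` implementation):
```python
import torch
from torch._inductor import config as inductor_config

def seeded_rng_is_impure(op: torch._ops.OpOverload) -> bool:
    # Seeded RNG ops only need to stay in the graph when we must reproduce eager RNG state.
    if torch.Tag.nondeterministic_seeded in op.tags:
        return bool(inductor_config.fallback_random)
    return False

# e.g. seeded_rng_is_impure(torch.ops.aten.rand.default) is False unless fallback_random is set,
# so the unused rand above would become dead code and get eliminated.
```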
### Versions
master
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,852,474,867
|
[BUG][PyTorch 2.0 Export][quant]:get_source_partitions() may return different matches with same input graph
|
GodHforever
|
open
|
[
"oncall: quantization",
"good first issue",
"oncall: pt2",
"oncall: export"
] | 7
|
NONE
|
### 🐛 Describe the bug
I am attempting to extend the quantization backend based on PyTorch 2.0 export. The operator I have chosen is `torch.gather`.
The input code I tested is as follows:
```python
class GatherLayer(nn.Module):
def forward(self, x):
assert x.shape == (2,2)
x = torch.gather(x, dim=0, index=torch.tensor([[0, 0], [1, 0]]))
return x
example_inputs = (torch.tensor([[0,1],[0,1]]),)
model = GatherLayer()
model.eval()
exported_model = torch.export.export(model, example_inputs).module()
# print(exported_model.graph)
prepared_model = prepare_pt2e(exported_model, quantizer)
prepared_model(*example_inputs)
# print(prepared_model.graph)
quantized_model = convert_pt2e(prepared_model)
```
The part of the code I constructed for the quantization backend is as follows:
```python
def _annotate_gather(
gm: torch.fx.GraphModule,
quantization_config: Optional[QuantizationConfig],
filter_fn: Optional[Callable[[Node], bool]] = None,
) -> Optional[List[List[Node]]]:
print(gm.graph)
partitions = get_source_partitions(gm.graph, [torch.gather], filter_fn)
matches = list(itertools.chain.from_iterable(partitions.values()))
annotated_partitions = []
for matche in matches:
output_nodes = matche.output_nodes
input_nodes = matche.input_nodes
gather_node = output_nodes[0]
input_qspec_map = {}
partition = []
input_node = input_nodes[1] # sometimes input_nodes[1] is input, while sometimes input_nodes[0] is input
input_qspec_map[input_node] = get_input_act_qspec(quantization_config)
partition.append(input_node)
gather_node.meta["quantization_annotation"] = QuantizationAnnotation(
input_qspec_map=input_qspec_map,
output_qspec=get_output_act_qspec(quantization_config),
_annotated=True,
)
_mark_nodes_as_annotated(partition)
annotated_partitions.append(partition)
return annotated_partitions
```
I found that with the same input, the matching results can differ, **_specifically in the order of `input_nodes`_**. The input graph is:
```
graph():
%lifted_tensor_0 : [num_users=1] = get_attr[target=lifted_tensor_0]
%x : [num_users=1] = placeholder[target=x]
%lift_fresh_copy : [num_users=1] = call_function[target=torch.ops.aten.lift_fresh_copy.default](args = (%lifted_tensor_0,), kwargs = {})
%detach : [num_users=1] = call_function[target=torch.ops.aten.detach.default](args = (%lift_fresh_copy,), kwargs = {})
%gather : [num_users=1] = call_function[target=torch.ops.aten.gather.default](args = (%x, 0, %detach), kwargs = {})
return (gather,)
```
after matching with `get_source_partitions`, the order of node `x` and node `detach` in `input_nodes` is not the same every time, which causes errors in my subsequent code.
**I ran it 100 times in total, and in 25 of these cases node `x` appears in `input_nodes[1]`, while in the rest it appears in `input_nodes[0]`.**
Note:
I tried `SubgraphMatcherWithNameNodeMap`, but the pattern of `torch.gather` is not easy to describe because of `index`, so I turned to this API.
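A possible workaround until the ordering is made deterministic (a sketch): read the data input off the matched gather node's own `args` rather than relying on the order of `input_nodes`.
```python
import torch

def gather_data_input(partition) -> torch.fx.Node:
    # Sketch: aten.gather.default(self, dim, index) -- args[0] is always the
    # data input and args[2] the index, so this does not depend on the
    # (apparently nondeterministic) order of partition.input_nodes.
    gather_node = partition.output_nodes[0]
    return gather_node.args[0]
```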
cc @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @jgong5 @Xia-Weiwen @leslie-fang-intel @msaroufim @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4
### Versions
Collecting environment information...
PyTorch version: 2.5.1+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: Could not collect
CMake version: version 3.28.4
Libc version: glibc-2.31
Python version: 3.11.11 (main, Dec 11 2024, 16:28:39) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.4.0-200-generic-x86_64-with-glibc2.31
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 96
On-line CPU(s) list: 0-95
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Platinum 8369HB CPU @ 3.30GHz
Stepping: 11
CPU MHz: 3800.047
CPU max MHz: 4200.0000
CPU min MHz: 1200.0000
BogoMIPS: 6600.06
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 1.5 MiB
L1i cache: 1.5 MiB
L2 cache: 48 MiB
L3 cache: 66 MiB
NUMA node0 CPU(s): 0-47
NUMA node1 CPU(s): 48-95
Vulnerability Gather data sampling: Unknown: Dependent on hypervisor status
Vulnerability Itlb multihit: KVM: Vulnerable
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Retbleed: Vulnerable
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI Retpoline
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx512_bf16 ida arat avx512_vnni
Versions of relevant libraries:
[pip3] intel_extension_for_pytorch==2.5.0
[pip3] numpy==1.26.3
[pip3] onnx==1.17.0
[pip3] torch==2.5.1+cpu
[pip3] torchaudio==2.5.1+cpu
[pip3] torchvision==0.20.1+cpu
[conda] intel-extension-for-pytorch 2.5.0 pypi_0 pypi
[conda] mkl-include 2025.0.1 pypi_0 pypi
[conda] mkl-static 2025.0.1 pypi_0 pypi
[conda] numpy 1.26.3 pypi_0 pypi
[conda] torch 2.5.1+cpu pypi_0 pypi
[conda] torchaudio 2.5.1+cpu pypi_0 pypi
[conda] torchvision 0.20.1+cpu pypi_0 pypi
| true
|
2,852,468,365
|
[cutlass backend] remove triton from most tests and add an integration test
|
henrylhtsang
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 5
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #147485
* #147173
* __->__ #147169
Removing aten and triton from the list of backends for the tests that have it. Instead, add a small integration test to make sure autotuning works fine.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,852,459,957
|
[FSDP2] The evil `record_stream` in c10d causes FSDP2 to over-allocate GPU memory
|
leonardo0lyj
|
closed
|
[
"oncall: distributed",
"module: c10d",
"module: fsdp"
] | 17
|
NONE
|
Hey Andrew @awgu,
As a big fan of FSDP2, I found a potential bug 😄
## Demand:
- No inter-stream memory fragmentation (incurred by copy in streams)
- Explicit Prefetch
- CPU runs ahead of the GPU by a lot
## `_set_unshard_async_op(True)`
To satisfy these demands, FSDP2 has to turn on [`_set_unshard_async_op(True)`](https://github.com/pytorch/pytorch/blob/20a369aa3abb6083600d5b22fcd8ba6e861c3959/torch/distributed/fsdp/_fully_shard/_fully_shard.py#L413) with explicit prefetch `set_modules_to_forward_prefetch` and `set_modules_to_backward_prefetch`.
## Memory Over-Allocation
Then memory over-allocation happens like this:

with memory traces:


## Root Cause
As is well known, these memory over-allocations are caused by the evil `tensor.record_stream(ncclStream)`. Although FSDP2 tried to avoid this evil inherited from FSDP1, `record_stream` is still [embedded in all c10d collectives](https://github.com/pytorch/pytorch/blob/0acbf8039abccfc17f9c8529d217209db5a7cc85/torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp#L5373) (when `async_op=True`). Therefore, FSDP2 still suffers over-allocation from this call in c10d.
## Candidate Solution
I'm not sure how we can avoid the `record_stream` even when `async_op=True`.
IMO, candidate solutions are below:
1. Make `TORCH_NCCL_AVOID_RECORD_STREAMS=True` the default, getting rid of the `record_stream` in c10d. (Safety should be fine without `record_stream`, as a collective with `async_op=True` usually starts from and ends at the allocation stream, or users indeed know how to [manually sync streams](https://pytorch.org/docs/stable/generated/torch.Tensor.record_stream.html).)
2. Make `TORCH_NCCL_AVOID_RECORD_STREAMS=True` an advanced option to each collective, such as `dist.all_gather(..., _avoid_record_stream=True)`. This limits the scope of environmental `TORCH_NCCL_AVOID_RECORD_STREAMS` to each specific collective.
3. Use only `dist.all_gather(async_op=False)` in FSDP2, but [change the `current_stream`](https://github.com/pytorch/pytorch/blob/20a369aa3abb6083600d5b22fcd8ba6e861c3959/torch/distributed/fsdp/_fully_shard/_fsdp_param_group.py#L92) to the `all_gather_stream`, so that all-gather still allocates/frees in `current_stream` while running in `all_gather_stream` and overlapping with `current_stream`, just like `async_op=True`.
```python
def get_all_gather_streams(
self, async_op: bool, training_state: TrainingState
) -> tuple[torch.Stream, torch.Stream]:
if not async_op and training_state in (
TrainingState.FORWARD,
TrainingState.PRE_BACKWARD,
):
# Use separate streams for implicit prefetching
return self.all_gather_copy_in_stream, self.all_gather_stream
# Use separate streams for explicit prefetching!
current_stream = self.device_handle.current_stream()
return current_stream, self.all_gather_stream # Change this!
```
Which do you prefer?
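For reference, a minimal way to opt into option 1's behavior today (assuming the flag is read when the NCCL process group is created):
```python
import os

# Must be set before the NCCL process group is initialized.
os.environ["TORCH_NCCL_AVOID_RECORD_STREAMS"] = "1"

import torch.distributed as dist

dist.init_process_group(backend="nccl", init_method="env://")
```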
(Let us make FSDP great again 😄)
## Code
P.S. the code to reproduce over-allocation:
```python
class MLP(nn.Module):
def __init__(self, hidden_dim: int, bias: bool = False):
super().__init__()
self.fc1 = nn.Linear(hidden_dim, hidden_dim, bias=bias)
self.gelu = nn.GELU()
self.fc2 = nn.Linear(hidden_dim, hidden_dim, bias=bias)
def forward(self, x):
x = self.fc1(x)
x = self.gelu(x)
x = self.fc2(x)
return x
class MultiMLP(nn.Module):
def __init__(self, hidden_dim: int, bias: bool = False, layers: int = 4):
super().__init__()
self.pre_norm = nn.LayerNorm(hidden_dim, bias=bias)
self.mlps = nn.ModuleList([MLP(hidden_dim, bias) for _ in range(layers)])
self.post_norm = nn.LayerNorm(hidden_dim, bias=bias)
def forward(self, x):
x = self.pre_norm(x)
for mlp in self.mlps:
x = x + mlp(x)
x = self.post_norm(x)
return x
class TestMemory(DTensorTestBase):
@with_comms
def test_over_allocation(self):
mesh = init_device_mesh("cuda", (self.world_size,))
device = torch.device("cuda")
hidden_dim = 10240
total_bsz = 16
# ----- init model --------
torch.manual_seed(0)
model = MultiMLP(hidden_dim=hidden_dim).to(device).to(torch.float32)
# -------- fsdp2 wrap --------
fully_shard_fn = functools.partial(
fully_shard,
mesh=mesh,
reshard_after_forward=True,
)
last_fsdp_module = None
for module in model.modules():
if isinstance(module, MLP):
fully_shard_fn(module)
if last_fsdp_module is not None:
last_fsdp_module.set_modules_to_forward_prefetch([module])
module.set_modules_to_backward_prefetch([last_fsdp_module])
last_fsdp_module = module
fsdp_model = fully_shard_fn(model)
fsdp_model._set_unshard_async_op(True)
optim = torch.optim.Adam(fsdp_model.parameters())
# ----- init data -----
torch.manual_seed(self.rank)
bsz = total_bsz // self.world_size
# -------- training loop --------
torch.distributed.barrier()
torch.cuda.synchronize(self.rank)
train_iter = 4
for iter in range(train_iter):
# torch.distributed.barrier()
# torch.cuda.synchronize(self.rank)
if self.rank == 0 and iter == train_iter - 1:
torch.cuda.memory._record_memory_history(max_entries=int(1E6))
with record_function("## zero grad ##"):
optim.zero_grad()
input = torch.randn((bsz, hidden_dim), device="cuda")
with record_function(f"## forward ##"):
output = fsdp_model(input)
loss = output.mean()
with record_function(f"## backward ##"):
loss.backward()
with record_function("## optimizer step ##"):
optim.step()
if self.rank == 0 and iter == train_iter - 1:
timestamp = datetime.now().strftime("%b_%d_%H_%M_%S")
file_name = f"mem_{timestamp}"
torch.cuda.memory._dump_snapshot(f"{file_name}.pickle")
torch.cuda.memory._record_memory_history(enabled=None)
torch.distributed.barrier()
torch.cuda.synchronize(self.rank)
```
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @zhaojuanmao @mrshenli @rohan-varma @chauhang
| true
|
2,852,459,837
|
[PT2]: allow empty dict to pass type check
|
kqfu
|
closed
|
[
"oncall: jit",
"fb-exported",
"release notes: jit"
] | 7
|
CONTRIBUTOR
|
Summary:
Seeing errors like the following when testing sigmoid for some models.
```
terminate called after throwing an instance of 'c10::Error'
what(): forward() Expected a value of type 'Dict[int, Tuple[Tensor, Tensor, Tensor]]' for argument 'event_based_features' but instead found type 'Dict[Any, Any]'.
```
Let empty dict pass type check.
Reviewed By: henryoier
Differential Revision: D69482349
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| true
|
2,852,426,972
|
[ONNX] Consolidate constants to a single location
|
justinchuby
|
closed
|
[
"module: onnx",
"open source",
"Merged",
"ciflow/trunk",
"release notes: onnx",
"topic: not user facing"
] | 6
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147166
* #147165
* #147164
| true
|
2,852,426,916
|
[ONNX] Set warning stacklevel so it appears at the torch.onnx call site
|
justinchuby
|
closed
|
[
"module: onnx",
"open source",
"Merged",
"ciflow/trunk",
"release notes: onnx",
"topic: improvements"
] | 6
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #147166
* __->__ #147165
* #147164
| true
|
2,852,426,860
|
[ONNX] Handle number of outputs in builder
|
justinchuby
|
closed
|
[
"module: onnx",
"open source",
"Merged",
"ciflow/trunk",
"release notes: onnx",
"topic: bug fixes"
] | 6
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #147166
* #147165
* __->__ #147164
| true
|
2,852,424,516
|
[BE] Use `c10::multiply_integers` in cholesky_impl
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing",
"release notes: mps",
"ciflow/mps"
] | 3
|
CONTRIBUTOR
|
That replaces an explicit for loop.
| true
|
2,852,418,092
|
[dynamo][inspect] Graph break on mappingproxy
|
anijain2305
|
closed
|
[
"triaged",
"oncall: pt2",
"module: dynamo"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
With `inspect` getting inlined, we are seeing new graph breaks on `signature.bind` using `mappingproxy`
```
import inspect
import torch
def greet(greeting, name, punctuation='!'):
"""Simple function to greet a person."""
print(f"{greeting}, {name}{punctuation}")
# Obtain the signature of the function
sig = inspect.signature(greet)
def fn(x):
sig.bind("Hello", "Alice")
return torch.sin(x)
opt_fn = torch.compile(fn, backend="eager", fullgraph=True)
x = torch.randn(3)
opt_fn(x)
```
### Error logs
_No response_
### Versions
NA
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
| true
|
2,852,415,485
|
Record the XPU and XCCL build settings in the compiled binary
|
pkourdis
|
open
|
[
"caffe2",
"open source",
"Stale",
"topic: not user facing",
"release notes: xpu"
] | 17
|
NONE
|
Fixes #ISSUE_NUMBER
Currently the XPU and XCCL build settings are not recorded in the compiled binary and are not shown by `torch.__config__.show()`, which is a quick way to check whether the binary has been built with such support.
Below is the output with them added (see the end of the last line):
```
Python 3.12.8 | packaged by conda-forge | (main, Dec 5 2024, 14:24:40) [GCC 13.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> print(torch.__config__.show())
PyTorch built with:
- GCC 13.3
- C++ Version: 201703
- Intel(R) oneAPI Math Kernel Library Version 2025.1-Product Build 20250203 for Intel(R) 64 architecture applications
- Intel(R) MKL-DNN v3.5.3 (Git Hash 66f0cb9eb66affd2da3bf5f8d897376f04aae6af)
- OpenMP 201511 (a.k.a. OpenMP 4.5)
- LAPACK is enabled (usually provided by MKL)
- CPU capability usage: AVX512
XPU backend - Build settings: BLAS_INFO=mkl, BUILD_TYPE=RelWithDebInfo, COMMIT_SHA=43eb39d7c832b5560f7bfa8d29cc7919ac21c0ca, CXX_COMPILER=/home/pkourdis/compilers/gcc-13.3.0/bin/c++, CXX_FLAGS= -D_GLIBCXX_USE_CXX11_ABI=1 -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -DUSE_KINETO -DLIBKINETO_NOCUPTI -DLIBKINETO_NOROCTRACER -DLIBKINETO_NOXPUPTI=OFF -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Werror=range-loop-construct -Werror=bool-operation -Wnarrowing -Wno-missing-field-initializers -Wno-unknown-pragmas -Wno-unused-parameter -Wno-strict-overflow -Wno-strict-aliasing -Wno-stringop-overflow -Wsuggest-override -Wno-psabi -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-dangling-reference -Wno-error=dangling-reference -Wno-error=redundant-move -DUSE_XPU -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, TORCH_VERSION=2.7.0, USE_CUDA=0, USE_CUDNN=OFF, USE_CUSPARSELT=OFF, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_GLOO=ON, USE_MKL=ON, USE_MKLDNN=1, USE_MPI=0, USE_NCCL=OFF, USE_NNPACK=0, USE_OPENMP=ON, USE_ROCM=0, USE_ROCM_KERNEL_ASSERT=OFF, USE_XCCL=1, USE_XPU=1,
```
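With the settings recorded, a quick programmatic check becomes possible (a small sketch keyed off the strings shown above):
```python
import torch

build_settings = torch.__config__.show()
print("XPU support built in: ", "USE_XPU=1" in build_settings)
print("XCCL support built in:", "USE_XCCL=1" in build_settings)
```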
| true
|
2,852,415,091
|
[inductor] add lowering for repeat_interleave.Tensor with output size specified
|
eellison
|
open
|
[
"triaged",
"oncall: pt2",
"module: inductor",
"internal ramp-up task"
] | 0
|
CONTRIBUTOR
|
### 🚀 The feature, motivation and pitch
Repro, and [internal workplace post](https://fb.workplace.com/groups/1075192433118967/posts/1599399114031627):
```
import torch
@torch.compile()
def f(input, repeats):
return torch.repeat_interleave(input, repeats, dim=0, output_size=3) + 1
f = torch.compile(f)
input = torch.tensor([[1, 2], [3, 4]], device="cuda")
repeat = torch.tensor([1, 2], device="cuda")
f(input, repeat)
```
If you run the above example with `TORCH_LOGS="aot_graphs"` you will get:
```
/data/users/eellison/pytorch/torch/fx/_lazy_graph_module.py class <lambda>(torch.nn.Module):
def forward(self, arg0_1: "i64[2][1]cuda:0", arg1_1: "i64[2, 2][2, 1]cuda:0"):
# File: /data/users/eellison/pytorch/work_dir/test_hi5.py:6 in f, code: return torch.repeat_interleave(input, repeats, dim=0, output_size=3) + 1
repeat_interleave: "i64[3][1]cuda:0" = torch.ops.aten.repeat_interleave.Tensor(arg0_1, output_size = 3); arg0_1 = None
index: "i64[3, 2][2, 1]cuda:0" = torch.ops.aten.index.Tensor(arg1_1, [repeat_interleave]); arg1_1 = repeat_interleave = None
add: "i64[3, 2][2, 1]cuda:0" = torch.ops.aten.add.Tensor(index, 1); index = None
return (add,)
```
Some amount of decomposition happens because `repeat_interleave` is CompositeImplicit. Additionally, if you run with `TORCH_LOGS="output_code"` you will see that we do not currently lower `torch.ops.aten.repeat_interleave.Tensor`.
The semantics of repeat_interleave.Tensor with a static output size can be recreated with a combination of `cumsum` and `searchsorted`. We should add a lowering or decomposition for repeat_interleave.Tensor for the static output size case.
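For illustration, a sketch of that decomposition idea for the index tensor (assuming a 1-D `repeats` tensor and a statically known `output_size`):
```python
import torch

def repeat_interleave_index(repeats: torch.Tensor, output_size: int) -> torch.Tensor:
    # cumsum gives the right boundary of each source index's bucket;
    # searchsorted maps every output position to the bucket containing it.
    boundaries = torch.cumsum(repeats, dim=0)
    positions = torch.arange(output_size, device=repeats.device)
    return torch.searchsorted(boundaries, positions, right=True)

# Matches torch.repeat_interleave(torch.tensor([1, 2])): tensor([0, 1, 1])
print(repeat_interleave_index(torch.tensor([1, 2]), output_size=3))
```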
### Alternatives
_No response_
### Additional context
_No response_
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,852,413,173
|
[MPS] Fix cholesky_ex for empty inputs
|
malfet
|
closed
|
[
"Merged",
"topic: bug fixes",
"release notes: mps",
"ciflow/mps"
] | 3
|
CONTRIBUTOR
|
By making sure that `info` is actually initialized if the input is empty (but there's no need to do anything about `out`, as it's guaranteed to be an empty tensor).
Also move the output resizing logic before the `input.numel()` check.
Fixes https://github.com/pytorch/pytorch/issues/147128
| true
|
2,852,412,403
|
[cutlass backend] forward fix of standalone runner for fbcode
|
henrylhtsang
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 5
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147158
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,852,374,633
|
[executorch hash update] update the pinned executorch hash
|
pytorchupdatebot
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 3
|
COLLABORATOR
|
This PR is auto-generated nightly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/nightly.yml).
Update the pinned executorch hash.
| true
|
2,852,362,217
|
require_exact_stride better handling of expanded dims
|
eellison
|
closed
|
[
"triaged",
"oncall: pt2",
"module: inductor",
"internal ramp-up task"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
We had a previous perf bug https://github.com/pytorch/pytorch/issues/145760 because [require_exact_strides](https://github.com/pytorch/pytorch/blob/057bcd3a454464340025c8d1b698829e2db110e3/torch/_inductor/ir.py#L5275-L5279) did not handle expanded dims well. An expanded dim is a singleton dimension that is expanded to a larger size. E.g.
```
>>> t = torch.rand([16, 1, 16])
>>> t.stride()
(16, 16, 1)
>>> t.expand([16, 16, 16]).stride()
(16, 0, 1)
>>>
```
It can occur in broadcasting, among other cases. When realizing an expanded tensor, we should compute just the smaller tensor (of size [16, 1, 16] in this example) and then view it as the complete size.
If you compare the good [tlparse](https://manifold.edge.x2p.facebook.net/v0/read/tree/logs/eellison/custom/index.html?bucketName=tlparse_reports&apiKey=tlparse_reports-key&withPayload=1&timeoutMsec=100) [output code](https://manifold.edge.x2p.facebook.net/v0/read/tree/logs/eellison/custom/-_0_0_0/inductor_output_code_cp2yrjujufzuobb6jxf32ack2fsaai2bgmlgfof3jntth55syjnm_6.txt?bucketName=tlparse_reports&apiKey=tlparse_reports-key&withPayload=1&timeoutMsec=100) from the bad [tlparse](https://manifold.edge.x2p.facebook.net/v0/read/tree/logs/eellison/custom/index.html?bucketName=tlparse_reports&apiKey=tlparse_reports-key&withPayload=1&timeoutMsec=100) [output code](https://manifold.edge.x2p.facebook.net/v0/read/tree/logs/eellison/custom/-_0_0_0/inductor_output_code_csbcxuof5ttrms7ro5fsib47jnz7i246wejaixzuh5qhboe4olub_6.txt?bucketName=tlparse_reports&apiKey=tlparse_reports-key&withPayload=1&timeoutMsec=100) you'll see that in the bad output code we are launching a grid size 16x larger, so the kernel takes much longer.
To verify that the issue has been fixed you should be able update the [sdpa lowering](https://github.com/pytorch/pytorch/blob/057bcd3a454464340025c8d1b698829e2db110e3/torch/_inductor/lowering.py#L2532-L2539) to just call `ir.ExternKernel.require_exact_strides(arg, out_strides)` without any slicing and get the same perf as exists today in the original perf repro.
### Error logs
_No response_
### Versions
master
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @aakhundov
| true
|
2,852,360,963
|
[dynamo][not ready] Handle builtin methods as f_locals
|
anijain2305
|
open
|
[
"Stale",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147155
```
import torch
def gn(x):
return torch.sin(x)
def fn(method, x, a):
# method is a.append
method(gn(x))
return a
opt_fn = torch.compile(fn, backend="eager", fullgraph=True)
x = torch.randn(4)
a = []
print(opt_fn(a.append, x, a))
```
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,852,339,909
|
[CI] Use job name to index into test times json
|
clee2000
|
closed
|
[
"Merged",
"topic: not user facing",
"ciflow/mps",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
When the test times are generated, the generator doesn't know what the build environment is because it's an environment variable. But when we index into the test times, we (previously) didn't know what the job name was. These are usually the same, but sometimes they differ, and when they differ the lookup falls back to default, which can have unbalanced sharding.
I think the job name was added to most of the CI environments at some point without my realizing it, so we can now update this code to use the job name instead, so that generation and indexing match.
also upload stats workflow for mps
Checked that inductor_amx doesn't use default
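Roughly, the lookup now behaves like this (a hypothetical sketch; the actual keys and file layout may differ):
```python
def lookup_test_times(test_times: dict, job_name: str) -> dict:
    # Previously this was keyed on the build environment, which isn't known
    # when the json is generated; keying on the job name keeps generation and
    # lookup consistent, with "default" only as a last resort.
    return test_times.get(job_name, test_times.get("default", {}))
```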
| true
|
2,852,320,528
|
Re-land exclude upsample_bilinear2d.vec and nearest2d.vec from default export decomposition table
|
GregoryComer
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: export"
] | 6
|
MEMBER
|
Note: This is a re-land of https://github.com/pytorch/pytorch/pull/141791, which I reverted due to breaking some Meta-internal tests - an internal ET delegate did not handle the non-decomposed upsample_nearest2d, and it was not caught in CI. I've resolved that issue and should be ready to safely re-land.
Summary:
As upsample_bilinear2d.vec and upsample_nearest2d.vec are core ATen ops, they should not be decomposed by default in the export path. Because the operators have CompositeImplicitAutograd dispatch, their decomposition is registered by default. This change adds an override list for CIA decompositions being registered in the default decomp table.
In the long-term, we likely will want to exclude decompositions for all core-tagged CIA ops, but this will require all consumers to be ready to handle the remaining two ops, avg_pool1d, and adaptive_avg_pool1d. Until they are ready, I believe an explicit override list is the safest option.
Additionally, I've also removed the ExecuTorch XNNPACK delegate ConvertToUpsampleBilinear2d pass, as the pass breaks (and is not needed), given that the op is not decomposed. The purpose of this pass was originally to pattern match the decomposition and recompose it, but this is no longer necessary.
Fixes https://github.com/pytorch/pytorch/issues/116684.
Test Plan:
Added a new test (`test_default_decomposition_core_cia_ops`) in test_export.py to verify that upsample_bilinear2d.vec (and in the future, other core-tagged CIA ops) are not decomposed by default. Also, I manually validated end to end with ExecuTorch that the op is not decomposed in to_edge (see N6238522).
```
buck test //caffe2/test:test_export -- test_default_decomposition_core_cia_ops
```
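A quick standalone way to check the same behavior outside the test suite (a sketch; the exact node list depends on the PyTorch version):
```python
import torch
from torch.export import export

class Upsample(torch.nn.Module):
    def forward(self, x):
        return torch.nn.functional.interpolate(
            x, scale_factor=2.0, mode="bilinear", align_corners=False
        )

ep = export(Upsample(), (torch.randn(1, 3, 8, 8),)).run_decompositions()
# With the default decomp table, upsample_bilinear2d.vec should still appear
# here instead of its decomposition.
print([n.target for n in ep.graph.nodes if n.op == "call_function"])
```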
Differential Revision: D69625112
| true
|
2,852,303,435
|
[dynamo][fx] Don't emit `call_function` node to construct dataclass instances for Dynamo and `make_fx` tracing
|
StrongerXi
|
closed
|
[
"release notes: fx",
"fx",
"module: dynamo",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #146950
* #146367
* #146714
* #146713
* __->__ #147152
* #147145
As title. The behavior change is limited to Dynamo and `make_fx` tracing
for backward compatibility reasons with `symbolic_trace`.
It helps enforce the invariant that Dynamo and `make_fx` graphs would
always contain tensor ops -- so rather than having these `call_function`
nodes to construct `NamedTuple`, we inline them directly as instance
arguments to the user nodes.
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,852,299,723
|
Delete Mixed MM Special Casing
|
eellison
|
closed
|
[
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"keep-going",
"ci-no-td"
] | 31
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147151
Now that torchinductor supports prologue fusion we can delete all the mixed mm code. When I benchmarked int8 weight-only mm in the new path against int8 mm in the old path in the [following benchmark](https://gist.github.com/eellison/46e321709572c11c077d0612cb3492b7), I got a 1.244x geomean speedup comparing Huggingface linear shapes with bias. There are a couple of reasons for the speedup:
- prologue fusion is often unprofitable, even for int8 mm. because the current mixed mm benchmarking only compares triton_int8_mm vs (dtype_conversion + cublas), we miss out on scenarios where the triton template is profitable but the prologue fusion is not.
- similarly, we miss out on potential epilogue fusions like bias if we dispatch to the [fallback mixed mm](https://github.com/pytorch/pytorch/blob/5006932cbc724de72108ddbfe4d8a57786e167e3/torch/_inductor/kernel/mm.py#L750-L751) that mixed_mm will dispatch to instead of the deferred epilogue tuning in current path.
It's possible some of the speedups would be smaller on larger models where the epilogue might get fused into a following kernel. Nonetheless, even if this is perf neutral it is worth landing for code deduplication.
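For concreteness, the kind of int8 weight-only pattern involved here (a sketch of the user-level code, not the template itself):
```python
import torch

@torch.compile
def int8_weight_only_mm(x: torch.Tensor, w_int8: torch.Tensor) -> torch.Tensor:
    # The upcast of the int8 weight is the "prologue" that can now be fused
    # into the matmul template instead of going through the mixed-mm path.
    return x @ w_int8.to(x.dtype)
```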
The one kernel that is a little special and would not fall out of prologue fusion is the uint4x2_mixed_mm kernel. It's still possible to generate it with prologue fusion, though not currently in exactly the same form as the current [impl](https://github.com/pytorch/pytorch/blob/bd370c138a9378d807ad16228cc6a066f14a526d/torch/_inductor/kernel/unpack_mixed_mm.py#L43-L49). But the current impl does not compare against a cublas baseline, and I found that it makes things slower (35% slower on a not particularly big 1024, 1024, 1024 mm shape on H100), so it should be fine to delete.
Future optimizations could include:
- cutlass prologue path
- making prologue fusion support the persistent tma based mm template. from @drisspg's experience this led to nice wins with fp8 but not as nice wins with bf16 mm. I think similarly, lower memory bandwidth int8 mm would benefit.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
Differential Revision: [D70114858](https://our.internmc.facebook.com/intern/diff/D70114858)
| true
|
2,852,292,244
|
[AOTInductor] Guard RAII_cpuMalloc with macro
|
muchulee8
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 4
|
CONTRIBUTOR
|
Summary: Silence RAII_cpuMalloc(size_t) defined but not used [-Wunused-function]
Test Plan: Existing tests
Differential Revision: D69623481
| true
|
2,852,291,084
|
dynamo: Count number of opcodes processes
|
c00w
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147149
This gives us a decent proxy for how big of a graph we functionally had to parse.
Note that this is a cumulative counter. If people feel strongly, I can either write into the dynamo_timed datasets with metrics contexts, or clear the counters / write a counter per frame id as well.
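For reference, Dynamo counters can be inspected like this (a sketch; the exact category/key used for the new opcode count here is an assumption):
```python
import torch
from torch._dynamo.utils import counters

@torch.compile(backend="eager")
def f(x):
    return x.sin() + 1

f(torch.randn(4))
# counters is a defaultdict(Counter) that accumulates across frames unless
# counters.clear() is called -- which is what "cumulative" means above.
print({category: dict(values) for category, values in counters.items() if values})
```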
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,852,275,307
|
For addmm and bmm, check if config.autotune_fallback_to_aten before using aten as a fallback. Also fix bmm cutlass backend
|
henrylhtsang
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 8
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147148
This PR also fixes BMM, which was silently failing for a while.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,852,230,125
|
[dynamo] Remove unintended lru_cache
|
anijain2305
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 4
|
CONTRIBUTOR
|
I forgot to remove it while adding the frozenset `__contains__` method in this PR
- https://github.com/pytorch/pytorch/pull/146062?fbclid=IwZXh0bgNhZW0CMTEAAR3S_qq8bYxO7pDuHqpr2X-vqkXQrY0KtT14z46bfuRDYikjJBet3uKF2dE_aem_o1c7I4eawKyaEsfiWhnTmw
This is causing a memory leak.
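For context, a minimal illustration (not the Dynamo code) of why an `lru_cache` on a method leaks:
```python
import functools

class Cache:
    # The lru_cache keeps a strong reference to every `self` it has seen,
    # so instances are never freed while the cached method is alive.
    @functools.lru_cache(maxsize=None)
    def lookup(self, key):
        return key * 2

objs = [Cache() for _ in range(3)]
for o in objs:
    o.lookup(1)
# Even after `objs` is dropped, the three instances stay reachable from
# Cache.lookup's cache.
```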
Fixes #ISSUE_NUMBER
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,852,184,763
|
[fsdp] add an experimental allocator hook for buffers that participate in collective communication
|
yifuwang
|
open
|
[
"oncall: distributed",
"open source",
"Stale",
"release notes: distributed (fsdp)",
"ciflow/inductor"
] | 3
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147146
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,852,184,615
|
[dynamo][fx] Don't emit `call_function` node to construct `NamedTuple` instances for Dynamo and `make_fx` tracing
|
StrongerXi
|
closed
|
[
"release notes: fx",
"fx",
"module: dynamo",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #146950
* #146367
* #146714
* #146713
* #147152
* __->__ #147145
As title. This effectively undoes #49553, for Dynamo and `make_fx`
tracing only (for `symbolic_trace` backward compatibility reasons).
It helps enforce the invariant that Dynamo and `make_fx` graphs would
always contain tensor ops -- so rather than having these `call_function`
nodes to construct `NamedTuple`, we inline them directly as instance
arguments to the user nodes.
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,852,174,667
|
lintrunner and requirements.txt have different versions for sympy
|
henrylhtsang
|
closed
|
[
"module: lint",
"triaged",
"actionable"
] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
As titled.
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
torch 2.7.0a0+git48e0300 requires sympy==1.13.1; python_version >= "3.9", but you have sympy 1.13.0 which is incompatible.
### Versions
trunk
cc @malfet @seemethere
| true
|
2,852,140,971
|
torch.compile failed with FSDP model with `ignore_states` or `ignore_modules`
|
YurongYou
|
open
|
[
"triaged",
"module: fsdp",
"oncall: pt2",
"module: dynamo",
"dynamo-triage-jan2025"
] | 1
|
NONE
|
### 🐛 Describe the bug
I have a model with a submodule that must be run in fp32 while the rest of it runs in bf16, so I need to exclude that submodule from FSDP; otherwise FSDP will complain that the model weights are not all the same dtype. But it looks like this option is not compatible with torch.compile.
Minimal reproducible code:
```python
# test.py
import os
import torch
import torch.distributed as dist
from torch import nn
from torch.distributed import device_mesh
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP
def get_local_rank() -> int:
"""Get the rank (GPU device) of the worker locally on the node.
Returns:
rank (int): The local rank of the worker.
"""
local_rank = 0
if dist.is_available() and dist.is_initialized() and "LOCAL_RANK" in os.environ:
local_rank = int(os.environ["LOCAL_RANK"])
return local_rank
class Model(nn.Module):
def __init__(self):
super().__init__()
self.linear1 = nn.Linear(10, 20)
self.linear2 = nn.Linear(20, 10)
self.dropout = nn.Dropout(0.5) # Example of a module to ignore
def forward(self, x):
x = self.linear1(x)
x = self.dropout(x)
x = self.linear2(x)
return x
torch.distributed.init_process_group(
backend="nccl", init_method="env://"
)
# necessary here because we initialize global process group by ourselves
torch.cuda.set_device(torch.device(f"cuda:{get_local_rank()}"))
mesh = device_mesh.init_device_mesh(
"cuda",
(1, 1),
mesh_dim_names=("replicate", "shard"),
)
model = Model()
# Wrap the model with FSDP, ignoring the first linear layer
fsdp_model = FSDP(model, use_orig_params=True, ignored_states=[model.linear1])
# Compile the FSDP-wrapped model
compiled_model = torch.compile(fsdp_model)
# Example usage
input_tensor = torch.randn(2, 10)
output_tensor = compiled_model(input_tensor)
```
### Error logs
run with `torchrun --nproc_per_node=1 --standalone -m scripts.test_fsdp` will get errors:
```
[rank0]:W0213 13:19:12.218000 4193972 torch/_logging/_internal.py:1081] [0/0] Profiler function <class 'torch.autograd.profiler.record_function'> will be ignored
[rank0]: Traceback (most recent call last):
[rank0]: File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main
[rank0]: return _run_code(code, main_globals, None,
[rank0]: File "/usr/lib/python3.10/runpy.py", line 86, in _run_code
[rank0]: exec(code, run_globals)
[rank0]: File "/lustre/fs12/portfolios/nvr/users/yurongy/projects/xxx/.git.worktrees/main/scripts/test_fsdp.py", line 55, in <module>
[rank0]: output_tensor = compiled_model(input_tensor)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
[rank0]: return self._call_impl(*args, **kwargs)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1747, in _call_impl
[rank0]: return forward_call(*args, **kwargs)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/eval_frame.py", line 465, in _fn
[rank0]: return fn(*args, **kwargs)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/convert_frame.py", line 1269, in __call__
[rank0]: return self._torchdynamo_orig_callable(
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/convert_frame.py", line 1064, in __call__
[rank0]: result = self._inner_convert(
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/convert_frame.py", line 526, in __call__
[rank0]: return _compile(
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/convert_frame.py", line 952, in _compile
[rank0]: raise InternalTorchDynamoError(
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/convert_frame.py", line 924, in _compile
[rank0]: guarded_code = compile_inner(code, one_graph, hooks, transform)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/convert_frame.py", line 666, in compile_inner
[rank0]: return _compile_inner(code, one_graph, hooks, transform)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_utils_internal.py", line 87, in wrapper_function
[rank0]: return function(*args, **kwargs)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/convert_frame.py", line 699, in _compile_inner
[rank0]: out_code = transform_code_object(code, transform)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/bytecode_transformation.py", line 1322, in transform_code_object
[rank0]: transformations(instructions, code_options)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/convert_frame.py", line 219, in _fn
[rank0]: return fn(*args, **kwargs)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/convert_frame.py", line 634, in transform
[rank0]: tracer.run()
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 2796, in run
[rank0]: super().run()
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 983, in run
[rank0]: while self.step():
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 895, in step
[rank0]: self.dispatch_table[inst.opcode](self, inst)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 582, in wrapper
[rank0]: return inner_fn(self, inst)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 1680, in CALL_FUNCTION_EX
[rank0]: self.call_function(fn, argsvars.items, kwargsvars)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 830, in call_function
[rank0]: self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/variables/lazy.py", line 156, in realize_and_forward
[rank0]: return getattr(self.realize(), name)(*args, **kwargs)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/variables/nn_module.py", line 899, in call_function
[rank0]: return variables.UserFunctionVariable(fn, source=source).call_function(
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/variables/functions.py", line 324, in call_function
[rank0]: return super().call_function(tx, args, kwargs)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/variables/functions.py", line 111, in call_function
[rank0]: return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 836, in inline_user_function_return
[rank0]: return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 3011, in inline_call
[rank0]: return cls.inline_call_(parent, func, args, kwargs)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 3139, in inline_call_
[rank0]: tracer.run()
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 983, in run
[rank0]: while self.step():
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 895, in step
[rank0]: self.dispatch_table[inst.opcode](self, inst)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 582, in wrapper
[rank0]: return inner_fn(self, inst)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 1602, in CALL_FUNCTION
[rank0]: self.call_function(fn, args, {})
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 830, in call_function
[rank0]: self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/variables/functions.py", line 324, in call_function
[rank0]: return super().call_function(tx, args, kwargs)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/variables/functions.py", line 111, in call_function
[rank0]: return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 836, in inline_user_function_return
[rank0]: return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 3011, in inline_call
[rank0]: return cls.inline_call_(parent, func, args, kwargs)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 3139, in inline_call_
[rank0]: tracer.run()
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 983, in run
[rank0]: while self.step():
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 895, in step
[rank0]: self.dispatch_table[inst.opcode](self, inst)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 582, in wrapper
[rank0]: return inner_fn(self, inst)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 1602, in CALL_FUNCTION
[rank0]: self.call_function(fn, args, {})
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 830, in call_function
[rank0]: self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/variables/functions.py", line 324, in call_function
[rank0]: return super().call_function(tx, args, kwargs)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/variables/functions.py", line 111, in call_function
[rank0]: return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 836, in inline_user_function_return
[rank0]: return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 3011, in inline_call
[rank0]: return cls.inline_call_(parent, func, args, kwargs)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 3139, in inline_call_
[rank0]: tracer.run()
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 983, in run
[rank0]: while self.step():
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 895, in step
[rank0]: self.dispatch_table[inst.opcode](self, inst)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 582, in wrapper
[rank0]: return inner_fn(self, inst)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 1602, in CALL_FUNCTION
[rank0]: self.call_function(fn, args, {})
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 830, in call_function
[rank0]: self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/variables/functions.py", line 324, in call_function
[rank0]: return super().call_function(tx, args, kwargs)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/variables/functions.py", line 111, in call_function
[rank0]: return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 836, in inline_user_function_return
[rank0]: return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 3011, in inline_call
[rank0]: return cls.inline_call_(parent, func, args, kwargs)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 3139, in inline_call_
[rank0]: tracer.run()
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 983, in run
[rank0]: while self.step():
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 895, in step
[rank0]: self.dispatch_table[inst.opcode](self, inst)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 582, in wrapper
[rank0]: return inner_fn(self, inst)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 1602, in CALL_FUNCTION
[rank0]: self.call_function(fn, args, {})
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 830, in call_function
[rank0]: self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/variables/functions.py", line 324, in call_function
[rank0]: return super().call_function(tx, args, kwargs)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/variables/functions.py", line 111, in call_function
[rank0]: return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 836, in inline_user_function_return
[rank0]: return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 3011, in inline_call
[rank0]: return cls.inline_call_(parent, func, args, kwargs)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 3139, in inline_call_
[rank0]: tracer.run()
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 983, in run
[rank0]: while self.step():
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 895, in step
[rank0]: self.dispatch_table[inst.opcode](self, inst)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 582, in wrapper
[rank0]: return inner_fn(self, inst)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 1602, in CALL_FUNCTION
[rank0]: self.call_function(fn, args, {})
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 830, in call_function
[rank0]: self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/variables/functions.py", line 324, in call_function
[rank0]: return super().call_function(tx, args, kwargs)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/variables/functions.py", line 111, in call_function
[rank0]: return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 836, in inline_user_function_return
[rank0]: return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 3011, in inline_call
[rank0]: return cls.inline_call_(parent, func, args, kwargs)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 3139, in inline_call_
[rank0]: tracer.run()
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 983, in run
[rank0]: while self.step():
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 895, in step
[rank0]: self.dispatch_table[inst.opcode](self, inst)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 582, in wrapper
[rank0]: return inner_fn(self, inst)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 1602, in CALL_FUNCTION
[rank0]: self.call_function(fn, args, {})
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 830, in call_function
[rank0]: self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/variables/functions.py", line 324, in call_function
[rank0]: return super().call_function(tx, args, kwargs)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/variables/functions.py", line 111, in call_function
[rank0]: return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 836, in inline_user_function_return
[rank0]: return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 3011, in inline_call
[rank0]: return cls.inline_call_(parent, func, args, kwargs)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 3139, in inline_call_
[rank0]: tracer.run()
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 983, in run
[rank0]: while self.step():
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 895, in step
[rank0]: self.dispatch_table[inst.opcode](self, inst)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 582, in wrapper
[rank0]: return inner_fn(self, inst)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 1602, in CALL_FUNCTION
[rank0]: self.call_function(fn, args, {})
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 830, in call_function
[rank0]: self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/variables/functions.py", line 324, in call_function
[rank0]: return super().call_function(tx, args, kwargs)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/variables/functions.py", line 111, in call_function
[rank0]: return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 836, in inline_user_function_return
[rank0]: return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 3011, in inline_call
[rank0]: return cls.inline_call_(parent, func, args, kwargs)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 3139, in inline_call_
[rank0]: tracer.run()
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 983, in run
[rank0]: while self.step():
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 895, in step
[rank0]: self.dispatch_table[inst.opcode](self, inst)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 582, in wrapper
[rank0]: return inner_fn(self, inst)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 1602, in CALL_FUNCTION
[rank0]: self.call_function(fn, args, {})
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 830, in call_function
[rank0]: self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/variables/functions.py", line 324, in call_function
[rank0]: return super().call_function(tx, args, kwargs)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/variables/functions.py", line 111, in call_function
[rank0]: return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 836, in inline_user_function_return
[rank0]: return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 3011, in inline_call
[rank0]: return cls.inline_call_(parent, func, args, kwargs)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 3139, in inline_call_
[rank0]: tracer.run()
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 983, in run
[rank0]: while self.step():
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 895, in step
[rank0]: self.dispatch_table[inst.opcode](self, inst)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 582, in wrapper
[rank0]: return inner_fn(self, inst)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 1602, in CALL_FUNCTION
[rank0]: self.call_function(fn, args, {})
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 830, in call_function
[rank0]: self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/variables/builtin.py", line 967, in call_function
[rank0]: return handler(tx, args, kwargs)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/variables/builtin.py", line 848, in builtin_dispatch
[rank0]: rv = fn(tx, args, kwargs)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/variables/builtin.py", line 766, in call_self_handler
[rank0]: result = self_handler(tx, *args, **kwargs)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/variables/builtin.py", line 1675, in call_getattr
[rank0]: hasattr_var = self.call_hasattr(tx, obj, name_var)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/variables/builtin.py", line 1564, in call_hasattr
[rank0]: return obj.call_hasattr(tx, name)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/variables/user_defined.py", line 1189, in call_hasattr
[rank0]: var_vt = self.var_getattr(tx, name)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/variables/nn_module.py", line 1120, in var_getattr
[rank0]: return super().var_getattr(tx, name)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/variables/user_defined.py", line 1026, in var_getattr
[rank0]: out = self.manually_trace_nn_module_getattr(tx, name)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/variables/nn_module.py", line 1130, in manually_trace_nn_module_getattr
[rank0]: out = self.getattr_helper(tx, "_parameters", name_vt)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/variables/nn_module.py", line 1055, in getattr_helper
[rank0]: if isinstance(dict_vt, variables.ConstDictVariable):
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/variables/base.py", line 110, in __instancecheck__
[rank0]: instance = instance.realize()
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/variables/lazy.py", line 63, in realize
[rank0]: self._cache.realize()
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/variables/lazy.py", line 29, in realize
[rank0]: self.vt = VariableBuilder(tx, self.source)(self.value)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/variables/builder.py", line 377, in __call__
[rank0]: vt = self._wrap(value)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/variables/builder.py", line 590, in _wrap
[rank0]: self.install_guards(GuardBuilder.SEQUENCE_LENGTH)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/variables/builder.py", line 418, in install_guards
[rank0]: or source.guard_source() == GuardSource.CONSTANT
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/source.py", line 588, in guard_source
[rank0]: return _GUARD_SOURCE_UNSPECIALIZED_BUILTIN_NN_MODULE[self.base.guard_source()]
[rank0]: torch._dynamo.exc.InternalTorchDynamoError: KeyError: GuardSource.LOCAL_FSDP_MODULE
[rank0]: from user code:
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/external_utils.py", line 40, in inner
[rank0]: return fn(*args, **kwargs)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py", line 848, in forward
[rank0]: args, kwargs = _root_pre_forward(self, self, args, kwargs)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/distributed/fsdp/_runtime_utils.py", line 517, in _root_pre_forward
[rank0]: _lazy_init(state, module)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/distributed/fsdp/_runtime_utils.py", line 134, in _lazy_init
[rank0]: _check_flat_params_on_expected_device(state, root_module)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/distributed/fsdp/_runtime_utils.py", line 150, in _check_flat_params_on_expected_device
[rank0]: for handle in traversal_utils._get_fsdp_handles(module):
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/distributed/fsdp/_traversal_utils.py", line 110, in _get_fsdp_handles
[rank0]: for fsdp_state in _get_fsdp_states(module)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/distributed/fsdp/_traversal_utils.py", line 99, in _get_fsdp_states
[rank0]: fsdp_states, _ = _get_fsdp_states_with_modules(module)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/distributed/fsdp/_traversal_utils.py", line 84, in _get_fsdp_states_with_modules
[rank0]: if not _composable(submodule):
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/distributed/fsdp/_traversal_utils.py", line 40, in _composable
[rank0]: registry = _get_registry(module)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/distributed/_composable/contract.py", line 224, in _get_registry
[rank0]: return getattr(module, REGISTRY_KEY, None)
[rank0]: Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
[rank0]: You can suppress this exception and fall back to eager by setting:
[rank0]: import torch._dynamo
[rank0]: torch._dynamo.config.suppress_errors = True
[rank0]:[W213 13:19:12.034600135 ProcessGroupNCCL.cpp:1250] Warning: WARNING: process group has NOT been destroyed before we destruct ProcessGroupNCCL. On normal program exit, the application should call destroy_process_group to ensure that any pending NCCL operations have finished in this process. In rare cases this process can exit before this point and block the progress of another member of the process group. This constraint has always been present, but this warning has only been added since PyTorch 2.4 (function operator())
E0213 13:19:12.828000 4193887 torch/distributed/elastic/multiprocessing/api.py:869] failed (exitcode: 1) local_rank: 0 (pid: 4193972) of binary: /usr/bin/python3
I0213 13:19:12.834000 4193887 torch/distributed/elastic/multiprocessing/errors/__init__.py:368] ('local_rank %s FAILED with no error file. Decorate your entrypoint fn with @record for traceback info. See: https://pytorch.org/docs/stable/elastic/errors.html', 0)
Traceback (most recent call last):
File "/usr/local/bin/torchrun", line 8, in <module>
sys.exit(main())
File "/usr/local/lib/python3.10/dist-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 355, in wrapper
return f(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/distributed/run.py", line 919, in main
run(args)
File "/usr/local/lib/python3.10/dist-packages/torch/distributed/run.py", line 910, in run
elastic_launch(
File "/usr/local/lib/python3.10/dist-packages/torch/distributed/launcher/api.py", line 138, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/usr/local/lib/python3.10/dist-packages/torch/distributed/launcher/api.py", line 269, in launch_agent
raise ChildFailedError(
```
I tried every ablation in https://pytorch.org/docs/main/torch.compiler_troubleshooting.html#ablation and they all failed.
### Versions
Collecting environment information...
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.10.12 (main, Jan 17 2025, 14:35:34) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-1032-oracle-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-80GB
GPU 1: NVIDIA A100-SXM4-80GB
GPU 2: NVIDIA A100-SXM4-80GB
GPU 3: NVIDIA A100-SXM4-80GB
GPU 4: NVIDIA A100-SXM4-80GB
GPU 5: NVIDIA A100-SXM4-80GB
GPU 6: NVIDIA A100-SXM4-80GB
GPU 7: NVIDIA A100-SXM4-80GB
Nvidia driver version: 535.129.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.1.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 256
On-line CPU(s) list: 0-254
Off-line CPU(s) list: 255
Vendor ID: AuthenticAMD
Model name: AMD EPYC 7J13 64-Core Processor
CPU family: 25
Model: 1
Thread(s) per core: 2
Core(s) per socket: 64
Socket(s): 2
Stepping: 1
Frequency boost: enabled
CPU max MHz: 3673.0950
CPU min MHz: 0.0000
BogoMIPS: 4900.26
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca fsrm
Virtualization: AMD-V
L1d cache: 4 MiB (128 instances)
L1i cache: 4 MiB (128 instances)
L2 cache: 64 MiB (128 instances)
L3 cache: 512 MiB (16 instances)
NUMA node(s): 8
NUMA node0 CPU(s): 0-15,128-143
NUMA node1 CPU(s): 16-31,144-159
NUMA node2 CPU(s): 32-47,160-175
NUMA node3 CPU(s): 48-63,176-191
NUMA node4 CPU(s): 64-79,192-207
NUMA node5 CPU(s): 80-95,208-223
NUMA node6 CPU(s): 96-111,224-239
NUMA node7 CPU(s): 112-127,240-254
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable, IBPB: disabled, STIBP: disabled, PBRSB-eIBRS: Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] flake8==7.1.1
[pip3] msgpack-numpy==0.4.8
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] onnx==1.16.2
[pip3] onnx-graphsurgeon==0.5.2
[pip3] pytorch3d==0.7.7+cu124
[pip3] s3torchconnector==1.2.5
[pip3] s3torchconnectorclient==1.2.5
[pip3] torch==2.5.1+cu124
[pip3] torch-fidelity==0.3.0
[pip3] torch-kmeans==0.2.0
[pip3] torch-tb-profiler==0.4.3
[pip3] torchmetrics==1.4.0.post0
[pip3] torchprofile==0.0.4
[pip3] torchtyping==0.1.5
[pip3] torchvision==0.20.1+cu124
[pip3] transformer_engine_torch==1.13.0.post1+cu124.pt251
[pip3] triton==3.1.0
[conda] Could not collect
cc @zhaojuanmao @mrshenli @rohan-varma @awgu @fegin @kwen2501 @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
| true
|
2,852,078,077
|
ROCm F8 Datatype Selector
|
petrex
|
closed
|
[
"module: rocm",
"triaged",
"open source",
"Stale",
"release notes: linalg_frontend"
] | 3
|
CONTRIBUTOR
|
TLDR: This PR addresses https://github.com/pytorch/ao/issues/1066 by adding logic to override FP8 datatype selection based on the GPU arch.
--------------------
This pull request introduces support for ROCm (Radeon Open Compute) in the CUDA data type handling within the `aten` module. The changes include adding ROCm-specific headers, defining architecture-specific type mappings, and modifying scalar type handling for ROCm compatibility.
ROCm support additions:
* [`aten/src/ATen/cuda/CUDADataType.h`](diffhunk://#diff-9188bb13b1a49f459141f5f9b875593d1c5ce2beb5ad711fdbaf5bc7089ec015R8-R11): Included ROCm headers when `USE_ROCM` is defined and added functions to get the current GPU architecture and override Float8 types based on the GPU architecture. [[1]](diffhunk://#diff-9188bb13b1a49f459141f5f9b875593d1c5ce2beb5ad711fdbaf5bc7089ec015R8-R11) [[2]](diffhunk://#diff-9188bb13b1a49f459141f5f9b875593d1c5ce2beb5ad711fdbaf5bc7089ec015R59-R134)
Code adjustments for ROCm:
* [`aten/src/ATen/cuda/CUDADataType.h`](diffhunk://#diff-9188bb13b1a49f459141f5f9b875593d1c5ce2beb5ad711fdbaf5bc7089ec015R59-R134): Modified `ScalarTypeToCudaDataType` function to use the overridden Float8 types for ROCm.
Minor changes:
* [`aten/src/ATen/cuda/tunable/GemmHipblaslt.h`](diffhunk://#diff-bfa1a3b5d4bef1892bf50338775f3b0fd8cd31fc1868148f3968b98aefb68e3fR81): Added an extra newline for formatting consistency.
* [`cmake/public/LoadHIP.cmake`](diffhunk://#diff-b98e27b9a5f196a6965a99ee5a7bb15b3fc633d6375b767635b1b04ccb2fd3d5R169): Fixed an indentation issue within the `if(HIP_FOUND)` block.
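For illustration only, a Python-level sketch of the arch-based selection idea (the PR's actual change is in ATen C++; the helper below is hypothetical, and availability of `gcnArchName` on ROCm builds is an assumption):
```python
import torch

def preferred_fp8_dtypes():
    # Sketch, not the PR's code: pick FP8 dtypes based on the GPU architecture.
    if torch.version.hip is not None:  # ROCm build
        arch = torch.cuda.get_device_properties(0).gcnArchName  # assumed available on ROCm
        if "gfx94" in arch:  # MI300-class GPUs use the fnuz FP8 variants
            return torch.float8_e4m3fnuz, torch.float8_e5m2fnuz
    return torch.float8_e4m3fn, torch.float8_e5m2

print(preferred_fp8_dtypes())
```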
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,852,047,138
|
NCCL Update 2.25.1 with CUDA 12.4 build is failing in CI
|
atalman
|
open
|
[
"oncall: distributed",
"triaged",
"module: nccl"
] | 3
|
CONTRIBUTOR
|
### 🐛 Describe the bug
This is the error:
```
__________ DynamicShapesReproTests.test_ddp_checkpoint_dynamic_shapes __________
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/dynamo/test_repros.py", line 6423, in test_ddp_checkpoint
model = nn.parallel.DistributedDataParallel(model)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/parallel/distributed.py", line 835, in __init__
_verify_param_shape_across_processes(self.process_group, parameters)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/utils.py", line 284, in _verify_param_shape_across_processes
return dist._verify_params_across_processes(process_group, tensors, logger)
torch.distributed.DistBackendError: NCCL error in: /var/lib/jenkins/workspace/torch/csrc/distributed/c10d/NCCLUtils.cpp:77, unhandled cuda error (run with NCCL_DEBUG=INFO for details), NCCL version 2.25.1
ncclUnhandledCudaError: Call to CUDA function failed.
Last error:
Cuda failure 'operation not supported'
To execute this test, run the following from the base repo dir:
python test/dynamo/test_dynamic_shapes.py DynamicShapesReproTests.test_ddp_checkpoint_dynamic_shapes
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
Failure is here: https://github.com/pytorch/pytorch/actions/runs/13293493296/job/37131797688
More detailed log:
```
__________ DynamicShapesReproTests.test_ddp_checkpoint_dynamic_shapes __________
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/dynamo/test_repros.py", line 6423, in test_ddp_checkpoint
model = nn.parallel.DistributedDataParallel(model)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/parallel/distributed.py", line 835, in __init__
_verify_param_shape_across_processes(self.process_group, parameters)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/utils.py", line 284, in _verify_param_shape_across_processes
return dist._verify_params_across_processes(process_group, tensors, logger)
torch.distributed.DistBackendError: NCCL error in: /var/lib/jenkins/workspace/torch/csrc/distributed/c10d/NCCLUtils.cpp:77, unhandled cuda error (run with NCCL_DEBUG=INFO for details), NCCL version 2.25.1
ncclUnhandledCudaError: Call to CUDA function failed.
Last error:
Cuda failure 'operation not supported'
Exception raised from create at /var/lib/jenkins/workspace/torch/csrc/distributed/c10d/NCCLUtils.cpp:77 (most recent call first):
C++ CapturedTraceback:
#4 std::_Function_handler<std::shared_ptr<c10::LazyValue<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > const> (), c10::SetStackTraceFetcher(std::function<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > ()>)::{lambda()#1}>::_M_invoke(std::_Any_data const&) from Logging.cpp:0
#5 c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) from ??:0
#6 c10d::NCCLComm::create(int, int, ncclUniqueId, signed char, ncclConfig_v21700&) [clone .cold] from NCCLUtils.cpp:0
#7 c10d::ProcessGroupNCCL::initNCCLComm(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, c10::Device&, c10d::OpType, int, bool) from ??:0
#8 c10d::ProcessGroupNCCL::allgather(std::vector<std::vector<at::Tensor, std::allocator<at::Tensor> >, std::allocator<std::vector<at::Tensor, std::allocator<at::Tensor> > > >&, std::vector<at::Tensor, std::allocator<at::Tensor> >&, c10d::AllgatherOptions const&) from ??:0
#9 c10d::ops::(anonymous namespace)::allgather_CUDA(std::vector<std::vector<at::Tensor, std::allocator<at::Tensor> >, std::allocator<std::vector<at::Tensor, std::allocator<at::Tensor> > > > const&, c10::ArrayRef<at::Tensor>, c10::intrusive_ptr<c10d::ProcessGroup, c10::detail::intrusive_target_default_null_type<c10d::ProcessGroup> > const&, long) from Ops.cpp:0
#10 c10::impl::make_boxed_from_unboxed_functor<c10::impl::detail::WrapFunctionIntoRuntimeFunctor_<std::tuple<std::vector<std::vector<at::Tensor, std::allocator<at::Tensor> >, std::allocator<std::vector<at::Tensor, std::allocator<at::Tensor> > > >, c10::intrusive_ptr<c10d::Work, c10::detail::intrusive_target_default_null_type<c10d::Work> > > (*)(std::vector<std::vector<at::Tensor, std::allocator<at::Tensor> >, std::allocator<std::vector<at::Tensor, std::allocator<at::Tensor> > > > const&, c10::ArrayRef<at::Tensor>, c10::intrusive_ptr<c10d::ProcessGroup, c10::detail::intrusive_target_default_null_type<c10d::ProcessGroup> > const&, long), std::tuple<std::vector<std::vector<at::Tensor, std::allocator<at::Tensor> >, std::allocator<std::vector<at::Tensor, std::allocator<at::Tensor> > > >, c10::intrusive_ptr<c10d::Work, c10::detail::intrusive_target_default_null_type<c10d::Work> > >, c10::guts::typelist::typelist<std::vector<std::vector<at::Tensor, std::allocator<at::Tensor> >, std::allocator<std::vector<at::Tensor, std::allocator<at::Tensor> > > > const&, c10::ArrayRef<at::Tensor>, c10::intrusive_ptr<c10d::ProcessGroup, c10::detail::intrusive_target_default_null_type<c10d::ProcessGroup> > const&, long> >, false>::call(c10::OperatorKernel*, c10::OperatorHandle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*) from :0
#11 c10::OperatorHandle::redispatchBoxed(c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*) const from :0
#12 torch::autograd::basicAutogradNotImplementedFallbackImpl(c10::OperatorHandle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*) from autograd_not_implemented_fallback.cpp:0
#13 void c10::BoxedKernel::make_boxed_function<&(anonymous namespace)::autograd_fallback>(c10::OperatorKernel*, c10::OperatorHandle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*) from VariableFallbackKernel.cpp:0
#14 c10::impl::BoxedKernelWrapper<std::tuple<std::vector<std::vector<at::Tensor, std::allocator<at::Tensor> >, std::allocator<std::vector<at::Tensor, std::allocator<at::Tensor> > > >, c10::intrusive_ptr<c10d::Work, c10::detail::intrusive_target_default_null_type<c10d::Work> > > (std::vector<std::vector<at::Tensor, std::allocator<at::Tensor> >, std::allocator<std::vector<at::Tensor, std::allocator<at::Tensor> > > > const&, c10::ArrayRef<at::Tensor>, c10::intrusive_ptr<c10d::ProcessGroup, c10::detail::intrusive_target_default_null_type<c10d::ProcessGroup> > const&, long), void>::call(c10::BoxedKernel const&, c10::OperatorHandle const&, c10::DispatchKeySet, std::vector<std::vector<at::Tensor, std::allocator<at::Tensor> >, std::allocator<std::vector<at::Tensor, std::allocator<at::Tensor> > > > const&, c10::ArrayRef<at::Tensor>, c10::intrusive_ptr<c10d::ProcessGroup, c10::detail::intrusive_target_default_null_type<c10d::ProcessGroup> > const&, long) from :0
#15 c10d::ProcessGroup::allgather(std::vector<std::vector<at::Tensor, std::allocator<at::Tensor> >, std::allocator<std::vector<at::Tensor, std::allocator<at::Tensor> > > >&, std::vector<at::Tensor, std::allocator<at::Tensor> >&, c10d::AllgatherOptions const&) from :0
#16 c10d::verify_params_across_processes(c10::intrusive_ptr<c10d::ProcessGroup, c10::detail::intrusive_target_default_null_type<c10d::ProcessGroup> > const&, std::vector<at::Tensor, std::allocator<at::Tensor> > const&, std::optional<std::weak_ptr<c10d::Logger> > const&) from ??:0
#17 pybind11::cpp_function::initialize<torch::distributed::c10d::(anonymous namespace)::c10d_init(_object*, _object*)::{lambda(c10::intrusive_ptr<c10d::ProcessGroup, c10::detail::intrusive_target_default_null_type<c10d::ProcessGroup> > const&, std::vector<at::Tensor, std::allocator<at::Tensor> > const&, std::optional<std::shared_ptr<c10d::Logger> > const&)#113}, void, c10::intrusive_ptr<c10d::ProcessGroup, c10::detail::intrusive_target_default_null_type<c10d::ProcessGroup> > const&, std::vector<at::Tensor, std::allocator<at::Tensor> > const&, std::optional<std::shared_ptr<c10d::Logger> > const&, pybind11::name, pybind11::scope, pybind11::sibling, pybind11::arg, pybind11::sibling, pybind11::arg_v, pybind11::call_guard<pybind11::gil_scoped_release> >(torch::distributed::c10d::(anonymous namespace)::c10d_init(_object*, _object*)::{lambda(c10::intrusive_ptr<c10d::ProcessGroup, c10::detail::intrusive_target_default_null_type<c10d::ProcessGroup> > const&, std::vector<at::Tensor, std::allocator<at::Tensor> > const&, std::optional<std::shared_ptr<c10d::Logger> > const&)#113}&&, void (*)(c10::intrusive_ptr<c10d::ProcessGroup, c10::detail::intrusive_target_default_null_type<c10d::ProcessGroup> > const&, std::vector<at::Tensor, std::allocator<at::Tensor> > const&, std::optional<std::shared_ptr<c10d::Logger> > const&), pybind11::name const&, pybind11::scope const&, pybind11::sibling const&, pybind11::arg const&, pybind11::sibling const&, pybind11::arg_v const&, pybind11::call_guard<pybind11::gil_scoped_release> const&)::{lambda(pybind11::detail::function_call&)#3}::_FUN(pybind11::detail::function_call) from init.cpp:0
#18 pybind11::cpp_function::dispatcher(_object*, _object*, _object*) from :0
#19 cfunction_call from :0
#20 _PyObject_MakeTpCall.localalias from :0
#21 _PyEval_EvalFrameDefault from ??:0
#22 _PyFunction_Vectorcall from ??:0
#23 _PyEval_EvalFrameDefault from ??:0
#24 _PyObject_FastCallDictTstate.localalias from :0
#25 slot_tp_init from :0
#26 _PyObject_MakeTpCall.localalias from :0
#27 _PyEval_EvalFrameDefault from ??:0
#28 _PyFunction_Vectorcall from ??:0
#29 _PyEval_EvalFrameDefault from ??:0
#30 method_vectorcall from :0
#31 _PyEval_EvalFrameDefault from ??:0
#32 method_vectorcall from :0
#33 _PyEval_EvalFrameDefault from ??:0
#34 _PyFunction_Vectorcall from ??:0
#35 _PyEval_EvalFrameDefault from ??:0
#36 method_vectorcall from :0
#37 _PyEval_EvalFrameDefault from ??:0
#38 method_vectorcall from :0
#39 _PyEval_EvalFrameDefault from ??:0
#40 method_vectorcall from :0
#41 PyObject_Call from ??:0
#42 _PyEval_EvalFrameDefault from ??:0
#43 _PyFunction_Vectorcall from ??:0
#44 _PyObject_FastCallDictTstate.localalias from :0
#45 _PyObject_Call_Prepend from ??:0
#46 slot_tp_call from :0
#47 _PyObject_MakeTpCall.localalias from :0
#48 _PyEval_EvalFrameDefault from ??:0
#49 _PyFunction_Vectorcall from ??:0
#50 _PyEval_EvalFrameDefault from ??:0
#51 _PyFunction_Vectorcall from ??:0
#52 _PyEval_EvalFrameDefault from ??:0
#53 _PyFunction_Vectorcall from ??:0
#54 _PyEval_EvalFrameDefault from ??:0
#55 method_vectorcall from :0
#56 _PyEval_EvalFrameDefault from ??:0
#57 _PyFunction_Vectorcall from ??:0
#58 _PyObject_FastCallDictTstate.localalias from :0
#59 _PyObject_Call_Prepend from ??:0
#60 slot_tp_call from :0
#61 PyObject_Call from ??:0
#62 _PyEval_EvalFrameDefault from ??:0
#63 _PyFunction_Vectorcall from ??:0
#64 _PyEval_EvalFrameDefault from ??:0
#65 method_vectorcall from :0
#66 _PyEval_EvalFrameDefault from ??:0
#67 _PyFunction_Vectorcall from ??:0
#68 _PyEval_EvalFrameDefault from ??:0
#69 _PyFunction_Vectorcall from ??:0
#70 _PyEval_EvalFrameDefault from ??:0
#71 _PyFunction_Vectorcall from ??:0
#72 _PyEval_EvalFrameDefault from ??:0
#73 _PyFunction_Vectorcall from ??:0
#74 _PyEval_EvalFrameDefault from ??:0
#75 _PyFunction_Vectorcall from ??:0
#76 _PyEval_EvalFrameDefault from ??:0
#77 method_vectorcall from :0
#78 _PyEval_EvalFrameDefault from ??:0
#79 _PyFunction_Vectorcall from ??:0
#80 _PyObject_FastCallDictTstate.localalias from :0
#81 _PyObject_Call_Prepend from ??:0
#82 slot_tp_call from :0
#83 _PyObject_MakeTpCall.localalias from :0
#84 _PyEval_EvalFrameDefault from ??:0
#85 _PyFunction_Vectorcall from ??:0
#86 _PyEval_EvalFrameDefault from ??:0
#87 _PyFunction_Vectorcall from ??:0
#88 _PyEval_EvalFrameDefault from ??:0
#89 method_vectorcall from :0
#90 _PyEval_EvalFrameDefault from ??:0
#91 _PyFunction_Vectorcall from ??:0
#92 _PyObject_FastCallDictTstate.localalias from :0
#93 _PyObject_Call_Prepend from ??:0
#94 slot_tp_call from :0
#95 _PyObject_MakeTpCall.localalias from :0
#96 _PyEval_EvalFrameDefault from ??:0
#97 _PyFunction_Vectorcall from ??:0
#98 _PyEval_EvalFrameDefault from ??:0
#99 _PyFunction_Vectorcall from ??:0
#100 _PyEval_EvalFrameDefault from ??:0
#101 _PyFunction_Vectorcall from ??:0
#102 _PyEval_EvalFrameDefault from ??:0
#103 _PyFunction_Vectorcall from ??:0
#104 _PyEval_EvalFrameDefault from ??:0
#105 method_vectorcall from :0
#106 _PyEval_EvalFrameDefault from ??:0
#107 _PyFunction_Vectorcall from ??:0
#108 _PyObject_FastCallDictTstate.localalias from :0
#109 _PyObject_Call_Prepend from ??:0
#110 slot_tp_call from :0
#111 _PyObject_MakeTpCall.localalias from :0
#112 _PyEval_EvalFrameDefault from ??:0
#113 _PyFunction_Vectorcall from ??:0
#114 _PyEval_EvalFrameDefault from ??:0
#115 _PyFunction_Vectorcall from ??:0
#116 _PyEval_EvalFrameDefault from ??:0
#117 _PyFunction_Vectorcall from ??:0
#118 _PyEval_EvalFrameDefault from ??:0
#119 _PyEval_Vector from :0
#120 PyEval_EvalCode from ??:0
#121 run_eval_code_obj from :0
#122 run_mod from :0
#123 pyrun_file.cold from :0
#124 _PyRun_SimpleFileObject.localalias from :0
#125 _PyRun_AnyFileObject.localalias from :0
#126 Py_RunMain.localalias from :0
#127 Py_BytesMain from ??:0
#128 __libc_start_main from ??:0
#129 _start from ??:0
To execute this test, run the following from the base repo dir:
python test/dynamo/test_dynamic_shapes.py DynamicShapesReproTests.test_ddp_checkpoint_dynamic_shapes
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
__________ DynamicShapesReproTests.test_ddp_checkpoint_dynamic_shapes __________
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/dynamo/test_repros.py", line 6423, in test_ddp_checkpoint
model = nn.parallel.DistributedDataParallel(model)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/parallel/distributed.py", line 835, in __init__
_verify_param_shape_across_processes(self.process_group, parameters)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/utils.py", line 284, in _verify_param_shape_across_processes
return dist._verify_params_across_processes(process_group, tensors, logger)
torch.distributed.DistBackendError: NCCL error in: /var/lib/jenkins/workspace/torch/csrc/distributed/c10d/NCCLUtils.cpp:77, unhandled cuda error (run with NCCL_DEBUG=INFO for details), NCCL version 2.25.1
ncclUnhandledCudaError: Call to CUDA function failed.
Last error:
Cuda failure 'operation not supported'
Exception raised from create at /var/lib/jenkins/workspace/torch/csrc/distributed/c10d/NCCLUtils.cpp:77 (most recent call first):
C++ CapturedTraceback:
#4 std::_Function_handler<std::shared_ptr<c10::LazyValue<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > const> (), c10::SetStackTraceFetcher(std::function<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > ()>)::{lambda()#1}>::_M_invoke(std::_Any_data const&) from Logging.cpp:0
#5 c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) from ??:0
#6 c10d::NCCLComm::create(int, int, ncclUniqueId, signed char, ncclConfig_v21700&) [clone .cold] from NCCLUtils.cpp:0
#7 c10d::ProcessGroupNCCL::initNCCLComm(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, c10::Device&, c10d::OpType, int, bool) from ??:0
#8 c10d::ProcessGroupNCCL::allgather(std::vector<std::vector<at::Tensor, std::allocator<at::Tensor> >, std::allocator<std::vector<at::Tensor, std::allocator<at::Tensor> > > >&, std::vector<at::Tensor, std::allocator<at::Tensor> >&, c10d::AllgatherOptions const&) from ??:0
#9 c10d::ops::(anonymous namespace)::allgather_CUDA(std::vector<std::vector<at::Tensor, std::allocator<at::Tensor> >, std::allocator<std::vector<at::Tensor, std::allocator<at::Tensor> > > > const&, c10::ArrayRef<at::Tensor>, c10::intrusive_ptr<c10d::ProcessGroup, c10::detail::intrusive_target_default_null_type<c10d::ProcessGroup> > const&, long) from Ops.cpp:0
#10 c10::impl::make_boxed_from_unboxed_functor<c10::impl::detail::WrapFunctionIntoRuntimeFunctor_<std::tuple<std::vector<std::vector<at::Tensor, std::allocator<at::Tensor> >, std::allocator<std::vector<at::Tensor, std::allocator<at::Tensor> > > >, c10::intrusive_ptr<c10d::Work, c10::detail::intrusive_target_default_null_type<c10d::Work> > > (*)(std::vector<std::vector<at::Tensor, std::allocator<at::Tensor> >, std::allocator<std::vector<at::Tensor, std::allocator<at::Tensor> > > > const&, c10::ArrayRef<at::Tensor>, c10::intrusive_ptr<c10d::ProcessGroup, c10::detail::intrusive_target_default_null_type<c10d::ProcessGroup> > const&, long), std::tuple<std::vector<std::vector<at::Tensor, std::allocator<at::Tensor> >, std::allocator<std::vector<at::Tensor, std::allocator<at::Tensor> > > >, c10::intrusive_ptr<c10d::Work, c10::detail::intrusive_target_default_null_type<c10d::Work> > >, c10::guts::typelist::typelist<std::vector<std::vector<at::Tensor, std::allocator<at::Tensor> >, std::allocator<std::vector<at::Tensor, std::allocator<at::Tensor> > > > const&, c10::ArrayRef<at::Tensor>, c10::intrusive_ptr<c10d::ProcessGroup, c10::detail::intrusive_target_default_null_type<c10d::ProcessGroup> > const&, long> >, false>::call(c10::OperatorKernel*, c10::OperatorHandle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*) from :0
#11 c10::OperatorHandle::redispatchBoxed(c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*) const from :0
#12 torch::autograd::basicAutogradNotImplementedFallbackImpl(c10::OperatorHandle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*) from autograd_not_implemented_fallback.cpp:0
#13 void c10::BoxedKernel::make_boxed_function<&(anonymous namespace)::autograd_fallback>(c10::OperatorKernel*, c10::OperatorHandle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*) from VariableFallbackKernel.cpp:0
#14 c10::impl::BoxedKernelWrapper<std::tuple<std::vector<std::vector<at::Tensor, std::allocator<at::Tensor> >, std::allocator<std::vector<at::Tensor, std::allocator<at::Tensor> > > >, c10::intrusive_ptr<c10d::Work, c10::detail::intrusive_target_default_null_type<c10d::Work> > > (std::vector<std::vector<at::Tensor, std::allocator<at::Tensor> >, std::allocator<std::vector<at::Tensor, std::allocator<at::Tensor> > > > const&, c10::ArrayRef<at::Tensor>, c10::intrusive_ptr<c10d::ProcessGroup, c10::detail::intrusive_target_default_null_type<c10d::ProcessGroup> > const&, long), void>::call(c10::BoxedKernel const&, c10::OperatorHandle const&, c10::DispatchKeySet, std::vector<std::vector<at::Tensor, std::allocator<at::Tensor> >, std::allocator<std::vector<at::Tensor, std::allocator<at::Tensor> > > > const&, c10::ArrayRef<at::Tensor>, c10::intrusive_ptr<c10d::ProcessGroup, c10::detail::intrusive_target_default_null_type<c10d::ProcessGroup> > const&, long) from :0
#15 c10d::ProcessGroup::allgather(std::vector<std::vector<at::Tensor, std::allocator<at::Tensor> >, std::allocator<std::vector<at::Tensor, std::allocator<at::Tensor> > > >&, std::vector<at::Tensor, std::allocator<at::Tensor> >&, c10d::AllgatherOptions const&) from :0
#16 c10d::verify_params_across_processes(c10::intrusive_ptr<c10d::ProcessGroup, c10::detail::intrusive_target_default_null_type<c10d::ProcessGroup> > const&, std::vector<at::Tensor, std::allocator<at::Tensor> > const&, std::optional<std::weak_ptr<c10d::Logger> > const&) from ??:0
#17 pybind11::cpp_function::initialize<torch::distributed::c10d::(anonymous namespace)::c10d_init(_object*, _object*)::{lambda(c10::intrusive_ptr<c10d::ProcessGroup, c10::detail::intrusive_target_default_null_type<c10d::ProcessGroup> > const&, std::vector<at::Tensor, std::allocator<at::Tensor> > const&, std::optional<std::shared_ptr<c10d::Logger> > const&)#113}, void, c10::intrusive_ptr<c10d::ProcessGroup, c10::detail::intrusive_target_default_null_type<c10d::ProcessGroup> > const&, std::vector<at::Tensor, std::allocator<at::Tensor> > const&, std::optional<std::shared_ptr<c10d::Logger> > const&, pybind11::name, pybind11::scope, pybind11::sibling, pybind11::arg, pybind11::sibling, pybind11::arg_v, pybind11::call_guard<pybind11::gil_scoped_release> >(torch::distributed::c10d::(anonymous namespace)::c10d_init(_object*, _object*)::{lambda(c10::intrusive_ptr<c10d::ProcessGroup, c10::detail::intrusive_target_default_null_type<c10d::ProcessGroup> > const&, std::vector<at::Tensor, std::allocator<at::Tensor> > const&, std::optional<std::shared_ptr<c10d::Logger> > const&)#113}&&, void (*)(c10::intrusive_ptr<c10d::ProcessGroup, c10::detail::intrusive_target_default_null_type<c10d::ProcessGroup> > const&, std::vector<at::Tensor, std::allocator<at::Tensor> > const&, std::optional<std::shared_ptr<c10d::Logger> > const&), pybind11::name const&, pybind11::scope const&, pybind11::sibling const&, pybind11::arg const&, pybind11::sibling const&, pybind11::arg_v const&, pybind11::call_guard<pybind11::gil_scoped_release> const&)::{lambda(pybind11::detail::function_call&)#3}::_FUN(pybind11::detail::function_call) from init.cpp:0
#18 pybind11::cpp_function::dispatcher(_object*, _object*, _object*) from :0
#19 cfunction_call from :0
#20 _PyObject_MakeTpCall.localalias from :0
#21 _PyEval_EvalFrameDefault from ??:0
#22 _PyFunction_Vectorcall from ??:0
#23 _PyEval_EvalFrameDefault from ??:0
#24 _PyObject_FastCallDictTstate.localalias from :0
#25 slot_tp_init from :0
```
### Versions
2.7.0 nightly
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,851,986,201
|
Mask in MaskedTensor does not change device
|
sandeep-189
|
open
|
[
"triaged",
"module: masked operators"
] | 1
|
NONE
|
### 🐛 Describe the bug
When you create a MaskedTensor and move it to cuda, only the data is moved to cuda. When you use a reduction function on the cuda MaskedTensor (sum, to_tensor, etc.), it always fails because the mask is still on another device.
```
import torch
from torch.masked import as_masked_tensor
data = torch.tensor([1,2,3])
mask = torch.tensor([True,False,True])
mt = as_masked_tensor(data, mask).to('cuda')
mt.get_data().device, mt.get_mask().device
```
```
(device(type='cuda', index=0), device(type='cpu'))
```
```
mt.sum(dim=0)
```
```
File [/pyenv/versions/3.11.2/lib/python3.11/site-packages/torch/masked/_ops.py:857](/pyenv/versions/3.11.2/lib/python3.11/site-packages/torch/masked/_ops.py#line=856), in _where(mask, input, fill_value)
857 return torch.where(mask, input, fill_value)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
```
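For reference, a workaround that sidesteps the mismatch is to move both constituents before constructing the MaskedTensor (minimal sketch, assuming a CUDA device is available):
```python
import torch
from torch.masked import as_masked_tensor

data = torch.tensor([1, 2, 3], device='cuda')
mask = torch.tensor([True, False, True], device='cuda')
# Both constituents already live on the GPU, so reductions no longer mix devices.
mt = as_masked_tensor(data, mask)
print(mt.get_data().device, mt.get_mask().device)
print(mt.sum(dim=0))
```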
### Versions
Run on torch 2.6, python 3.11
| true
|
2,851,963,113
|
test2
|
henrylhtsang
|
closed
|
[
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147139
* #147138
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,851,961,023
|
test 1
|
henrylhtsang
|
closed
|
[
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #147139
* __->__ #147138
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,851,935,820
|
[inductor] remove hardcoded mapping to resolve ops from ExternKernelSchedulerNode
|
xmfan
|
open
|
[
"triaged",
"oncall: pt2",
"module: inductor"
] | 2
|
MEMBER
|
### 🐛 Describe the bug
https://github.com/pytorch/pytorch/pull/146992/files#r1953070064
During runtime estimation, we use this reverse map to look up the ops contained in an ExternKernelSchedulerNode:
https://github.com/pytorch/pytorch/blob/b0553cee6bbb0c3cfb7896d8f585f4ea32f1d254/torch/_inductor/scheduler.py#L924-L930
I've been wanting to find an existing registration system; per @eellison, one alternative could be FlopCounter registrations (sketched below).
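Minimal sketch of what a FlopCounter-style registration could look like (illustrative only; `aten.mul` is used here just because it has no default formula registered, and the count is a toy element-count estimate):
```python
import torch
from torch.utils.flop_counter import FlopCounterMode, register_flop_formula

aten = torch.ops.aten

# Per-op formulas attach to the op itself instead of living in a hardcoded
# dict inside scheduler.py.
@register_flop_formula(aten.mul)
def mul_flops(a_shape, b_shape, *args, out_shape=None, **kwargs) -> int:
    shape = out_shape if out_shape is not None else a_shape
    n = 1
    for d in shape:
        n *= d
    return n

with FlopCounterMode():
    torch.randn(64, 64) * torch.randn(64, 64)
```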
### Versions
main
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @aakhundov
| true
|
2,851,930,990
|
make subproc tests
|
henrylhtsang
|
closed
|
[
"module: inductor",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147136
* #146743
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,851,930,418
|
Don't print fw_metadata in "Found a graph input that requires gradients"
|
ezyang
|
open
|
[
"triaged",
"oncall: pt2",
"module: aotdispatch",
"module: pt2-dispatcher"
] | 2
|
CONTRIBUTOR
|
### 🐛 Describe the bug
It's extremely long, and I don't think it's useful for users.
Sample:
```
File /data/users/ezyang/a/pytorch/torch/_functorch/aot_autograd.py:570, in create_aot_dispatcher_function(flat_fn, fake_flat_args, aot_config, fake_mode, shape_env)
562 def create_aot_dispatcher_function(
563 flat_fn,
564 fake_flat_args: FakifiedFlatArgs,
(...)
567 shape_env: Optional[ShapeEnv],
568 ) -> tuple[Callable, ViewAndMutationMeta]:
569 with dynamo_timed("create_aot_dispatcher_function", log_pt2_compile_event=True):
--> 570 return _create_aot_dispatcher_function(
571 flat_fn, fake_flat_args, aot_config, fake_mode, shape_env
572 )
File /data/users/ezyang/a/pytorch/torch/_functorch/aot_autograd.py:773, in _create_aot_dispatcher_function(flat_fn, fake_flat_args, aot_config, fake_mode, shape_env)
759 # In export, banning data mutations on inputs that require grad for now.
760 # This should be rare, and is tricky to get right. When we trace the backward,
761 # we currently trace with autograd.grad instead of .backward(), which makes it difficult
762 # to ensure that we run autograd all the way through the input **before** it saw the mutation.
763 if (
764 len(
765 [
(...)
771 != 0
772 ):
--> 773 raise RuntimeError(
774 f"""\
775 Found a graph input that requires gradients, and received a mutation.
776 This is currently banned in the aot_export workflow. If you need this functionality, please file a github issue.
777
778 fw_metadata={str(fw_metadata)}"""
779 )
780 if req_subclass_dispatch:
781 raise RuntimeError(
782 """\
783 aot_export is not currently supported with traceable tensor subclass.
784 If you need this feature, please comment on <CREATE_ISSUE_LINK>"""
785 )
RuntimeError: Found a graph input that requires gradients, and received a mutation.
This is currently banned in the aot_export workflow. If you need this functionality, please file a github issue.
fw_metadata=ViewAndMutationMeta(input_info=[InputAliasInfo(is_leaf=True, mutates_data=False, mutates_metadata=False, mutations_hidden_from_autograd=True, mutations_under_no_grad_or_inference_mode=False, mutation_inductor_storage_resize=False, mutates_storage_metadata=False, requires_grad=True, keep_input_mutations=False), InputAliasInfo(is_leaf=True, mutates_data=False, mutates_metadata=False, mutations_hidden_from_autograd=True, mutations_under_no_grad_or_inference_mode=False, mutation_inductor_storage_resize=False, mutates_storage_metadata=False, requires_grad=True, keep_input_mutations=False), InputAliasInfo(is_leaf=True, mutates_data=False, mutates_metadata=False, mutations_hidden_from_autograd=True, mutations_under_no_grad_or_inference_mode=False, mutation_inductor_storage_resize=False, mutates_storage_metadata=False, requires_grad=True, keep_input_mutations=False), InputAliasInfo(is_leaf=True, mutates_data=False, mutates_metadata=False, mutations_hidden_from_autograd=True, mutations_under_no_grad_or_inference_mode=False, mutation_inductor_storage_resize=False, mutates_storage_metadata=False, requires_grad=True, keep_input_mutations=False), InputAliasInfo(is_leaf=True, mutates_data=False, mutates_metadata=False, mutations_hidden_from_autograd=True, mutations_under_no_grad_or_inference_mode=False, mutation_inductor_storage_resize=False, mutates_storage_metadata=False, requires_grad=True, keep_input_mutations=False), InputAliasInfo(is_leaf=True, mutates_data=False, mutates_metadata=False, mutations_hidden_from_autograd=True, mutations_under_no_grad_or_inference_mode=False, mutation_inductor_storage_resize=False, mutates_storage_metadata=False, requires_grad=True, keep_input_mutations=False), InputAliasInfo(is_leaf=True, mutates_data=False, mutates_metadata=False, mutations_hidden_from_autograd=True, mutations_under_no_grad_or_inference_mode=False, mutation_inductor_storage_resize=False, mutates_storage_metadata=False, requires_grad=True, keep_input_mutations=False), InputAliasInfo(is_leaf=True, mutates_data=False, mutates_metadata=False, mutations_hidden_from_autograd=True, mutations_under_no_grad_or_inference_mode=False, mutation_inductor_storage_resize=False, mutates_storage_metadata=False, requires_grad=True, keep_input_mutations=False), InputAliasInfo(is_leaf=True, mutates_data=False, mutates_metadata=False, mutations_hidden_from_autograd=True, mutations_under_no_grad_or_inference_mode=False, mutation_inductor_storage_resize=False, mutates_storage_metadata=False, requires_grad=True, keep_input_mutations=False), InputAliasInfo(is_leaf=True, mutates_data=False, mutates_metadata=False, mutations_hidden_from_autograd=True, mutations_under_no_grad_or_inference_mode=False, mutation_inductor_storage_resize=False, mutates_storage_metadata=False, requires_grad=True, keep_input_mutations=False), InputAliasInfo(is_leaf=True, mutates_data=False, mutates_metadata=False, mutations_hidden_from_autograd=True, mutations_under_no_grad_or_inference_mode=False, mutation_inductor_storage_resize=False, mutates_storage_metadata=False, requires_grad=True, keep_input_mutations=False), InputAliasInfo(is_leaf=True, mutates_data=False, mutates_metadata=False, mutations_hidden_from_autograd=True, mutations_under_no_grad_or_inference_mode=False, mutation_inductor_storage_resize=False, mutates_storage_metadata=False, requires_grad=True, keep_input_mutations=False), InputAliasInfo(is_leaf=True, mutates_data=False, mutates_metadata=False, mutations_hidden_from_autograd=True, 
mutations_under_no_grad_or_inference_mode=False, mutation_inductor_storage_resize=False, mutates_storage_metadata=False, requires_grad=True, keep_input_mutations=False), InputAliasInfo(is_leaf=True, mutates_data=False, mutates_metadata=False, mutations_hidden_from_autograd=True, mutations_under_no_grad_or_inference_mode=False, mutation_inductor_storage_resize=False, mutates_storage_metadata=False, requires_grad=True, keep_input_mutations=False), InputAliasInfo(is_leaf=True, mutates_data=False, mutates_metadata=False, mutations_hidden_from_autograd=True, mutations_under_no_gr
```
### Versions
main
cc @chauhang @penguinwu @zou3519 @bdhirsh
| true
|
2,851,918,240
|
`torch.mul` uses `OpMathType` for computation.
|
ysiraichi
|
open
|
[
"module: cuda",
"triaged",
"module: bfloat16",
"module: half",
"module: python frontend"
] | 5
|
COLLABORATOR
|
The element-wise multiplication implementation for CUDA currently makes use of `OpMathType`, which upcasts the inputs if they are of `fp16` or `bf16` data-type before actually running the operation. In summary:
```python
>>> a = torch.rand(5, dtype=torch.bfloat16, device="cuda")
>>> b = torch.rand(5, dtype=torch.bfloat16, device="cuda")
>>> r = torch.mul(a, b)
# - Upcasts the inputs to fp32 before running: a * b
# - Downcasts the result back to bf16
```
Tracking back to [the PR that introduced it](https://github.com/pytorch/pytorch/pull/64019), it looks like [this was supposed to deal with scalars that might have more precision than the inputs](https://dev-discuss.pytorch.org/t/cuda-loops-case-study-code-generation-vs-templates/302). However, that's not the case for `mul.Tensor`, since there's no scalar parameter.
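For concreteness, a small sketch of what the upcast path computes versus calling `torch.mul` directly (assumes a CUDA device; illustrative only):
```python
import torch

a = torch.rand(5, dtype=torch.bfloat16, device="cuda")
b = torch.rand(5, dtype=torch.bfloat16, device="cuda")

# What the kernel effectively does today, per the description above:
# upcast to fp32, multiply, then downcast the result back to bf16.
upcast_then_round = (a.float() * b.float()).to(torch.bfloat16)

print(torch.mul(a, b))
print(upcast_then_round)
```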
**Question:** is there another reason we need this? Shouldn't we just run the multiplication in `bf16`/`fp16`?
cc @ptrblck @msaroufim @eqy @albanD @amjames @ezyang
| true
|
2,851,888,685
|
all reduce non strict
|
avikchaudhuri
|
closed
|
[
"oncall: distributed",
"fb-exported",
"Merged",
"ciflow/trunk",
"ciflow/inductor",
"release notes: export"
] | 11
|
CONTRIBUTOR
|
Summary:
Some distributed collectives like `all_reduce` have special handling in Dynamo, where they are mapped to functional collectives. Non-strict was previously blind to such mappings, which meant that using them would fail to trace. Here we show how intercepting them in non-strict's torch function mode can mimic this remapping logic. More ops to follow.
Side note: a recently added distributed test was in the wrong place, making the expected failures for non-strict not fire because we weren't actually generating those tests to begin with! Now fixed.
Test Plan: moved and updated test
Differential Revision: D69607140
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,851,888,009
|
test 2 ghstack
|
henrylhtsang
|
closed
|
[
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147132
* #147131
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,851,886,763
|
test ghstack
|
henrylhtsang
|
closed
|
[
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #147132
* __->__ #147131
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,851,843,713
|
[cond] support output sizes mismatch in front end
|
ydwu4
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"module: dynamo",
"ciflow/inductor"
] | 5
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #147567
* #147649
* __->__ #147130
This PR finishes https://github.com/pytorch/pytorch/pull/137615 by addressing the TODOs and comments left there.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|