| id (int64, 2.74B–3.05B) | title (string, 1–255 chars) | user (string, 2–26 chars) | state (string, 2 classes) | labels (list, 0–24 items) | comments (int64, 0–206) | author_association (string, 4 classes) | body (string, 7–62.5k chars, nullable ⌀) | is_title (bool, 1 class) |
|---|---|---|---|---|---|---|---|---|
3,044,797,354
|
[Dynamo] Replace `unimplemented` with `unimplemented_v2` in `torch/_dynamo/variables/misc.py` [2/2]
|
shink
|
open
|
[
"triaged",
"open source",
"topic: not user facing",
"module: dynamo"
] | 6
|
CONTRIBUTOR
|
Part of #147913
Follow up: #152274
Replace `unimplemented` with `unimplemented_v2` in `torch/_dynamo/variables/misc.py`
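For context, this is the kind of mechanical rewrite being applied; the keyword names and message strings below are illustrative assumptions, not lines from the actual diff (see `torch/_dynamo/exc.py` for the real signature):
```python
from torch._dynamo.exc import unimplemented, unimplemented_v2  # names per the tracking issue

def _old_style(args):
    # Before: a bare, unstructured graph-break message.
    unimplemented("ExceptionVariable with non-constant args")

def _new_style(args):
    # After: a structured graph break carrying a type, context, explanation, and hints.
    unimplemented_v2(
        gb_type="Unsupported exception construction",
        context=f"args={args}",
        explanation="Dynamo does not support tracing this construct.",
        hints=["Rewrite the offending code, or file an issue if this pattern should be supported."],
    )
```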
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,044,749,011
|
Add CUDA support for Adagrad(fused=True)
|
MeetThePatel
|
open
|
[
"triaged",
"open source",
"release notes: optim"
] | 4
|
CONTRIBUTOR
|
This PR adds CUDA support for the Adagrad(fused=True) optimizer, along with three minor changes:
- Add a TensorLR variant for CPU Adagrad(fused=True).
- Fix an error message in `test/test_optim.py`, where the wrong optimizer name was being printed.
- Fix an error message in FusedSGD that reported incorrect information.
Along with this PR, I have done some benchmarking for this implementation. The benchmarking script is: [gist](https://gist.github.com/MeetThePatel/b7e93f3d3b65a67a09f8b02440a8cef9). This script also has NVTX tags, so if you want to create a trace, the relevant portions will be tagged with:
- Adagrad implementation (default, foreach, fused).
- Dtype (I benchmarked across `float64`, `float32`, `float16`, and `bfloat16`).
- Tensor shape (I used tensor shapes from GPT-2 as reference).
This benchmark measures speed and memory usage.
Highlights:
- Speed: These results are not the most stable. Speedup relative to default varies wildly (sometimes being slower) based on dtype and shape, as well as from run to run. I'm not sure what's going on here.
- Memory: This fused implementation saves ~38% (peak) VRAM compared to the default and foreach implementations.
The results of this benchmark are given in the comments of the gist: [link to comment](https://gist.github.com/MeetThePatel/b7e93f3d3b65a67a09f8b02440a8cef9?permalink_comment_id=5568984#gistcomment-5568984).
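For anyone who wants to try the new path, usage is just the existing constructor flag; a minimal sketch (not taken from this PR's tests, and assuming a CUDA build):
```python
import torch

model = torch.nn.Linear(3584, 3584, device="cuda")
# With this PR, fused=True is accepted for CUDA parameters;
# previously the fused Adagrad implementation was CPU-only.
opt = torch.optim.Adagrad(model.parameters(), lr=1e-2, fused=True)

loss = model(torch.randn(8, 3584, device="cuda")).sum()
loss.backward()
opt.step()
opt.zero_grad()
```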
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,044,663,148
|
Allow zero sized dimensions in padding operations
|
sladyn98
|
open
|
[
"open source",
"topic: not user facing"
] | 4
|
NONE
|
Previously, the padding implementation in PadNd.cpp required all output dimensions to be strictly positive (> 0), which caused errors when padding tensors with zero-sized dimensions even when the padding for that dimension was also zero.
This change relaxes the constraint to allow non-negative (>= 0) output dimensions, enabling operations like:
import torch
from torch.nn import functional
x = torch.ones((0, 1))
y = functional.pad(x, [1, 1, 0, 0])  # Now returns tensor of shape (0, 3)
The previous behavior was unnecessarily restrictive since mathematically a zero-sized dimension with zero padding should remain zero-sized.
Fixes [#152750](https://github.com/pytorch/pytorch/issues/152750)
| true
|
3,044,623,989
|
fix test
|
yf225
|
closed
|
[
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #153036
* #152775
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,044,617,968
|
Add Split Softmax
|
AMindToThink
|
open
|
[
"module: nn",
"triaged",
"needs research"
] | 2
|
NONE
|
Transformer models often forget their system prompts when processing long text, due to the long distance between the source of the information (the system prompt) and where it is needed.
The Split Softmax function is a modification of softmax for use in attention that encourages the model to keep paying attention to the system prompt. It was developed to combat instruction drift and attention decay.
It is described in this paper: [Measuring and Controlling Instruction (In)Stability in Language Model Dialogs](https://arxiv.org/pdf/2402.10962)
I am doing research involving this function, and I would like to implement split softmax as an activation function in PyTorch, if it is deemed worthwhile. A rough sketch of the idea is below.
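To make the request concrete, here is a hypothetical sketch of the general idea (splitting the normalization between the system-prompt segment and the rest and re-weighting the two parts); this is my own illustration, not the paper's exact formulation:
```python
import torch

def split_softmax(scores: torch.Tensor, sys_len: int, p: float = 0.3) -> torch.Tensor:
    """Allocate a fixed fraction `p` of attention mass to the first `sys_len`
    (system-prompt) positions and `1 - p` to the remaining positions.
    `scores` holds pre-softmax attention logits over the last dimension."""
    sys_part = torch.softmax(scores[..., :sys_len], dim=-1) * p
    rest_part = torch.softmax(scores[..., sys_len:], dim=-1) * (1.0 - p)
    return torch.cat([sys_part, rest_part], dim=-1)

# Example: 4 system-prompt tokens followed by 12 context tokens.
probs = split_softmax(torch.randn(16), sys_len=4)
print(probs[:4].sum(), probs[4:].sum())  # ~0.3 and ~0.7, independent of sequence length
```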
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki
| true
|
3,044,611,322
|
WIP: Fix caching when output has unbacked
|
aorenste
|
open
|
[
"release notes: fx",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #153034
| true
|
3,044,594,141
|
Misalignment with different shapes in F.linear with bf16 dtype
|
likelyzhao
|
open
|
[
"needs reproduction",
"triaged",
"module: bfloat16",
"module: linear algebra",
"module: padding"
] | 1
|
NONE
|
### 🐛 Describe the bug
For the F.linear function, when constructing matrix multiplications of varying dimensions via zero-padding, output consistency cannot be guaranteed under bf16 precision (outputs are consistent for some dimensions but inconsistent for others).
```python
import torch
import torch.nn.functional as F
import pdb
torch.backends.cuda.matmul.allow_bf16_reduced_precision_reduction = False
torch.backends.cuda.matmul.allow_fp16_reduced_precision_reduction = False
torch.backends.cuda.allow_fp16_bf16_reduction_math_sdp(True)
torch.use_deterministic_algorithms(True)
torch.backends.cudnn.benchmark = False
torch.utils.deterministic.fill_uninitialized_memory = True
#pdb.set_trace()
## reproduce with rand variable
for i in range(100):
    input_shape = [1661 + i, 3584]
    weight_shape = [3584, 3584]
    bias_shape = [3584]
    # device = torch.device("cpu")
    device = torch.device("cuda")
    dtype = torch.bfloat16
    torch.random.manual_seed(0)
    r_input = torch.rand(input_shape, device=device, dtype=dtype)
    r_weight_q = torch.rand(weight_shape, device=device, dtype=dtype)
    r_bias_q = torch.rand(bias_shape, device=device, dtype=dtype)
    # expand weight_q and bias_q with zeros
    zeros_w = torch.zeros((1024, r_weight_q.shape[1]), device=device, dtype=dtype)
    zeros_b = torch.zeros((1024), device=device, dtype=dtype)
    # concatenate along dim=0 (rows)
    weight_expand = torch.cat((r_weight_q, zeros_w), dim=0)
    bias_expand = torch.cat((r_bias_q, zeros_b), dim=0)
    # pdb.set_trace()
    output_ori = F.linear(r_input, r_weight_q, r_bias_q)
    output_expand = F.linear(r_input, weight_expand, bias_expand)
    split_dim = -1
    split_op_q, split_op_k, split_op_v = output_expand.split(
        [weight_shape[split_dim], 512, 512], dim=split_dim
    )
    q_diff = torch.sum(torch.abs(split_op_q.float() - output_ori.float()))
    print("diff split_op_q vs output_ori", q_diff.item())  # expected to be zero
    k_sum = torch.sum(split_op_k)
    print("sum split_op_k", k_sum.item())  # expected to be zero
    v_sum = torch.sum(split_op_v)
    print("sum split_op_v", v_sum.item())  # expected to be zero
```
### Versions
Collecting environment information...
PyTorch version: 2.7.0+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.11.10 | packaged by conda-forge | (main, Oct 16 2024, 01:27:36) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-6.11.0-21-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 4090
GPU 1: NVIDIA GeForce RTX 4090
Nvidia driver version: 560.35.05
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 112
On-line CPU(s) list: 0-111
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 6258R CPU @ 2.70GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 28
Socket(s): 2
Stepping: 7
CPU max MHz: 4000.0000
CPU min MHz: 1000.0000
BogoMIPS: 5400.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req vnmi pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 1.8 MiB (56 instances)
L1i cache: 1.8 MiB (56 instances)
L2 cache: 56 MiB (56 instances)
L3 cache: 77 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-27,56-83
NUMA node1 CPU(s): 28-55,84-111
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] numpy==2.2.5
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.26.2
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] torch==2.7.0
[pip3] triton==3.3.0
[conda] numpy 2.2.5 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.6.4.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.6.80 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.5.1.17 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.3.0.4 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.7.77 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.7.1.2 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.5.4.2 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.3 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.26.2 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.6.85 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.6.77 pypi_0 pypi
[conda] torch 2.7.0 pypi_0 pypi
[conda] triton 3.3.0 pypi_0 pypi
cc @jianyuh @nikitaved @mruberry @walterddr @xwang233 @Lezcano
| true
|
3,044,591,634
|
DISABLED test_hook_with_closure (__main__.HooksTests)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 2
|
NONE
|
Platforms: asan, linux, mac, macos, rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_hook_with_closure&suite=HooksTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/41763750570).
Over the past 3 hours, it has been determined flaky in 22 workflow(s) with 44 failures and 22 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_hook_with_closure`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/dynamo/test_hooks.py", line 575, in test_hook_with_closure
opt(x2, obj1).sum().backward()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_tensor.py", line 648, in backward
torch.autograd.backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/__init__.py", line 354, in backward
_engine_run_backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/graph.py", line 824, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/compiled_autograd.py", line 1024, in runtime_wrapper
out = compiled_fn(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 350, in __call__
return self.forward(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 678, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/graph_module.py", line 840, in call_wrapped
return self._wrapped_call(self, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/graph_module.py", line 416, in __call__
raise e
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/graph_module.py", line 403, in __call__
return super(self.cls, obj).__call__(*args, **kwargs) # type: ignore[misc]
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1755, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1861, in _call_impl
return inner()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1798, in inner
args_result = hook(self, args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 1458, in __call__
return self._torchdynamo_orig_callable(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 624, in __call__
return _compile(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 1133, in _compile
raise InternalTorchDynamoError(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 1082, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_utils_internal.py", line 97, in wrapper_function
return function(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 777, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 813, in _compile_inner
out_code = transform_code_object(code, transform)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/bytecode_transformation.py", line 1422, in transform_code_object
transformations(instructions, code_options)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 264, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 741, in transform
tracer.run()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3494, in run
super().run()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1345, in run
while self.step():
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1254, in step
self.dispatch_table[inst.opcode](self, inst)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 421, in impl
self.push(fn_var.call_function(self, self.popn(nargs), {}))
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/variables/builtin.py", line 1148, in call_function
return handler(tx, args, kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/variables/builtin.py", line 792, in <lambda>
return lambda tx, args, kwargs: obj.call_function(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/variables/builtin.py", line 1148, in call_function
return handler(tx, args, kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/variables/builtin.py", line 968, in builtin_dispatch
rv = fn(tx, args, kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/variables/builtin.py", line 842, in <lambda>
handlers.append(lambda tx, args, _: binop_handler(tx, *args))
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/variables/builtin.py", line 465, in <lambda>
[*a.items, *b.unpack_var_sequence(tx)],
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/variables/constant.py", line 124, in unpack_var_sequence
raise NotImplementedError from e
torch._dynamo.exc.InternalTorchDynamoError: NotImplementedError:
from user code:
File "/var/lib/jenkins/workspace/test/dynamo/test_hooks.py", line 902, in hook
return (args[0] + 100,)
Set TORCHDYNAMO_VERBOSE=1 for the internal stack trace (please do this especially if you're reporting a bug to PyTorch). For even more developer context, set TORCH_LOGS="+dynamo"
To execute this test, run the following from the base repo dir:
python test/dynamo/test_hooks.py HooksTests.test_hook_with_closure
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `dynamo/test_hooks.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,044,591,578
|
DISABLED test_comprehensive_svd_cuda_float32 (__main__.TestInductorOpInfoCUDA)
|
pytorch-bot[bot]
|
open
|
[
"high priority",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 2
|
NONE
|
Platforms: inductor
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_comprehensive_svd_cuda_float32&suite=TestInductorOpInfoCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/41765230283).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_comprehensive_svd_cuda_float32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1135, in test_wrapper
return test(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1434, in only_fn
return fn(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 2291, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1215, in dep_fn
return fn(slf, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1215, in dep_fn
return fn(slf, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1215, in dep_fn
return fn(slf, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 1612, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 1534, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/unittest/mock.py", line 1396, in patched
return func(*newargs, **newkeywargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/contextlib.py", line 81, in inner
return func(*args, **kwds)
^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/contextlib.py", line 81, in inner
return func(*args, **kwds)
^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/contextlib.py", line 81, in inner
return func(*args, **kwds)
^^^^^^^^^^^^^^^^^^^
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 962, in inner
raise e
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 954, in inner
fn(self, device, dtype, op)
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 1207, in test_comprehensive
raise e
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 1182, in test_comprehensive
self.check_model_gpu(
File "/opt/conda/envs/py_3.12/lib/python3.12/contextlib.py", line 81, in inner
return func(*args, **kwds)
^^^^^^^^^^^^^^^^^^^
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 657, in check_model_gpu
check_model(
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 608, in check_model
actual_grad = compute_grads(example_inputs, kwargs, actual, grads)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 406, in compute_grads
return torch.autograd.grad(
^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/autograd/__init__.py", line 503, in grad
result = _engine_run_backward(
^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/autograd/graph.py", line 824, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/autograd/function.py", line 307, in apply
return user_fn(self, *args)
^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 2191, in backward
return impl_fn()
^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 2177, in impl_fn
out = CompiledFunction._backward_impl(ctx, all_args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 2272, in _backward_impl
CompiledFunction.compiled_bw = aot_config.bw_compiler(
^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_functorch/aot_autograd.py", line 483, in __call__
return self.compiler_fn(gm, example_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_dynamo/backends/common.py", line 73, in _wrapped_bw_compiler
disable(
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_dynamo/eval_frame.py", line 872, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_utils_internal.py", line 97, in wrapper_function
return function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 2234, in bw_compiler
return inner_compile(
^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 710, in compile_fx_inner
return wrap_compiler_debug(_compile_fx_inner, compiler_name="inductor")(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_dynamo/repro/after_aot.py", line 124, in debug_wrapper
inner_compiled_fn = compiler_fn(gm, example_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 880, in _compile_fx_inner
raise InductorError(e, currentframe()).with_traceback(
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 864, in _compile_fx_inner
mb_compiled_graph = fx_codegen_and_compile(
^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 1487, in fx_codegen_and_compile
return scheme.codegen_and_compile(gm, example_inputs, inputs_to_check, graph_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 1374, in codegen_and_compile
compiled_module = graph.compile_to_module()
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/graph.py", line 2238, in compile_to_module
return self._compile_to_module()
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/graph.py", line 2248, in _compile_to_module
mod = self._compile_to_module_lines(wrapper_code)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/graph.py", line 2312, in _compile_to_module_lines
mod = PyCodeCache.load_by_key_path(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/codecache.py", line 3022, in load_by_key_path
mod = _reload_python_module(key, path, set_sys_modules=in_toplevel)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/runtime/compile_tasks.py", line 31, in _reload_python_module
exec(code, mod.__dict__, mod.__dict__)
File "/tmp/tmp401_uv9u/in/cinruj2becx4kecqpts3wluagyedyxjwao554togl2aic6xyvuqm.py", line 309, in <module>
async_compile.wait(globals())
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/async_compile.py", line 481, in wait
self._wait_futures(scope)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/async_compile.py", line 501, in _wait_futures
kernel = result.result()
^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/codecache.py", line 3524, in result
return self.result_fn()
^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/async_compile.py", line 368, in get_result
kernel.precompile(
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 325, in precompile
self._make_launchers()
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 482, in _make_launchers
launchers.append(result.make_launcher())
^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 1279, in make_launcher
self.reload_cubin_path()
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 1271, in reload_cubin_path
raise RuntimeError(
torch._inductor.exc.InductorError: RuntimeError: ('Cubin file saved by TritonBundler not found at %s', '/tmp/tmp96eiehwi/triton/63DQS3QTGSR5JN435ZZGJB7PR5KXH675WH6S75753GZHHHOHU2TQ/triton_poi_fused_add_diag_embed_div_mul_sub_2.cubin')
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 3154, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 3154, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 426, in instantiated_test
result = test(self, **param_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_cuda.py", line 256, in wrapped
return f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1215, in dep_fn
return fn(slf, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 1612, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1147, in test_wrapper
raise e_tracked from e
Exception: Caused by sample input at index 44: SampleInput(input=Tensor[size=(3, 5), device="cuda:0", dtype=torch.float32], args=(), kwargs={'some': 'False'}, broadcasts_input=False, name='')
To execute this test, run the following from the base repo dir:
PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=44 python test/inductor/test_torchinductor_opinfo.py TestInductorOpInfoCUDA.test_comprehensive_svd_cuda_float32
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_torchinductor_opinfo.py`
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @clee2000 @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @muchulee8 @amjames @aakhundov
| true
|
3,044,591,521
|
DISABLED test_comprehensive_amin_cuda_float32 (__main__.TestInductorOpInfoCUDA)
|
pytorch-bot[bot]
|
open
|
[
"high priority",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 4
|
NONE
|
Platforms: inductor
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_comprehensive_amin_cuda_float32&suite=TestInductorOpInfoCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/41762952404).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_comprehensive_amin_cuda_float32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1135, in test_wrapper
return test(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1434, in only_fn
return fn(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 2291, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1215, in dep_fn
return fn(slf, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1215, in dep_fn
return fn(slf, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1215, in dep_fn
return fn(slf, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 1612, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 1534, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/unittest/mock.py", line 1396, in patched
return func(*newargs, **newkeywargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/contextlib.py", line 81, in inner
return func(*args, **kwds)
^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/contextlib.py", line 81, in inner
return func(*args, **kwds)
^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/contextlib.py", line 81, in inner
return func(*args, **kwds)
^^^^^^^^^^^^^^^^^^^
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 962, in inner
raise e
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 954, in inner
fn(self, device, dtype, op)
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 1207, in test_comprehensive
raise e
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 1182, in test_comprehensive
self.check_model_gpu(
File "/opt/conda/envs/py_3.12/lib/python3.12/contextlib.py", line 81, in inner
return func(*args, **kwds)
^^^^^^^^^^^^^^^^^^^
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 657, in check_model_gpu
check_model(
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 498, in check_model
actual = run(*example_inputs, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_dynamo/eval_frame.py", line 676, in _fn
raise e.remove_dynamo_frames() from None # see TORCHDYNAMO_VERBOSE=1
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 880, in _compile_fx_inner
raise InductorError(e, currentframe()).with_traceback(
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 864, in _compile_fx_inner
mb_compiled_graph = fx_codegen_and_compile(
^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 1487, in fx_codegen_and_compile
return scheme.codegen_and_compile(gm, example_inputs, inputs_to_check, graph_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 1374, in codegen_and_compile
compiled_module = graph.compile_to_module()
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/graph.py", line 2238, in compile_to_module
return self._compile_to_module()
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/graph.py", line 2248, in _compile_to_module
mod = self._compile_to_module_lines(wrapper_code)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/graph.py", line 2312, in _compile_to_module_lines
mod = PyCodeCache.load_by_key_path(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/codecache.py", line 3022, in load_by_key_path
mod = _reload_python_module(key, path, set_sys_modules=in_toplevel)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/runtime/compile_tasks.py", line 31, in _reload_python_module
exec(code, mod.__dict__, mod.__dict__)
File "/tmp/tmpgizho8gt/d2/cd2y4mausykw5umyrmzdum2qkzn2fg7nsf2laqhorhycn73fix42.py", line 78, in <module>
async_compile.wait(globals())
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/async_compile.py", line 481, in wait
self._wait_futures(scope)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/async_compile.py", line 501, in _wait_futures
kernel = result.result()
^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/codecache.py", line 3524, in result
return self.result_fn()
^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/async_compile.py", line 368, in get_result
kernel.precompile(
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 325, in precompile
self._make_launchers()
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 482, in _make_launchers
launchers.append(result.make_launcher())
^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 1279, in make_launcher
self.reload_cubin_path()
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 1271, in reload_cubin_path
raise RuntimeError(
torch._inductor.exc.InductorError: RuntimeError: ('Cubin file saved by TritonBundler not found at %s', '/tmp/tmp_g6y7esm/triton/2U3MZWPKROAIPUJQWFF47YLASWPNKLNMVWDZ7NNDZNKFESIEDIKA/triton_poi_fused_amin_eq_0.cubin')
Set TORCHDYNAMO_VERBOSE=1 for the internal stack trace (please do this especially if you're reporting a bug to PyTorch). For even more developer context, set TORCH_LOGS="+dynamo"
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 3154, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 3154, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 426, in instantiated_test
result = test(self, **param_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 1612, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1147, in test_wrapper
raise e_tracked from e
Exception: Caused by sample input at index 2: SampleInput(input=Tensor[size=(), device="cuda:0", dtype=torch.float32], args=(), kwargs={'dim': '-1', 'keepdim': 'False'}, broadcasts_input=False, name='')
To execute this test, run the following from the base repo dir:
PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=2 python test/inductor/test_torchinductor_opinfo.py TestInductorOpInfoCUDA.test_comprehensive_amin_cuda_float32
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_torchinductor_opinfo.py`
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @clee2000 @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @muchulee8 @amjames @aakhundov
| true
|
3,044,591,468
|
DISABLED test_comprehensive_asinh_cuda_float32 (__main__.TestInductorOpInfoCUDA)
|
pytorch-bot[bot]
|
open
|
[
"high priority",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 4
|
NONE
|
Platforms: inductor
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_comprehensive_asinh_cuda_float32&suite=TestInductorOpInfoCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/41765095197).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_comprehensive_asinh_cuda_float32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/common_device_type.py", line 1135, in test_wrapper
return test(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/common_device_type.py", line 1434, in only_fn
return fn(self, *args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/common_utils.py", line 2291, in wrapper
fn(*args, **kwargs)
~~^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/common_device_type.py", line 1215, in dep_fn
return fn(slf, *args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/common_device_type.py", line 1215, in dep_fn
return fn(slf, *args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/common_device_type.py", line 1215, in dep_fn
return fn(slf, *args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/common_utils.py", line 1612, in wrapper
fn(*args, **kwargs)
~~^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/common_utils.py", line 1534, in wrapper
fn(*args, **kwargs)
~~^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/unittest/mock.py", line 1424, in patched
return func(*newargs, **newkeywargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/contextlib.py", line 85, in inner
return func(*args, **kwds)
File "/opt/conda/envs/py_3.13/lib/python3.13/contextlib.py", line 85, in inner
return func(*args, **kwds)
File "/opt/conda/envs/py_3.13/lib/python3.13/contextlib.py", line 85, in inner
return func(*args, **kwds)
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 962, in inner
raise e
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 954, in inner
fn(self, device, dtype, op)
~~^^^^^^^^^^^^^^^^^^^^^^^^^
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 1207, in test_comprehensive
raise e
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 1182, in test_comprehensive
self.check_model_gpu(
~~~~~~~~~~~~~~~~~~~~^
fn,
^^^
...<2 lines>...
**adjusted_kwargs,
^^^^^^^^^^^^^^^^^^
)
^
File "/opt/conda/envs/py_3.13/lib/python3.13/contextlib.py", line 85, in inner
return func(*args, **kwds)
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 657, in check_model_gpu
check_model(
~~~~~~~~~~~^
self,
^^^^^
...<13 lines>...
output_process_fn_grad=output_process_fn_grad,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 608, in check_model
actual_grad = compute_grads(example_inputs, kwargs, actual, grads)
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 406, in compute_grads
return torch.autograd.grad(
~~~~~~~~~~~~~~~~~~~^
flat_diff_results,
^^^^^^^^^^^^^^^^^^
...<3 lines>...
retain_graph=True,
^^^^^^^^^^^^^^^^^^
)
^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/autograd/__init__.py", line 503, in grad
result = _engine_run_backward(
outputs,
...<5 lines>...
accumulate_grad=False,
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/autograd/graph.py", line 824, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
t_outputs, *args, **kwargs
^^^^^^^^^^^^^^^^^^^^^^^^^^
) # Calls into the C++ engine to run the backward pass
^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/autograd/function.py", line 307, in apply
return user_fn(self, *args)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 2191, in backward
return impl_fn()
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 2177, in impl_fn
out = CompiledFunction._backward_impl(ctx, all_args)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 2272, in _backward_impl
CompiledFunction.compiled_bw = aot_config.bw_compiler(
~~~~~~~~~~~~~~~~~~~~~~^
copy.deepcopy(bw_module), placeholder_list
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_functorch/aot_autograd.py", line 483, in __call__
return self.compiler_fn(gm, example_inputs)
~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/backends/common.py", line 73, in _wrapped_bw_compiler
disable(
~~~~~~~~
bw_compiler_fn, reason="do not trace backward compiler function"
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
)(*args, **kwargs),
~^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/eval_frame.py", line 872, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_utils_internal.py", line 97, in wrapper_function
return function(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_inductor/compile_fx.py", line 2234, in bw_compiler
return inner_compile(
gm,
...<5 lines>...
boxed_forward_device_index=forward_device,
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_inductor/compile_fx.py", line 710, in compile_fx_inner
return wrap_compiler_debug(_compile_fx_inner, compiler_name="inductor")(
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
gm,
^^^
example_inputs,
^^^^^^^^^^^^^^^
**kwargs,
^^^^^^^^^
)
^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/repro/after_aot.py", line 124, in debug_wrapper
inner_compiled_fn = compiler_fn(gm, example_inputs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_inductor/compile_fx.py", line 880, in _compile_fx_inner
raise InductorError(e, currentframe()).with_traceback(
e.__traceback__
) from None
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_inductor/compile_fx.py", line 864, in _compile_fx_inner
mb_compiled_graph = fx_codegen_and_compile(
gm, example_inputs, inputs_to_check, **graph_kwargs
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_inductor/compile_fx.py", line 1487, in fx_codegen_and_compile
return scheme.codegen_and_compile(gm, example_inputs, inputs_to_check, graph_kwargs)
~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_inductor/compile_fx.py", line 1374, in codegen_and_compile
compiled_module = graph.compile_to_module()
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_inductor/graph.py", line 2238, in compile_to_module
return self._compile_to_module()
~~~~~~~~~~~~~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_inductor/graph.py", line 2248, in _compile_to_module
mod = self._compile_to_module_lines(wrapper_code)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_inductor/graph.py", line 2312, in _compile_to_module_lines
mod = PyCodeCache.load_by_key_path(
key,
...<2 lines>...
attrs={**self.constants, **self.torchbind_constants},
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_inductor/codecache.py", line 3022, in load_by_key_path
mod = _reload_python_module(key, path, set_sys_modules=in_toplevel)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_inductor/runtime/compile_tasks.py", line 31, in _reload_python_module
exec(code, mod.__dict__, mod.__dict__)
~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/tmp/tmpel4o06u5/a2/ca2ng4k4n2cyxptty4lubff7g4bdycvpg5ibcceovomu3o4vesag.py", line 91, in <module>
async_compile.wait(globals())
~~~~~~~~~~~~~~~~~~^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_inductor/async_compile.py", line 481, in wait
self._wait_futures(scope)
~~~~~~~~~~~~~~~~~~^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_inductor/async_compile.py", line 501, in _wait_futures
kernel = result.result()
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_inductor/codecache.py", line 3524, in result
return self.result_fn()
~~~~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_inductor/async_compile.py", line 368, in get_result
kernel.precompile(
~~~~~~~~~~~~~~~~~^
warm_cache_only=False,
^^^^^^^^^^^^^^^^^^^^^^
reload_kernel=reload_kernel_in_parent,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
static_triton_bundle_key=CompiledTritonKernels.key(source_code),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 325, in precompile
self._make_launchers()
~~~~~~~~~~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 482, in _make_launchers
launchers.append(result.make_launcher())
~~~~~~~~~~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 1279, in make_launcher
self.reload_cubin_path()
~~~~~~~~~~~~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 1271, in reload_cubin_path
raise RuntimeError(
"Cubin file saved by TritonBundler not found at %s", cubin_location
)
torch._inductor.exc.InductorError: RuntimeError: ('Cubin file saved by TritonBundler not found at %s', '/tmp/tmpmgh8qmyp/triton/W7C2BUE565B33EBVQRIAWWPYMR7MKL4OT4JHCNUSFCSH7PRGZ5UA/triton_poi_fused_add_mul_pow_rsqrt_0.cubin')
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/common_utils.py", line 3154, in wrapper
method(*args, **kwargs)
~~~~~~^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/common_utils.py", line 3154, in wrapper
method(*args, **kwargs)
~~~~~~^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/common_device_type.py", line 426, in instantiated_test
result = test(self, **param_kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/common_utils.py", line 1612, in wrapper
fn(*args, **kwargs)
~~^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/common_device_type.py", line 1147, in test_wrapper
raise e_tracked from e
Exception: Caused by sample input at index 0: SampleInput(input=Tensor[size=(20, 20), device="cuda:0", dtype=torch.float32], args=(), kwargs={}, broadcasts_input=False, name='')
To execute this test, run the following from the base repo dir:
PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=0 python test/inductor/test_torchinductor_opinfo.py TestInductorOpInfoCUDA.test_comprehensive_asinh_cuda_float32
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_torchinductor_opinfo.py`
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @clee2000 @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @muchulee8 @amjames @aakhundov
| true
|
3,044,587,182
|
[Typing] Improve device typing for `torch.set_default_device()`
|
shink
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 12
|
CONTRIBUTOR
|
Part of: #152952
Here is the definition of `torch.types.Device`:
https://github.com/pytorch/pytorch/blob/ab997d9ff584e8623de146b6eb9c9074081b045b/torch/types.py#L74
So `_Optional[_Union["torch.device", str, builtins.int]]` is equivalent to it.
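In practice this means all of the following call forms type-check against the alias (a small usage sketch, not code from this PR):
```python
import torch

torch.set_default_device(torch.device("cuda", 0))  # torch.device
torch.set_default_device("cuda:0")                 # str
torch.set_default_device(0)                        # int (device index)
torch.set_default_device(None)                     # None is covered too, since the alias is Optional
```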
cc: @Skylion007
| true
|
3,044,564,077
|
[Typing] Apply `torch.types.Device` in `torch/cuda/memory.py`
|
shink
|
open
|
[
"triaged",
"open source",
"topic: not user facing"
] | 5
|
CONTRIBUTOR
|
Part of: #152952
Here is the definition of `torch.types.Device`:
https://github.com/pytorch/pytorch/blob/ab997d9ff584e8623de146b6eb9c9074081b045b/torch/types.py#L74
It contains `int`, so the `int` in `Union[Device, int]` is redundant.
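For illustration, the simplification looks like this on a signature modeled after the ones in `torch/cuda/memory.py` (hypothetical names, not the actual diff):
```python
from typing import Union
from torch.types import Device  # Optional[Union[torch.device, str, int]]

# Before: the extra `int` is already covered by Device.
def memory_allocated_before(device: Union[Device, int] = None) -> int: ...

# After: Device alone expresses torch.device, str, int, and None.
def memory_allocated_after(device: Device = None) -> int: ...
```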
cc: @Skylion007
| true
|
3,044,559,891
|
remove register_fake
|
yf225
|
closed
|
[
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #153026
* #152775
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,044,513,586
|
Multiple CUDA graphs utilizing multiple CUDA GPUs encounter illegal memory access during replay
|
Atream
|
open
|
[
"triaged",
"module: cuda graphs"
] | 3
|
NONE
|
### 🐛 Describe the bug
When capturing multiple CUDA graphs that use multiple CUDA GPUs, only the buffers related to the last captured CUDA graph are retained. As a result, only the last captured CUDA graph can be replayed successfully, while replaying other CUDA graphs leads to illegal memory access.
Testing revealed that manually managing all tensors required during execution outside the CUDA graph allows successful runs. This suggests that the memory pool of previous CUDA graphs might be released or corrupted during subsequent CUDA graph captures.
```python
import torch
origin_device = torch.device("cuda:0")
device_id = 0
TP_size = 2
torch.set_default_device(origin_device)
torch.set_default_dtype(torch.bfloat16)
cuda_graphs = [1, 2]
streams = [torch.cuda.Stream(torch.device(f"cuda:{i}")) for i in range(TP_size)]
buffer = [torch.zeros(i, 7168, device = torch.device("cuda:1")) for i in cuda_graphs] # create buffer manually
def forward(hidden_states, cuda_graph_idx):
    orig_stream = torch.cuda.current_stream(origin_device)
    main_begin_event = torch.cuda.Event()
    orig_stream.record_event(main_begin_event)
    for device in range(TP_size):
        torch.cuda.set_device(torch.device(f"cuda:{device}"))
        streams[device].wait_event(main_begin_event)
        with torch.cuda.stream(streams[device]):
            if device != device_id:
                # Creating a new tensor here triggers the bug
                cur_hidden_states = torch.ones(hidden_states.shape, device=torch.device(f"cuda:{device}"))
                # using the buffer created beforehand works instead:
                # buffer[cuda_graph_idx].fill_(1)
            if device == device_id:
                current_ans = hidden_states * 2
        end_event = torch.cuda.Event()
        streams[device].record_event(end_event)
        orig_stream.wait_event(end_event)
    torch.cuda.set_device(origin_device)
    for device in range(TP_size):
        orig_stream.wait_stream(streams[device])
    return current_ans

graphs = [torch.cuda.CUDAGraph() for _ in range(len(cuda_graphs))]
for i in range(len(cuda_graphs)):
    print("capturing", i)
    hidden_states = torch.randn((cuda_graphs[i], 7168), device=origin_device)
    for warm_up_iters in range(20):
        forward(hidden_states, i)
    for device in range(TP_size):
        torch.cuda.synchronize(device)
    with torch.cuda.graph(graphs[i]):
        forward(hidden_states, i)
    for device in range(TP_size):
        torch.cuda.synchronize(device)
    print("replaying", i)
    graphs[i].replay()
    for device in range(TP_size):
        torch.cuda.synchronize(device)

for i in reversed(range(len(cuda_graphs))):
    for device in range(TP_size):
        torch.cuda.synchronize(device)
    print("replaying", i)
    graphs[i].replay()
    for device in range(TP_size):
        torch.cuda.synchronize(device)
```
### Versions
Collecting environment information...
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 4.0.1
Libc version: glibc-2.35
Python version: 3.12.9 | packaged by Anaconda, Inc. | (main, Feb 6 2025, 18:56:27) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-119-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 4090
GPU 1: NVIDIA GeForce RTX 4090
GPU 2: NVIDIA GeForce RTX 4090
GPU 3: NVIDIA GeForce RTX 4090
GPU 4: NVIDIA GeForce RTX 4090
GPU 5: NVIDIA GeForce RTX 4090
GPU 6: NVIDIA GeForce RTX 4090
GPU 7: NVIDIA GeForce RTX 4090
Nvidia driver version: 550.54.15
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 192
On-line CPU(s) list: 0-191
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8488C
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 48
Socket(s): 2
Stepping: 8
CPU max MHz: 3800.0000
CPU min MHz: 800.0000
BogoMIPS: 4800.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 4.5 MiB (96 instances)
L1i cache: 3 MiB (96 instances)
L2 cache: 192 MiB (96 instances)
L3 cache: 210 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-47,96-143
NUMA node1 CPU(s): 48-95,144-191
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] ktransformers==0.2.3.post2+cu124torch26fancy
[pip3] numpy==2.2.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.6.0
[pip3] torchaudio==2.6.0
[pip3] torchvision==0.21.0
[pip3] triton==3.2.0
[conda] ktransformers 0.2.3.post2+cu124torch26fancy pypi_0 pypi
[conda] numpy 2.2.4 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] torch 2.6.0 pypi_0 pypi
[conda] torchaudio 2.6.0 pypi_0 pypi
[conda] torchvision 0.21.0 pypi_0 pypi
[conda] triton 3.2.0 pypi_0 pypi
cc @mcarilli @ezyang @eellison @penguinwu @BoyuanFeng
| true
|
3,044,502,694
|
[RFC] Enable XPU+FlexAttention on Intel GPU
|
liangan1
|
open
|
[
"triaged",
"enhancement",
"oncall: pt2",
"module: higher order operators",
"module: pt2-dispatcher",
"module: xpu",
"module: flex attention"
] | 1
|
NONE
|
### 🚀 The feature, motivation and pitch
## Motivation
Attention has become the critical performance bottleneck in current LLM models, and FlexAttention is a good choice to cover the broad set of attention variants in the transformers family of models. With FlexAttention, it is easy for us to enable paged attention and fused SDPA in the transformers repo on the XPU device. Besides, it also provides a candidate for processing attention in LLM ecosystem libraries, e.g., vLLM and SGLang, on the XPU device.
FlexAttention is also a good starting point to mature the Intel Triton-based GEMM kernels. FlexAttention provides both a flexattention kernel and a flexdecoding kernel to cover compute-bound and memory-bound GEMM computation, and different shapes should also be supported to serve LLM inference, e.g. head_dim=64, 96, 128, 256.
## Our Plan
As you know, FlexAttention is flexible enough to cover all kinds of attention variants, which also means the dependent software stack needs to be strong enough to cooperate with the Triton template kernels. So, landing XPU+FlexAttention in torch-2.8 is still a stretch goal.
## PR List
The FlexAttention is still in active development and the API is not stable yet.
- [ ] [[WIP]Enable XPU path for FlexAttention](https://github.com/pytorch/pytorch/pull/143553)
### Alternatives
_No response_
### Additional context
_No response_
cc @chauhang @penguinwu @zou3519 @ydwu4 @bdhirsh @gujinghui @EikanWang @fengyuan14 @guangyey @Chillee @drisspg @yanboliang @BoyuanFeng
| true
|
3,044,480,415
|
Fix Codegen.cmake warning
|
cyyever
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 6
|
COLLABORATOR
|
Fix
```
CMake Warning (dev) in cmake/Codegen.cmake:
A logical block opening on the line
/var/lib/jenkins/workspace/cmake/Codegen.cmake:393 (if)
closes on the line
/var/lib/jenkins/workspace/cmake/Codegen.cmake:401 (endif)
with mis-matching arguments.
```
by removing the condition in `endif`.
We could instead fix the condition so it matches, but that is not best practice. For example, cmake_lint warns about it, and CMake says
```
The optional <condition> argument is supported for backward compatibility only.
```
| true
|
3,044,472,261
|
XPU inference output abnormal with device 'XPU:1'
|
maxwell-zhengxu
|
open
|
[
"high priority",
"triage review",
"triaged",
"module: xpu"
] | 4
|
NONE
|
### 🐛 Describe the bug
On a system with two Intel GPUs that is otherwise working well, the inference output is always correct for device 'xpu:0', while the output for device 'xpu:1' is randomly abnormal.
```python
import torch
import torchvision.models as models
torch.manual_seed(0)
model = models.resnet50(weights="ResNet50_Weights.DEFAULT")
model.eval()
data = torch.rand(1, 3, 224, 224)
device = torch.device('xpu:1') # 'xpu:0'
model = model.to(device=device, dtype=torch.float16)
data = data.to(device, dtype=torch.float16)
with torch.no_grad():
ret = model(data)
print(ret)
print("Execution finished")
```
for 'xpu:0' output:
```
-1.3691e+00, 5.7471e-01, -6.7969e-01, -1.2334e+00, 6.6284e-02,
-5.5713e-01, 7.4402e-02, 5.0879e-01, -8.7549e-01, -1.2363e+00,
-9.1492e-02, -7.7588e-01, -1.4102e+00, -9.2334e-01, 6.4600e-01,
-5.6267e-03, -7.8223e-01, -1.1904e+00, -4.1602e-01, 3.2806e-02,
-4.9805e-01, -6.3574e-01, -8.5059e-01, -6.8555e-01, -9.4434e-01,
-8.8623e-01, -6.7920e-01, -6.9824e-01, -2.8833e-01, 2.0312e+00]],
```
for 'xpu:1' output(random):
```
nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,
nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,
nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan]],
```
This bug can be worked around with `torch.xpu.set_device('xpu:1')`, but an app that needs to use both XPUs at the same time will still have a problem.
### Versions
Collecting environment information...
PyTorch version: 2.7.0+xpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.1 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.10.12 (main, Feb 4 2025, 14:57:36) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.8.0-58-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 144
On-line CPU(s) list: 0-143
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8452Y
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 36
Socket(s): 2
Stepping: 8
CPU max MHz: 3200.0000
CPU min MHz: 800.0000
BogoMIPS: 4000.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect user_shstk avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req hfi vnmi avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr ibt amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 3.4 MiB (72 instances)
L1i cache: 2.3 MiB (72 instances)
L2 cache: 144 MiB (72 instances)
L3 cache: 135 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-35,72-107
NUMA node1 CPU(s): 36-71,108-143
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.1.2
[pip3] pytorch-triton-xpu==3.3.0
[pip3] torch==2.7.0+xpu
[pip3] torchaudio==2.7.0+xpu
[pip3] torchvision==0.22.0+xpu
[conda] Could not collect
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @gujinghui @EikanWang @fengyuan14 @guangyey
| true
|
3,044,465,268
|
Adding a generic attribute for easier checkpoint discrepancy debugging.
|
githubsgi
|
open
|
[
"triaged",
"open source"
] | 5
|
CONTRIBUTOR
|
Adding a generic attribute called layer_id to the object that recompute_fn is a method of. This ties checkpointing saved-vs-recomputed discrepancies to a layer in the model.
topic: not user facing
| true
|
3,044,464,840
|
Add a project section to pyproject.toml, making uv sync work
|
ezyang
|
open
|
[
"topic: not user facing"
] | 7
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #153020
With this change, I can now run `uv sync -v` and get all dependencies I need and then trigger build of PyTorch. (The `-v` is good because the build takes a long time and uv hides progress by default.)
Signed-off-by: Edward Z. Yang <ezyang@mit.edu>
| true
|
3,044,455,802
|
[RFC][API-Unstable]Enable A16W4 on XPU Device
|
liangan1
|
open
|
[
"triaged",
"module: xpu"
] | 1
|
NONE
|
### 🚀 The feature, motivation and pitch
## Motivation
As you know, the generation task with an LLM is autoregressive, and the GEMM computation of the decoding stage for the next token is memory bound. Weight-only quantization with A16W4 has been widely adopted for LLM inference, especially on client GPUs with single-user inference. It helps reduce memory consumption and memory footprint to speed up inference.
## Plan
We are working on XPU device enabling in torchAO. TorchAO provides multiple quantization recipes for A16W4, e.g., RTN, GPTQ and AWQ. **The goal for torch-2.8 is to provide a performant and comprehensive int4 solution with RTN; the GPTQ enabling is a stretch goal**. RTN can produce reasonable output in the generation task, but there may be a big accuracy gap on a specific dataset and metric.
For GPTQ/AWQ, at the current stage we want to prioritize GPTQ. On the kernel side, the int4 matmul with oneDNN should be reused by RTN/GPTQ/AWQ. The current implementation in torchAO uses a float zero point, with scale/zero point stacked into one tensor, but we need to use an int zero point. The impact is that we need to change the API of some classes in torchAO, and more effort is needed to enable XPU and produce an acceptable design for the int zero point of GPTQ. There should be no performance gap between the different algorithms. Even with RTN we use group-wise quantization, and the granularity is similar across algorithms.
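As a rough reference for what RTN group-wise A16W4 computes, here is a hand-rolled sketch in plain PyTorch (illustrative only; the function names, group size, and unsigned-int4-with-zero-point layout are assumptions, and the real path goes through torchAO and the oneDNN int4 matmul):
```python
import torch

def rtn_quantize_int4(weight: torch.Tensor, group_size: int = 128):
    # weight: [out_features, in_features], quantized group-wise along the input dim.
    out_features = weight.shape[0]
    w = weight.reshape(out_features, -1, group_size)         # [O, G, group_size]
    w_min = w.amin(dim=-1, keepdim=True)
    w_max = w.amax(dim=-1, keepdim=True)
    scale = (w_max - w_min).clamp(min=1e-5) / 15.0           # 4-bit unsigned range [0, 15]
    zero_point = torch.round(-w_min / scale)                 # integer zero point (kept in float storage here)
    q = torch.clamp(torch.round(w / scale + zero_point), 0, 15).to(torch.uint8)
    return q.reshape(out_features, -1), scale.squeeze(-1), zero_point.squeeze(-1)

def rtn_dequantize(q, scale, zero_point, group_size: int = 128, dtype=torch.float16):
    # Reference dequantization back to the A16 activation dtype.
    out_features = q.shape[0]
    w = (q.reshape(out_features, -1, group_size).to(dtype) - zero_point.unsqueeze(-1)) * scale.unsqueeze(-1)
    return w.reshape(out_features, -1)

w = torch.randn(64, 256, dtype=torch.float16)
q, s, zp = rtn_quantize_int4(w)
print((w - rtn_dequantize(q, s, zp)).abs().max())  # quantization error stays small
```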
## Features
- [x] [int4 WOQ gemm XPU Support #137566](https://github.com/pytorch/pytorch/pull/137566)
- [ ] [OneDNN primitive cache support for Int4 WOQ gemm on XPU #147693](https://github.com/pytorch/pytorch/pull/147693/)
- [x] [INT4 XPU enabling in torchAO](https://github.com/pytorch/ao/pull/1577)
- [ ] [WIP]Enable GPTQ + XPU in torchAO
- [ ] Enable AWQ+ XPU in torchAO
- [ ] Refactoring the layout design in the torchAO to support both float zp and int zp.
### Alternatives
_No response_
### Additional context
_No response_
cc @gujinghui @EikanWang @fengyuan14 @guangyey
| true
|
3,044,434,059
|
DISABLED test_comprehensive_scatter_xpu_bool (__main__.TestInductorOpInfoXPU)
|
chuanqi129
|
closed
|
[
"triaged",
"skipped",
"module: xpu"
] | 1
|
COLLABORATOR
|
Platforms: linux
This test was disabled because it is failing on main branch ([recent examples](https://torch-ci.com/failure?failureCaptures=%5B%22'test%2Finductor%2Ftest_torchinductor_opinfo.py%3A%3ATestInductorOpInfoXPU%3A%3Atest_comprehensive_scatter_xpu_bool'%2C%20'test%2Finductor%2Ftest_torchinductor_opinfo.py%3A%3ATestInductorOpInfoXPU%3A%3Atest_comprehensive_scatter_xpu_int64'%22%5D)).
cc @gujinghui @EikanWang @fengyuan14 @guangyey
| true
|
3,044,432,643
|
DISABLED test_comprehensive_scatter_xpu_int64 (__main__.TestInductorOpInfoXPU)
|
chuanqi129
|
closed
|
[
"triaged",
"skipped",
"module: xpu"
] | 1
|
COLLABORATOR
|
Platforms: linux
This test was disabled because it is failing on main branch ([recent examples](https://torch-ci.com/failure?failureCaptures=%5B%22'test%2Finductor%2Ftest_torchinductor_opinfo.py%3A%3ATestInductorOpInfoXPU%3A%3Atest_comprehensive_scatter_xpu_bool'%2C%20'test%2Finductor%2Ftest_torchinductor_opinfo.py%3A%3ATestInductorOpInfoXPU%3A%3Atest_comprehensive_scatter_xpu_int64'%22%5D)).
cc @gujinghui @EikanWang @fengyuan14 @guangyey
| true
|
3,044,429,040
|
inconsistent grads between two types of `allgather`s
|
gameofdimension
|
open
|
[
"oncall: distributed",
"module: autograd"
] | 0
|
NONE
|
### 🐛 Describe the bug
I've observed a gradient discrepancy between two PyTorch all-gather implementations: one using the DTensor API, and the other using all_gather_tensor_autograd. My goal is to implement a correct autograd-compatible all-gather operation, but I'm unsure which implementation (if either) produces the right gradients.
```python
import os
from datetime import timedelta
import torch
from torch.distributed import init_device_mesh
from torch.distributed.tensor import DTensor, Replicate, Shard
def make_data(gen, dtype):
data = torch.rand(4, 5, requires_grad=True, generator=gen).to(
device=torch.cuda.current_device(),
dtype=dtype,
)
data.retain_grad()
return data
def init_distributed():
local_rank = int(os.environ["LOCAL_RANK"])
torch.cuda.set_device(local_rank)
init_timeout_seconds = 300
torch.distributed.init_process_group(
backend="nccl",
timeout=timedelta(seconds=init_timeout_seconds),
)
def allgather1(mesh, data: torch.Tensor):
return DTensor.from_local(
data, mesh, [Shard(0)]
).redistribute(
mesh, [Replicate()]
).to_local()
def allgather2(mesh, data):
return torch.distributed._functional_collectives.all_gather_tensor_autograd(
data, gather_dim=0, group=mesh.get_group())
def square_fb(out):
loss = (out**2).sum()
loss.backward()
torch.cuda.synchronize()
return loss
def check1(mesh, data):
out = allgather1(mesh, data)
loss = square_fb(out)
# 1. grad is 2*input
assert torch.allclose(data.grad, 2 * data)
return out, loss
def check2(mesh, data):
out = allgather2(mesh, data)
loss = square_fb(out)
# 2. grad is 2*world_size*input
assert torch.allclose(data.grad, 2 * torch.distributed.get_world_size() * data)
return out, loss
def check_allgather():
init_distributed()
world_size = torch.distributed.get_world_size()
mesh = init_device_mesh(device_type="cuda", mesh_shape=[world_size])
dtype = torch.float32
rank = torch.distributed.get_rank()
data1 = make_data(torch.Generator().manual_seed(10 + rank), dtype)
data2 = make_data(torch.Generator().manual_seed(10 + rank), dtype)
assert torch.allclose(data1, data2)
ag1, loss1 = check1(mesh, data1)
ag2, loss2 = check2(mesh, data2)
assert torch.allclose(ag1, ag2) # two allgathers produce the same result
assert torch.allclose(loss1, loss2) # both forwards are equal too
print("all check passed")
if __name__ == "__main__":
check_allgather()
```
**to run the script**
```
torchrun --nproc-per-node=8 repro.py
```
### Versions
Collecting environment information...
PyTorch version: 2.7.0+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.31
Python version: 3.11.9 (main, May 14 2024, 09:36:59) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.10.0-136.36.0.112.4.oe2203sp1.x86_64-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 12.1.105
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A800-SXM4-80GB
GPU 1: NVIDIA A800-SXM4-80GB
GPU 2: NVIDIA A800-SXM4-80GB
GPU 3: NVIDIA A800-SXM4-80GB
GPU 4: NVIDIA A800-SXM4-80GB
GPU 5: NVIDIA A800-SXM4-80GB
GPU 6: NVIDIA A800-SXM4-80GB
GPU 7: NVIDIA A800-SXM4-80GB
Nvidia driver version: 535.161.07
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 57 bits virtual
CPU(s): 128
On-line CPU(s) list: 0-127
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 106
Model name: Intel(R) Xeon(R) Platinum 8352Y CPU @ 2.20GHz
Stepping: 6
CPU MHz: 2799.998
CPU max MHz: 3400.0000
CPU min MHz: 800.0000
BogoMIPS: 4400.00
Virtualization: VT-x
L1d cache: 3 MiB
L1i cache: 2 MiB
L2 cache: 80 MiB
L3 cache: 96 MiB
NUMA node0 CPU(s): 0-31,64-95
NUMA node1 CPU(s): 32-63,96-127
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Vulnerable
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable, IBPB: disabled, STIBP: disabled, PBRSB-eIBRS: Vulnerable
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid fsrm md_clear pconfig flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.26.2
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] torch==2.7.0
[pip3] torchdiffeq==0.2.5
[pip3] torchsde==0.2.6
[pip3] torchvision==0.22.0
[pip3] triton==3.3.0
[conda] Could not collect
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @ezyang @albanD @gqchen @nikitaved @soulitzer @Varal7 @xmfan
| true
|
3,044,413,527
|
c10d/gloo: add ibverbs backend
|
d4l3k
|
open
|
[
"oncall: distributed",
"fb-exported",
"ciflow/trunk",
"release notes: distributed (c10d)"
] | 5
|
MEMBER
|
Summary:
X-link: https://github.com/pytorch/gloo/pull/437
This provides a new "UnboundBuffer" implementation for Gloo ibverbs backend so it can be used with PyTorch.
This is currently passing basic tests such as `reduce_test` and `send_recv_test`, but there are a number of failures. Putting this up for review so the follow-up fixes are less of a mega PR, and also so we can start doing some initial end-to-end testing with PyTorch.
Known issues:
* recv from any source is not supported yet
* AllreduceBcubeBase2 is failing
Test Plan:
```
buck2 run mode/dbgo //gloo/test:send_recv_test_ibverbs
buck2 test //gloo/test:
GLOO_DEVICE_TRANSPORT=IBVERBS buck2 run @//mode/opt //caffe2/test/distributed:c10d -- -r '.*gloo.*' -f
```
We can't run any of the gloo tests in CI since none of our CI machines have ibverbs so they're disabled by default and need to be manually run.
Differential Revision: D73291471
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab
| true
|
3,044,402,645
|
Operations on different precision tensors in CPU lead to different outputs
|
Redempt1onzzZZ
|
closed
|
[
"module: cpu",
"triaged",
"module: edge cases"
] | 3
|
NONE
|
### 🐛 Describe the bug
A similar finding to [#152294](https://github.com/pytorch/pytorch/issues/152294): the bug also exists in `torch.addcdiv`. It seems that when a scalar tensor (65536) is used as input, the result becomes inf, whereas when a 1-element tensor ([65536]) is used, the calculation runs normally.
```
import torch
input_tensor = torch.zeros([1], dtype=torch.float16, device='cpu')
print(input_tensor)
tensor1 = torch.tensor([0.01], dtype=torch.float16, device='cpu')
print(tensor1)
tensor2 = torch.tensor([65536], dtype=torch.float32, device='cpu')
print(tensor2)
result = torch.addcdiv(input_tensor, tensor1, tensor2, value=0.1)
print(result)
```
<img width="236" alt="Image" src="https://github.com/user-attachments/assets/a654941d-1239-4f38-a6cc-7208fa9a103d" />
```
import torch
input_tensor = torch.zeros([1], dtype=torch.float16, device='cpu')
print(input_tensor)
tensor1 = torch.tensor([0.01], dtype=torch.float16, device='cpu')
print(tensor1)
tensor2 = torch.tensor(65536, dtype=torch.float32, device='cpu')
print(tensor2)
result = torch.addcdiv(input_tensor, tensor1, tensor2, value=0.1)
print(result)
```
<img width="252" alt="Image" src="https://github.com/user-attachments/assets/d2be34a0-2179-4fc7-b472-ca8622b9c504" />
### Versions
Collecting environment information...
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.11.11 (main, Dec 11 2024, 16:28:39) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.8.0-55-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA H100 80GB HBM3
Nvidia driver version: 570.124.06
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 6430
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
Stepping: 8
CPU max MHz: 3400.0000
CPU min MHz: 800.0000
BogoMIPS: 4200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect user_shstk avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req vnmi avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr ibt amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 3 MiB (64 instances)
L1i cache: 2 MiB (64 instances)
L2 cache: 128 MiB (64 instances)
L3 cache: 120 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-31,64-95
NUMA node1 CPU(s): 32-63,96-127
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy==1.15.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.5.1
[pip3] triton==3.1.0
[conda] blas 1.0 mkl https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
[conda] mkl 2023.1.0 h213fc3f_46344 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
[conda] mkl-service 2.4.0 py311h5eee18b_2 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
[conda] mkl_fft 1.3.11 py311h5eee18b_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
[conda] mkl_random 1.2.8 py311ha02d727_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
[conda] numpy 1.26.4 py311h08b1b3b_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
[conda] numpy-base 1.26.4 py311hf175353_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] torch 2.5.1 pypi_0 pypi
[conda] triton 3.1.0 pypi_0 pypi
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @jerryzh168
| true
|
3,044,401,500
|
[Lint] Add install command for GHA step
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing"
] | 5
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152719
* __->__ #153013
Otherwise, it fails to run the script
| true
|
3,044,401,411
|
[Testing] Add logic for running MPS tests
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #153013
* #152719
* __->__ #153012
Prep change for getting rid of `_mac-test-mps.yml`
A complete no-op for now, but it will be used by the PR above in the stack; the two should be landed a few days apart to avoid forcing lots of people to rebase their PRs.
| true
|
3,044,392,004
|
[WIP][dynamic shapes] unbacked safer cat, repeat
|
pianpwk
|
open
|
[
"module: dynamo",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
With https://github.com/pytorch/pytorch/pull/150483, for https://github.com/pytorch/pytorch/issues/152473
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,044,379,231
|
Detect NVSHMEM location
|
kwen2501
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: distributed (c10d)"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #153010
### Changes
- Detect the NVSHMEM install location via `sysconfig.get_path("purelib")`, which typically resolves to `<conda_env>/lib/python/site-packages`; the NVSHMEM include and lib directories live under `nvidia/nvshmem` (see the sketch below)
- Added link dir via `target_link_directories`
- Removed direct dependency on mlx5
- Added preload rule (following the other NVIDIA libs)
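A minimal Python sketch of that detection step (the real logic lives in CMake; the `nvidia/nvshmem` layout is the one named above):
```python
import os
import sysconfig

# pip-installed NVSHMEM lands under <site-packages>/nvidia/nvshmem.
purelib = sysconfig.get_path("purelib")
nvshmem_root = os.path.join(purelib, "nvidia", "nvshmem")
nvshmem_include = os.path.join(nvshmem_root, "include")
nvshmem_lib = os.path.join(nvshmem_root, "lib")
print(nvshmem_include if os.path.isdir(nvshmem_include) else "NVSHMEM not found")
```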
### Plan of Record
1. End-user experience: link against NVSHMEM dynamically (the NVSHMEM lib is ~100M, similar to NCCL, so we'd rather have users `pip install nvshmem` than have torch carry the bits)
2. Developer experience: at compile time, prefer a wheel dependency over a Git submodule
General rule: use a submodule for a small lib that torch can statically link with
If users `pip install` a lib, our CI build process should do the same, rather than build from a Git submodule (just for its headers, for example)
3. Keep `USE_NVSHMEM` to gate non-Linux platforms, like Windows, Mac
4. At configuration time, we should be able to detect whether nvshmem is available, if not, we don't build `NVSHMEMSymmetricMemory` at all.
For now, we have symbol dependency on two particular libs from NVSHMEM:
- libnvshmem_host.so: contains host side APIs;
- libnvshmem_device.a: contains device-side global variables AND device function impls.
| true
|
3,044,337,298
|
DISABLED test_comprehensive_scatter_xpu_bool (__main__.TestInductorOpInfoXPU)
|
etaf
|
open
|
[
"triaged",
"skipped",
"module: xpu"
] | 1
|
COLLABORATOR
|
Platforms: <fill this in or delete. Valid labels are: asan, linux, mac, macos, rocm, win, windows.>
This test was disabled because it is failing on main branch ([recent examples](https://torch-ci.com/failure?failureCaptures=%5B%22'test%2Finductor%2Ftest_torchinductor_opinfo.py%3A%3ATestInductorOpInfoXPU%3A%3Atest_comprehensive_scatter_xpu_bool'%22%5D)).
cc @gujinghui @EikanWang @fengyuan14 @guangyey
| true
|
3,044,335,974
|
DISABLED test_comprehensive_scatter_xpu_int64 (__main__.TestInductorOpInfoXPU)
|
etaf
|
open
|
[
"triaged",
"skipped",
"module: xpu"
] | 1
|
COLLABORATOR
|
Platforms: <fill this in or delete. Valid labels are: asan, linux, mac, macos, rocm, win, windows.>
This test was disabled because it is failing on main branch ([recent examples](https://torch-ci.com/failure?failureCaptures=%5B%22'test%2Finductor%2Ftest_torchinductor_opinfo.py%3A%3ATestInductorOpInfoXPU%3A%3Atest_comprehensive_scatter_xpu_int64'%22%5D)).
cc @gujinghui @EikanWang @fengyuan14 @guangyey
| true
|
3,044,318,690
|
Remove redundant type aliases of _device_t for torch.Device (#152952)
|
sanjai-11
|
open
|
[
"oncall: distributed",
"module: cpu",
"triaged",
"module: mkldnn",
"open source",
"module: amp (automated mixed precision)",
"release notes: quantization",
"topic: not user facing",
"module: inductor",
"module: dynamo",
"release notes: distributed (checkpoint)",
"suppress-bc-linter",
"module: compiled autograd",
"release notes: inductor (aoti)"
] | 3
|
NONE
|
Fixes #152952
This PR removes redundant type aliases for `_device_t` and replaces them with `torch.types.Device` where applicable, to make the typing system more consistent across PyTorch.
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @jerryzh168 @gujinghui @PenghuiCheng @jianyuh @min-jean-cho @yanbing-j @Guobing-Chen @Xia-Weiwen @snadampal @mcarilli @ptrblck @leslie-fang-intel @voznesenskym @penguinwu @EikanWang @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov @xmfan
| true
|
3,044,295,252
|
[cutlass backend] Use src code to generate cutlass gemm name
|
henrylhtsang
|
open
|
[
"fb-exported",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 9
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #153006
* #152580
Differential Revision: [D74288965](https://our.internmc.facebook.com/intern/diff/D74288965/)
This shaves off 40s for at least small cases, since we don't have to recompile the kernel again.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,044,256,520
|
[autograd][docs] Add more details on why save_for_backward is important in extending autograd note
|
soulitzer
|
open
|
[
"release notes: autograd"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #153094
* __->__ #153005
cc @stas00
| true
|
3,044,255,324
|
[WIP][Inductor-CPU] int8 WoQ concat linear
|
sanchitintel
|
open
|
[
"open source",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 2
|
COLLABORATOR
|
WIP
- [ ] Add UT corresponding to torchao pattern
- [ ] Add perf data
- [ ] Refactor
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,044,226,030
|
[cutlass backend] Skip cuda lib path if it is torch/lib
|
henrylhtsang
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 7
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #153003
Differential Revision: [D74284808](https://our.internmc.facebook.com/intern/diff/D74284808/)
This is a bit risky for the cutlass backend, so I decided to separate it out. Tested offline.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,044,220,979
|
[CI] Use sccache installed in docker image in xla build
|
clee2000
|
open
|
[
"topic: not user facing",
"ciflow/pull"
] | 2
|
CONTRIBUTOR
|
The edited comment should have the info
Sccache stopped working on xla at some point near Dec 17, 2023. I am not sure which commit caused it; I think it was having trouble writing to the cache.
Either way, there is an sccache already installed in the docker image, so we should use that instead of a binary from S3 whose origin and build commit we are no longer sure of.
The one in the docker image is installed here https://github.com/pytorch/xla/blob/69d438ee65cc250c974ca80edd80462ffbb2e163/.github/upstream/Dockerfile#L61 and is also very old, so I have https://github.com/pytorch/xla/pull/9102 to update it
| true
|
3,044,212,130
|
[cutlass backend][test] re-enable test_cuda_compile_command for fbcode
|
henrylhtsang
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #153001
Differential Revision: [D74284047](https://our.internmc.facebook.com/intern/diff/D74284047/)
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,044,159,796
|
[export] Unflatten None
|
angelayi
|
open
|
[
"ciflow/trunk",
"release notes: export"
] | 3
|
CONTRIBUTOR
|
Fixes #ISSUE_NUMBER
| true
|
3,044,149,425
|
`lintrunenr init` fails
|
malfet
|
open
|
[
"module: lint",
"triaged",
"module: devx"
] | 2
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Attempting to run `lintrunner init` fails
```
% lintrunner init --take FLAKE8
Warning: Could not find a lintrunner config at: '.lintrunner.private.toml'. Continuing without using configuration file.
[2025-05-06T22:17:48Z INFO lintrunner::linter] Initializing linter: 'FLAKE8'
[2025-05-06T22:17:48Z INFO lintrunner::linter] the init commands are ["python3", "tools/linter/adapters/pip_init.py", "--dry-run=0", "flake8==6.1.0", "flake8-bugbear==23.3.23", "flake8-comprehensions==3.15.0", "flake8-executable==2.1.3", "flake8-logging-format==0.9.0", "flake8-pyi==23.3.1", "flake8-simplify==0.19.3", "mccabe==0.7.0", "pycodestyle==2.11.1", "pyflakes==3.1.0", "torchfix==0.4.0 ; python_version >= \"3.9\" and python_version < \"3.13\""]
<MainThread:DEBUG> $ uv pip install --user flake8==6.1.0 flake8-bugbear==23.3.23 flake8-comprehensions==3.15.0 flake8-executable==2.1.3 flake8-logging-format==0.9.0 flake8-pyi==23.3.1 flake8-simplify==0.19.3 mccabe==0.7.0 pycodestyle==2.11.1 pyflakes==3.1.0 torchfix==0.4.0 ; python_version >= "3.9" and python_version < "3.13"
error: pip's `--user` is unsupported (use a virtual environment instead)
<MainThread:DEBUG> took 38ms
Traceback (most recent call last):
File "/Users/nshulga/git/pytorch/pytorch/tools/linter/adapters/pip_init.py", line 92, in <module>
run_command(pip_args)
File "/Users/nshulga/git/pytorch/pytorch/tools/linter/adapters/pip_init.py", line 20, in run_command
return subprocess.run(args, check=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/fbcode/platform010/Python3.12.framework/Versions/3.12/lib/python3.12/subprocess.py", line 571, in run
raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['uv', 'pip', 'install', '--user', 'flake8==6.1.0', 'flake8-bugbear==23.3.23', 'flake8-comprehensions==3.15.0', 'flake8-executable==2.1.3', 'flake8-logging-format==0.9.0', 'flake8-pyi==23.3.1', 'flake8-simplify==0.19.3', 'mccabe==0.7.0', 'pycodestyle==2.11.1', 'pyflakes==3.1.0', 'torchfix==0.4.0 ; python_version >= "3.9" and python_version < "3.13"']' returned non-zero exit status 2.
[2025-05-06T22:17:48Z INFO lintrunner::linter] the status is ExitStatus(unix_wait_status(256))
error: lint initializer for 'FLAKE8' failed with non-zero exit code
```
### Versions
2.7.0, nightly
cc @ZainRizvi @huydhn @clee2000
| true
|
3,044,141,928
|
[Dynamo][trace_rules] Add torch.distributed.fb.simple_fsdp to LEGACY_MOD_INLINELIST
|
yf225
|
closed
|
[
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Functions / modules in `torch.distributed.fb.simple_fsdp` are guaranteed to be traceable, and inlining into them is a prerequisite for having both the pre-forward and post-forward hooks in the same graph as the forward for SimpleFSDP modules.
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152998
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,044,135,504
|
[Testing] Add copysign from scalar regression test
|
malfet
|
closed
|
[
"Merged",
"release notes: python_frontend",
"ciflow/mps"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152997
But instead of adding it just for MPS backend, add it to OpInfo
Fixes https://github.com/pytorch/pytorch/issues/152582
| true
|
3,044,082,824
|
DISABLED test_comprehensive_rsub_cuda_float64 (__main__.TestInductorOpInfoCUDA)
|
pytorch-bot[bot]
|
open
|
[
"high priority",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 2
|
NONE
|
Platforms: inductor
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_comprehensive_rsub_cuda_float64&suite=TestInductorOpInfoCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/41741925240).
Over the past 3 hours, it has been determined flaky in 4 workflow(s) with 4 failures and 4 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_comprehensive_rsub_cuda_float64`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1135, in test_wrapper
return test(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1434, in only_fn
return fn(self, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2291, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1215, in dep_fn
return fn(slf, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1215, in dep_fn
return fn(slf, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1215, in dep_fn
return fn(slf, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1612, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1534, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/mock.py", line 1379, in patched
return func(*newargs, **newkeywargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 962, in inner
raise e
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 954, in inner
fn(self, device, dtype, op)
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 1207, in test_comprehensive
raise e
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 1182, in test_comprehensive
self.check_model_gpu(
File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 657, in check_model_gpu
check_model(
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 498, in check_model
actual = run(*example_inputs, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 676, in _fn
raise e.remove_dynamo_frames() from None # see TORCHDYNAMO_VERBOSE=1
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 876, in _compile_fx_inner
raise InductorError(e, currentframe()).with_traceback(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 860, in _compile_fx_inner
mb_compiled_graph = fx_codegen_and_compile(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 1476, in fx_codegen_and_compile
return scheme.codegen_and_compile(gm, example_inputs, inputs_to_check, graph_kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 1363, in codegen_and_compile
compiled_module = graph.compile_to_module()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/graph.py", line 2238, in compile_to_module
return self._compile_to_module()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/graph.py", line 2248, in _compile_to_module
mod = self._compile_to_module_lines(wrapper_code)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/graph.py", line 2312, in _compile_to_module_lines
mod = PyCodeCache.load_by_key_path(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/codecache.py", line 3006, in load_by_key_path
mod = _reload_python_module(key, path, set_sys_modules=in_toplevel)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/runtime/compile_tasks.py", line 31, in _reload_python_module
exec(code, mod.__dict__, mod.__dict__)
File "/tmp/tmpw_5t4te6/n4/cn43t6hhpisxcmgg35m75zudart56xsryxstmmc3gzkaqtgzcquz.py", line 80, in <module>
async_compile.wait(globals())
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/async_compile.py", line 479, in wait
self._wait_futures(scope)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/async_compile.py", line 499, in _wait_futures
kernel = result.result()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/codecache.py", line 3508, in result
return self.result_fn()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/async_compile.py", line 368, in get_result
kernel.precompile(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 325, in precompile
self._make_launchers()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 482, in _make_launchers
launchers.append(result.make_launcher())
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 1279, in make_launcher
self.reload_cubin_path()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 1271, in reload_cubin_path
raise RuntimeError(
torch._inductor.exc.InductorError: RuntimeError: ('Cubin file saved by TritonBundler not found at %s', '/tmp/tmpglir1l04/triton/NRV6JB5CRGNKIBVNC3J5DZEHWDXUHJURMVJZAUPVA4B5WSAIZZGA/triton_poi_fused_rsub_0.cubin')
Set TORCHDYNAMO_VERBOSE=1 for the internal stack trace (please do this especially if you're reporting a bug to PyTorch). For even more developer context, set TORCH_LOGS="+dynamo"
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3154, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3154, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 426, in instantiated_test
result = test(self, **param_kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1612, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1147, in test_wrapper
raise e_tracked from e
Exception: Caused by sample input at index 6: SampleInput(input=Tensor[size=(10, 1, 5), device="cuda:0", dtype=torch.float64], args=TensorList[Tensor[size=(10, 5), device="cuda:0", dtype=torch.float64]], kwargs={}, broadcasts_input=True, name='')
To execute this test, run the following from the base repo dir:
PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=6 python test/inductor/test_torchinductor_opinfo.py TestInductorOpInfoCUDA.test_comprehensive_rsub_cuda_float64
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_torchinductor_opinfo.py`
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @clee2000 @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @muchulee8 @amjames @aakhundov
| true
|
3,044,071,880
|
[inductor] dtype promotion error in cat decomp
|
pianpwk
|
open
|
[
"ciflow/trunk",
"module: inductor",
"module: dynamo",
"ciflow/inductor",
"release notes: inductor",
"merging"
] | 4
|
CONTRIBUTOR
|
Cloning a single tensor wasn't following dtype promotion rules.
For the SAM model: https://github.com/pytorch/pytorch/issues/152606
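For context, a minimal eager-mode illustration of the promotion behavior the decomposition needs to match (illustrative only; the actual failing pattern is in the linked SAM issue):
```python
import torch

a = torch.ones(2, dtype=torch.float16)
b = torch.ones(2, dtype=torch.float32)

# Eager cat promotes mixed inputs to the common dtype; a decomposition that
# special-cases a single surviving input and just clones it must apply the
# same promotion instead of keeping that input's original dtype.
print(torch.cat([a, b]).dtype)  # torch.float32
```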
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,044,034,530
|
[dynamo] Actually support functools.lru_cache
|
williamwen42
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamo",
"dynamo-functools"
] | 0
|
MEMBER
|
Followup to https://github.com/pytorch/pytorch/issues/146598
Currently, when Dynamo traces a `lru_cache`d function, we simply trace the underlying function. This is not sound when the underlying function depends on state outside that function (e.g. globals, cells).
Fully supporting the cache lookup involved in `lru_cache` would likely require accessing, modeling, and updating the internal attributes of the `lru_cache` wrapper object in C.
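A minimal sketch of the kind of program where this matters (assuming the cache is populated before compilation):
```python
import functools

import torch

SCALE = 2.0

@functools.lru_cache(maxsize=None)
def get_scale():
    return SCALE  # depends on a global outside the function

get_scale()   # caches 2.0
SCALE = 3.0   # the cached value is now stale relative to the global

@torch.compile
def fn(x):
    # Eager semantics return the cached 2.0; tracing only the underlying
    # function would instead read the current global (3.0).
    return x * get_scale()

print(fn(torch.ones(2)))
```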
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
| true
|
3,044,025,410
|
[inductor] Fix ModularIndexing assumptions
|
isuruf
|
open
|
[
"module: cpu",
"open source",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"merging"
] | 4
|
COLLABORATOR
|
Fixes https://github.com/pytorch/pytorch/issues/151198.
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152993
Since the result of ModularIndexing can be zero due to the modulo operation, we should not make any assumption about ModularIndexing being positive.
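A plain-Python analogue of the point (the exact ModularIndexing signature is assumed here for illustration only):
```python
# Roughly, ModularIndexing(x, div, mod) computes (x // div) % mod. Even with
# strictly positive inputs the result can be zero, so "positive" is not a
# safe assumption.
def modular_indexing(x: int, div: int, mod: int) -> int:
    return (x // div) % mod

assert modular_indexing(8, 2, 4) == 0  # positive inputs, zero result
```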
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @jerryzh168 @voznesenskym @penguinwu @EikanWang @Guobing-Chen @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,044,012,443
|
conv2d with int8 on CUDA: GET was unable to find an engine to execute this computation
|
c-f-h
|
open
|
[
"module: cuda",
"module: convolution",
"triaged"
] | 2
|
NONE
|
### 🐛 Describe the bug
The following script works fine if I switch to CPU, or change the tensor dtypes to float32. Otherwise, see the error below.
```py
import torch
device = torch.device("cuda") # works fine with "cpu"
print(f"Using device: {device}")
# works fine if both are float32
input = torch.randint(low=0, high=2, size=(1, 1, 6, 6), dtype=torch.int8).to(device)
kernel = torch.randint(low=0, high=2, size=(1, 1, 3, 3), dtype=torch.int8).to(device)
output = torch.nn.functional.conv2d(input, kernel, padding=1)
print("Convolution successful. Output shape:", output.shape)
```
Traceback:
```
Using device: cuda
Traceback (most recent call last):
File "C:\Users\Clemens\prog\cuda-conv-int8.py", line 10, in <module>
output = torch.nn.functional.conv2d(input, kernel, padding=1)
RuntimeError: GET was unable to find an engine to execute this computation
```
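A possible interim workaround, for reference (runs the convolution in float32 and casts back; this is a sketch, and it is only exact because the values above are small integers):
```python
# Continuing from the repro above (same `input` and `kernel` tensors):
output = torch.nn.functional.conv2d(input.float(), kernel.float(), padding=1).to(torch.int32)
print("Workaround output shape:", output.shape)
```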
### Versions
PyTorch version: 2.7.0+cu128
Is debug build: False
CUDA used to build PyTorch: 12.8
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 10 Home (10.0.19045 64-bit)
GCC version: Could not collect
Clang version: Could not collect
CMake version: Could not collect
Libc version: N/A
Python version: 3.13.3 (tags/v3.13.3:6280bb5, Apr 8 2025, 14:47:33) [MSC v.1943 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.19045-SP0
Is CUDA available: True
CUDA runtime version: 12.8.93
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce GTX 1060 6GB
Nvidia driver version: 572.83
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Name: Intel(R) Core(TM) i5-4460 CPU @ 3.20GHz
Manufacturer: GenuineIntel
Family: 1
Architecture: 9
ProcessorType: 3
DeviceID: CPU0
CurrentClockSpeed: 3201
MaxClockSpeed: 3201
L2CacheSize: 1024
L2CacheSpeed: None
Revision: 15363
Versions of relevant libraries:
[pip3] numpy==2.1.2
[pip3] torch==2.7.0+cu128
[pip3] torchaudio==2.7.0+cu128
[pip3] torchvision==0.22.0+cu128
[conda] Could not collect
cc @ptrblck @msaroufim @eqy @jerryzh168
| true
|
3,043,985,559
|
[FrozenSet] Fixes for FrozenSet
|
guilhermeleobas
|
open
|
[
"open source",
"module: dynamo",
"ciflow/inductor"
] | 2
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152991
* #152990
* #152908
* #152907
* #152989
* #152906
* #152905
* #152903
* #152902
* #152901
* #152904
* #152988
* #152987
* #150792
* #152900
* #153070
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,043,985,422
|
[Set] Raise TypeError if set is called with the wrong number of arguments
|
guilhermeleobas
|
open
|
[
"open source",
"module: dynamo",
"ciflow/inductor"
] | 2
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152991
* __->__ #152990
* #152908
* #152907
* #152989
* #152906
* #152905
* #152903
* #152902
* #152901
* #152904
* #152988
* #152987
* #150792
* #152900
* #153070
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,043,985,252
|
[Set] Update `set.union` and `set.update` to support *args
|
guilhermeleobas
|
open
|
[
"open source",
"module: dynamo",
"ciflow/inductor"
] | 2
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152991
* #152990
* #152908
* #152907
* __->__ #152989
* #152906
* #152905
* #152903
* #152902
* #152901
* #152904
* #152988
* #152987
* #150792
* #152900
* #153070
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,043,984,885
|
[Set] Raise `TypeError` if argument is unhashable
|
guilhermeleobas
|
open
|
[
"open source",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 2
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152991
* #152990
* #152908
* #152907
* #152989
* #152906
* #152905
* #152903
* #152902
* #152901
* #152904
* __->__ #152988
* #152987
* #150792
* #152900
* #153070
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,043,984,736
|
[Set] Handle exception in ConstantVariable operation
|
guilhermeleobas
|
open
|
[
"open source",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 2
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152991
* #152990
* #152908
* #152907
* #152989
* #152906
* #152905
* #152903
* #152902
* #152901
* #152904
* #152988
* __->__ #152987
* #150792
* #152900
* #153070
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,043,976,311
|
[WIP] Add XPU support for FlightRecorder
|
frost-intel
|
open
|
[
"oncall: distributed",
"open source",
"ciflow/trunk",
"release notes: distributed (c10d)",
"topic: not user facing"
] | 2
|
COLLABORATOR
|
This is the first part of bringing XPU/XCCL support to FlightRecorder.
`AcceleratorEvent` is a generic interface for CUDAEvent and XPUEvent, which is used in FlightRecorder to work with both XCCL and NCCL.
Since the actual instantiation of the FlightRecorder and DebugInfoWriter objects happens in ProcessGroupNCCL, a future PR in https://github.com/intel/torch-xpu-ops will provide actual FlightRecorder support in ProcessGroupXCCL.
For now, I avoid any cosmetic changes to classes/functions/variables with "nccl" names, though this could be changed in the future to indicate that code is device agnostic.
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
3,043,971,158
|
`torch.load` can't deserialize `datetime` objects, even with the appropriate `safe_globals`
|
gtebbutt
|
open
|
[
"module: serialization",
"triaged"
] | 0
|
NONE
|
### 🐛 Describe the bug
Spent a while chasing this one down on the assumption that a custom class from my code was being inadvertently saved (especially given the earlier message requiring `getattr` to be added to `safe_globals`), but it turns out it happens for any output containing a `datetime` object:
```python
import torch
import datetime
import zoneinfo
data = {
"a": torch.tensor([1,2,3]),
"b": datetime.datetime(2025, 1, 1, 12, 0, tzinfo=zoneinfo.ZoneInfo(key="UTC")),
}
torch.save(data, "data.pt")
with torch.serialization.safe_globals([datetime.datetime, getattr, zoneinfo.ZoneInfo]):
torch.load("data.pt")
```
```
Traceback (most recent call last):
File "<stdin>", line 2, in <module>
File "/usr/local/lib/python3.12/site-packages/torch/serialization.py", line 1524, in load
raise pickle.UnpicklingError(_get_wo_message(str(e))) from None
_pickle.UnpicklingError: Weights only load failed. In PyTorch 2.6, we changed the default value of the `weights_only` argument in `torch.load` from `False` to `True`. Re-running `torch.load` with `weights_only` set to `False` will likely succeed, but it can result in arbitrary code execution. Do it only if you got the file from a trusted source.
Please file an issue with the following so that we can make `weights_only=True` compatible with your use case: WeightsUnpickler error: Trying to call reduce for unrecognized function <built-in method _unpickle of type object at 0x563383311d20>
```
### Versions
Running in a container built from `docker.io/python:3.12`:
```
PyTorch version: 2.7.0+cu128
Is debug build: False
CUDA used to build PyTorch: 12.8
ROCM used to build PyTorch: N/A
OS: Debian GNU/Linux 12 (bookworm) (x86_64)
GCC version: (Debian 12.2.0-14) 12.2.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.36
Python version: 3.12.10 (main, Apr 9 2025, 00:29:37) [GCC 12.2.0] (64-bit runtime)
Python platform: Linux-6.12.26-x86_64-with-glibc2.36
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 5090
Nvidia driver version: 570.144
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Vendor ID: AuthenticAMD
Model name: AMD Ryzen 9 9950X 16-Core Processor
CPU family: 26
Model: 68
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 1
Stepping: 0
Frequency boost: enabled
CPU(s) scaling MHz: 31%
CPU max MHz: 5752.0000
CPU min MHz: 600.0000
BogoMIPS: 8599.98
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good amd_lbr_v2 nopl xtopology nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba perfmon_v2 ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local user_shstk avx_vnni avx512_bf16 clzero irperf xsaveerptr rdpru wbnoinvd cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif x2avic v_spec_ctrl vnmi avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid bus_lock_detect movdiri movdir64b overflow_recov succor smca fsrm avx512_vp2intersect flush_l1d amd_lbr_pmc_freeze
Virtualization: AMD-V
L1d cache: 768 KiB (16 instances)
L1i cache: 512 KiB (16 instances)
L2 cache: 16 MiB (16 instances)
L3 cache: 64 MiB (2 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-31
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; STIBP always-on; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.2.5
[pip3] nvidia-cublas-cu12==12.8.3.14
[pip3] nvidia-cuda-cupti-cu12==12.8.57
[pip3] nvidia-cuda-nvrtc-cu12==12.8.61
[pip3] nvidia-cuda-runtime-cu12==12.8.57
[pip3] nvidia-cudnn-cu12==9.7.1.26
[pip3] nvidia-cufft-cu12==11.3.3.41
[pip3] nvidia-curand-cu12==10.3.9.55
[pip3] nvidia-cusolver-cu12==11.7.2.55
[pip3] nvidia-cusparse-cu12==12.5.7.53
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.26.2
[pip3] nvidia-nvjitlink-cu12==12.8.61
[pip3] nvidia-nvtx-cu12==12.8.55
[pip3] torch==2.7.0+cu128
[pip3] torchmetrics==1.7.1
[pip3] triton==3.3.0
[conda] Could not collect
```
cc @mruberry @mikaylagawarecki
| true
|
3,043,958,275
|
[hop_schema] support gen_schema for invoke_subgraph
|
ydwu4
|
open
|
[
"topic: not user facing"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152984
* #152974
* #151067
| true
|
3,043,956,173
|
compile_fx: make a compile event that corresponds to the fx_compile waitcounter
|
c00w
|
open
|
[
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152983
This is a pretty minor change, but by having exact correspondence, we can
easily confirm data differences between perfetto and wait counters.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,043,912,132
|
[torch][ao] Properly strip tracking stats in _fold_conv_bn_qat for 1D
|
JakeStevens
|
open
|
[
"fb-exported",
"release notes: quantization",
"release notes: AO frontend"
] | 5
|
NONE
|
Summary: _fold_conv_bn_qat has logic to remove the tracking stats. Currently, the check covers only torch.nn.modules.batchnorm.BatchNorm2d. As a result, the tracking stats are not properly removed when the 1D variant is used. This diff updates the check to fix this.
Test Plan:
Run N7113483 without this fix.
{F1977726982}
```
bento kernel build sensorml
```
Re-run with local version of kernel, containing this diff:
{F1977727151}
Notice that now, num_batches is removed.
Differential Revision: D74269649
| true
|
3,043,888,333
|
Catch TypeError from ValueRanges
|
jansel
|
open
|
[
"module: cpu",
"fb-exported",
"ciflow/trunk",
"release notes: inductor"
] | 3
|
CONTRIBUTOR
|
Summary: This is a possible workaround to https://fb.workplace.com/groups/1075192433118967/permalink/675836685333300/
Test Plan: Ask poster to confirm fix
Differential Revision: D74268733
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @jerryzh168
| true
|
3,043,887,194
|
Fix `'TensorBox' object has no attribute 'is_input_buffer'`
|
jansel
|
open
|
[
"fb-exported",
"ciflow/trunk",
"module: inductor",
"ciflow/inductor",
"release notes: inductor"
] | 4
|
CONTRIBUTOR
|
Summary: Fix for https://fb.workplace.com/groups/1075192433118967/permalink/1664491270855744/
Test Plan: Used reproducer from D74262030
Differential Revision: D74270090
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,043,823,259
|
FPE when using `torch.lcm_` with int32 tensor and int16 scalar
|
SilentTester73
|
open
|
[
"module: crash",
"module: cpu",
"module: error checking",
"triaged",
"module: edge cases"
] | 3
|
NONE
|
### 🐛 Describe the bug
### Description
When using the `torch.lcm_` in-place operation between a large int32 tensor and an int16 scalar, the program crashes with a floating point exception. The operation works fine with smaller tensors, but fails with a specific large tensor containing various integer values.
### Steps to Reproduce
I've created a minimal reproduction script:
Also available at colab: [https://colab.research.google.com/drive/17Ih4ovpuq_Sjo7S9_2XiYtxX0yUIL7YE?usp=sharing](https://colab.research.google.com/drive/17Ih4ovpuq_Sjo7S9_2XiYtxX0yUIL7YE?usp=sharing)
```python
import torch
# Create a problematic tensor
tensor1 = torch.tensor([
[[-306846766, 58826, 0, 1073741824]],
[[0, 0, -794885632, -154301178]],
[[-2147483648, -1895825408, -1838202234, 914053657]]
], dtype=torch.int32)
# Create a second tensor with a simple scalar value
tensor2 = torch.tensor(26213, dtype=torch.int16)
# This line causes a floating point exception
torch.lcm_(tensor1, tensor2)
```
### Expected Behavior
The function should either:
1. Complete successfully and modify tensor1 in-place with the LCM values, or
2. Raise a proper Python exception with an informative error message when encountering problematic values (like zeros or other values that cause arithmetic errors)
### Actual Behavior
The program crashes with a "Floating point exception (core dumped)" message.
```
About to attempt lcm operation
Floating point exception (core dumped)
```
### Versions
```
Collecting environment information...
PyTorch version: 2.7.0+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 17.0.6 (++20231209124227+6009708b4367-1~exp1~20231209124336.77)
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.10.12 (main, Feb 4 2025, 14:57:36) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.8.0-58-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: 12.5.82
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 384
On-line CPU(s) list: 0-383
Vendor ID: AuthenticAMD
Model name: AMD EPYC 9684X 96-Core Processor
CPU family: 25
Model: 17
Thread(s) per core: 2
Core(s) per socket: 96
Socket(s): 2
Stepping: 2
BogoMIPS: 5099.98
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good amd_lbr_v2 nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba perfmon_v2 ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local user_shstk avx512_bf16 clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin cppc amd_ibpb_ret arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif x2avic v_spec_ctrl vnmi avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq la57 rdpid overflow_recov succor smca fsrm flush_l1d debug_swap
Virtualization: AMD-V
L1d cache: 6 MiB (192 instances)
L1i cache: 6 MiB (192 instances)
L2 cache: 192 MiB (192 instances)
L3 cache: 2.3 GiB (24 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-95,192-287
NUMA node1 CPU(s): 96-191,288-383
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.2.5
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.26.2
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] torch==2.7.0
[pip3] triton==3.3.0
[conda] Could not collect
```
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @jerryzh168 @malfet
| true
|
3,043,782,002
|
[Pytorch] Add `torch.cuda.streams.Event` to save torch functions list
|
dongji-gao
|
open
|
[
"fb-exported"
] | 4
|
CONTRIBUTOR
|
Summary: TSIA
Test Plan: WIP
Differential Revision: D74266940
| true
|
3,043,769,613
|
[MegaCache] Make MegaCache generic to allow external plugins registration
|
tbohutyn
|
open
|
[
"triaged",
"open source",
"topic: not user facing",
"module: inductor",
"module: dynamo"
] | 4
|
CONTRIBUTOR
|
Implements #152976
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov @oulgen
| true
|
3,043,763,742
|
Refactor MegaCache to make it generic
|
tbohutyn
|
open
|
[
"oncall: pt2"
] | 0
|
CONTRIBUTOR
|
### 🚀 The feature, motivation and pitch
Refactoring MegaCache to make it generic would allow external plugins to register their caches with MegaCache. It would also remove cache-specific logic from MegaCache itself.
Related to https://github.com/pytorch/pytorch/pull/143341
Proposed PR https://github.com/pytorch/pytorch/pull/152977
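A purely hypothetical sketch of what a plugin registration hook could look like (names are invented for illustration; the actual interface is defined in the linked PR):
```python
from typing import Protocol


class CacheArtifactPlugin(Protocol):
    """Hypothetical interface an external cache would implement."""

    def serialize(self) -> bytes: ...
    def deserialize(self, blob: bytes) -> None: ...


_PLUGINS: dict[str, CacheArtifactPlugin] = {}


def register_cache_plugin(name: str, plugin: CacheArtifactPlugin) -> None:
    # MegaCache would iterate over registered plugins when bundling/unbundling
    # artifacts, instead of hard-coding each cache type.
    _PLUGINS[name] = plugin
```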
### Alternatives
_No response_
### Additional context
_No response_
cc @chauhang @penguinwu
| true
|
3,043,748,868
|
[dtensor] Extend Partial partition of replicated tensor for min/max reduce
|
BowenBao
|
open
|
[
"oncall: distributed",
"triaged",
"open source",
"topic: improvements",
"ciflow/inductor",
"release notes: distributed (dtensor)"
] | 2
|
COLLABORATOR
|
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
3,043,721,503
|
[hop_schema] add HopSchemaGenerator to make it easier to create hop schema
|
ydwu4
|
open
|
[
"topic: not user facing"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152984
* __->__ #152974
* #151067
| true
|
3,043,701,258
|
Adding XPU support to DTensor examples.
|
githubsgi
|
open
|
[
"oncall: distributed",
"triaged",
"open source",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Adds XPU support to visualize_sharding_example.py and comm_mode_features_example.py.
topic: not user facing
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
3,043,700,902
|
avoid falling back to as_strided for non-contiguous in-place reshape.
|
laithsakka
|
open
|
[
"oncall: pt2"
] | 0
|
CONTRIBUTOR
|
When a non-contiguous tensor reshape involves unbacked symbols, there is a very high probability of hitting data-dependent errors if we call view_symint, hence we call as_strided instead. We could have cloned as well, but as_strided sounds more efficient.
```
if (!self.sym_numel().has_hint() || !product.has_hint()){
return self.as_strided_symint(sizes, stride.value());
}
```
To avoid doing that, we have to relax guard_size_oblivious in many places.
https://github.com/pytorch/pytorch/pull/152965 builds on top of this for that.
The idea is to revisit this in the long run, after we have made the codebase more unbacked-friendly.
cc @chauhang @penguinwu
| true
|
3,043,698,093
|
DISABLED test_comprehensive_scatter_xpu_int32 (__main__.TestInductorOpInfoXPU)
|
chuanqi129
|
open
|
[
"triaged",
"skipped",
"module: xpu"
] | 1
|
COLLABORATOR
|
Platforms: linux
This test was disabled because it is failing on main branch ([recent examples](https://torch-ci.com/failure?failureCaptures=%5B%22'test%2Finductor%2Ftest_torchinductor_opinfo.py%3A%3ATestInductorOpInfoXPU%3A%3Atest_comprehensive_scatter_xpu_int32'%22%5D)).
cc @gujinghui @EikanWang @fengyuan14 @guangyey
| true
|
3,043,694,521
|
DISABLED test_comprehensive_gather_xpu_int64 (__main__.TestInductorOpInfoXPU)
|
chuanqi129
|
open
|
[
"triaged",
"skipped",
"module: xpu"
] | 1
|
COLLABORATOR
|
Platforms: linux
This test was disabled because it is failing on main branch ([recent examples](https://torch-ci.com/failure?failureCaptures=%5B%22'test%2Finductor%2Ftest_torchinductor_opinfo.py%3A%3ATestInductorOpInfoXPU%3A%3Atest_comprehensive_gather_xpu_int64'%22%5D)).
cc @gujinghui @EikanWang @fengyuan14 @guangyey
| true
|
3,043,681,942
|
[nativert] Move GraphSignature to pytorch core
|
yiming0416
|
open
|
[
"fb-exported",
"topic: not user facing"
] | 9
|
CONTRIBUTOR
|
Summary:
Torch Native Runtime RFC: https://github.com/pytorch/rfcs/pull/72
An in-memory representation of `GraphSignature` for graph specs of an exported program, which will be consumed by the runtime.
Test Plan: Added tests under `test/cpp/nativert/test_graph_signature.cpp`
Differential Revision: D73895378
| true
|
3,043,660,961
|
[inductor] Generate synthetic offsets appropriately for autotuning _scaled_grouped_mm
|
bertmaher
|
open
|
[
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152968
Summary: The autotuner is using zero-filled tensors to autotune
_scaled_grouped_mm and that's not appropriate for the offsets tensor, since it
essentially corresponds to "no input" and thus yields invalid perf results.
We can't really use the actual input tensors, since we might be compiling this
op in the context of an entire graph.
So instead, I decided to create a synthetic offsets tensor assuming that each
group is (roughly) the same size. I don't have data but I'd guess this
approach is OK for MoE since we're generally hoping to load-balance the
experts; I'm not sure how well it applies to other scenarios that might be more
heavy-tailed.
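A minimal sketch of the idea, assuming the offsets tensor holds the cumulative end-row of each group (the function name and int32 dtype are illustrative, not the actual implementation):
```python
import torch


def synthetic_offsets(total_rows: int, num_groups: int, device: str = "cuda") -> torch.Tensor:
    # Split total_rows into num_groups roughly equal chunks and return the
    # cumulative end offset of each group.
    base, rem = divmod(total_rows, num_groups)
    sizes = [base + (1 if i < rem else 0) for i in range(num_groups)]
    return torch.cumsum(torch.tensor(sizes, device=device), dim=0).to(torch.int32)


# e.g. 1000 rows across 8 equally-loaded experts -> tensor([125, 250, ..., 1000])
print(synthetic_offsets(1000, 8, device="cpu"))
```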
Test Plan:
```
pytest test_matmul_cuda.py -k test_scaled_grouped_gemm_
```
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,043,651,973
|
[ATen][CUDA] Optimize 128 bit vectorization
|
pytorchbot
|
closed
|
[
"open source",
"release notes: cuda"
] | 1
|
COLLABORATOR
|
Fixes #147376.
As per request: https://github.com/pytorch/pytorch/pull/145746#pullrequestreview-2642118301
This PR omits the vec8 kernels on sm80 and older due to long compilation times and large binary size.
cc @ptrblck @msaroufim @eqy @jerryzh168 @manuelcandales @SherlockNoMad @angelayi
| true
|
3,043,651,250
|
[Memento] On-demand mode using without torch api
|
mzzchy
|
open
|
[
"fb-exported",
"topic: not user facing"
] | 11
|
CONTRIBUTOR
|
Differential Revision: D74179606
| true
|
3,043,618,388
|
WIP so many changes to generate non-as strided view
|
laithsakka
|
open
|
[
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152965
* #152722
* #148872
| true
|
3,043,614,211
|
[FSDP2] need dummy forward/backward to stay SPMD
|
weifengpy
|
open
|
[
"oncall: distributed",
"triaged"
] | 2
|
CONTRIBUTOR
|
### 🚀 The feature, motivation and pitch
FSDP2 assumes SPMD on every rank, meaning every rank needs to call forward/backward to issue its all-gather / reduce-scatter.
However, users have reported two cases where some ranks might skip forward/backward (a minimal workaround sketch follows the list below):
* torchtune might mask all the activations. they have to create a dummy input to avoid job hanging https://github.com/pbontrager/torchtune/blob/ee2f5999976ce476c8dcb8e0f09dc7258a9b704c/torchtune/modules/loss/cross_entropy_loss.py#L76
* dataloader might return empty input for certain ranks. TODO: add github issue
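A minimal sketch of the dummy-step workaround, assuming a simple model with a known input shape (illustrative only, not the actual torchtune code):
```python
import torch


def train_step(model, batch, optimizer, device):
    has_data = batch is not None
    if not has_data:
        # Fabricate a dummy batch so this rank still runs forward/backward and
        # participates in FSDP2's all-gathers / reduce-scatters.
        batch = torch.zeros(1, 16, device=device)  # shape assumed for illustration
    loss = model(batch).sum()
    if not has_data:
        loss = loss * 0.0  # contributes zero gradients but keeps collectives in lockstep
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```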
### Alternatives
_No response_
### Additional context
_No response_
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
3,043,598,064
|
DTensor support for dynamic shapes is soft
|
bdhirsh
|
open
|
[
"oncall: distributed",
"oncall: pt2"
] | 1
|
CONTRIBUTOR
|
The state of DTensor + compile + dynamic shapes today is roughly:
(1) for generic "pt2-friendly" tensor subclasses, we support compiling them with dynamic shapes. This includes cases where both the outer subclass shape and its inner tensor shape(s) vary independently.
(2) At the same time, dynamic shapes support imposes some extra requirements on tensor subclasses that DTensor does not fully meet today. One way to think of it is that any place where DTensor's code handles a tensor size, the code needs to be written in a way that is aware that this size might be a `torch.SymInt` object instead of a plain `int`.
There are a few cases where DTensor handling for SymInts needs some additional work:
(1) `DTensorSpec`. Today, this metadata holds the raw sizes/strides of the outer tensor shape. This is mainly a problem, because compile expects any subclass metadata to be able to be treated as a constant (SymInts are not supported). In particular, compile generates guards on the exact value of this metadata, and we will recompile if you have a graph input that is a tensor subclass with slightly different metadata.
There are a few ways we could fix this. One easier option might be the following: subclass authors can specify a custom method that allows them to customize how dynamo generates metadata guards. We could try having DTensor define its metadata guards to not include the outer_size/stride arguments on the DTensorSpec, and see if that is enough to avoid problems.
Hopefully that is enough. A more "pt2-friendly" option that would also be a more invasive DTensor change would be to remove outer_size/stride from `DTensorSpec` entirely, and instead require any code that needs to access this outer_size/stride to take in the actual DTensor itself as an argument, to read the sizes/strides from.
(2) sharding propagation caching. In eager mode, sharding prop uses an lru cache to reduce cpu overhead. Under compile, this cache is problematic because it doesn't support symints. We have some logic in DTensor today to check for symints and skip the cache if there are any, but this logic is brittle. Instead, we should probably have all DTensor caching logic branch on whether or not we are currently inside of the compiler.
(3) sharding propagation rules themselves. Some of these rules are written to branch on input tensor shapes. This has two potential problems for compile. The first problem is that any branching will cause compile to specialize and emit guards, forcing a recompile if the condition fails. This can be desirable in some cases but not others, depending on whether we would prefer to generate one generic compiled artifact for some compiled DTensor code, or if we want specialized artifacts for different shapes.
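As a standalone (non-DTensor) illustration of the first problem, a shape-dependent branch makes the compiler install a guard and recompile when the other side of the branch is hit:
```python
import torch

def f(x):
    if x.shape[0] > 16:  # branching on a (possibly symbolic) size installs a guard
        return x * 2
    return x + 1

cf = torch.compile(f, dynamic=True)
cf(torch.randn(8))   # traces the `shape[0] <= 16` branch
cf(torch.randn(32))  # guard fails -> recompiles for the other branch
```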
The second problem is around data-dependent shapes. If there is any data-dependent code in the model, then this will manifest inside of compile as tensors whose shapes are "unbacked symints", meaning they have no backing "hint" that compile can take advantage of. When unbacked symints show up inside of the compiler, any code that branches on these shapes will result in a graph break. One way to deal with this is to audit places where we branch on shape, and agree on a "default path" that we can take for some of our sharding prop rules, if the shape is not actually known at compile time.
There are pros and cons to this approach. In the longer run, we may want to generate specialized compiled artifacts for different value ranges of these shapes.
There is a related issue here that @tianyu-l ran into around dynamic shapes and sharding prop in eager mode: when dealing with data-dependent operations, the existing sharding prop caching will cache miss on every new shape, incurring significant eager overhead. It might be worth thinking about how to design the compiler-related improvements in the context of this problem. For example: one option might be to hardcode some "obvious" sharding prop rules for TP, that don't require specializing on shape, and use them when we know our shapes are data dependent. I have a very rough prototype of this here: https://github.com/pytorch/pytorch/pull/150582
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @chauhang @penguinwu
| true
|
3,043,571,434
|
TestNestedTensorOpInfoCUDA.test_compile_backward_matmul_cuda_float32 Test Failure
|
nWEIdia
|
open
|
[
"module: tests",
"triaged",
"module: nestedtensor"
] | 3
|
COLLABORATOR
|
### 🐛 Describe the bug
Steps to Reproduce: please see https://github.com/pytorch/pytorch/issues/152962#issuecomment-2859328199
`Traceback (most recent call last):
File "/usr/lib/python3.12/unittest/case.py", line 58, in testPartExecutor
yield
File "/usr/lib/python3.12/unittest/case.py", line 539, in subTest
yield
File "/opt/pytorch/pytorch/test/test_nestedtensor.py", line 8823, in test_compile_backward
out_compile = compiled_f(sample.input, *sample.args, **sample.kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/eval_frame.py", line 671, in _fn
raise e.remove_dynamo_frames() from None # see TORCHDYNAMO_VERBOSE=1
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/output_graph.py", line 1569, in _call_user_compiler
raise BackendCompilerFailed(
File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/output_graph.py", line 1544, in _call_user_compiler
compiled_fn = compiler_fn(gm, self.example_inputs())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/repro/after_dynamo.py", line 150, in __call__
compiled_gm = compiler_fn(gm, example_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/repro/after_dynamo.py", line 150, in __call__
compiled_gm = compiler_fn(gm, example_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/__init__.py", line 2400, in __call__
return self.compiler_fn(model_, inputs_, **self.kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/backends/debugging.py", line 219, in aot_eager_decomp_partition
return aot_autograd(
^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/backends/common.py", line 106, in __call__
cg = aot_module_simplified(gm, example_inputs, **self.kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/_functorch/aot_autograd.py", line 1178, in aot_module_simplified
compiled_fn = dispatch_and_compile()
^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/_functorch/aot_autograd.py", line 1150, in dispatch_and_compile
compiled_fn, _ = create_aot_dispatcher_function(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/_functorch/aot_autograd.py", line 574, in create_aot_dispatcher_function
return _create_aot_dispatcher_function(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/_functorch/aot_autograd.py", line 824, in _create_aot_dispatcher_function
compiled_fn, fw_metadata = compiler_fn(
^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py", line 789, in aot_dispatch_autograd
fx_g, joint_inputs, maybe_subclass_meta = aot_dispatch_autograd_graph(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/_functorch/_aot_autograd/dispatch_and_compile_graph.py", line 318, in aot_dispatch_autograd_graph
fx_g = _create_graph(joint_fn_to_trace, updated_joint_inputs, aot_config=aot_config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/_functorch/_aot_autograd/dispatch_and_compile_graph.py", line 55, in _create_graph
fx_g = make_fx(
^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/fx/experimental/proxy_tensor.py", line 2288, in wrapped
return make_fx_tracer.trace(f, *args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/fx/experimental/proxy_tensor.py", line 2226, in trace
return self._trace_inner(f, *args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/fx/experimental/proxy_tensor.py", line 2197, in _trace_inner
t = dispatch_trace(
^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/_compile.py", line 51, in inner
return disable_fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/eval_frame.py", line 850, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/fx/experimental/proxy_tensor.py", line 1221, in dispatch_trace
graph = tracer.trace(root, concrete_args) # type: ignore[arg-type]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/eval_frame.py", line 850, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/fx/_symbolic_trace.py", line 837, in trace
(self.create_arg(fn(*args)),),
^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/fx/_symbolic_trace.py", line 691, in flatten_fn
tree_out = root_fn(*tree_args)
^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/fx/experimental/proxy_tensor.py", line 1276, in wrapped
out = f(*tensors) # type:ignore[call-arg]
^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/_functorch/_aot_autograd/traced_function_transforms.py", line 717, in inner_fn
outs = fn(*args)
^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/_functorch/_aot_autograd/traced_function_transforms.py", line 819, in joint_fn
return inner_fn(
^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/_functorch/_aot_autograd/traced_function_transforms.py", line 791, in inner_fn
wrapped_outs = fn(*all_args)
^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/_functorch/_aot_autograd/traced_function_transforms.py", line 668, in joint_helper
return _functionalized_f_helper(primals, tangents)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/_functorch/_aot_autograd/traced_function_transforms.py", line 416, in _functionalized_f_helper
f_outs = fn(*f_args)
^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/_functorch/_aot_autograd/traced_function_transforms.py", line 283, in inner_fn_with_anomaly
return inner_fn(*args)
^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/_functorch/_aot_autograd/traced_function_transforms.py", line 268, in inner_fn
backward_out = torch.autograd.grad(
^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/autograd/__init__.py", line 451, in grad
return handle_torch_function(
^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/overrides.py", line 1721, in handle_torch_function
result = mode.__torch_function__(public_api, types, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/fx/experimental/proxy_tensor.py", line 1324, in __torch_function__
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/autograd/__init__.py", line 451, in grad
return handle_torch_function(
^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/overrides.py", line 1743, in handle_torch_function
result = torch_func_method(public_api, types, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/nested/_internal/nested_tensor.py", line 367, in __torch_function__
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/autograd/__init__.py", line 502, in grad
result = _engine_run_backward(
^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/autograd/graph.py", line 824, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/nested/_internal/nested_tensor.py", line 332, in __torch_dispatch__
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/nested/_internal/ops.py", line 218, in inner
return func(aten_op, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/nested/_internal/ops.py", line 2596, in matmul_backward_default
grad_self = torch.matmul(grad, other.transpose(-1, -2))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/nested/_internal/nested_tensor.py", line 332, in __torch_dispatch__
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/nested/_internal/ops.py", line 218, in inner
return func(aten_op, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/nested/_internal/ops.py", line 1250, in matmul_default
return torch.stack(_unbind_impl(inp, other))
^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/nested/_internal/ops.py", line 1158, in _unbind_impl
func(a_comp, b_comp) for (a_comp, b_comp) in zip(a.unbind(), b.unbind())
^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/nested/_internal/nested_tensor.py", line 332, in __torch_dispatch__
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/nested/_internal/ops.py", line 218, in inner
return func(aten_op, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/nested/_internal/ops.py", line 1061, in unbind_int
return torch.split(values, lengths_scalars, dim=(ragged_idx - 1))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/functional.py", line 222, in split
return tensor.split(split_size_or_sections, dim)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/_tensor.py", line 1053, in split
return torch._VF.split_with_sizes(self, split_size, dim)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/_subclasses/functional_tensor.py", line 511, in __torch_dispatch__
outs_unwrapped = func._op_dk(
^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/utils/_stats.py", line 27, in wrapper
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/fx/experimental/proxy_tensor.py", line 1426, in __torch_dispatch__
return proxy_call(self, func, self.pre_dispatch, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/fx/experimental/proxy_tensor.py", line 989, in proxy_call
track_tensor_tree(out, proxy_out, constant=constant, tracer=tracer)
File "/usr/local/lib/python3.12/dist-packages/torch/fx/experimental/proxy_tensor.py", line 684, in track_tensor_tree
wrap_with_proxy(inner_res, proxy_res, constant)
File "/usr/local/lib/python3.12/dist-packages/torch/fx/experimental/proxy_tensor.py", line 658, in wrap_with_proxy
wrap_with_proxy(ee, proxy[idx], get_constant(constant, idx)) # type: ignore[index]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/fx/experimental/proxy_tensor.py", line 631, in wrap_with_proxy
set_meta(proxy, e)
File "/usr/local/lib/python3.12/dist-packages/torch/fx/experimental/proxy_tensor.py", line 501, in set_meta
proxy.node.meta["tensor_meta"] = _extract_tensor_metadata(val)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/fx/passes/shape_prop.py", line 55, in _extract_tensor_metadata
if result.is_contiguous(memory_format=query_format):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/fx/experimental/sym_node.py", line 588, in guard_size_oblivious
r = self.evaluate(size_oblivious=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/fx/experimental/sym_node.py", line 510, in evaluate
return self.shape_env.evaluate_sym_node(self, size_oblivious)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/fx/experimental/symbolic_shapes.py", line 6711, in evaluate_sym_node
return self.evaluate_expr(
^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/fx/experimental/recording.py", line 263, in wrapper
return retlog(fn(*args, **kwargs))
^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/fx/experimental/symbolic_shapes.py", line 6727, in evaluate_expr
return self._evaluate_expr(
^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/fx/experimental/symbolic_shapes.py", line 6996, in _evaluate_expr
raise self._make_data_dependent_error(
torch._dynamo.exc.BackendCompilerFailed: backend='aot_eager_decomp_partition' raised:
GuardOnDataDependentSymNode: Could not guard on data-dependent expression Eq(27, u0) (unhinted: Eq(s27, u0)). (Size-like symbols: u0)
Caused by: (fx/passes/shape_prop.py:55 in _extract_tensor_metadata)
For more information, run with TORCH_LOGS="dynamic"
For extended logs when we create symbols, also add TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="u0"
If you suspect the guard was triggered from C++, add TORCHDYNAMO_EXTENDED_DEBUG_CPP=1
For more debugging help, see https://docs.google.com/document/d/1HSuTTVvYH1pTew89Rtpeu84Ht3nQEFTYhAX3Ypa_xJs/edit?usp=sharing
For C++ stack trace, run with TORCHDYNAMO_EXTENDED_DEBUG_CPP=1
Set TORCHDYNAMO_VERBOSE=1 for the internal stack trace (please do this especially if you're reporting a bug to PyTorch). For even more developer context, set TORCH_LOGS="+dynamo"`
### Versions
Should be reproducible with nightly binary on A100, H100 etc.
cc @mruberry @ZainRizvi @cpuhrsch @jbschlosser @bhosmer @drisspg @soulitzer @davidberard98 @YuqingJ @eqy @ptrblck
| true
|
3,043,543,233
|
[Dynamo] Remove unused guard PYMODULE_MATCH
|
jbschlosser
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152729
* __->__ #152961
* #152872
* #152865
* #152730
* #152728
* #152727
* #152725
Not used anywhere: https://www.internalfb.com/code/search?q=repo%3Afbcode%20PYMODULE_MATCH
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,043,469,694
|
Change aoti cpp tests to run serially within file
|
yushangdi
|
open
|
[
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor",
"skip-url-lint"
] | 7
|
CONTRIBUTOR
|
Fixes #152674
https://github.com/pytorch/pytorch/issues/152889
https://github.com/pytorch/pytorch/issues/152888
https://github.com/pytorch/pytorch/issues/152891
`--dist=loadfile` ensures all tests in the same source file run in the same worker.
Tests like `FreeInactiveConstantBufferRuntimeConstantFoldingCuda` expect exclusive access to memory during test time to compute diffs (e.g., initMemory - updateMemory2 == DATASIZE).
With `-n 3`, tests run in separate processes, but CUDA device memory is shared — and cudaMemGetInfo() reads device-wide global state.
```
python test/run_test.py --cpp --verbose -i cpp/test_aoti_inference --dist=loadfile
```
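For context, a minimal sketch (not part of this PR) of why device-wide memory queries are racy across parallel workers: `torch.cuda.mem_get_info()` wraps `cudaMemGetInfo`, which reports free memory for the whole device rather than per process.
```python
# Sketch only: any allocation by a concurrent process on the same GPU shifts the
# device-wide free-memory reading, so before/after diffs are only meaningful when
# the test has exclusive access to the device.
import torch

free_before, total = torch.cuda.mem_get_info()
buf = torch.empty(64 * 1024 * 1024, dtype=torch.uint8, device="cuda")  # ~64 MiB, keep a reference
free_after, _ = torch.cuda.mem_get_info()
# Roughly equals the allocation size only if nothing else allocates on this GPU meanwhile.
print((free_before - free_after) / 2**20, "MiB")
```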
| true
|
3,043,427,737
|
docs: Improve documentation for NCCL timeout / watchdog variables
|
booxter
|
open
|
[
"oncall: distributed",
"triaged",
"open source",
"release notes: distributed (c10d)"
] | 2
|
NONE
|
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
3,043,403,327
|
Follow up to #152209, remove compat patch
|
clee2000
|
open
|
[
"topic: not user facing"
] | 1
|
CONTRIBUTOR
|
Remove the compat patch that lets PRs that haven't rebased past #152209 still have docker images.
Merge this next week
| true
|
3,043,388,225
|
[CI] Upgrade sccache to 0.10.0
|
clee2000
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
The newest release handles CUDA better, and I think this fixes the cases I saw where some CUDA-related builds weren't being cached correctly.
| true
|
3,043,298,872
|
[ROCm] unskip test_non_standard_bool except for failing ops
|
pragupta
|
open
|
[
"module: rocm",
"open source",
"ciflow/rocm",
"ciflow/inductor-rocm"
] | 2
|
CONTRIBUTOR
|
Fixes #ISSUE_NUMBER
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
3,043,285,407
|
UNSTABLE pull / linux-docs / build-docs-functorch-false
|
malfet
|
closed
|
[
"module: docs",
"module: ci",
"triaged",
"unstable"
] | 2
|
CONTRIBUTOR
|
Job fails with infinite redirects, likely due to the changes happening to the doc website, see https://github.com/pytorch/pytorch/actions/runs/14862967281/job/41733878657
cc @svekars @sekyondaMeta @AlannaBurke @seemethere @pytorch/pytorch-dev-infra
| true
|
3,043,272,004
|
DTensor placement propagation for `slice` fails during recompile due to SymInts
|
lw
|
open
|
[
"oncall: distributed",
"oncall: pt2"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
This code fails:
```py
import torch
import torch.distributed
torch.distributed.init_process_group(backend="nccl", rank=0, world_size=1, device_id=torch.device("cuda", 0), init_method="tcp://127.0.0.1:2743")
device_mesh = torch.distributed.device_mesh.DeviceMesh.from_group(torch.distributed.group.WORLD, "cuda")
@torch.compile
def foo(t):
return t[:]
t = torch.randn(1024).bfloat16().cuda()
dt = torch.distributed.tensor.DTensor.from_local(t, device_mesh, (torch.distributed.tensor.Replicate(),))
torch._dynamo.mark_dynamic(dt, 0)
foo(dt)
```
And throws this error:
```
Traceback (most recent call last):
File "/home/lw/repro_dtensor_slice.py", line 16, in <module>
foo(dt)
File "/my/conda/env/lib/python3.12/site-packages/torch/_dynamo/eval_frame.py", line 662, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/my/conda/env/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 1457, in __call__
return self._torchdynamo_orig_callable(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/my/conda/env/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 1238, in __call__
result = self._inner_convert(
^^^^^^^^^^^^^^^^^^^^
File "/my/conda/env/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 619, in __call__
return _compile(
^^^^^^^^^
File "/my/conda/env/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 1084, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/my/conda/env/lib/python3.12/site-packages/torch/_utils_internal.py", line 97, in wrapper_function
return function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/my/conda/env/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 780, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/my/conda/env/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 819, in _compile_inner
out_code = transform_code_object(code, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/my/conda/env/lib/python3.12/site-packages/torch/_dynamo/bytecode_transformation.py", line 1422, in transform_code_object
transformations(instructions, code_options)
File "/my/conda/env/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 264, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/my/conda/env/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 737, in transform
tracer.run()
File "/my/conda/env/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 3499, in run
super().run()
File "/my/conda/env/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 1345, in run
while self.step():
^^^^^^^^^^^
File "/my/conda/env/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 1254, in step
self.dispatch_table[inst.opcode](self, inst)
File "/my/conda/env/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 827, in wrapper
return inner_fn(self, inst)
^^^^^^^^^^^^^^^^^^^^
File "/my/conda/env/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 421, in impl
self.push(fn_var.call_function(self, self.popn(nargs), {}))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/my/conda/env/lib/python3.12/site-packages/torch/_dynamo/variables/builtin.py", line 1147, in call_function
return handler(tx, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/my/conda/env/lib/python3.12/site-packages/torch/_dynamo/variables/builtin.py", line 791, in <lambda>
return lambda tx, args, kwargs: obj.call_function(
^^^^^^^^^^^^^^^^^^
File "/my/conda/env/lib/python3.12/site-packages/torch/_dynamo/variables/builtin.py", line 1147, in call_function
return handler(tx, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/my/conda/env/lib/python3.12/site-packages/torch/_dynamo/variables/builtin.py", line 1107, in _handle_insert_op_in_graph
return wrap_fx_proxy(tx, proxy)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/my/conda/env/lib/python3.12/site-packages/torch/_dynamo/variables/builder.py", line 2421, in wrap_fx_proxy
return wrap_fx_proxy_cls(target_cls=TensorVariable, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/my/conda/env/lib/python3.12/site-packages/torch/_dynamo/variables/builder.py", line 2487, in wrap_fx_proxy_cls
return _wrap_fx_proxy(
^^^^^^^^^^^^^^^
File "/my/conda/env/lib/python3.12/site-packages/torch/_dynamo/variables/builder.py", line 2585, in _wrap_fx_proxy
example_value = get_fake_value(proxy.node, tx, allow_non_graph_fake=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/my/conda/env/lib/python3.12/site-packages/torch/_dynamo/utils.py", line 3278, in get_fake_value
raise TorchRuntimeError(str(e)).with_traceback(e.__traceback__) from None
File "/my/conda/env/lib/python3.12/site-packages/torch/_dynamo/utils.py", line 3176, in get_fake_value
ret_val = wrap_fake_exception(
^^^^^^^^^^^^^^^^^^^^
File "/my/conda/env/lib/python3.12/site-packages/torch/_dynamo/utils.py", line 2690, in wrap_fake_exception
return fn()
^^^^
File "/my/conda/env/lib/python3.12/site-packages/torch/_dynamo/utils.py", line 3177, in <lambda>
lambda: run_node(tx.output, node, args, kwargs, nnmodule)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/my/conda/env/lib/python3.12/site-packages/torch/_dynamo/utils.py", line 3374, in run_node
raise RuntimeError(make_error_message(e)).with_traceback(
File "/my/conda/env/lib/python3.12/site-packages/torch/_dynamo/utils.py", line 3333, in run_node
return node.target(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/my/conda/env/lib/python3.12/site-packages/torch/_compile.py", line 51, in inner
return disable_fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/my/conda/env/lib/python3.12/site-packages/torch/_dynamo/eval_frame.py", line 856, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/my/conda/env/lib/python3.12/site-packages/torch/distributed/tensor/_api.py", line 350, in __torch_dispatch__
return DTensor._op_dispatcher.dispatch(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/my/conda/env/lib/python3.12/site-packages/torch/distributed/tensor/_dispatch.py", line 160, in dispatch
self.sharding_propagator.propagate(op_info)
File "/my/conda/env/lib/python3.12/site-packages/torch/distributed/tensor/_sharding_prop.py", line 263, in propagate
output_sharding = self.propagate_op_sharding_non_cached(op_info.schema)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/my/conda/env/lib/python3.12/site-packages/torch/distributed/tensor/_sharding_prop.py", line 286, in propagate_op_sharding_non_cached
op_strategy = self.op_strategy_funcs[op_schema.op](strategy_schema)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/my/conda/env/lib/python3.12/site-packages/torch/distributed/tensor/_ops/_tensor_ops.py", line 353, in gen_slice_strategy
assert isinstance(end, int)
^^^^^^^^^^^^^^^^^^^^
torch._dynamo.exc.TorchRuntimeError: Dynamo failed to run FX node with fake tensors: call_function <built-in function getitem>(*(DTensor(local_tensor=FakeTensor(..., device='cuda:0', size=(s64,), dtype=torch.bfloat16), device_mesh=DeviceMesh('cuda', [0]), placements=(Replicate(),)), slice(None, None, None)), **{}): got AssertionError()
from user code:
File "/home/lw/repro_dtensor_slice.py", line 9, in foo
return t[:]
Set TORCHDYNAMO_VERBOSE=1 for the internal stack trace (please do this especially if you're reporting a bug to PyTorch). For even more developer context, set TORCH_LOGS="+dynamo"
```
Concretely, it seems that the sharding propagation rule for `slice` expects start and end to be ints, whereas they can also be SymInts.
This issue happened in our training code when invoking the same graph with multiple shapes, which triggered a recompilation with dynamic shapes.
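A minimal sketch (an illustration, not the actual fix) of the kind of type check in `gen_slice_strategy` that would accept symbolic slice bounds as well as plain ints; the helper name `_is_int_like` is hypothetical:
```python
# Hypothetical helper: slice bounds coming from a dynamic-shape recompile can be
# torch.SymInt rather than plain int, so the strict `assert isinstance(end, int)` trips.
import torch

def _is_int_like(x) -> bool:
    return isinstance(x, (int, torch.SymInt))

# e.g. the failing assert could instead read:
# assert _is_int_like(end), f"expected int or SymInt slice bound, got {type(end)}"
print(_is_int_like(5))  # True; a SymInt produced under dynamic shapes would also pass
```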
### Versions
PyTorch 2.7.0
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @chauhang @penguinwu
| true
|
3,043,159,561
|
[nativert] Move Placement to pytorch core
|
yushangdi
|
open
|
[
"fb-exported",
"ciflow/trunk",
"topic: not user facing"
] | 13
|
CONTRIBUTOR
|
Summary:
Move Placement to pytorch core.
Using `torch::nativert::isSameDevice` explicitly in the code to avoid confusion with the `isSameDevice` in the torch namespace.
Test Plan:
```
buck run fbcode//mode/dev-nosan //caffe2/test/cpp/nativert:placement_test
./bin/test_nativert
```
OSS and internal CI
Differential Revision: D74190745
| true
|
3,043,123,452
|
Remove redundant type aliases of _device for torch.Device
|
Skylion007
|
open
|
[
"good first issue",
"triaged",
"actionable"
] | 5
|
COLLABORATOR
|
### 🚀 The feature, motivation and pitch
We should remove redundant type aliases for `_device_t` and replace with `torch.types.Device` where appropriate to make the typing system a bit more consistent.
#152935 is a good step in that direction
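A small illustrative sketch of the proposed replacement; the function names below are hypothetical:
```python
# Illustrative only: swap a redundant module-local alias for the shared torch.types.Device.
from typing import Union
import torch

# before: a module-local alias duplicated across files
_device_t = Union[torch.device, str, int, None]

def sync_before(device: _device_t = None) -> None: ...

# after: annotate directly with the shared alias
def sync_after(device: torch.types.Device = None) -> None:
    """Hypothetical helper showing the proposed annotation style."""
    if device is not None:
        torch.cuda.synchronize(device)
```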
### Alternatives
_No response_
### Additional context
_No response_
| true
|
3,043,119,298
|
[ROCm] Ck gemm architecture guard
|
alugorey
|
open
|
[
"module: rocm",
"triaged",
"open source"
] | 2
|
CONTRIBUTOR
|
Prevents CK GEMMs from being built unless explicitly specified. USE_ROCM_CK_GEMM controls the build and is on by default on the ROCm platform.
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
3,043,110,284
|
Add NestedTensorHPU to to_padded_tensor in native_functions.yaml
|
sfraczek
|
open
|
[
"triaged",
"open source",
"ciflow/xpu",
"release notes: xpu"
] | 5
|
NONE
| null | true
|
3,043,004,042
|
[dtensor] add privateuse1 SDPA op support to DTensor
|
1274085042
|
open
|
[
"oncall: distributed",
"triaged",
"open source"
] | 2
|
CONTRIBUTOR
|
**Summary**
This PR adds _scaled_dot_product_fused_attention_overrideable and _scaled_dot_product_fused_attention_overrideable_backward to DTensor ops
@drisspg @fegin @d4l3k @wanchaol @albanD
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
3,042,998,223
|
[Linter] Add linter to detect device-bias hard code in test cases.
|
etaf
|
open
|
[
"open source",
"topic: not user facing"
] | 2
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152948
* #152945
Since XPU does not gate community pull requests, we’ve observed that contributors often hardcode "cuda" in functions decorated with @requires_gpu() when adding new test cases. This causes the tests to fail on XPU and breaks XPU CI.
This PR adds a linter to detect such issues automatically. An example is shown below.
```
Error (TEST_DEVICE_BIAS) [device-bias]
`@requires_gpu` function should not hardcode device='cuda'
11670 | .contiguous()
11671 | )
11672 |
>>> 11673 | inp = torch.rand((64, 64), device="cuda") * 2 - 1
11674 | boundaries = torch.tensor([-0.9, -0.8, 0.1, 0.2, 0.5, 0.9])
11675 |
11676 | self.common(fn, (inp, boundaries), check_lowp=False)
Error (TEST_DEVICE_BIAS) [device-bias]
`@requires_gpu` function should not hardcode .cuda() call
11700 | self.assertEqual(ref, res)
11701 |
11702 | for offset2 in (0, 1, 2, 3, 4):
>>> 11703 | base2 = torch.randn(64 * 64 + 64, dtype=torch.float32).cuda()
11704 | inp2 = torch.as_strided(base2, (64, 64), (64, 1), offset2)
11705 | ref2 = fn(inp2)
11706 | res2 = fn_c(inp2)
Error (TEST_DEVICE_BIAS) [device-bias]
`@requires_gpu` function should not hardcode torch.device('cuda:0')
11723 | return x.sin() + x.cos()
11724 |
11725 | base = torch.randn(
>>> 11726 | 64 * 64 + 64, dtype=torch.float32, device=torch.device("cuda:0")
11727 | )
11728 |
11729 | inp1 = torch.as_strided(base, (32, 32), (32, 1), 4)
Error (TEST_DEVICE_BIAS) [device-bias]
`@requires_gpu` function should not hardcode .to('cuda') call
11771 | torch.manual_seed(42)
11772 | base = torch.randn(64 * 64 + 64, dtype=torch.float32, device=self.device)
11773 | torch.manual_seed(42)
>>> 11774 | base_ref = torch.randn(64 * 64 + 64, dtype=torch.float32).to("cuda")
11775 |
11776 | inp = torch.as_strided(base, size, stride, offset)
11777 | inp_ref = torch.as_strided(base_ref, size, stride, offset)
```
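For illustration, a toy version of the kind of scan involved (the actual linter added by this PR is more complete); the patterns and helper name below are assumptions:
```python
# Toy device-bias scan: flag hardcoded CUDA usage in test source lines.
import re

_PATTERNS = [
    r"device\s*=\s*[\"']cuda",       # device="cuda" / device='cuda:0'
    r"\.cuda\(\)",                   # .cuda() calls
    r"torch\.device\([\"']cuda",     # torch.device("cuda:0")
    r"\.to\([\"']cuda[\"':0-9]*\)",  # .to("cuda") / .to("cuda:0")
]

def find_device_bias(source: str) -> list[tuple[int, str]]:
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if any(re.search(pat, line) for pat in _PATTERNS):
            hits.append((lineno, line.strip()))
    return hits

snippet = 'inp = torch.rand((64, 64), device="cuda") * 2 - 1'
print(find_device_bias(snippet))  # [(1, 'inp = torch.rand((64, 64), device="cuda") * 2 - 1')]
```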
| true
|
3,042,984,947
|
Clean up of CUTLASS_VERSION
|
narekmalk
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 10
|
CONTRIBUTOR
|
Fixes #152847
| true
|
3,042,957,740
|
[dtensor] add privateuse1 SDPA op support to DTensor
|
1274085042
|
closed
|
[
"oncall: distributed",
"open source"
] | 3
|
CONTRIBUTOR
|
**Summary**
This PR adds _scaled_dot_product_fused_attention_overrideable and _scaled_dot_product_fused_attention_overrideable_backward to DTensor ops
@drisspg @fegin @d4l3k @wanchaol @albanD
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
3,042,791,350
|
[Break XPU] Fix XPU UT failures introduced by community.
|
etaf
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"keep-going",
"ciflow/xpu"
] | 3
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152948
* __->__ #152945
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,042,756,025
|
DISABLED test_compiler_collectives_automatic_dynamic_tensor (__main__.TestMultiProc)
|
pytorch-bot[bot]
|
open
|
[
"high priority",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 2
|
NONE
|
Platforms: inductor
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_compiler_collectives_automatic_dynamic_tensor&suite=TestMultiProc&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/41701856727).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_compiler_collectives_automatic_dynamic_tensor`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 605, in wrapper
self._join_processes(fn)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 845, in _join_processes
self._check_return_codes(elapsed_time)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 894, in _check_return_codes
raise RuntimeError(error)
RuntimeError: Process 0 exited with error code 10 and exception:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 734, in run_test
getattr(self, test_name)()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 607, in wrapper
fn()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3154, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/var/lib/jenkins/pytorch/test/distributed/test_dynamo_distributed.py", line 901, in test_compiler_collectives_automatic_dynamic_tensor
self.assertEqual(res[0], r)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 4095, in assertEqual
raise error_metas.pop()[0].to_error( # type: ignore[index]
AssertionError: Scalars are not equal!
Expected 3 but got 2.
Absolute difference: 1
Relative difference: 0.3333333333333333
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/distributed/test_dynamo_distributed.py TestMultiProc.test_compiler_collectives_automatic_dynamic_tensor
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `distributed/test_dynamo_distributed.py`
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @clee2000 @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @amjames
| true
|
3,042,755,895
|
DISABLED test_comprehensive_ormqr_cuda_float64 (__main__.TestInductorOpInfoCUDA)
|
pytorch-bot[bot]
|
open
|
[
"high priority",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 2
|
NONE
|
Platforms: inductor
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_comprehensive_ormqr_cuda_float64&suite=TestInductorOpInfoCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/41707147808).
Over the past 3 hours, it has been determined flaky in 4 workflow(s) with 4 failures and 4 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_comprehensive_ormqr_cuda_float64`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1135, in test_wrapper
return test(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1434, in only_fn
return fn(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 2291, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1215, in dep_fn
return fn(slf, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1215, in dep_fn
return fn(slf, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1215, in dep_fn
return fn(slf, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 1612, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 1534, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/unittest/mock.py", line 1396, in patched
return func(*newargs, **newkeywargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/contextlib.py", line 81, in inner
return func(*args, **kwds)
^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/contextlib.py", line 81, in inner
return func(*args, **kwds)
^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/contextlib.py", line 81, in inner
return func(*args, **kwds)
^^^^^^^^^^^^^^^^^^^
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 962, in inner
raise e
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 954, in inner
fn(self, device, dtype, op)
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 1207, in test_comprehensive
raise e
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 1182, in test_comprehensive
self.check_model_gpu(
File "/opt/conda/envs/py_3.12/lib/python3.12/contextlib.py", line 81, in inner
return func(*args, **kwds)
^^^^^^^^^^^^^^^^^^^
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 657, in check_model_gpu
check_model(
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 608, in check_model
actual_grad = compute_grads(example_inputs, kwargs, actual, grads)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 406, in compute_grads
return torch.autograd.grad(
^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/autograd/__init__.py", line 503, in grad
result = _engine_run_backward(
^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/autograd/graph.py", line 824, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/autograd/function.py", line 307, in apply
return user_fn(self, *args)
^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 2191, in backward
return impl_fn()
^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 2177, in impl_fn
out = CompiledFunction._backward_impl(ctx, all_args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 2272, in _backward_impl
CompiledFunction.compiled_bw = aot_config.bw_compiler(
^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_functorch/aot_autograd.py", line 483, in __call__
return self.compiler_fn(gm, example_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_dynamo/backends/common.py", line 73, in _wrapped_bw_compiler
disable(
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_dynamo/eval_frame.py", line 857, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_utils_internal.py", line 97, in wrapper_function
return function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 2223, in bw_compiler
return inner_compile(
^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 708, in compile_fx_inner
return wrap_compiler_debug(_compile_fx_inner, compiler_name="inductor")(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_dynamo/repro/after_aot.py", line 124, in debug_wrapper
inner_compiled_fn = compiler_fn(gm, example_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 876, in _compile_fx_inner
raise InductorError(e, currentframe()).with_traceback(
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 860, in _compile_fx_inner
mb_compiled_graph = fx_codegen_and_compile(
^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 1476, in fx_codegen_and_compile
return scheme.codegen_and_compile(gm, example_inputs, inputs_to_check, graph_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 1363, in codegen_and_compile
compiled_module = graph.compile_to_module()
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/graph.py", line 2234, in compile_to_module
return self._compile_to_module()
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/graph.py", line 2281, in _compile_to_module
mod = PyCodeCache.load_by_key_path(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/codecache.py", line 3006, in load_by_key_path
mod = _reload_python_module(key, path, set_sys_modules=in_toplevel)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/runtime/compile_tasks.py", line 31, in _reload_python_module
exec(code, mod.__dict__, mod.__dict__)
File "/tmp/tmpe31m95k2/dl/cdl3fhy2acur7hgkx32o2kxjrlda4roawc7calagde6llm4te4dk.py", line 692, in <module>
async_compile.wait(globals())
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/async_compile.py", line 479, in wait
self._wait_futures(scope)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/async_compile.py", line 499, in _wait_futures
kernel = result.result()
^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/codecache.py", line 3508, in result
return self.result_fn()
^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/async_compile.py", line 368, in get_result
kernel.precompile(
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 325, in precompile
self._make_launchers()
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 482, in _make_launchers
launchers.append(result.make_launcher())
^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 1279, in make_launcher
self.reload_cubin_path()
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 1271, in reload_cubin_path
raise RuntimeError(
torch._inductor.exc.InductorError: RuntimeError: ('Cubin file saved by TritonBundler not found at %s', '/tmp/tmppjb75otd/triton/G6M2VGPKEHZ2RQ5ZRCTYQ7EEOPMKVLP7DKLIAK74WLFX4XDVD6WA/triton_poi_fused_mul_sub_6.cubin')
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 3154, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 3154, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 426, in instantiated_test
result = test(self, **param_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1215, in dep_fn
return fn(slf, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1215, in dep_fn
return fn(slf, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 1612, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1147, in test_wrapper
raise e_tracked from e
Exception: Caused by sample input at index 16: SampleInput(input=Tensor[size=(2, 2), device="cuda:0", dtype=torch.float64, contiguous=False], args=TensorList[Tensor[size=(2,), device="cuda:0", dtype=torch.float64], Tensor[size=(2, 2), device="cuda:0", dtype=torch.float64]], kwargs={'left': 'True', 'transpose': 'True'}, broadcasts_input=False, name='')
To execute this test, run the following from the base repo dir:
PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=16 python test/inductor/test_torchinductor_opinfo.py TestInductorOpInfoCUDA.test_comprehensive_ormqr_cuda_float64
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_torchinductor_opinfo.py`
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @clee2000 @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @muchulee8 @amjames @aakhundov
| true
|
3,042,748,855
|
aten._scaled_dot_product_efficient_attention returns LSE padded to next highest multiple of 32
|
a-r-r-o-w
|
open
|
[
"module: cuda",
"triaged",
"module: sdpa"
] | 2
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Hi! This is less of a bug report and more of an ask of why the behaviour is this way.
With the following code to obtain LSE from efficient attention backend, the shape of the LSE tensor is `[1, 2, 32]`. It is expected that the size in dim=2 should match the sequence length, which is `8` in this case. Essentially, the LSE tensor has seqlen rounded up to the next highest multiple of 32 (for example, qkv with `seqlen=42` would result in LSE tensor of shape `[b, n, 64]`)
With the cudnn and flash backends, we get the LSE tensor in the correct shape `[1, 2, 8]`. It makes me wonder why this is the behaviour, and whether it makes it a requirement for [CP with efficient attention](https://github.com/pytorch/pytorch/blob/bc11afd41fc68637000dcd9c14128c75d9361443/torch/distributed/tensor/experimental/_attention.py#L247) to have seq_len % 32 == 0 (because otherwise the shapes of the sdpa output tensors and LSE will not match on [this](https://github.com/pytorch/pytorch/blob/bc11afd41fc68637000dcd9c14128c75d9361443/torch/distributed/tensor/experimental/_attention.py#L156) line). Although it is very simple to unpad the extra values, any pointers to the code that does this automatically for flash/cudnn would be great!
Code:
```python
import torch
sdpa = torch.ops.aten._scaled_dot_product_efficient_attention
# sdpa = torch.ops.aten._scaled_dot_product_cudnn_attention
seq_len = 8
query = torch.randn(1, 2, seq_len, 8).to("cuda", dtype=torch.bfloat16)
key = torch.randn(1, 2, seq_len, 8).to("cuda", dtype=torch.bfloat16)
value = torch.randn(1, 2, seq_len, 8).to("cuda", dtype=torch.bfloat16)
output, lse, *_ = sdpa(
query=query,
key=key,
value=value,
attn_bias=None,
compute_log_sumexp=True,
dropout_p=0,
is_causal=False,
scale=0,
)
print(lse.shape)
print(lse)
```
Output:
```
torch.Size([1, 2, 32])
tensor([[[2.0794, 2.0794, 2.0794, 2.0794, 2.0794, 2.0794, 2.0794, 2.0794,
inf, inf, inf, inf, inf, inf, inf, inf,
inf, inf, inf, inf, inf, inf, inf, inf,
inf, inf, inf, inf, inf, inf, inf, inf],
[2.0794, 2.0794, 2.0794, 2.0794, 2.0794, 2.0794, 2.0794, 2.0794,
inf, inf, inf, inf, inf, inf, inf, inf,
inf, inf, inf, inf, inf, inf, inf, inf,
inf, inf, inf, inf, inf, inf, inf, inf]]],
device='cuda:0')
```
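For what it's worth, a minimal sketch of recovering the per-token values, continuing the snippet above (an assumption about intended usage, not an official API):
```python
# The padded tail is filled with inf; slicing the last dim back to seq_len drops it.
lse_unpadded = lse[..., :seq_len]
print(lse_unpadded.shape)  # torch.Size([1, 2, 8])
```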
### Versions
```
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: 10.0.0-4ubuntu1
CMake version: version 3.16.3
Libc version: glibc-2.31
Python version: 3.10.14 (main, Jul 10 2024, 22:05:36) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.4.0-166-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 12.3.52
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-80GB
GPU 1: NVIDIA A100-SXM4-80GB
GPU 2: NVIDIA A100-SXM4-80GB
GPU 3: NVIDIA DGX Display
GPU 4: NVIDIA A100-SXM4-80GB
Nvidia driver version: 535.129.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 43 bits physical, 48 bits virtual
CPU(s): 128
On-line CPU(s) list: 0-127
Thread(s) per core: 2
Core(s) per socket: 64
Socket(s): 1
NUMA node(s): 1
Vendor ID: AuthenticAMD
CPU family: 23
Model: 49
Model name: AMD EPYC 7742 64-Core Processor
Stepping: 0
Frequency boost: enabled
CPU MHz: 2591.593
CPU max MHz: 2250,0000
CPU min MHz: 1500,0000
BogoMIPS: 4491.68
Virtualization: AMD-V
L1d cache: 2 MiB
L1i cache: 2 MiB
L2 cache: 32 MiB
L3 cache: 256 MiB
NUMA node0 CPU(s): 0-127
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Vulnerable
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP conditional, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif umip rdpid overflow_recov succor smca sme sev sev_es
Versions of relevant libraries:
[pip3] came-pytorch==0.1.3
[pip3] clip-anytorch==2.6.0
[pip3] dctorch==0.1.2
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] onnx==1.17.0
[pip3] onnxruntime==1.20.1
[pip3] pytorch-lightning==2.4.0
[pip3] pytorch_retinaface==0.1.0
[pip3] pytorch-triton==3.1.0+5fe38ffd73
[pip3] torch==2.6.0
[pip3] torch-optimi==0.2.1
[pip3] torch-tb-profiler==0.4.3
[pip3] torchao==0.10.0
[pip3] torchdata==0.10.1
[pip3] torchdiffeq==0.2.4
[pip3] torchmetrics==1.4.1
[pip3] torchsde==0.2.6
[pip3] torchvision==0.21.0
[pip3] triton==3.2.0
[conda] Could not collect
```
cc @ptrblck @msaroufim @eqy @jerryzh168 @albanD @XilunWu
| true
|
3,042,681,757
|
ROCm: no HIP device available if device is already initialized
|
stefanozampini
|
open
|
[
"module: rocm",
"triaged"
] | 0
|
NONE
|
### 🐛 Describe the bug
If I first initialize the HIP environment from `cupy`, `torch` does not detect it
```
$ python -c 'import cupy; print(cupy.cuda.is_available()); import torch; print(torch.cuda.is_available())'
True
False
```
However, as can be seen below, it should
```
$ python -c 'import cupy; print(cupy.cuda.is_available())'
True
$ python -c 'import torch; print(torch.cuda.is_available())'
True
$ python -c 'import cupy; import torch; print(torch.cuda.is_available())'
True
$ python -c 'for _ in range(2): import torch; print(torch.cuda.is_available()); import cupy; print(cupy.cuda.is_available())'
True
True
True
True
```
```
$ python -c 'import torch; print(torch.zeros(0).to("cuda").device)'
cuda:0
$ python -c 'import cupy; print(cupy.zeros(0).device)'
<CUDA Device 0>
```
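Based only on the outputs above, a minimal ordering workaround sketch (an observation, not a confirmed fix): let torch probe the device before cupy initializes the HIP runtime.
```python
# Sketch: torch.cuda.is_available() stays True as long as cupy has not already
# initialized the HIP runtime, so query torch first and only then use cupy.
import torch
assert torch.cuda.is_available()

import cupy
assert cupy.cuda.is_available()  # both report True in this order (see outputs above)
```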
### Versions
```
$ python collect_env.py
Collecting environment information...
PyTorch version: 2.7.0+rocm6.3
Is debug build: False
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: 6.3.42131-fa1d09cbd
OS: Ubuntu 24.04.2 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.39
Python version: 3.12.3 (main, Feb 4 2025, 14:48:35) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-5.19.0-46-generic-x86_64-with-glibc2.39
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: AMD Instinct MI100 (gfx908:sramecc+:xnack-)
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: 6.3.42131
MIOpen runtime version: 3.3.0
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Vendor ID: AuthenticAMD
Model name: AMD EPYC 7713 64-Core Processor
CPU family: 25
Model: 1
Thread(s) per core: 1
Core(s) per socket: 64
Socket(s): 2
Stepping: 1
Frequency boost: enabled
CPU(s) scaling MHz: 43%
CPU max MHz: 3720.7029
CPU min MHz: 1500.0000
BogoMIPS: 4000.22
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin brs arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca fsrm
Virtualization: AMD-V
L1d cache: 4 MiB (128 instances)
L1i cache: 4 MiB (128 instances)
L2 cache: 64 MiB (128 instances)
L3 cache: 512 MiB (16 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-63
NUMA node1 CPU(s): 64-127
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP disabled, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.2.5
[pip3] pytorch-triton-rocm==3.3.0
[pip3] torch==2.7.0+rocm6.3
[pip3] torchaudio==2.7.0+rocm6.3
[pip3] torchvision==0.22.0+rocm6.3
[conda] Could not collect
```
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
3,042,513,699
|
[Don't merge] Debug
|
mengfei25
|
open
|
[
"triaged",
"open source",
"module: dynamo"
] | 3
|
CONTRIBUTOR
|
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|