| id (int64) | title (string) | user (string) | state (string) | labels (list) | comments (int64) | author_association (string) | body (string) | is_title (bool) |
|---|---|---|---|---|---|---|---|---|
2,986,662,759
|
[map] make proxy mode re-dispatch to fake key
|
ydwu4
|
open
|
[
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"ci-no-td"
] | 7
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150971
* __->__ #151034
* #150962
| true
|
2,986,643,209
|
[Inductor] add support for disabling atomic adds
|
mlazos
|
closed
|
[
"Merged",
"ciflow/trunk",
"module: inductor",
"ciflow/inductor",
"release notes: inductor"
] | 6
|
CONTRIBUTOR
|
As title
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,986,530,792
|
make einsum unbacked friendly
|
ColinPeppler
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 14
|
CONTRIBUTOR
|
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,986,472,721
|
Reapply "ProcessGroupGloo: support lazy_init (#150801)"
|
d4l3k
|
closed
|
[
"oncall: distributed",
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: distributed (c10d)"
] | 7
|
MEMBER
|
This reverts commit 73f3d6d9aaa128d9917e8b3790933ba2855066cc.
Reapplies #150801
Test plan:
See #150801
submodule
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab
| true
|
2,986,466,721
|
Unexpected memory usage in FSDP 2 Hybrid Sharding (HSDP)
|
Craigacp
|
open
|
[
"oncall: distributed",
"triaged",
"module: fsdp"
] | 0
|
NONE
|
### 🐛 Describe the bug
FSDP 2 & torch.distributed.checkpoint show unexpected additional memory usage when saving, which caused our jobs to fail. I'm not sure whether this is something I've misconfigured (our stack is built on pytorch & HuggingFace, but we're not using HF's trainer or accelerate), or whether this is intended behaviour from the checkpointing system.
We are doing a 64 GPU training run on 8 OCI BM.GPU.H100.8 machines, with sharding across the 8 GPUs in a node and data parallel replicas across the 8 machines, training a ~1B parameter transformer decoder. We ended up on hybrid sharding due to some external networking factors, though it's not much faster than full sharding in our setup. GPU 0 has 7 extra processes running which hold steady at 520MB of VRAM, and when we checkpoint the model using `torch.distributed.checkpoint` they jump to 2+GB each, for a total of 11+GB of additional VRAM. My understanding of the checkpoint system was that there is no communication between the GPUs when saving a checkpoint, since each shard is saved independently.
It looks like these additional processes aren't using the same pytorch memory allocator: we checkpoint after gradients & activations have been cleared, so while there is lots of VRAM in use, it's all free in the caching allocator. We confirmed this by clearing the cache before checkpointing, after which the checkpoint is fine. Oddly, the run then crashes with an OOM on the immediately following step, presumably due to some kind of fragmentation issue, as all our batches are the same size.
The relevant code for sharding & checkpoint saving is excerpted below. I think we might be getting the state dict out of the model incorrectly, though I'm still unclear what the additional processes are for.
```python
def wrap_model(self, comp_env, model):
    # comp_env is a data class which contains the job configuration from SLURM
    # this method is on a data class which contains the FSDP 2 configuration
    num_nodes = comp_env.world_size // comp_env.node_size
    self.mesh = init_device_mesh(device_type=comp_env.device.type, mesh_shape=(num_nodes, comp_env.node_size), mesh_dim_names=("dp", "shard"))
    config = {
        "mp_policy": self.mixed_precision,
        "offload_policy": self.offload_policy,
        "mesh": self.mesh,
        "reshard_after_forward": True,  # when true implements full sharding, when false implements ZeRO-2
    }
    if self.mixed_precision:
        top_level_mixed_precision = MixedPrecisionPolicy(param_dtype=self.mixed_precision.param_dtype,
                                                         reduce_dtype=self.mixed_precision.reduce_dtype,
                                                         output_dtype=torch.float32,
                                                         cast_forward_inputs=self.mixed_precision.cast_forward_inputs)
    else:
        top_level_mixed_precision = None
    cls_list_to_wrap = self._transformer_cls(model)

    def wrap_policy(module: torch.nn.Module) -> bool:
        if self.layer_cls_to_wrap is None:
            return False
        return isinstance(module, tuple(cls_list_to_wrap))

    model.compile()  # does layer wise compilation like torchtitan's llama
    stack = [model]
    ordered_modules = []
    while stack:
        current_module = stack.pop()
        for _, attr in current_module.named_children():
            if isinstance(attr, torch.nn.Module):
                stack.append(attr)
        ordered_modules.append(current_module)
    # wrap all the modules which match the policy aside from the top level module
    for module in ordered_modules[::-1][:-1]:
        if wrap_policy(module):
            fully_shard(module, **config)
    # wrap the top level module with a different MP config so it always emits fp32 outputs
    top_level_config = config.copy()
    top_level_config['mp_policy'] = top_level_mixed_precision
    fully_shard(model, **top_level_config)
    return model

def checkpoint_model_and_optimizer(self, comp_env, model, optimizer, output_dir):
    state_dict = model.state_dict()
    ckpt_dir = os.path.join(output_dir, f"{MODEL_NAME}")
    os.makedirs(ckpt_dir, exist_ok=True)
    logger.info(f"Saving model to {ckpt_dir}")
    state_dict = {"model": state_dict}
    # Only save the first replica
    if comp_env.node_rank == 0:
        dist_ckpt.save(
            state_dict=state_dict,
            storage_writer=dist_ckpt.FileSystemWriter(ckpt_dir),
            process_group=self.mesh.get_group("shard")
        )
    comp_env.wait_for_everyone()
    logger.info(f"Model saved to {ckpt_dir}")
    from torch.distributed.checkpoint.state_dict import get_optimizer_state_dict, StateDictOptions
    optim_state = get_optimizer_state_dict(model, optimizer, options=StateDictOptions(flatten_optimizer_state_dict=True))
    ckpt_dir = os.path.join(output_dir, f"{OPTIMIZER_NAME}")
    os.makedirs(ckpt_dir, exist_ok=True)
    # Only save the first replica
    if comp_env.node_rank == 0:
        dist_ckpt.save(
            state_dict={"optimizer": optim_state},
            storage_writer=dist_ckpt.FileSystemWriter(ckpt_dir),
            planner=DefaultSavePlanner(),
            process_group=self.mesh.get_group("shard")
        )
    comp_env.wait_for_everyone()
    logger.info(f"Optimizer state saved in {ckpt_dir}")
```
If this code is using FSDP 2 & dcp correctly, then what is causing the memory bump and is there any way to predict how big it will be? Alternatively if I've got some of this wrong then is there a source beyond looking at the torchtitan code for how to do hybrid sharding correctly? We didn't hit this issue when doing full sharding across the 64 GPUs, though that obviously had slightly lower memory pressure so we might have just been lucky.
### Versions
Collecting environment information...
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Oracle Linux Server 8.10 (x86_64)
GCC version: (GCC) 8.5.0 20210514 (Red Hat 8.5.0-24.0.1)
Clang version: Could not collect
CMake version: version 3.26.5
Libc version: glibc-2.28
Python version: 3.12.7 (main, Mar 29 2025, 19:03:01) [GCC 8.5.0 20210514 (Red Hat 8.5.0-23.0.1)] (64-bit runtime)
Python platform: Linux-4.18.0-553.27.1.el8_10.x86_64-x86_64-with-glibc2.28
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100 80GB HBM3
GPU 1: NVIDIA H100 80GB HBM3
GPU 2: NVIDIA H100 80GB HBM3
GPU 3: NVIDIA H100 80GB HBM3
GPU 4: NVIDIA H100 80GB HBM3
GPU 5: NVIDIA H100 80GB HBM3
GPU 6: NVIDIA H100 80GB HBM3
GPU 7: NVIDIA H100 80GB HBM3
Nvidia driver version: 560.35.05
cuDNN version: Probably one of the following:
/usr/lib64/libcudnn.so.8.9.7
/usr/lib64/libcudnn_adv_infer.so.8.9.7
/usr/lib64/libcudnn_adv_train.so.8.9.7
/usr/lib64/libcudnn_cnn_infer.so.8.9.7
/usr/lib64/libcudnn_cnn_train.so.8.9.7
/usr/lib64/libcudnn_ops_infer.so.8.9.7
/usr/lib64/libcudnn_ops_train.so.8.9.7
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 224
On-line CPU(s) list: 0-111
Off-line CPU(s) list: 112-223
Thread(s) per core: 1
Core(s) per socket: 56
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 143
Model name: Intel(R) Xeon(R) Platinum 8480+
Stepping: 8
CPU MHz: 2000.000
CPU max MHz: 3800.0000
CPU min MHz: 800.0000
BogoMIPS: 4000.00
Virtualization: VT-x
L1d cache: 48K
L1i cache: 32K
L2 cache: 2048K
L3 cache: 107520K
NUMA node0 CPU(s): 0-55
NUMA node1 CPU(s): 56-111
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] mypy==1.11.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==2.2.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.6.0
[pip3] torch-tb-profiler==0.4.3
[pip3] torchvision==0.21.0
[pip3] triton==3.2.0
[conda] Could not collect
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @zhaojuanmao @mrshenli @rohan-varma @chauhang @mori360
| true
|
2,986,432,635
|
[fx] Filter stacktrace
|
angelayi
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: fx",
"fx",
"ciflow/inductor"
] | 4
|
CONTRIBUTOR
|
Filter the stack trace recorded on nodes so that it looks nicer when using fx.Tracer. I just copied the filtering we have in [proxy_tensor.py](https://github.com/pytorch/pytorch/blob/6720d2396966c815463d90dd24fcae50b8f7fa2f/torch/fx/experimental/proxy_tensor.py#L1903-L1931).
Previously the stacktrace looked like:
```
File "/data/users/angelayi/pytorch/moo.py", line 3964, in <module>
run_tests()
File "/data/users/angelayi/pytorch/torch/testing/_internal/common_utils.py", line 1342, in run_tests
unittest.main(argv=argv)
File "/home/angelayi/.conda/envs/pytorch-3.10/lib/python3.10/unittest/main.py", line 101, in __init__
self.runTests()
File "/home/angelayi/.conda/envs/pytorch-3.10/lib/python3.10/unittest/main.py", line 271, in runTests
self.result = testRunner.run(self.test)
File "/home/angelayi/.conda/envs/pytorch-3.10/lib/python3.10/unittest/runner.py", line 184, in run
test(result)
File "/home/angelayi/.conda/envs/pytorch-3.10/lib/python3.10/unittest/suite.py", line 84, in __call__
return self.run(*args, **kwds)
File "/home/angelayi/.conda/envs/pytorch-3.10/lib/python3.10/unittest/suite.py", line 122, in run
test(result)
File "/home/angelayi/.conda/envs/pytorch-3.10/lib/python3.10/unittest/suite.py", line 84, in __call__
return self.run(*args, **kwds)
File "/home/angelayi/.conda/envs/pytorch-3.10/lib/python3.10/unittest/suite.py", line 122, in run
test(result)
File "/home/angelayi/.conda/envs/pytorch-3.10/lib/python3.10/unittest/case.py", line 650, in __call__
return self.run(*args, **kwds)
File "/data/users/angelayi/pytorch/torch/testing/_internal/common_utils.py", line 3324, in run
self._run_custom(
File "/data/users/angelayi/pytorch/torch/testing/_internal/common_utils.py", line 3296, in _run_custom
super_run(result=result)
File "/home/angelayi/.conda/envs/pytorch-3.10/lib/python3.10/unittest/case.py", line 591, in run
self._callTestMethod(testMethod)
File "/home/angelayi/.conda/envs/pytorch-3.10/lib/python3.10/unittest/case.py", line 549, in _callTestMethod
method()
File "/data/users/angelayi/pytorch/torch/testing/_internal/common_utils.py", line 3156, in wrapper
method(*args, **kwargs)
File "/data/users/angelayi/pytorch/moo.py", line 1495, in test_stack_trace
gm = torch.fx.GraphModule(m, tracer.trace(m))
File "/data/users/angelayi/pytorch/torch/fx/_symbolic_trace.py", line 837, in trace
(self.create_arg(fn(*args)),),
File "/data/users/angelayi/pytorch/moo.py", line 1485, in forward
x = x * 2
File "/data/users/angelayi/pytorch/torch/fx/proxy.py", line 716, in impl
return tracer.create_proxy("call_function", target, args, kwargs)
File "/data/users/angelayi/pytorch/torch/fx/proxy.py", line 248, in create_proxy
proxy.node.stack_trace = "".join(CapturedTraceback.extract().format())
```
Now it looks like:
```
File "/data/users/angelayi/pytorch/moo.py", line 1485, in forward
x = x * 2
```
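As a rough illustration of the approach (a hypothetical stdlib-based sketch; the actual filtering lives in proxy_tensor.py and operates on `CapturedTraceback`), the idea is to drop frames coming from framework and test-runner internals and keep only user frames:
```python
import traceback

# Path fragments whose frames get dropped from the recorded stack trace
# (assumed filter list, roughly mirroring the intent of the proxy_tensor.py code).
_IGNORED_PARTS = ("/torch/fx/", "/torch/_dynamo/", "/torch/testing/", "/unittest/")

def filter_user_frames(summary):
    return [f for f in summary if not any(p in f.filename for p in _IGNORED_PARTS)]

def user_stack_trace() -> str:
    frames = filter_user_frames(traceback.extract_stack()[:-1])  # drop this helper's own frame
    return "".join(traceback.StackSummary.from_list(frames).format())
```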
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
2,986,428,845
|
Allow OpaqueTensorImpl to be used for views
|
PatriceVignola
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 9
|
CONTRIBUTOR
|
Summary:
When creating an `OpaqueTensorImpl`, currently there's only an option to create it for a non-view tensor, but it can be useful to create one for view tensors as well.
View tensors should contain the same autograd parameters as the original tensor, whereas non-view tensors get created with whatever `inference_mode` option is currently enabled. For this reason, `TensorImpl` has a special view constructor that takes `TensorImpl::ImplType` as its first parameter, so adding a new constructor to `OpaqueTensorImpl` that does the same thing allows us to create views with it.
Test Plan: CI
Reviewed By: scottxu0730
Differential Revision: D71748460
| true
|
2,986,392,927
|
Add size/strides/alignment assertions for cpp_wrapper
|
shunting314
|
open
|
[
"triaged",
"oncall: pt2",
"module: inductor",
"oncall: export",
"module: aotinductor"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
_No response_
### Error logs
_No response_
### Versions
Inductor generates size/strides/alignment assertions in the generated code to make sure the assumptions made at compile time are met at runtime (check the [source code](https://github.com/pytorch/pytorch/blob/1250106630f1ba430de937b56063673ff775131c/torch/_inductor/ir.py#L5707-L5716) and [PR](https://github.com/pytorch/pytorch/pull/150804/files)). But these assertions are not generated for cpp_wrapper so far. We'll need to generate the corresponding C++ code to add those assertions for cpp_wrapper.
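For context, here is a rough Python-level illustration of the kind of runtime check involved (hypothetical helper, not Inductor's actual codegen; the cpp_wrapper would need to emit equivalent C++):
```python
import torch

def check_size_stride_alignment(t: torch.Tensor, expected_size, expected_stride, alignment: int = 16) -> None:
    # Compile-time assumptions about shape, strides, and base-pointer alignment
    # must hold at runtime, otherwise the generated kernel may read out of bounds.
    assert tuple(t.size()) == tuple(expected_size), f"size mismatch: {tuple(t.size())} != {tuple(expected_size)}"
    assert tuple(t.stride()) == tuple(expected_stride), f"stride mismatch: {tuple(t.stride())} != {tuple(expected_stride)}"
    assert t.data_ptr() % alignment == 0, f"storage not {alignment}-byte aligned"

x = torch.empty(8, 16)
check_size_stride_alignment(x, (8, 16), (16, 1))
```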
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @aakhundov @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 @desertfire @yushangdi @benjaminglass1
| true
|
2,986,375,337
|
Back out "[AOTI] Always use oss schema for ExternKernelNodes serialization"
|
yiming0416
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"module: inductor",
"ciflow/inductor",
"release notes: export"
] | 6
|
CONTRIBUTOR
|
Summary: Revert for FC breaking
Test Plan: CI
Differential Revision: D72802075
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,986,368,181
|
[easy] Add cache bypass traceback information to cache_info on autograd_cache_bypass
|
jamesjwu
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 12
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151025
This will help us better debug pickling errors, etc, in internal models
| true
|
2,986,366,101
|
[ONNX] Add asdict method to VerificationInfo class
|
justinchuby
|
closed
|
[
"module: onnx",
"open source",
"Merged",
"ciflow/trunk",
"release notes: onnx",
"topic: improvements"
] | 4
|
COLLABORATOR
|
This pull request introduces a new method to convert `VerificationInfo` objects to dictionaries and includes a corresponding test to ensure the method works correctly.
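For illustration, the usual shape of such a conversion for a dataclass looks like the following (a hedged sketch with hypothetical field names, not the actual `VerificationInfo` definition):
```python
import dataclasses

@dataclasses.dataclass
class VerificationInfo:  # hypothetical stand-in for the real class
    name: str
    max_abs_diff: float
    max_rel_diff: float

    def asdict(self) -> dict:
        # Recursively converts the dataclass (and nested dataclasses) to plain dicts.
        return dataclasses.asdict(self)

info = VerificationInfo(name="node_0", max_abs_diff=1e-5, max_rel_diff=3e-4)
assert info.asdict() == {"name": "node_0", "max_abs_diff": 1e-5, "max_rel_diff": 3e-4}
```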
| true
|
2,986,255,318
|
Do not generate long log messages for suppressed data dependent errors.
|
laithsakka
|
closed
|
[
"Merged",
"Reverted",
"ciflow/trunk",
"release notes: fx",
"fx",
"ciflow/inductor",
"ci-no-td"
] | 48
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151023
TORCH_LOGS="all" python test/test_dynamic_shapes.py -k test_guard_or_true
before:
<img width="1065" alt="Screenshot 2025-04-10 at 9 55 27 AM" src="https://github.com/user-attachments/assets/3ee20de0-2902-4eb1-8ab0-80f1b974fb78" />
after:
<img width="1124" alt="Screenshot 2025-04-10 at 9 54 35 AM" src="https://github.com/user-attachments/assets/4e7e1f0c-856c-417f-8763-bfe183e2450d" />
Note: we actually do not expect to see a log at all; there is an orthogonal issue in recording where it logs each error seen even when recording is not enabled. I will follow up with a PR for that.
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
2,986,238,947
|
Add basic unit test and noop config
|
Lucaskabela
|
closed
|
[
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150978
* #150885
* __->__ #151022
Tidy tests:
Adding initial config option
lintrunner
Minor renaming
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,986,197,476
|
Add `pad_to_multiple_of` to `pad_sequence` (C++ only)
|
ringohoffman
|
open
|
[
"triaged",
"open source",
"release notes: cpp"
] | 2
|
CONTRIBUTOR
|
Related:
* https://github.com/pytorch/pytorch/issues/150989
`pad_to_multiple_of=8` should be used to create sequences that take advantage of NVIDIA Tensor Cores when using mixed precision on GPUs with compute capability >= 7.0 (Volta).
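A Python-level sketch of the requested behavior built on the existing utilities (the proposal itself targets the C++ `pad_sequence`; the helper name below is hypothetical):
```python
import torch
import torch.nn.functional as F
from torch.nn.utils.rnn import pad_sequence

def pad_sequence_to_multiple_of(sequences, multiple, padding_value=0.0):
    # Pad to the longest sequence first, then round the time dimension up to the
    # next multiple (e.g. 8 for Tensor Core friendly shapes).
    padded = pad_sequence(sequences, batch_first=True, padding_value=padding_value)
    length = padded.size(1)
    target = ((length + multiple - 1) // multiple) * multiple
    return F.pad(padded, (0, 0, 0, target - length), value=padding_value)

seqs = [torch.randn(5, 4), torch.randn(3, 4)]
out = pad_sequence_to_multiple_of(seqs, multiple=8)  # shape: (2, 8, 4)
```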
| true
|
2,986,133,165
|
[logging] Separate cuda synchronize overhead in autotuning
|
masnesral
|
closed
|
[
"module: inductor",
"ciflow/inductor"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151020
Summary: In order to more accurately debug the overhead of autotuning (and pad_mm), explicitly do a cuda.synchronize before benchmarking and time that.
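A rough sketch of the idea (hypothetical benchmarking helper, not the actual autotuning/logging code): time the pre-benchmark `torch.cuda.synchronize()` separately from the kernel timing so the two costs can be reported independently.
```python
import time
import torch

def benchmark_with_sync_split(fn, *args, warmup=3, iters=10):
    for _ in range(warmup):
        fn(*args)

    # Measure the synchronize that drains any outstanding work before benchmarking.
    t0 = time.perf_counter()
    torch.cuda.synchronize()
    sync_ms = (time.perf_counter() - t0) * 1e3

    # Measure the benchmarked function itself with CUDA events.
    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    start.record()
    for _ in range(iters):
        fn(*args)
    end.record()
    torch.cuda.synchronize()
    return sync_ms, start.elapsed_time(end) / iters
```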
Test Plan: See internal test plan here: https://fburl.com/f365xfcj
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,986,108,133
|
DISABLED test_parity__foreach_acos_fastpath_inplace_cuda_float64 (__main__.TestForeachCUDA)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"skipped",
"module: mta"
] | 5
|
NONE
|
Platforms: linux, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_parity__foreach_acos_fastpath_inplace_cuda_float64&suite=TestForeachCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40312895319).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 6 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_parity__foreach_acos_fastpath_inplace_cuda_float64`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/test_foreach.py", line 228, in test_parity
actual = func(
File "/var/lib/jenkins/workspace/test/test_foreach.py", line 91, in __call__
assert mta_called == (expect_fastpath and (not zero_size)), (
AssertionError: mta_called=False, expect_fastpath=True, zero_size=False, self.func.__name__='_foreach_acos_', keys=('aten::_foreach_acos_', 'Unrecognized', 'cudaLaunchKernel', 'Lazy Function Loading', 'cudaDeviceSynchronize')
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1159, in test_wrapper
return test(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/mock.py", line 1833, in _inner
return f(*args, **kw)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1975, in wrap_fn
return fn(self, *args, **kwargs)
File "/var/lib/jenkins/workspace/test/test_foreach.py", line 235, in test_parity
with self.assertRaises(type(e)):
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 226, in __exit__
self._raiseFailure("{} not raised".format(exc_name))
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 163, in _raiseFailure
raise self.test_case.failureException(msg)
AssertionError: AssertionError not raised
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3154, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3154, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3154, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 454, in instantiated_test
result = test(self, **param_kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1612, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1171, in test_wrapper
raise e_tracked from e
Exception: Caused by sample input at index 0: SampleInput(input=TensorList[Tensor[size=(20, 20), device="cuda:0", dtype=torch.float64], Tensor[size=(19, 19), device="cuda:0", dtype=torch.float64], Tensor[size=(18, 18), device="cuda:0", dtype=torch.float64], Tensor[size=(17, 17), device="cuda:0", dtype=torch.float64], Tensor[size=(16, 16), device="cuda:0", dtype=torch.float64], Tensor[size=(15, 15), device="cuda:0", dtype=torch.float64], Tensor[size=(14, 14), device="cuda:0", dtype=torch.float64], Tensor[size=(13, 13), device="cuda:0", dtype=torch.float64], Tensor[size=(12, 12), device="cuda:0", dtype=torch.float64], Tensor[size=(11, 11), device="cuda:0", dtype=torch.float64], Tensor[size=(10, 10), device="cuda:0", dtype=torch.float64], Tensor[size=(9, 9), device="cuda:0", dtype=torch.float64], Tensor[size=(8, 8), device="cuda:0", dtype=torch.float64], Tensor[size=(7, 7), device="cuda:0", dtype=torch.float64], Tensor[size=(6, 6), device="cuda:0", dtype=torch.float64], Tensor[size=(5, 5), device="cuda:0", dtype=torch.float64], Tensor[size=(4, 4), device="cuda:0", dtype=torch.float64], Tensor[size=(3, 3), device="cuda:0", dtype=torch.float64], Tensor[size=(2, 2), device="cuda:0", dtype=torch.float64], Tensor[size=(1, 1), device="cuda:0", dtype=torch.float64]], args=(), kwargs={}, broadcasts_input=False, name='')
To execute this test, run the following from the base repo dir:
PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=0 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/test_foreach.py TestForeachCUDA.test_parity__foreach_acos_fastpath_inplace_cuda_float64
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_foreach.py`
cc @clee2000 @crcrpar @mcarilli @janeyx99
| true
|
2,986,722,431
|
[release] CPU perf benchmark latency increase for 2.6->2.7 on c5.24xlarge and A100 instances
|
atalman
|
open
|
[
"module: performance",
"oncall: releng",
"module: cpu",
"triaged",
"module: intel",
"topic: performance"
] | 8
|
CONTRIBUTOR
|
Running the torchbench userbenchmarks for CPU, I see the following results from different runs:
On c5.24xlarge, CPU latency increases by 10-30%. Please note we see up to ~8% noise; however, the signal we are getting looks clear.
Running workflow:
https://github.com/pytorch/benchmark/blob/perf-release-2.7/.github/workflows/userbenchmark-c5-24xlarge.yml
C5.24xlarge
```
Run 1:
Benchmark;pytorch-2.6.0-cuda-12.6;pytorch-2.7.0-cuda-12.6
mnist-cpu_memory;847.316;866.195
mnist-gpu_memory;0.0;0.0
mnist-latency;74.46;93.13 -> 25% increase
Run 2:
Benchmark;pytorch-2.6.0-cuda-12.6;pytorch-2.7.0-cuda-12.6
mnist_hogwild-cpu_memory;616.598;634.773
mnist_hogwild-gpu_memory;0.0;0.0
mnist_hogwild-latency;46.64;48.20
wlm_cpu_lstm-cpu_memory;946.34;960.879
wlm_cpu_lstm-gpu_memory;0.0;0.0
wlm_cpu_lstm-latency;821.30;934.65 -> 14% increase
wlm_cpu_trans-cpu_memory;976.066;975.133
wlm_cpu_trans-gpu_memory;0.0;0.0
wlm_cpu_trans-latency;818.44;910.90 -> 11% increase
Run 3:
Benchmark;pytorch-2.6.0-cuda-12.6;pytorch-2.7.0-cuda-12.6
mnist-cpu_memory;1034.32;1030.47
mnist-gpu_memory;0.0;0.0
mnist-latency;66.29;92.53 ->39% increase
mnist_hogwild-cpu_memory;615.805;629.711
mnist_hogwild-gpu_memory;0.0;0.0
mnist_hogwild-latency;45.94;45.15
wlm_cpu_lstm-cpu_memory;959.113;951.457
wlm_cpu_lstm-gpu_memory;0.0;0.0
wlm_cpu_lstm-latency;832.84;977.05 -> 17% increase
wlm_cpu_trans-cpu_memory;953.859;980.09
wlm_cpu_trans-gpu_memory;0.0;0.0
wlm_cpu_trans-latency;822.26;918.15
Run 4:
Benchmark,pytorch-2.6.0-cuda-12.6,pytorch-2.7.0-cuda-12.6
mnist-cpu_memory,993.281,1113.65
mnist-gpu_memory,0.0,0.0
mnist-latency,70.27,90.28 -> 28% increase
mnist_hogwild-cpu_memory,614.816,629.562
mnist_hogwild-gpu_memory,0.0,0.0
mnist_hogwild-latency,44.73,46.65
wlm_cpu_lstm-cpu_memory,946.188,964.023
wlm_cpu_lstm-gpu_memory,0.0,0.0
wlm_cpu_lstm-latency,811.83,954.04. -> 17% increase
wlm_cpu_trans-cpu_memory,973.684,954.387
wlm_cpu_trans-gpu_memory,0.0,0.0
wlm_cpu_trans-latency,801.86,918.64. -> 14% increase
wlm_gpu_lstm-cpu_memory,482.219,488.016
wlm_gpu_lstm-gpu_memory,0.0,0.0
wlm_gpu_lstm-latency,3.18,3.30
wlm_gpu_trans-cpu_memory,482.23,488.312
wlm_gpu_trans-gpu_memory,0.0,0.0
wlm_gpu_trans-latency,3.23,3.24
Run 5: 2.5vs2.6 - No increase in latency
Benchmark;pytorch-2.5.1-cuda-12.4;pytorch-2.6.0-cuda-12.4
mnist-cpu_memory;1016.57;726.523
mnist-gpu_memory;0.0;0.0
mnist-latency;73.15;72.89
Run 6: 2.5vs2.6 - No increase in latency
Benchmark;pytorch-2.5.1-cuda-12.4;pytorch-2.6.0-cuda-12.4
mnist_hogwild-cpu_memory;596.617;591.684
mnist_hogwild-gpu_memory;0.0;0.0
mnist_hogwild-latency;47.48;46.37
wlm_cpu_lstm-cpu_memory;935.559;926.969
wlm_cpu_lstm-gpu_memory;0.0;0.0
wlm_cpu_lstm-latency;881.19;831.11
wlm_cpu_trans-cpu_memory;946.93;956.215
wlm_cpu_trans-gpu_memory;0.0;0.0
wlm_cpu_trans-latency;815.29;838.54
```
A100 (please note A100 cpu results are not reliable):
```
Run 1:
Benchmark;pytorch-2.6.0-cuda-12.6;pytorch-2.7.0-cuda-12.6
mnist-cpu_memory;1201.49;1262.79
mnist-gpu_memory;1093.0;1093.0
mnist-latency;38.46;37.13
mnist_hogwild-cpu_memory;613.566;631.934
mnist_hogwild-gpu_memory;4.0;4.0
mnist_hogwild-latency;600.82;567.12
wlm_cpu_lstm-cpu_memory;920.457;880.457
wlm_cpu_lstm-gpu_memory;4.0;4.0
wlm_cpu_lstm-latency;888.66;1007.18
wlm_cpu_trans-cpu_memory;927.922;886.551
wlm_cpu_trans-gpu_memory;4.0;4.0
wlm_cpu_trans-latency;938.17;1078.43 -> 16% Increase cpu latency
wlm_gpu_lstm-cpu_memory;1016.54;1044.67
wlm_gpu_lstm-gpu_memory;903.0;903.0
wlm_gpu_lstm-latency;52.99;52.87
wlm_gpu_trans-cpu_memory;1029.73;1100.92
wlm_gpu_trans-gpu_memory;911.0;911.0
wlm_gpu_trans-latency;55.06;55.15
Run 2:
Benchmark;pytorch-2.6.0-cuda-12.6;pytorch-2.7.0-cuda-12.6
mnist-cpu_memory;1201.49;1262.79
mnist-gpu_memory;1093.0;1093.0
mnist-latency;38.46;37.13
mnist_hogwild-cpu_memory;613.566;631.934
mnist_hogwild-gpu_memory;4.0;4.0
mnist_hogwild-latency;600.82;567.12
wlm_cpu_lstm-cpu_memory;920.457;880.457
wlm_cpu_lstm-gpu_memory;4.0;4.0
wlm_cpu_lstm-latency;888.66;1007.18
wlm_cpu_trans-cpu_memory;927.922;886.551
wlm_cpu_trans-gpu_memory;4.0;4.0
wlm_cpu_trans-latency;938.17;1078.43. ->14% Increase cpu latency
wlm_gpu_lstm-cpu_memory;1016.54;1044.67
wlm_gpu_lstm-gpu_memory;903.0;903.0
wlm_gpu_lstm-latency;52.99;52.87
wlm_gpu_trans-cpu_memory;1029.73;1100.92
wlm_gpu_trans-gpu_memory;911.0;911.0
wlm_gpu_trans-latency;55.06;55.15
```
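For reference, the "% increase" annotations above are simple relative deltas; a quick way to reproduce them (numbers taken from the Run 1 mnist-latency row of the C5.24xlarge table):
```python
def pct_increase(old: float, new: float) -> float:
    # Relative latency increase in percent.
    return (new - old) / old * 100

print(f"{pct_increase(74.46, 93.13):.0f}%")  # ~25% for mnist-latency in Run 1
```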
cc @msaroufim @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @frank-wei
| true
|
2,986,031,995
|
[inductor] Triton generated kernel int -> float8 fails
|
IvanKobzarev
|
open
|
[
"triaged",
"oncall: pt2",
"module: inductor",
"module: float8"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
```
import torch

def fn(x):
    return x.to(torch.float8_e5m2)

x = torch.ones(16, dtype=torch.int, device="cuda")
torch.compile(fn)(x)
```
### Error logs
```
ERROR: Triton compilation failed: triton_poi_fused__to_copy_0
def triton_poi_fused__to_copy_0(in_ptr0, out_ptr0, xnumel, XBLOCK : tl.constexpr):
xnumel = 16
xoffset = tl.program_id(0) * XBLOCK
xindex = xoffset + tl.arange(0, XBLOCK)[:]
xmask = xindex < xnumel
x0 = xindex
tmp0 = tl.load(in_ptr0 + (x0), xmask)
tmp1 = tmp0.to(tl.float8e5)
tl.store(out_ptr0 + (x0), tmp1, xmask)
metadata: {'signature': {'in_ptr0': '*i32', 'out_ptr0': '*fp8e5', 'xnumel': 'i32'}, 'device': 0, 'constants': {'XBLOCK': 16}, 'configs': [AttrsDescriptor(divisible_by_16=(0, 1, 2), equal_to_1=())], 'device_type': 'cuda', 'num_warps': 1, 'num_stages': 1, 'debug': True, 'cc': 90}
Traceback (most recent call last):
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/triton/language/core.py", line 35, in wrapper
return fn(*args, **kwargs)
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/triton/language/core.py", line 993, in to
return semantic.cast(self, dtype, _builder, fp_downcast_rounding)
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/triton/language/semantic.py", line 841, in cast
assert False, f'cannot cast {input} to {dst_ty}'
AssertionError: cannot cast int32[constexpr[16]] to <[16], fp8e5>
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/data/users/ivankobzarev/a/pytorch/torch/_inductor/runtime/triton_heuristics.py", line 615, in _precompile_config
binary = triton.compile(*compile_args, **compile_kwargs)
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/triton/compiler/compiler.py", line 276, in compile
module = src.make_ir(options, codegen_fns, context)
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/triton/compiler/compiler.py", line 113, in make_ir
return ast_to_ttir(self.fn, self, context=context, options=options, codegen_fns=codegen_fns)
triton.compiler.errors.CompilationError: at 8:11:
def triton_poi_fused__to_copy_0(in_ptr0, out_ptr0, xnumel, XBLOCK : tl.constexpr):
xnumel = 16
xoffset = tl.program_id(0) * XBLOCK
xindex = xoffset + tl.arange(0, XBLOCK)[:]
xmask = xindex < xnumel
x0 = xindex
tmp0 = tl.load(in_ptr0 + (x0), xmask)
tmp1 = tmp0.to(tl.float8e5)
^
Traceback (most recent call last):
File "/home/ivankobzarev/task_saved_hooks/int64_to_fp8.py", line 7, in <module>
torch.compile(fn)(x)
File "/data/users/ivankobzarev/a/pytorch/torch/_dynamo/eval_frame.py", line 671, in _fn
raise e.remove_dynamo_frames() from None # see TORCHDYNAMO_VERBOSE=1
File "/data/users/ivankobzarev/a/pytorch/torch/_inductor/compile_fx.py", line 768, in _compile_fx_inner
raise InductorError(e, currentframe()).with_traceback(
File "/data/users/ivankobzarev/a/pytorch/torch/_inductor/compile_fx.py", line 753, in _compile_fx_inner
mb_compiled_graph = fx_codegen_and_compile(
File "/data/users/ivankobzarev/a/pytorch/torch/_inductor/compile_fx.py", line 1357, in fx_codegen_and_compile
return scheme.codegen_and_compile(gm, example_inputs, inputs_to_check, graph_kwargs)
File "/data/users/ivankobzarev/a/pytorch/torch/_inductor/compile_fx.py", line 1246, in codegen_and_compile
compiled_module = graph.compile_to_module()
File "/data/users/ivankobzarev/a/pytorch/torch/_inductor/graph.py", line 2201, in compile_to_module
return self._compile_to_module()
File "/data/users/ivankobzarev/a/pytorch/torch/_inductor/graph.py", line 2248, in _compile_to_module
mod = PyCodeCache.load_by_key_path(
File "/data/users/ivankobzarev/a/pytorch/torch/_inductor/codecache.py", line 2889, in load_by_key_path
mod = _reload_python_module(key, path, set_sys_modules=in_toplevel)
File "/data/users/ivankobzarev/a/pytorch/torch/_inductor/runtime/compile_tasks.py", line 31, in _reload_python_module
exec(code, mod.__dict__, mod.__dict__)
File "/tmp/torchinductor_ivankobzarev/qa/cqankdumrn6oiuelxhkgtqmvmzzmjbh67l5gqbiyfjjifc2qjjtw.py", line 42, in <module>
triton_poi_fused__to_copy_0 = async_compile.triton('triton_poi_fused__to_copy_0', '''
File "/data/users/ivankobzarev/a/pytorch/torch/_inductor/async_compile.py", line 365, in triton
kernel.precompile(
File "/data/users/ivankobzarev/a/pytorch/torch/_inductor/runtime/triton_heuristics.py", line 320, in precompile
self._precompile_worker()
File "/data/users/ivankobzarev/a/pytorch/torch/_inductor/runtime/triton_heuristics.py", line 342, in _precompile_worker
compile_results.append(self._precompile_config(c))
File "/data/users/ivankobzarev/a/pytorch/torch/_inductor/runtime/triton_heuristics.py", line 615, in _precompile_config
binary = triton.compile(*compile_args, **compile_kwargs)
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/triton/compiler/compiler.py", line 276, in compile
module = src.make_ir(options, codegen_fns, context)
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/triton/compiler/compiler.py", line 113, in make_ir
return ast_to_ttir(self.fn, self, context=context, options=options, codegen_fns=codegen_fns)
torch._inductor.exc.InductorError: CompilationError: at 8:11:
def triton_poi_fused__to_copy_0(in_ptr0, out_ptr0, xnumel, XBLOCK : tl.constexpr):
xnumel = 16
xoffset = tl.program_id(0) * XBLOCK
xindex = xoffset + tl.arange(0, XBLOCK)[:]
xmask = xindex < xnumel
x0 = xindex
tmp0 = tl.load(in_ptr0 + (x0), xmask)
tmp1 = tmp0.to(tl.float8e5)
^
Set TORCHDYNAMO_VERBOSE=1 for the internal stack trace (please do this especially if you're reporting a bug to PyTorch). For even more developer context, set TORCH_LOGS="+dynamo"
```
### Versions
main Apr 10, 2025
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @aakhundov @yanbing-j @vkuzo @albanD
| true
|
2,986,023,152
|
[ONNX] Simple torch.nn.Identity onnx export with dynamo=True does not load
|
knowicki-nvidia
|
open
|
[
"module: onnx",
"triaged",
"onnx-triaged"
] | 15
|
NONE
|
### 🐛 Describe the bug
I have a simple test where I export a simple model to ONNX using Dynamo and then load it. The test works on 2.6, but it stopped working on `pytorch:25.03-py3`.
```python
import torch
# creating onnx with dynamo
model = torch.nn.Identity()
x = torch.randn(1, 1)
exported_model = torch.onnx.export(model, (x,), dynamo=True)
exported_model.save("model.onnx")
# Trying to load
import onnxruntime as ort
ort.InferenceSession("model.onnx")
# Traceback (most recent call last):
# File "<stdin>", line 1, in <module>
# File ".venv/lib/python3.10/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 465, in __init__
# self._create_inference_session(providers, provider_options, disabled_optimizers)
# File ".venv/lib/python3.10/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 526, in _create_inference_session
# sess = C.InferenceSession(session_options, self._model_path, True, self._read_config_from_model)
# onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : Load model from test.pt failed:/onnxruntime_src/onnxruntime/core/graph/model.cc:169 onnxruntime::Model::Model(onnx::ModelProto&&, const onnxruntime::PathString&, const onnxruntime::IOnnxRuntimeOpSchemaRegistryList*, const onnxruntime::logging::Logger&, const onnxruntime::ModelOptions&) Missing opset in the model. All ModelProtos MUST have at least one entry that specifies which version of the ONNX OperatorSet is being imported.
# or ...
import onnx
onnx_model = onnx.load("model.onnx")
onnx.checker.check_model(onnx_model)
# Traceback (most recent call last):
# File "<stdin>", line 1, in <module>
# File ".venv/lib/python3.10/site-packages/onnx/checker.py", line 179, in check_model
# C.check_model(
onnx.onnx_cpp2py_export.checker.ValidationError: model with IR version >= 3 must specify opset_import for ONNX
```
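A hedged workaround sketch, assuming the only problem with the exported file is the missing `opset_import` entry (the opset version 18 below is a guess, not taken from the exporter):
```python
import onnx
from onnx import helper

m = onnx.load("model.onnx")
if not m.opset_import:
    # Add a default-domain opset entry so the checker / onnxruntime can load the model.
    m.opset_import.append(helper.make_opsetid("", 18))
onnx.save(m, "model_patched.onnx")
onnx.checker.check_model(onnx.load("model_patched.onnx"))
```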
### Versions
Collecting environment information...
PyTorch version: 2.7.0.dev20250312+cu128
Also tested on 2.6, where it worked (pip install torch==2.6)
Docker Images:
nvcr.io/nvidia/pytorch:25.02-py3(torch==2.7.0a0+ecf3bae40a.nv25.02) - works
nvcr.io/nvidia/pytorch:25.03-py3(torch==2.7.0a0+7c8ec84dab.nv25.03) - fails
Is debug build: False
CUDA used to build PyTorch: 12.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.10.12 (main, Nov 6 2024, 20:22:13) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.8.0-49-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA RTX
Nvidia driver version: 550.120
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.8.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Vendor ID: GenuineIntel
Model name: 13th Gen Intel(R) Core(TM) i9-13900K
CPU family: 6
Model: 183
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 1
Stepping: 1
CPU max MHz: 5800.0000
CPU min MHz: 800.0000
BogoMIPS: 5990.40
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq tme rdpid movdiri movdir64b fsrm md_clear serialize pconfig arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 896 KiB (24 instances)
L1i cache: 1.3 MiB (24 instances)
L2 cache: 32 MiB (12 instances)
L3 cache: 36 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-31
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.2.4
[pip3] nvidia-cublas-cu12==12.8.3.14
[pip3] nvidia-cuda-cupti-cu12==12.8.57
[pip3] nvidia-cuda-nvrtc-cu12==12.8.61
[pip3] nvidia-cuda-runtime-cu12==12.8.57
[pip3] nvidia-cudnn-cu12==9.7.1.26
[pip3] nvidia-cufft-cu12==11.3.3.41
[pip3] nvidia-curand-cu12==10.3.9.55
[pip3] nvidia-cusolver-cu12==11.7.2.55
[pip3] nvidia-cusparse-cu12==12.5.7.53
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.25.1
[pip3] nvidia-nvjitlink-cu12==12.8.61
[pip3] nvidia-nvtx-cu12==12.8.55
[pip3] onnx==1.17.0
[pip3] onnx_graphsurgeon==0.5.6
[pip3] onnxconverter-common==1.13.0
[pip3] onnxruntime-gpu==1.20.2
[pip3] onnxscript==0.2.4
[pip3] onnxsim==0.4.36
[pip3] pytorch-triton==3.3.0+git96316ce5
[pip3] torch==2.7.0.dev20250312+cu128
[pip3] torch_tensorrt==2.6.0
[pip3] triton==3.2.0
[pip3] triton-model-navigator==0.14.0
[pip3] tritonclient==2.56.0
[conda] Could not collect
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4
| true
|
2,985,973,930
|
[Dynamo][Export] Untraceable issues when exporting the Stable Diffusion 3.5 model
|
YufengShi-dudu
|
open
|
[
"oncall: pt2",
"module: dynamo",
"export-triaged",
"oncall: export"
] | 6
|
NONE
|
### 🐛 Describe the bug
We encountered some untraceable issues when exporting the stable diffusion model.
The package being used is [diffusers library](https://github.com/huggingface/diffusers/tree/main)
The model we are trying to export comes from [StableDiffusion3Pipeline](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion_3/pipeline_stable_diffusion_3.py#L147)
The StableDiffusion3Pipeline is not a subclass of `torch.nn.Module`, and its `__call__` method behaves like the `forward` we normally see.
To export the StableDiffusion3Pipeline, we first make it a subclass of `torch.nn.Module`, and then wrap it with some minor changes.
### Issues
We just list two places that couldn't be traced here.
1. https://github.com/huggingface/transformers/blob/main/src/transformers/tokenization_utils_base.py#L1102
```
File "/Users/yufshi01/Projects/ml/diffusion_executorch/executorch/env/lib/python3.10/site-packages/torch/_dynamo/variables/builtin.py", line 1253, in call_str
unimplemented(f"{type(arg.value)} has a C/C++ based str method")
File "/Users/yufshi01/Projects/ml/diffusion_executorch/executorch/env/lib/python3.10/site-packages/torch/_dynamo/exc.py", line 438, in unimplemented
raise Unsupported(msg, case_name=case_name)
torch._dynamo.exc.Unsupported: <class 'tokenizers.AddedToken'> has a C/C++ based str method
......
File "/Users/yufshi01/Projects/ml/diffusion_executorch/executorch/env/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 1098, in __getattr__
return str(value) if key != "additional_special_tokens" else [str(tok) for tok in value]
```
2. https://github.com/huggingface/transformers/blob/main/src/transformers/tokenization_utils.py#L360
```
Explanation: Dynamo does not know how to trace the builtin `unicodedata.category.` This function is either a Python builtin (e.g. _warnings.warn) or a third-party C/C++ Python extension (perhaps created with pybind).
......
File "/Users/yufshi01/Projects/ml/diffusion_executorch/executorch/env/lib/python3.10/site-packages/transformers/tokenization_utils.py", line 361, in _is_control
cat = unicodedata.category(char)
```
### Reproduce the issue
You can reproduce the export issue with the following script:
```
import torch
import sys
from pathlib import Path

sys.path.insert(0, str(Path(__file__).parent / "diffusers/src"))

from diffusers import StableDiffusion3Pipeline


class StableDiffusion3PipelineWrapper(StableDiffusion3Pipeline):
    # Please add the StableDiffusion3Pipeline class in
    # diffusers/src/diffusers/pipelines/stable_diffusion_3/pipeline_stable_diffusion_3.py
    # as a subclass of torch.nn.Module,
    # e.g. class StableDiffusion3Pipeline(torch.nn.Module, DiffusionPipeline, ...)

    # Skip the check_inputs in forward method
    def check_inputs(self, *args, **kwargs):
        return

    def forward(self, prompt):
        self.__call__(prompt)


if __name__ == "__main__":
    pipeline_class = StableDiffusion3PipelineWrapper
    repo_id = "stabilityai/stable-diffusion-3.5-medium"
    pipeline = pipeline_class.from_pretrained(repo_id, torch_dtype=torch.float16)
    pipeline_input = {"prompt": "A photo of a cat"}
    model = pipeline
    example_inputs = (pipeline_input["prompt"],)
    exported_program = torch.export.export_for_training(
        model, example_inputs, strict=True
    )
```
Please organize the repos as follows:
Projects/
|- diffusers # You can get it from https://github.com/huggingface/diffusers/tree/main
|- script.py
### Package version
torch 2.7.0.dev20250311
transformers 4.47.1
huggingface-hub 0.29.3
It seems that the StableDiffusion3Pipeline, and especially the tokenizers, is not trace-friendly. But from our point of view, we want to use the open source model and export it directly with no or only minor modifications.
Will it be possible to add support for exporting this model with Dynamo?
### Versions
Collecting environment information...
PyTorch version: 2.7.0.dev20250311
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 15.2 (arm64)
GCC version: Could not collect
Clang version: 19.1.7
CMake version: version 3.31.6
Libc version: N/A
Python version: 3.10.16 (main, Dec 3 2024, 17:27:57) [Clang 16.0.0 (clang-1600.0.26.4)] (64-bit runtime)
Python platform: macOS-15.2-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M2 Pro
Versions of relevant libraries:
[pip3] executorch==0.6.0a0+8cd1b93
[pip3] flake8==6.1.0
[pip3] flake8-breakpoint==1.1.0
[pip3] flake8-bugbear==24.4.26
[pip3] flake8-comprehensions==3.14.0
[pip3] flake8-plugin-utils==1.3.3
[pip3] flake8-pyi==23.5.0
[pip3] mypy==1.14.1
[pip3] mypy-extensions==1.0.0
[pip3] numpy==2.2.4
[pip3] torch==2.7.0.dev20250311
[pip3] torchao==0.10.0+git7d879462
[pip3] torchaudio==2.6.0.dev20250311
[pip3] torchsr==1.0.4
[pip3] torchvision==0.22.0.dev20250311
[conda] Could not collect
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4
| true
|
2,985,965,664
|
update user defined triton kernel table to include strict vs non-strict difference
|
zou3519
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamo",
"module: user triton"
] | 0
|
CONTRIBUTOR
|
https://pytorch.org/tutorials/recipes/torch_compile_user_defined_triton_kernel_tutorial.html#composability
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames @oulgen @aakhundov @davidberard98
| true
|
2,985,882,746
|
Propagate callable parameter types using ParamSpec (#142306)
|
tommyadams5
|
closed
|
[
"oncall: distributed",
"module: typing",
"triaged",
"open source",
"better-engineering",
"Merged",
"ciflow/trunk",
"release notes: python_frontend",
"fx",
"module: inductor",
"module: dynamo",
"ciflow/inductor",
"suppress-bc-linter",
"release notes: distributed (torchelastic)"
] | 8
|
CONTRIBUTOR
|
Partially addresses #142306
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @ezyang @malfet @xuzhao9 @gramster @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,985,795,621
|
Revert two recent prologue prs
|
eellison
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ci-no-td"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151013
These were landed in a bit of a rush to try to make the release. Reverting; I will then re-land with https://github.com/pytorch/pytorch/pull/151009 applied and do a full benchmark run with max-autotune.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
Differential Revision: [D72791103](https://our.internmc.facebook.com/intern/diff/D72791103)
| true
|
2,985,787,373
|
[Inductor UT] Generalize device-bias code in `test_flex_decoding.py`
|
anmyachev
|
closed
|
[
"triaged",
"open source",
"topic: not user facing",
"module: inductor",
"ciflow/xpu"
] | 3
|
COLLABORATOR
|
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
Part of https://github.com/pytorch/pytorch/pull/143553
@etaf @davidberard98 @hoshibara could you take a look?
| true
|
2,985,777,346
|
[Inductor XPU][Quantization] NotImplementedError: 'onednn::qconv_pointwise'
|
etaf
|
closed
|
[
"triaged",
"module: xpu"
] | 2
|
COLLABORATOR
|
### 🐛 Describe the bug
Quantized conv no longer works on XPU after #150751 landed. That PR renamed `qconv2d_pointwise` to `qconv_pointwise`.
To reproduce:
```
python test/inductor/test_mkldnn_pattern_matcher.py TestPatternMatcher.test_qconv2d_xpu
```
@ZhiweiYan-96 please fix this issue.
### Versions
pytorch commit: a6933a1c423261de4e0c47387b6b83869f869aa1
cc @gujinghui @EikanWang @fengyuan14 @guangyey
| true
|
2,985,690,299
|
[torch.export] Exported LSTM cannot be move on CUDA device
|
Eldalie
|
closed
|
[
"oncall: pt2",
"oncall: export"
] | 3
|
NONE
|
### 🐛 Describe the bug
The following script raises:
```
RuntimeError: Input and hidden tensors are not at the same device, found input tensor at cuda:0 and hidden tensor at cpu
```
It is expected to work without raising an exception.
The script exports an LSTM using torch.export, then tries to build a module from the exported model and move this module to the GPU. However, some tensors inside the LSTM (e.g., zeros used for the hidden state) are not moved to the correct device, which causes the crash.
```python
import torch
from torch.export import Dim, export_for_training


class CustomLSTM(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = torch.nn.LSTM(input_size=9, hidden_size=128, num_layers=1, batch_first=True, bidirectional=True)

    def forward(self, inputs: torch.Tensor) -> torch.Tensor:
        lstm_out, (h_n, c_n) = self.lstm(inputs)
        return torch.cat([h_n[-2], h_n[-1]], dim=1)


batch = Dim("batch", min=2, max=None)
dynamic_shapes = ({0: batch},)
exported = export_for_training(CustomLSTM(), args=(torch.randn((128, 1, 9)),), strict=True, dynamic_shapes=dynamic_shapes)
a = exported.module()
b = a.to(torch.device("cuda:0"))
b.forward(torch.randn((128, 1, 9), device=torch.device("cuda:0")))
```
### Versions
Collecting environment information...
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.2 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.39
Python version: 3.12.3 (main, Feb 4 2025, 14:48:35) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-6.8.0-57-generic-x86_64-with-glibc2.39
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4070 Laptop GPU
Nvidia driver version: 565.57.01
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.7
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Vendor ID: AuthenticAMD
Model name: AMD Ryzen 7 7745HX with Radeon Graphics
CPU family: 25
Model: 97
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 1
Stepping: 2
CPU(s) scaling MHz: 75%
CPU max MHz: 5151,0000
CPU min MHz: 400,0000
BogoMIPS: 7186,42
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good amd_lbr_v2 nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba perfmon_v2 ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local user_shstk avx512_bf16 clzero irperf xsaveerptr rdpru wbnoinvd cppc amd_ibpb_ret arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif x2avic v_spec_ctrl vnmi avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid overflow_recov succor smca fsrm flush_l1d
Virtualization: AMD-V
L1d cache: 256 KiB (8 instances)
L1i cache: 256 KiB (8 instances)
L2 cache: 8 MiB (8 instances)
L3 cache: 32 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-15
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Vulnerable: Safe RET, no microcode
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy==1.15.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==2.1.3
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] nvtx==0.2.11
[pip3] pynvjitlink-cu12==0.5.2
[pip3] torch==2.6.0
[pip3] torcheval==0.0.7
[pip3] torchinfo==1.8.0
[pip3] torchmetrics==1.6.3
[pip3] torchvision==0.21.0
[pip3] torchviz==0.0.3
[pip3] triton==3.2.0
[conda] Could not collect
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4
| true
|
2,985,664,562
|
Fix index broadcast
|
eellison
|
open
|
[
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151009
* #150697
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,985,538,698
|
[dynamo] Deprecate enable_cpp_framelocals_guard_eval config variable - default: True
|
BartlomiejStemborowski
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo"
] | 9
|
CONTRIBUTOR
|
[dynamo] Deprecate enable_cpp_framelocals_guard_eval config variable - default: True
Reading the feature-enabling param `enable_cpp_framelocals_guard_eval` at the CPP level is time consuming and slows down dynamo, as it is done every time the function using this param is called. Reading the value only once at init isn't an option either, as it would prevent modifying this param at runtime. Since this feature has been enabled by default for some time and doesn't cause known issues, this commit deprecates the `enable_cpp_framelocals_guard_eval` configuration param and hardcodes its value to true.
Local microbenchmark dynamo_guard_eval.py:
- 931.9 us -> 538.9 us (3.10)
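For context, a minimal sketch of what is being deprecated (assuming the flag lives under `torch._dynamo.config`, like other dynamo feature flags); after this change the C++ guard path behaves as if the flag were always true:
```python
import torch._dynamo.config as dynamo_config

# Illustrative only: the flag used to be readable/writable at runtime like any
# other dynamo config knob; getattr() is used because the attribute is
# deprecated and may be removed entirely in later releases.
print(getattr(dynamo_config, "enable_cpp_framelocals_guard_eval", "deprecated/removed"))
```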
@williamwen42 @jansel @anijain2305
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,985,535,200
|
[Openreg][PrivateUse1] Enable CI for openreg
|
FFFrog
|
closed
|
[
"open source",
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"ci-no-td"
] | 28
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151091
* __->__ #151007
Changes:
- move test_openreg.py from test/cpp_extensions/open_registration_extension/ to test/
- update README.md for openreg
- enable CI
| true
|
2,985,275,083
|
[OpenReg][PrivateUse1] Refactoring the csrc files of pytorch_openreg
|
FFFrog
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"merging"
] | 16
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151091
* #151007
* __->__ #151005
As the title states.
**Changes:**
- Remove unnecessary header file
- Remove unnecessary registry logic for PrivateUse1HooksRegistry, such as TORCH_DECLARE_REGISTRY, C10_DEFINE_REGISTRY, etc.
- Use static + global variables for initialization instead of call_once
**Next Step:**
Enable test_openreg.py in CI/CD to guard the quality of PrivateUse1
| true
|
2,985,274,739
|
[Openreg][PrivateUse1] Refactor csrc files of Pytorch_openreg
|
FFFrog
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 6
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151091
* #151007
* #151005
* __->__ #151004
* #151000
I want to format and refactor the csrc file of pytorch_openreg. To make the code review clearer and easier to understand, I divide the code refactoring into two parts:
- Part 1: Code formatting
- Part 2: Code refactoring and optimization (Next PR)
| true
|
2,985,090,157
|
DISABLED test_parity__foreach_acos_fastpath_inplace_cuda_float32 (__main__.TestForeachCUDA)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"skipped",
"module: mta"
] | 4
|
NONE
|
Platforms: linux, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_parity__foreach_acos_fastpath_inplace_cuda_float32&suite=TestForeachCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40300344818).
Over the past 3 hours, it has been determined flaky in 9 workflow(s) with 18 failures and 9 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_parity__foreach_acos_fastpath_inplace_cuda_float32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/test_foreach.py", line 228, in test_parity
actual = func(
File "/var/lib/jenkins/workspace/test/test_foreach.py", line 91, in __call__
assert mta_called == (expect_fastpath and (not zero_size)), (
AssertionError: mta_called=False, expect_fastpath=True, zero_size=False, self.func.__name__='_foreach_acos_', keys=('aten::_foreach_acos_', 'Unrecognized', 'cudaLaunchKernel', 'cudaDeviceSynchronize')
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1159, in test_wrapper
return test(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/mock.py", line 1833, in _inner
return f(*args, **kw)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1975, in wrap_fn
return fn(self, *args, **kwargs)
File "/var/lib/jenkins/workspace/test/test_foreach.py", line 235, in test_parity
with self.assertRaises(type(e)):
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 226, in __exit__
self._raiseFailure("{} not raised".format(exc_name))
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 163, in _raiseFailure
raise self.test_case.failureException(msg)
AssertionError: AssertionError not raised
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3154, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3154, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 454, in instantiated_test
result = test(self, **param_kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1612, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1171, in test_wrapper
raise e_tracked from e
Exception: Caused by sample input at index 0: SampleInput(input=TensorList[Tensor[size=(20, 20), device="cuda:0", dtype=torch.float32], Tensor[size=(19, 19), device="cuda:0", dtype=torch.float32], Tensor[size=(18, 18), device="cuda:0", dtype=torch.float32], Tensor[size=(17, 17), device="cuda:0", dtype=torch.float32], Tensor[size=(16, 16), device="cuda:0", dtype=torch.float32], Tensor[size=(15, 15), device="cuda:0", dtype=torch.float32], Tensor[size=(14, 14), device="cuda:0", dtype=torch.float32], Tensor[size=(13, 13), device="cuda:0", dtype=torch.float32], Tensor[size=(12, 12), device="cuda:0", dtype=torch.float32], Tensor[size=(11, 11), device="cuda:0", dtype=torch.float32], Tensor[size=(10, 10), device="cuda:0", dtype=torch.float32], Tensor[size=(9, 9), device="cuda:0", dtype=torch.float32], Tensor[size=(8, 8), device="cuda:0", dtype=torch.float32], Tensor[size=(7, 7), device="cuda:0", dtype=torch.float32], Tensor[size=(6, 6), device="cuda:0", dtype=torch.float32], Tensor[size=(5, 5), device="cuda:0", dtype=torch.float32], Tensor[size=(4, 4), device="cuda:0", dtype=torch.float32], Tensor[size=(3, 3), device="cuda:0", dtype=torch.float32], Tensor[size=(2, 2), device="cuda:0", dtype=torch.float32], Tensor[size=(1, 1), device="cuda:0", dtype=torch.float32]], args=(), kwargs={}, broadcasts_input=False, name='')
To execute this test, run the following from the base repo dir:
PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=0 python test/test_foreach.py TestForeachCUDA.test_parity__foreach_acos_fastpath_inplace_cuda_float32
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_foreach.py`
cc @clee2000 @crcrpar @mcarilli @janeyx99
| true
|
2,985,003,477
|
NCCL Upgrade to 2.26.2.post1 for CUDA 12 for blackwell support
|
tinglvv
|
closed
|
[
"triaged",
"open source",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 13
|
COLLABORATOR
|
Update NCCL for CUDA 12.8 to 2.26.2.post1 for Blackwell support. It's the same 2.26.2 release plus sm_100 and sm_120 support.
Updating for CUDA 12.6 as well since NCCL download now uses one common version read from .ci/docker/ci_commit_pins/nccl-cu12.txt.
Need to upload build to https://download.pytorch.org/whl/nightly/nvidia-nccl-cu12/
cc @ptrblck @atalman @malfet @eqy @nWEIdia
| true
|
2,984,930,052
|
[PT2] Model Functional Regression due to _insert_aten_to_metadata_assert_pass
|
leslie-fang-intel
|
closed
|
[
"oncall: pt2",
"export-triaged",
"oncall: export"
] | 4
|
COLLABORATOR
|
### 🐛 Describe the bug
Met a functional regression; after searching for the guilty commit, we found it's due to https://github.com/pytorch/pytorch/pull/149235
Here is a mini-repro
```
import torch
from torch.export import export_for_training
class Model(torch.nn.Module):
def __init__(self):
super(Model, self).__init__()
self.linear = torch.nn.Linear(3, 3)
def forward(self, x):
x = x.to(torch.float32)
x = self.linear(x)
x = torch.nn.functional.tanh(x)
return x
if __name__ == "__main__":
with torch.no_grad():
m = Model().eval()
shape = (2, 3)
x = torch.randn(*shape).to(torch.bfloat16)
x2 = torch.randn(*shape)
exported_model = export_for_training(
m,
(x,),
).module()
print("exported_model is: {}".format(exported_model), flush=True)
cfn = torch.compile(exported_model)
cfn(x2)
```
Generally, we will use this method to optimize a model after PT2E Quantization, invoking `torch.compile` after `export_for_training`. cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 @pianpwk @jerryzh168 @Valentine233
### Versions
```
[pip3] flake8==6.1.0
[pip3] flake8-bugbear==23.3.23
[pip3] flake8-comprehensions==3.15.0
[pip3] flake8-executable==2.1.3
[pip3] flake8-logging-format==0.9.0
[pip3] flake8-pyi==23.3.1
[pip3] flake8-simplify==0.19.3
[pip3] mypy==1.14.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] onnx==1.17.0
[pip3] optree==0.13.0
[pip3] torch==2.8.0a0+git2e5d95a
[pip3] torchao==0.10.0+gitab3792e3
[pip3] torchmetrics==1.0.3
[pip3] torchrec==1.1.0a0+8211be7
[pip3] torchtune==0.6.0.dev20250124+cpu
[pip3] torchvision==0.22.0a0+fab1188
[pip3] triton==3.2.0
[conda] mkl 2025.0.0 h901ac74_941 conda-forge
[conda] mkl-include 2025.0.0 hf2ce2f3_941 conda-forge
[conda] numpy 1.26.4 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] optree 0.13.0 pypi_0 pypi
[conda] torch 2.8.0a0+git2e5d95a dev_0 <develop>
[conda] torchao 0.10.0+gitab3792e3 dev_0 <develop>
[conda] torchfix 0.4.0 pypi_0 pypi
[conda] torchmetrics 1.0.3 pypi_0 pypi
[conda] torchrec 1.1.0a0+8211be7 dev_0 <develop>
[conda] torchtune 0.6.0.dev20250124+cpu pypi_0 pypi
[conda] torchvision 0.22.0a0+fab1188 dev_0 <develop>
[conda] triton 3.2.0 pypi_0 pypi
```
| true
|
2,984,896,184
|
[Openreg][PrivateUse1] Improve openreg module capabilities
|
FFFrog
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 8
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151091
* #151007
* #151005
* #151004
* __->__ #151000
----
- Add more functionalities for openreg in openreg module
- Remove related functionalities from test_cpp_extensions_open_device_registration.py
| true
|
2,984,894,723
|
[XPU] skip a subprocess UT for Windows
|
LuFinch
|
closed
|
[
"open source",
"Merged",
"module: testing",
"ciflow/trunk",
"topic: not user facing",
"ciflow/xpu",
"module: xpu"
] | 25
|
CONTRIBUTOR
|
This case creates a subprocess within a subprocess. On Windows it can't load the function in this scenario, hence I have to skip it
```
File "C:\ProgramData\miniforge3\envs\lfq\lib\multiprocessing\spawn.py", line 116, in spawn_main
exitcode = _main(fd, parent_sentinel)
File "C:\ProgramData\miniforge3\envs\lfq\lib\multiprocessing\spawn.py", line 126, in _main
self = reduction.pickle.load(from_parent)
AttributeError: Can't get attribute 'run_model' on <module '__main__' (built-in)>
Traceback (most recent call last):
File "<string>", line 25, in <module>
File "<string>", line 16, in test_multi_process
AssertionError
```
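For reference, a simplified sketch (not the actual UT) of the spawn behaviour behind the error above: on Windows the child process re-imports the parent's `__main__`, so the target function must be resolvable by name there; when the parent itself was spawned from in-memory source, the lookup of `run_model` fails with the `AttributeError` shown.
```python
import multiprocessing as mp

def run_model(rank):
    # With the spawn start method (the only one on Windows), the child must be
    # able to import this function from the parent's __main__ module; if the
    # parent was launched from in-memory source, this lookup fails.
    print(f"rank {rank} running")

if __name__ == "__main__":
    mp.set_start_method("spawn", force=True)
    p = mp.Process(target=run_model, args=(0,))
    p.start()
    p.join()
```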
cc @gujinghui @EikanWang @fengyuan14 @guangyey
| true
|
2,984,871,857
|
Unsupported operand 118
|
radna0
|
open
|
[
"needs reproduction",
"module: serialization",
"triaged"
] | 1
|
NONE
|
This happens when doing `full_model = torch.load(model_path, map_location="cpu", weights_only=True)`
```
File "/home/kojoe/.local/lib/python3.10/site-packages/torch/serialization.py", line 1548, in load
raise pickle.UnpicklingError(_get_wo_message(str(e))) from None
_pickle.UnpicklingError: Weights only load failed. In PyTorch 2.6, we changed the default value of the `weights_only` argument in `torch.load` from `False` to `True`. Re-running `torch.load` with `weights_only` set to `False` will likely succeed, but it can result in arbitrary code execution. Do it only if you got the file from a trusted source.
Please file an issue with the following so that we can make `weights_only=True` compatible with your use case: WeightsUnpickler error: Unsupported operand 118
Check the documentation of torch.load to learn more about types accepted by default with weights_only https://pytorch.org/docs/stable/generated/torch.load.html.
```
cc @mruberry @mikaylagawarecki
| true
|
2,984,817,372
|
[OpenReg][PrivateUse1] add device context for OpenReg Module
|
FFFrog
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 5
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151091
* #151007
* #151005
* #151004
* #151000
* __->__ #150997
Add device context support for the OpenReg Module, which some tests depend on,
such as ``torch.serialization.default_restore_location``.
| true
|
2,984,782,985
|
[Intel GPU] Avoid using fp32 in sdp math path when benchmark performance.
|
jianyizh
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo"
] | 7
|
CONTRIBUTOR
|
sdp on xpu will fall back to the math path in some cases (i.e. training). In the dynamo benchmark, we prefer to use fp16 for better performance. Although `allow_fp16_bf16_reduction_math_sdp` is under backends.cuda, its implementation applies to all devices.
I didn't add an `if device == xpu` check here; I suppose cuda devices will not run into the math path anyway.
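For illustration, a rough sketch of what the benchmark change amounts to (device and shapes are placeholders; the toggle path is taken from this PR's description and recent releases):
```python
import torch
import torch.nn.functional as F

# The knob lives under torch.backends.cuda but, per this PR, its effect is not
# CUDA-specific: it also lets the math fallback keep fp16/bf16 accumulation.
torch.backends.cuda.allow_fp16_bf16_reduction_math_sdp(True)

q = k = v = torch.randn(1, 8, 128, 64, dtype=torch.float16, device="xpu")
out = F.scaled_dot_product_attention(q, k, v)  # may fall back to the math path on xpu
```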
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,984,781,163
|
`torch.nn.functional.ctc_loss` inconsistent implementation and docs
|
zeshengzong
|
closed
|
[
"module: nn",
"triaged"
] | 0
|
CONTRIBUTOR
|
### 📚 The doc issue
When fixing #150835, I found an inconsistency between the doc and a test of `torch.nn.functional.ctc_loss`.
The [doc](https://pytorch.org/docs/stable/generated/torch.nn.functional.ctc_loss.html) describes the `targets` param as one that `cannot be blank`

But there's a test case in `test_nn.py` that does pass an empty tensor as `targets` and computes a loss result.
https://github.com/pytorch/pytorch/blob/4273e5d15cfcb282b2795684874ea439d8620999/test/test_nn.py#L11413-L11421
So it's unclear whether `targets` is allowed to be blank or not.
### Suggest a potential alternative/fix
The doc description of the `targets` param may need to be updated
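A minimal sketch of the ambiguity, roughly mirroring the linked test (shapes chosen for illustration):
```python
import torch
import torch.nn.functional as F

# The docs say `targets` "cannot be blank", yet an empty targets tensor with a
# zero target length is accepted and produces a loss, as in the test_nn.py case.
log_probs = torch.randn(50, 1, 20).log_softmax(-1)   # (T, N, C)
targets = torch.tensor([], dtype=torch.long)          # blank targets
input_lengths = torch.tensor([50])
target_lengths = torch.tensor([0])
print(F.ctc_loss(log_probs, targets, input_lengths, target_lengths, reduction="none"))
```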
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki
| true
|
2,984,768,876
|
`torch.export` and `torch.compile` in torch 2.7 RC fails some cases that work with torch 2.6
|
ydshieh
|
closed
|
[
"triage review",
"module: regression",
"has workaround",
"oncall: pt2",
"module: dynamo",
"oncall: export"
] | 10
|
NONE
|
### 🐛 Describe the bug
Originally discussed [in transformers](https://github.com/huggingface/transformers/issues/32253#issuecomment-2784714535).
@tugsbayasgalan mentioned it might be a regression of torch 2.7.
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 @guangy10 and @anijain2305.
## To Reproduce
### 1. Install torch 2.7 RC:
> pip install torch==2.7.0 torchvision torchaudio --index-url https://download.pytorch.org/whl/test/cu126
### 2. Install transformers
> git clone https://github.com/huggingface/transformers.git && cd transformers && git fetch origin && git checkout ci_with_torch_2.7_commit_0ef339ff1b63bb03a388c79bfbebec9085e10564 && pip install -e .[torch,testing]
### 3. Running test regarding torch.compile: get the error shown at the end, i.e. `7.` (works if running with torch 2.6)
> git checkout bcd1da9580ba1c6c4db019a91b1de9d88966e1fa && RUN_SLOW=1 python3 -m pytest -v tests/models/gemma3/test_modeling_gemma3.py::Gemma3Vision2TextModelTest::test_generate_compile_model_forward
### 4. Running test with a change in `transformers`: it works
> git checkout 95eb065772215cab276678c45daf933599cfd337&& RUN_SLOW=1 python3 -m pytest -v tests/models/gemma3/test_modeling_gemma3.py::Gemma3Vision2TextModelTest::test_generate_compile_model_forward
The commit changes is [here](https://github.com/huggingface/transformers/commit/95eb065772215cab276678c45daf933599cfd337)
### 5. another test with `torch.export` fails with torch 2.7 RC (it works if using `strict=False` in `torch.export`)
> git checkout bcd1da9580ba1c6c4db019a91b1de9d88966e1fa && RUN_SLOW=1 python3 -m pytest -v tests/models/dpt/test_modeling_dpt_hybrid.py::DPTModelTest::test_torch_export
### 6. a change that would work
> git checkout 49e7bd3e406e20beedec3c5d6d8be54aeb51daf5&& RUN_SLOW=1 python3 -m pytest -v tests/models/dpt/test_modeling_dpt_hybrid.py::DPTModelTest::test_torch_export
The commit changes is [here](https://github.com/huggingface/transformers/commit/49e7bd3e406e20beedec3c5d6d8be54aeb51daf5)
### 7. Error log from `3.`
```bash
E torch._dynamo.exc.Unsupported: Unexpected type in sourceless builder transformers.models.gemma3.configuration_gemma3.Gemma3TextConfig
E
E from user code:
E File "/transformers/src/transformers/utils/generic.py", line 965, in wrapper
E output = func(self, *args, **kwargs)
E File "/transformers/src/transformers/utils/deprecation.py", line 172, in wrapped_func
E return func(*args, **kwargs)
E File "/transformers/src/transformers/models/gemma3/modeling_gemma3.py", line 1323, in forward
E causal_mask = self._update_causal_mask(
E File "/transformers/src/transformers/models/gemma3/modeling_gemma3.py", line 1117, in _update_causal_mask
E if self.config.text_config._attn_implementation == "flash_attention_2":
E File "/transformers/src/transformers/configuration_utils.py", line 210, in __getattribute__
E return super().__getattribute__(key)
E
E Set TORCHDYNAMO_VERBOSE=1 for the internal stack trace (please do this especially if you're reporting a bug to PyTorch). For even more developer context, set TORCH_LOGS="+dynamo"
```
### Versions
--2025-04-10 07:41:28-- https://raw.githubusercontent.com/pytorch/pytorch/main/torch/utils/collect_env.py
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.108.133, 185.199.111.133, 185.199.109.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.108.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 24440 (24K) [text/plain]
Saving to: 'collect_env.py'
collect_env.py 100%[====================================================================================================================================================================================>] 23.87K --.-KB/s in 0s
2025-04-10 07:41:28 (53.1 MB/s) - 'collect_env.py' saved [24440/24440]
root@d6af5b579bcb:/temp/transformers# python3 collect_env.py
Collecting environment information...
PyTorch version: 2.7.0+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.10.12 (main, Feb 4 2025, 14:57:36) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.10.234-225.921.amzn2.x86_64-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: Tesla T4
Nvidia driver version: 550.144.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 8
On-line CPU(s) list: 0-7
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 4
Socket(s): 1
Stepping: 7
BogoMIPS: 4999.99
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 ss
e4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves ida arat pku ospke avx512_vnni
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 128 KiB (4 instances)
L1i cache: 128 KiB (4 instances)
L2 cache: 4 MiB (4 instances)
L3 cache: 35.8 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-7
Vulnerability Gather data sampling: Unknown: Dependent on hypervisor status
Vulnerability Itlb multihit: KVM: Mitigation: VMX unsupported
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Vulnerable
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, STIBP disabled, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] intel-extension-for-pytorch==2.3.0
[pip3] mypy-extensions==1.0.0
[pip3] natten==0.17.4+torch250cu121
[pip3] numpy==1.24.3
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.26.2
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] onnx==1.17.0
[pip3] onnxconverter-common==1.13.0
[pip3] onnxruntime==1.21.0
[pip3] onnxruntime-tools==1.7.0
[pip3] tf2onnx==1.16.1
[pip3] torch==2.7.0+cu126
[pip3] torchaudio==2.7.0+cu126
[pip3] torchvision==0.22.0+cu126
[pip3] triton==3.3.0
[conda] Could not collect
| true
|
2,984,686,892
|
torch.compile can compile the model that is not runnable under eager mode
|
syheliel
|
open
|
[
"module: error checking",
"triaged",
"oncall: pt2",
"module: fakeTensor",
"module: pt2-dispatcher"
] | 2
|
NONE
|
### 🐛 Describe the bug
In eager mode, the following model throws `RuntimeError: expected scalar type Float but found Half`, but it runs normally under torch.compile.
```
import torch
import math
class Model(torch.nn.Module):
def __init__(self):
super().__init__()
self.query = torch.nn.Linear(64, 64)
self.key = torch.nn.Linear(64, 64)
self.value = torch.nn.Linear(64, 64)
def forward(self, x1, x2, x3):
q = self.query(x1)
k = self.key(x2)
v = self.value(x3)
q = q.permute(0, 2, 1, 3)
k = k.permute(0, 2, 1, 3)
v = v.permute(0, 2, 1, 3)
q = q / math.sqrt(q.size(-1))
div = q @ k.transpose(-2, -1)
div = div.to(torch.float32)
attn_weight = torch.softmax(div, dim=-1)
attn_weight = attn_weight.to(torch.float16)
output = attn_weight @ v
return output
# Initializing the model
m = Model()
opt = torch.compile(m)
# Inputs to the model
x1 = torch.randn(1, 8, 64, 64) # query
x2 = torch.randn(1, 8, 64, 64) # key
x3 = torch.randn(1, 8, 64, 64) # value
__output__ = m(x1, x2, x3) # <----throw error
__output_opt__ = opt(x1, x2, x3) # run normally
```
### Error logs
traceback (most recent call last):
File "XXX/gencode-4/sfdp=9/sfdp=9-4.py", line 35, in <module>
__output__ = m(x1, x2, x3)
File "XXX/.cache/pypoetry/virtualenvs/whitefox-PzQOR4d6-py3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "XXX/.cache/pypoetry/virtualenvs/whitefox-PzQOR4d6-py3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
File "XXX/gencode-4/sfdp=9/sfdp=9-4.py", line 24, in forward
output = attn_weight @ v
RuntimeError: expected scalar type Float but found Half
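For what it's worth, the eager error seems to come from the last matmul mixing dtypes (`attn_weight` is cast to float16 while `v` stays float32), which torch.compile presumably papers over with an inserted cast. A small sketch of the mismatch and one hedged workaround:
```python
import torch

a = torch.randn(2, 3, dtype=torch.float16)   # stands in for attn_weight
b = torch.randn(3, 4, dtype=torch.float32)   # stands in for v
try:
    a @ b                                    # eager refuses mixed-dtype matmul
except RuntimeError as e:
    print(e)

out = a.to(b.dtype) @ b                      # make the dtypes match explicitly
```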
### Versions
Collecting environment information...
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.10.12 (main, Feb 4 2025, 14:57:36) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 18
On-line CPU(s) list: 0-17
Vendor ID: GenuineIntel
Model name: Intel(R) Core(TM) Ultra 5 125H
CPU family: 6
Model: 170
Thread(s) per core: 2
Core(s) per socket: 9
Socket(s): 1
Stepping: 4
BogoMIPS: 5990.39
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology tsc_reliable nonstop_tsc cpuid pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves avx_vnni umip waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize flush_l1d arch_capabilities
Virtualization: VT-x
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 432 KiB (9 instances)
L1i cache: 576 KiB (9 instances)
L2 cache: 18 MiB (9 instances)
L3 cache: 18 MiB (1 instance)
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] intel_extension_for_pytorch==2.6.0
[pip3] numpy==2.2.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.6.0
[pip3] triton==3.2.0
[conda] Could not collect
cc @malfet @chauhang @penguinwu @eellison @zou3519 @bdhirsh @ezyang
| true
|
2,984,662,402
|
[Intel GPU] Enable GQA and different head_dim of value for SDPA
|
LuFinch
|
open
|
[
"module: cpu",
"open source",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
In OneDNN v3.7, SDPA doesn't support num_head_q != num_head_kv (aka GQA) or head_dim_qk != head_dim_v.
In OneDNN v3.8, SDPA supports these two scenarios, so this PR enables them. SDPA UTs pass in local testing.
This PR is pending the OneDNN v3.8 upgrade; don't merge it yet.
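A minimal sketch of the two newly supported shapes (sizes are illustrative; assumes an xpu device and the `enable_gqa` flag available in recent PyTorch):
```python
import torch
import torch.nn.functional as F

# GQA: num_head_q (8) is a multiple of num_head_kv (2); value head_dim (96)
# differs from the query/key head_dim (64).
q = torch.randn(1, 8, 128, 64, device="xpu", dtype=torch.float16)
k = torch.randn(1, 2, 128, 64, device="xpu", dtype=torch.float16)
v = torch.randn(1, 2, 128, 96, device="xpu", dtype=torch.float16)
out = F.scaled_dot_product_attention(q, k, v, enable_gqa=True)
print(out.shape)  # torch.Size([1, 8, 128, 96])
```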
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| true
|
2,984,634,503
|
DISABLED test_cache_load_function_device_cuda_float32_dynamic_False_bundle_triton_True_use_static_cuda_launcher_False_grad_True (__main__.TestFxGraphCache)
|
pytorch-bot[bot]
|
open
|
[
"module: rocm",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 1
|
NONE
|
Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_cache_load_function_device_cuda_float32_dynamic_False_bundle_triton_True_use_static_cuda_launcher_False_grad_True&suite=TestFxGraphCache&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40292689880).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_cache_load_function_device_cuda_float32_dynamic_False_bundle_triton_True_use_static_cuda_launcher_False_grad_True`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/inductor/test_codecache.py", line 160, in test_cache_load_function
self.assertEqual(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 4095, in assertEqual
raise error_metas.pop()[0].to_error( # type: ignore[index]
AssertionError: Scalars are not equal!
Expected 14 but got 21.
Absolute difference: 7
Relative difference: 0.5
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_codecache.py TestFxGraphCache.test_cache_load_function_device_cuda_float32_dynamic_False_bundle_triton_True_use_static_cuda_launcher_False_grad_True
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_codecache.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,984,627,482
|
Add `pad_to_multiple_of` to `pad_sequence`
|
ringohoffman
|
open
|
[
"open source",
"release notes: cpp"
] | 3
|
CONTRIBUTOR
|
Fixes #150989
`pad_to_multiple_of=8` should be used to create sequences that take advantage of NVIDIA Tensor Cores when using mixed precision on GPUs with compute capability >= 7.0 (Volta).
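Illustrative usage of the proposed keyword (as added by this PR; not available in released versions yet):
```python
import torch
from torch.nn.utils.rnn import pad_sequence

seqs = [torch.randn(5), torch.randn(11), torch.randn(3)]
padded = pad_sequence(seqs, batch_first=True, pad_to_multiple_of=8)
print(padded.shape)  # (3, 16): max length 11 rounded up to the next multiple of 8
```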
| true
|
2,984,613,544
|
Add `pad_to_multiple_of` to `torch.nn.utils.rnn.pad_sequence`
|
ringohoffman
|
open
|
[
"module: nn",
"triaged"
] | 0
|
CONTRIBUTOR
|
### 🚀 The feature, motivation and pitch
Hugging Face tokenizers support [`pad_to_multiple_of`](https://huggingface.co/docs/transformers/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.__call__.pad_to_multiple_of), which allows you to pad your sequence's length to a multiple of a number. This comes from a piece of [NVIDIA Tensor Core documentation](https://docs.nvidia.com/deeplearning/performance/mixed-precision-training/index.html#tensor-core-shape):
> ## 4.1. Satisfying Tensor Core Shape Constraints
> Due to their design, Tensor Cores have shape constraints on their inputs. For **matrix multiplication**:
>
>* On FP16 inputs, all three dimensions (M, N, K) must be multiples of 8.
> * On INT8 inputs (Turing only), all three dimensions must be multiples of 16.
>
> For **convolution**:
>
> * On FP16 inputs, input and output channels must be multiples of 8.
> * On INT8 inputs (Turing only), input and output channels must be multiples of 16.
>
> In practice, for mixed precision training, our recommendations are:
>
> 1. Choose mini-batch to be a multiple of 8
> 2. Choose linear layer dimensions to be a multiple of 8
> 3. Choose convolution layer channel counts to be a multiple of 8
> 4. For classification problems, pad vocabulary to be a multiple of 8
> 5. **For sequence problems, pad the sequence length to be a multiple of 8**
It would be convenient for `torch.nn.utils.rnn.pad_sequence` to support `pad_to_multiple_of`.
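A minimal sketch of how this could look for 1-D (token id) sequences, built on the existing `pad_sequence` plus `F.pad` rather than the dummy-sequence hack below (function name and signature are illustrative):
```python
import torch
import torch.nn.functional as F
from torch.nn.utils.rnn import pad_sequence

def pad_sequence_to_multiple(sequences, multiple, padding_value=0.0):
    padded = pad_sequence(sequences, batch_first=True, padding_value=padding_value)  # (B, L)
    length = padded.size(1)
    target = (length + multiple - 1) // multiple * multiple  # round up
    return F.pad(padded, (0, target - length), value=padding_value)

seqs = [torch.arange(5), torch.arange(11), torch.arange(3)]
print(pad_sequence_to_multiple(seqs, multiple=8).shape)  # torch.Size([3, 16])
```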
### Alternatives
My hack right now is to manually create a sequence of my desired length and then discard it from my padded sequence, which is less efficient than the proposed solution.
```python
if pad_to_multiple_of is not None:
max_length = max(seq.shape[-1] for seq in sequences)
max_length = (max_length + pad_to_multiple_of - 1) // pad_to_multiple_of * pad_to_multiple_of
sequences.append(torch.empty(max_length, dtype=dtype, device=device))
```
### Additional context
_No response_
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki
| true
|
2,984,538,852
|
Support C shim for customized OP
|
Valentine233
|
open
|
[
"module: cpp",
"triaged",
"module: custom-operators",
"oncall: pt2",
"module: pt2-dispatcher"
] | 7
|
COLLABORATOR
|
### 🚀 The feature, motivation and pitch
### Feature
Request support for a C shim for customized ops defined in non-PyTorch libraries, e.g., TorchAO and TorchVision.
If we run a model with a customized OP using CPP wrapper, the model needs to go from CPP to Python, and then from Python to CPP in order to call this OP, which results in a non-negligible overhead.
Example of output code:
```
// Call the custom op in Python
RAIIPyObject py_buf12(PyObject_CallObject(custom_op_wrapper, py_args_5));
if (py_buf12.get() == NULL) {
if (PyErr_Occurred()) {
return;
}
throw std::runtime_error("PyObject_CallObject torch.ops.torchao.scaled_dot_product_int8.default failed");
}
buf13 = reinterpret_cast<AtenTensorHandle>(PyCapsule_GetPointer(py_buf12.get(), NULL));
```
### Motivation
We are implementing and enabling an INT8 SDPA op in TorchAO. Please see more background in https://github.com/pytorch/pytorch/issues/144941. Recently, we found that the overhead of lacking a C shim for a customized op is noticeable: ~5% of E2E performance.
Here is the comparison data for VIT realtime mode:
| Wrapper | INT8 SDPA | E2E throughput (sentences/s) | Speedup |
| ----------- | ----------- | ------------ | ------------ |
| CPP | Implement in PyTorch with C shim | 3751 | 100% |
| CPP | Implement in TorchAO without C shim | 3553 | 94.72% |
| Python | Implement in PyTorch/TorchAO | 3490 | 93.04% |
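For concreteness, a small self-contained sketch of the scenario (the op name and body are placeholders, not the actual TorchAO int8 SDPA): a custom op defined from Python and compiled with the CPP wrapper, which today is the situation that forces the CPP → Python → CPP round trip shown above.
```python
import torch

@torch.library.custom_op("mylib::scale_add", mutates_args=())
def scale_add(x: torch.Tensor, y: torch.Tensor, alpha: float) -> torch.Tensor:
    return x + alpha * y

@scale_add.register_fake
def _(x, y, alpha):
    return torch.empty_like(x)

def f(a, b):
    return torch.ops.mylib.scale_add(a, b, 0.5)

# The cpp_wrapper output has no C shim for mylib::scale_add, so the generated
# C++ calls back into Python to dispatch it.
compiled = torch.compile(f, options={"cpp_wrapper": True})
print(compiled(torch.randn(4), torch.randn(4)))
```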
cc @jbschlosser @chauhang @penguinwu @zou3519 @bdhirsh @desertfire @jansel @EikanWang @leslie-fang-intel @chunyuan-w @Guobing-Chen
### Alternatives
_No response_
### Additional context
_No response_
| true
|
2,984,516,880
|
[c10d][tcp_store] Fix connection reset caused by wrong socket close
|
fduwjj
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"release notes: distributed (c10d)"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150987
While fixing the memory leak in https://github.com/pytorch/pytorch/pull/145757, we accidentally closed the socket for the case when nread == 0, treating it as the case when the connection is closed. This is not true. According to the libuv docs: https://docs.libuv.org/en/v1.x/stream.html#c.uv_read_cb.
> nread might be 0, which does not indicate an error or EOF. This is equivalent to EAGAIN or EWOULDBLOCK under read(2).
We found this bug when debugging a broken-pipe issue where users first call a set and then wait for all keys right afterwards on 128 ranks. This might also be the cause of other broken-pipe issues we have seen in prod jobs recently.
Added a unit test to test this case.
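A rough sketch of the set-then-wait pattern that exposed the bug (host/port and key are placeholders; the real failure needed many clients hitting the store at once):
```python
import torch.distributed as dist

store = dist.TCPStore("127.0.0.1", 29500, world_size=1, is_master=True)
store.set("key_0", "ready")      # each rank writes its key...
store.wait(["key_0"])            # ...then immediately waits for all keys
print(store.get("key_0"))
```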
cc @H-Huang @awgu @wanchaol @fegin @wz337 @wconstab @d4l3k
| true
|
2,984,385,718
|
Onnx Export failure : op for aten::full
|
kraza8
|
closed
|
[
"module: onnx",
"triaged",
"onnx-triaged"
] | 2
|
NONE
|
### 🐛 Describe the bug
Running into this issue when attempting to export this model to ONNX. The model is downloaded from Hugging Face.
model: "openvla/openvla-7b"
onnx_model_path = "/onnx/model.onnx"
torch.onnx.export(model, (inputs["input_ids"], inputs["attention_mask"], inputs["pixel_values"]), onnx_model_path, input_names=["input_node"], output_names=["output_node"])
**Error:**
RuntimeError: 0 INTERNAL ASSERT FAILED at "/pytorch/torch/csrc/jit/ir/alias_analysis.cpp":617, please report a bug to PyTorch. We don't have an op for aten::full but it isn't a special case. Argument types: int[], bool, int, NoneType, Device, bool,
Candidates:
aten::full.names(int[] size, Scalar fill_value, *, str[]? names, ScalarType? dtype=None, Layout? layout=None, Device? device=None, bool? pin_memory=None) -> Tensor
aten::full(SymInt[] size, Scalar fill_value, *, ScalarType? dtype=None, Layout? layout=None, Device? device=None, bool? pin_memory=None) -> Tensor
aten::full.names_out(int[] size, Scalar fill_value, *, str[]? names, Tensor(a!) out) -> Tensor(a!)
aten::full.out(SymInt[] size, Scalar fill_value, *, Tensor(a!) out) -> Tensor(a!)
### Versions
Collecting environment information...
PyTorch version: 2.5.1
Is debug build: False
CUDA used to build PyTorch: Could not collect
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (aarch64)
GCC version: (GCC) 13.3.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.29.6
Libc version: glibc-2.35
Python version: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.5.0-1019-nvidia-64k-aarch64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: 11.5.119
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: GPU 0: NVIDIA GH200 480GB
Nvidia driver version: 550.90.07
cuDNN version: Probably one of the following:
/usr/lib/aarch64-linux-gnu/libcudnn.so.9.3.0
/usr/lib/aarch64-linux-gnu/libcudnn_adv.so.9.3.0
/usr/lib/aarch64-linux-gnu/libcudnn_cnn.so.9.3.0
/usr/lib/aarch64-linux-gnu/libcudnn_engines_precompiled.so.9.3.0
/usr/lib/aarch64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.3.0
/usr/lib/aarch64-linux-gnu/libcudnn_graph.so.9.3.0
/usr/lib/aarch64-linux-gnu/libcudnn_heuristic.so.9.3.0
/usr/lib/aarch64-linux-gnu/libcudnn_ops.so.9.3.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: aarch64
CPU op-mode(s): 64-bit
Byte Order: Little Endian
CPU(s): 72
On-line CPU(s) list: 0-71
Vendor ID: ARM
Model name: Neoverse-V2
Model: 0
Thread(s) per core: 1
Core(s) per socket: 72
Socket(s): 1
Stepping: r0p0
Frequency boost: disabled
CPU max MHz: 3375.0000
CPU min MHz: 81.0000
BogoMIPS: 2000.00
Flags: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma lrcpc dcpop sha3 sm3 sm4 asimddp sha512 sve asimdfhm dit uscat ilrcpc flagm ssbs sb paca pacg dcpodp sve2 sveaes svepmull svebitperm svesha3 svesm4 flagm2 frint svei8mm svebf16 i8mm bf16 dgh bti
L1d cache: 4.5 MiB (72 instances)
L1i cache: 4.5 MiB (72 instances)
L2 cache: 72 MiB (72 instances)
L3 cache: 114 MiB (1 instance)
NUMA node(s): 9
NUMA node0 CPU(s): 0-71
NUMA node1 CPU(s):
NUMA node2 CPU(s):
NUMA node3 CPU(s):
NUMA node4 CPU(s):
NUMA node5 CPU(s):
NUMA node6 CPU(s):
NUMA node7 CPU(s):
NUMA node8 CPU(s):
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; __user pointer sanitization
Vulnerability Spectre v2: Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.24.4
[pip3] onnx==1.17.0
[pip3] onnxruntime-training==1.20.0+cpu
[pip3] onnxscript==0.2.3
[pip3] torch==2.5.1
[pip3] torchvision==0.20.1
[conda] Could not collect
| true
|
2,984,369,390
|
DISABLED test_parity__foreach_acos_fastpath_inplace_cuda_float16 (__main__.TestForeachCUDA)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"skipped",
"module: mta"
] | 5
|
NONE
|
Platforms: linux, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_parity__foreach_acos_fastpath_inplace_cuda_float16&suite=TestForeachCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40285852744).
Over the past 3 hours, it has been determined flaky in 9 workflow(s) with 18 failures and 9 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_parity__foreach_acos_fastpath_inplace_cuda_float16`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/test_foreach.py", line 228, in test_parity
actual = func(
File "/var/lib/jenkins/workspace/test/test_foreach.py", line 91, in __call__
assert mta_called == (expect_fastpath and (not zero_size)), (
AssertionError: mta_called=False, expect_fastpath=True, zero_size=False, self.func.__name__='_foreach_acos_', keys=('aten::_foreach_acos_', 'Unrecognized', 'cudaLaunchKernel', 'Lazy Function Loading', 'cudaDeviceSynchronize')
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1159, in test_wrapper
return test(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/mock.py", line 1833, in _inner
return f(*args, **kw)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1975, in wrap_fn
return fn(self, *args, **kwargs)
File "/var/lib/jenkins/workspace/test/test_foreach.py", line 235, in test_parity
with self.assertRaises(type(e)):
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 226, in __exit__
self._raiseFailure("{} not raised".format(exc_name))
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 163, in _raiseFailure
raise self.test_case.failureException(msg)
AssertionError: AssertionError not raised
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3156, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3156, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 454, in instantiated_test
result = test(self, **param_kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1612, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1171, in test_wrapper
raise e_tracked from e
Exception: Caused by sample input at index 0: SampleInput(input=TensorList[Tensor[size=(20, 20), device="cuda:0", dtype=torch.float16], Tensor[size=(19, 19), device="cuda:0", dtype=torch.float16], Tensor[size=(18, 18), device="cuda:0", dtype=torch.float16], Tensor[size=(17, 17), device="cuda:0", dtype=torch.float16], Tensor[size=(16, 16), device="cuda:0", dtype=torch.float16], Tensor[size=(15, 15), device="cuda:0", dtype=torch.float16], Tensor[size=(14, 14), device="cuda:0", dtype=torch.float16], Tensor[size=(13, 13), device="cuda:0", dtype=torch.float16], Tensor[size=(12, 12), device="cuda:0", dtype=torch.float16], Tensor[size=(11, 11), device="cuda:0", dtype=torch.float16], Tensor[size=(10, 10), device="cuda:0", dtype=torch.float16], Tensor[size=(9, 9), device="cuda:0", dtype=torch.float16], Tensor[size=(8, 8), device="cuda:0", dtype=torch.float16], Tensor[size=(7, 7), device="cuda:0", dtype=torch.float16], Tensor[size=(6, 6), device="cuda:0", dtype=torch.float16], Tensor[size=(5, 5), device="cuda:0", dtype=torch.float16], Tensor[size=(4, 4), device="cuda:0", dtype=torch.float16], Tensor[size=(3, 3), device="cuda:0", dtype=torch.float16], Tensor[size=(2, 2), device="cuda:0", dtype=torch.float16], Tensor[size=(1, 1), device="cuda:0", dtype=torch.float16]], args=(), kwargs={}, broadcasts_input=False, name='')
To execute this test, run the following from the base repo dir:
PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=0 python test/test_foreach.py TestForeachCUDA.test_parity__foreach_acos_fastpath_inplace_cuda_float16
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_foreach.py`
cc @clee2000 @crcrpar @mcarilli @janeyx99
| true
|
2,984,357,967
|
Clean up duplicated code in lr_scheduler
|
zeshengzong
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"suppress-bc-linter",
"release notes: optim"
] | 9
|
CONTRIBUTOR
|
## Changes
- Remove duplicated code in `ReduceLROnPlateau`
- Remove redundant `noqa` comment
## Test Result
```bash
pytest test/optim/test_lrscheduler.py
```

| true
|
2,984,348,065
|
[Test CI] Xccl cmake bak
|
Chao1Han
|
open
|
[
"open source",
"ciflow/xpu"
] | 4
|
CONTRIBUTOR
|
Fixes #ISSUE_NUMBER
| true
|
2,984,336,367
|
[CI][CUDA] xfail grouped gemm unit tests on blackwell
|
nWEIdia
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 4
|
COLLABORATOR
|
On SM100OrLater, expect failures like:
RuntimeError: torch._grouped_mm is only supported on CUDA devices with compute capability = 9.0
To execute this test, run the following from the base repo dir:
python test/test_matmul_cuda.py TestMatmulCudaCUDA.test_grouped_gemm_3d_2d_strided_False_a_row_major_True_b_row_major_False_cuda
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
`
test/test_matmul_cuda.py::TestMatmulCudaCUDA::test_grouped_gemm_2d_2d_strided_False_a_row_major_False_b_row_major_False_cuda SKIPPED [0.0005s] (Issue with numpy versi...) [ 2%]
test/test_matmul_cuda.py::TestMatmulCudaCUDA::test_grouped_gemm_2d_2d_strided_False_a_row_major_False_b_row_major_True_cuda SKIPPED [0.0001s] (Issue with numpy versio...) [ 4%]
test/test_matmul_cuda.py::TestMatmulCudaCUDA::test_grouped_gemm_2d_2d_strided_False_a_row_major_True_b_row_major_False_cuda SKIPPED [0.0001s] (Issue with numpy versio...) [ 6%]
test/test_matmul_cuda.py::TestMatmulCudaCUDA::test_grouped_gemm_2d_2d_strided_False_a_row_major_True_b_row_major_True_cuda SKIPPED [0.0001s] (Issue with numpy version...) [ 8%]
test/test_matmul_cuda.py::TestMatmulCudaCUDA::test_grouped_gemm_2d_2d_strided_True_a_row_major_False_b_row_major_False_cuda SKIPPED [0.0001s] (Issue with numpy versio...) [ 10%]
test/test_matmul_cuda.py::TestMatmulCudaCUDA::test_grouped_gemm_2d_2d_strided_True_a_row_major_False_b_row_major_True_cuda SKIPPED [0.0001s] (Issue with numpy version...) [ 12%]
test/test_matmul_cuda.py::TestMatmulCudaCUDA::test_grouped_gemm_2d_2d_strided_True_a_row_major_True_b_row_major_False_cuda SKIPPED [0.0001s] (Issue with numpy version...) [ 14%]
test/test_matmul_cuda.py::TestMatmulCudaCUDA::test_grouped_gemm_2d_2d_strided_True_a_row_major_True_b_row_major_True_cuda SKIPPED [0.0001s] (Issue with numpy version ...) [ 16%]
test/test_matmul_cuda.py::TestMatmulCudaCUDA::test_grouped_gemm_2d_3d_strided_False_a_row_major_False_b_row_major_False_cuda SKIPPED [0.0001s] (Issue with numpy versi...) [ 18%]
test/test_matmul_cuda.py::TestMatmulCudaCUDA::test_grouped_gemm_2d_3d_strided_False_a_row_major_False_b_row_major_True_cuda SKIPPED [0.0001s] (Issue with numpy versio...) [ 20%]
test/test_matmul_cuda.py::TestMatmulCudaCUDA::test_grouped_gemm_2d_3d_strided_False_a_row_major_True_b_row_major_False_cuda SKIPPED [0.0001s] (Issue with numpy versio...) [ 22%]
test/test_matmul_cuda.py::TestMatmulCudaCUDA::test_grouped_gemm_2d_3d_strided_False_a_row_major_True_b_row_major_True_cuda SKIPPED [0.0001s] (Issue with numpy version...) [ 25%]
test/test_matmul_cuda.py::TestMatmulCudaCUDA::test_grouped_gemm_2d_3d_strided_True_a_row_major_False_b_row_major_False_cuda SKIPPED [0.0001s] (Issue with numpy versio...) [ 27%]
test/test_matmul_cuda.py::TestMatmulCudaCUDA::test_grouped_gemm_2d_3d_strided_True_a_row_major_False_b_row_major_True_cuda SKIPPED [0.0001s] (Issue with numpy version...) [ 29%]
test/test_matmul_cuda.py::TestMatmulCudaCUDA::test_grouped_gemm_2d_3d_strided_True_a_row_major_True_b_row_major_False_cuda SKIPPED [0.0001s] (Issue with numpy version...) [ 31%]
test/test_matmul_cuda.py::TestMatmulCudaCUDA::test_grouped_gemm_2d_3d_strided_True_a_row_major_True_b_row_major_True_cuda SKIPPED [0.0001s] (Issue with numpy version ...) [ 33%]
test/test_matmul_cuda.py::TestMatmulCudaCUDA::test_grouped_gemm_3d_2d_strided_False_a_row_major_False_b_row_major_False_cuda SKIPPED [0.0002s] (Issue with numpy versi...) [ 35%]
test/test_matmul_cuda.py::TestMatmulCudaCUDA::test_grouped_gemm_3d_2d_strided_False_a_row_major_False_b_row_major_True_cuda SKIPPED [0.0001s] (Issue with numpy versio...) [ 37%]
test/test_matmul_cuda.py::TestMatmulCudaCUDA::test_grouped_gemm_3d_2d_strided_False_a_row_major_True_b_row_major_False_cuda SKIPPED [0.0001s] (Issue with numpy versio...) [ 39%]
test/test_matmul_cuda.py::TestMatmulCudaCUDA::test_grouped_gemm_3d_2d_strided_False_a_row_major_True_b_row_major_True_cuda SKIPPED [0.0001s] (Issue with numpy version...) [ 41%]
test/test_matmul_cuda.py::TestMatmulCudaCUDA::test_grouped_gemm_3d_2d_strided_True_a_row_major_False_b_row_major_False_cuda SKIPPED [0.0001s] (Issue with numpy versio...) [ 43%]
test/test_matmul_cuda.py::TestMatmulCudaCUDA::test_grouped_gemm_3d_2d_strided_True_a_row_major_False_b_row_major_True_cuda SKIPPED [0.0001s] (Issue with numpy version...) [ 45%]
test/test_matmul_cuda.py::TestMatmulCudaCUDA::test_grouped_gemm_3d_2d_strided_True_a_row_major_True_b_row_major_False_cuda SKIPPED [0.0001s] (Issue with numpy version...) [ 47%]
test/test_matmul_cuda.py::TestMatmulCudaCUDA::test_grouped_gemm_3d_2d_strided_True_a_row_major_True_b_row_major_True_cuda SKIPPED [0.0001s] (Issue with numpy version ...) [ 50%]
test/test_matmul_cuda.py::TestMatmulCudaCUDA::test_grouped_gemm_3d_3d_strided_False_a_row_major_False_b_row_major_False_cuda SKIPPED [0.0001s] (Issue with numpy versi...) [ 52%]
test/test_matmul_cuda.py::TestMatmulCudaCUDA::test_grouped_gemm_3d_3d_strided_False_a_row_major_False_b_row_major_True_cuda SKIPPED [0.0001s] (Issue with numpy versio...) [ 54%]
test/test_matmul_cuda.py::TestMatmulCudaCUDA::test_grouped_gemm_3d_3d_strided_False_a_row_major_True_b_row_major_False_cuda SKIPPED [0.0001s] (Issue with numpy versio...) [ 56%]
test/test_matmul_cuda.py::TestMatmulCudaCUDA::test_grouped_gemm_3d_3d_strided_False_a_row_major_True_b_row_major_True_cuda SKIPPED [0.0001s] (Issue with numpy version...) [ 58%]
test/test_matmul_cuda.py::TestMatmulCudaCUDA::test_grouped_gemm_3d_3d_strided_True_a_row_major_False_b_row_major_False_cuda SKIPPED [0.0001s] (Issue with numpy versio...) [ 60%]
test/test_matmul_cuda.py::TestMatmulCudaCUDA::test_grouped_gemm_3d_3d_strided_True_a_row_major_False_b_row_major_True_cuda SKIPPED [0.0001s] (Issue with numpy version...) [ 62%]
test/test_matmul_cuda.py::TestMatmulCudaCUDA::test_grouped_gemm_3d_3d_strided_True_a_row_major_True_b_row_major_False_cuda SKIPPED [0.0001s] (Issue with numpy version...) [ 64%]
test/test_matmul_cuda.py::TestMatmulCudaCUDA::test_grouped_gemm_3d_3d_strided_True_a_row_major_True_b_row_major_True_cuda SKIPPED [0.0001s] (Issue with numpy version ...) [ 66%]
test/test_matmul_cuda.py::TestFP8MatmulCudaCUDA::test_scaled_grouped_gemm_2d_2d_fast_accum_False_strided_False_cuda XFAIL [0.8166s] [ 68%]
test/test_matmul_cuda.py::TestFP8MatmulCudaCUDA::test_scaled_grouped_gemm_2d_2d_fast_accum_False_strided_True_cuda XFAIL [0.0017s] [ 70%]
test/test_matmul_cuda.py::TestFP8MatmulCudaCUDA::test_scaled_grouped_gemm_2d_2d_fast_accum_True_strided_False_cuda XFAIL [0.0012s] [ 72%]
test/test_matmul_cuda.py::TestFP8MatmulCudaCUDA::test_scaled_grouped_gemm_2d_2d_fast_accum_True_strided_True_cuda XFAIL [0.0012s] [ 75%]
test/test_matmul_cuda.py::TestFP8MatmulCudaCUDA::test_scaled_grouped_gemm_2d_3d_fast_accum_False_strided_False_cuda XFAIL [0.0033s] [ 77%]
test/test_matmul_cuda.py::TestFP8MatmulCudaCUDA::test_scaled_grouped_gemm_2d_3d_fast_accum_False_strided_True_cuda XFAIL [0.0012s] [ 79%]
test/test_matmul_cuda.py::TestFP8MatmulCudaCUDA::test_scaled_grouped_gemm_2d_3d_fast_accum_True_strided_False_cuda XFAIL [0.0015s] [ 81%]
test/test_matmul_cuda.py::TestFP8MatmulCudaCUDA::test_scaled_grouped_gemm_2d_3d_fast_accum_True_strided_True_cuda XFAIL [0.0012s] [ 83%]
test/test_matmul_cuda.py::TestFP8MatmulCudaCUDA::test_scaled_grouped_gemm_3d_2d_fast_accum_False_strided_False_cuda XFAIL [0.0012s] [ 85%]
test/test_matmul_cuda.py::TestFP8MatmulCudaCUDA::test_scaled_grouped_gemm_3d_2d_fast_accum_False_strided_True_cuda XFAIL [0.0012s] [ 87%]
test/test_matmul_cuda.py::TestFP8MatmulCudaCUDA::test_scaled_grouped_gemm_3d_2d_fast_accum_True_strided_False_cuda XFAIL [0.0011s] [ 89%]
test/test_matmul_cuda.py::TestFP8MatmulCudaCUDA::test_scaled_grouped_gemm_3d_2d_fast_accum_True_strided_True_cuda XFAIL [0.0012s] [ 91%]
test/test_matmul_cuda.py::TestFP8MatmulCudaCUDA::test_scaled_grouped_gemm_3d_3d_fast_accum_False_strided_False_cuda XFAIL [0.0014s] [ 93%]
test/test_matmul_cuda.py::TestFP8MatmulCudaCUDA::test_scaled_grouped_gemm_3d_3d_fast_accum_False_strided_True_cuda XFAIL [0.0012s] [ 95%]
test/test_matmul_cuda.py::TestFP8MatmulCudaCUDA::test_scaled_grouped_gemm_3d_3d_fast_accum_True_strided_False_cuda XFAIL [0.0011s] [ 97%]
test/test_matmul_cuda.py::TestFP8MatmulCudaCUDA::test_scaled_grouped_gemm_3d_3d_fast_accum_True_strided_True_cuda XFAIL [0.0011s] [100%]
`
cc @ptrblck @eqy @tinglvv @atalman @malfet @ngimel
| true
|
2,984,320,853
|
Add check for ctc_loss targets param
|
zeshengzong
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"release notes: nn"
] | 11
|
CONTRIBUTOR
|
Fixes #150835
## Test Result
```python
# cuda
>>> import torch
>>> import torch.nn.functional as F
>>> device = "cuda" # "cpu" is fine
>>> num_classes = 4
>>> log_probs = torch.rand(0, 0, num_classes, device=device)
>>> targets = torch.tensor([], device=device, dtype=torch.long)
>>> input_lengths = torch.tensor([], device=device, dtype=torch.long)
>>> target_lengths = torch.tensor([], device=device, dtype=torch.long)
>>> result = F.ctc_loss(log_probs, targets, input_lengths, target_lengths, reduction='none')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/zong/code/pytorch/torch/nn/functional.py", line 3079, in ctc_loss
return torch.ctc_loss(
^^^^^^^^^^^^^^^
RuntimeError: log_probs tensor must not be empty
# cpu
>>> device = "cpu"
>>> num_classes = 4
>>> log_probs = torch.rand(0, 0, num_classes, device=device)
>>> targets = torch.tensor([], device=device, dtype=torch.long)
>>> input_lengths = torch.tensor([], device=device, dtype=torch.long)
>>> target_lengths = torch.tensor([], device=device, dtype=torch.long)
>>> result = F.ctc_loss(log_probs, targets, input_lengths, target_lengths, reduction='none')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/zong/code/pytorch/torch/nn/functional.py", line 3079, in ctc_loss
return torch.ctc_loss(
^^^^^^^^^^^^^^^
RuntimeError: log_probs tensor must not be empty
```
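For contrast, a minimal sketch of a valid (non-empty) call that still goes through after the new check; the shapes and values below are illustrative, not taken from the original issue.
```python
import torch
import torch.nn.functional as F
T, N, C = 50, 4, 4  # time steps, batch size, number of classes (class 0 is the blank)
log_probs = torch.randn(T, N, C).log_softmax(2)
targets = torch.randint(1, C, (N, 10), dtype=torch.long)
input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.full((N,), 10, dtype=torch.long)
loss = F.ctc_loss(log_probs, targets, input_lengths, target_lengths, reduction='none')
print(loss.shape)  # torch.Size([4]), one loss per batch element
```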
| true
|
2,984,317,658
|
Torch compile issue, AttributeError: 'NoneType' object has no attribute 'store_cubin'
|
shahizat
|
open
|
[
"triaged",
"oncall: pt2",
"module: inductor",
"Blackwell"
] | 4
|
NONE
|
### 🐛 Describe the bug
Hello,
I successfully built the sgl-kernel (https://github.com/sgl-project/sglang/tree/main/sgl-kernel) with sm_120 (NVIDIA RTX 50 series) and CUDA 12.8, but encountered the following issue when running the `sglang.launch_server` command with `--enable-torch-compile`. Please help.
Suspicious log:
```
[rank0]:E0410 07:52:47.507000 255947 torch/_inductor/select_algorithm.py:2134] [18/3] No valid triton configs. OutOfResources: out of resource: shared memory, Required: 110592, Hardware limit: 101376. Reducing block sizes or `num_stages` may help
```
Full error logs:
```
python3 -m sglang.launch_server \
--model-path meta-llama/Llama-3.1-8B-Instruct \
--dtype bfloat16 \
--context-length 8192 \
--enable-torch-compile
INFO 04-10 07:51:29 [__init__.py:256] Automatically detected platform cuda.
[2025-04-10 07:51:31] server_args=ServerArgs(model_path='meta-llama/Llama-3.1-8B-Instruct', tokenizer_path='meta-llama/Llama-3.1-8B-Instruct', tokenizer_mode='auto', skip_tokenizer_init=False, load_format='auto', trust_remote_code=False, dtype='bfloat16', kv_cache_dtype='auto', quantization=None, quantization_param_path=None, context_length=8192, device='cuda', served_model_name='meta-llama/Llama-3.1-8B-Instruct', chat_template=None, completion_template=None, is_embedding=False, revision=None, host='127.0.0.1', port=30000, mem_fraction_static=0.88, max_running_requests=None, max_total_tokens=None, chunked_prefill_size=8192, max_prefill_tokens=16384, schedule_policy='fcfs', schedule_conservativeness=1.0, cpu_offload_gb=0, page_size=1, tp_size=1, stream_interval=1, stream_output=False, random_seed=386970438, constrained_json_whitespace_pattern=None, watchdog_timeout=300, dist_timeout=None, download_dir=None, base_gpu_id=0, gpu_id_step=1, log_level='info', log_level_http=None, log_requests=False, log_requests_level=0, show_time_cost=False, enable_metrics=False, decode_log_interval=40, api_key=None, file_storage_path='sglang_storage', enable_cache_report=False, reasoning_parser=None, dp_size=1, load_balance_method='round_robin', ep_size=1, dist_init_addr=None, nnodes=1, node_rank=0, json_model_override_args='{}', lora_paths=None, max_loras_per_batch=8, lora_backend='triton', attention_backend=None, sampling_backend='flashinfer', grammar_backend='xgrammar', speculative_algorithm=None, speculative_draft_model_path=None, speculative_num_steps=None, speculative_eagle_topk=None, speculative_num_draft_tokens=None, speculative_accept_threshold_single=1.0, speculative_accept_threshold_acc=1.0, speculative_token_map=None, enable_double_sparsity=False, ds_channel_config_path=None, ds_heavy_channel_num=32, ds_heavy_token_num=256, ds_heavy_channel_type='qk', ds_sparse_decode_threshold=4096, disable_radix_cache=False, disable_cuda_graph=False, disable_cuda_graph_padding=False, enable_nccl_nvls=False, disable_outlines_disk_cache=False, disable_custom_all_reduce=False, disable_mla=False, disable_overlap_schedule=False, enable_mixed_chunk=False, enable_dp_attention=False, enable_ep_moe=False, enable_deepep_moe=False, deepep_mode=None, enable_torch_compile=True, torch_compile_max_bs=32, cuda_graph_max_bs=160, cuda_graph_bs=None, torchao_config='', enable_nan_detection=False, enable_p2p_check=False, triton_attention_reduce_in_fp32=False, triton_attention_num_kv_splits=8, num_continuous_decode_steps=1, delete_ckpt_after_loading=False, enable_memory_saver=False, allow_auto_truncate=False, enable_custom_logit_processor=False, tool_call_parser=None, enable_hierarchical_cache=False, hicache_ratio=2.0, enable_flashinfer_mla=False, enable_flashmla=False, flashinfer_mla_disable_ragged=False, warmups=None, n_share_experts_fusion=0, disable_shared_experts_fusion=False, debug_tensor_dump_output_folder=None, debug_tensor_dump_input_file=None, debug_tensor_dump_inject=False, disaggregation_mode='null', disaggregation_bootstrap_port=8998)
INFO 04-10 07:51:34 [__init__.py:256] Automatically detected platform cuda.
INFO 04-10 07:51:34 [__init__.py:256] Automatically detected platform cuda.
[2025-04-10 07:51:37 TP0] Attention backend not set. Use flashinfer backend by default.
[2025-04-10 07:51:37 TP0] Init torch distributed begin.
[W410 07:51:37.867666368 ProcessGroupNCCL.cpp:959] Warning: TORCH_NCCL_AVOID_RECORD_STREAMS is the default now, this environment variable is thus deprecated. (function operator())
[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
[2025-04-10 07:51:37 TP0] Init torch distributed ends. mem usage=0.00 GB
[2025-04-10 07:51:37 TP0] Load weight begin. avail mem=30.83 GB
[2025-04-10 07:51:38 TP0] Using model weights format ['*.safetensors']
Loading safetensors checkpoint shards: 0% Completed | 0/4 [00:00<?, ?it/s]
Loading safetensors checkpoint shards: 25% Completed | 1/4 [00:00<00:01, 1.80it/s]
Loading safetensors checkpoint shards: 50% Completed | 2/4 [00:01<00:01, 1.52it/s]
Loading safetensors checkpoint shards: 75% Completed | 3/4 [00:01<00:00, 2.02it/s]
Loading safetensors checkpoint shards: 100% Completed | 4/4 [00:02<00:00, 1.89it/s]
Loading safetensors checkpoint shards: 100% Completed | 4/4 [00:02<00:00, 1.84it/s]
[2025-04-10 07:51:41 TP0] Load weight end. type=LlamaForCausalLM, dtype=torch.bfloat16, avail mem=15.72 GB, mem usage=15.12 GB.
[2025-04-10 07:51:41 TP0] KV Cache is allocated. #tokens: 98436, K size: 6.01 GB, V size: 6.01 GB
[2025-04-10 07:51:41 TP0] Memory pool end. avail mem=3.40 GB
2025-04-10 07:51:42,094 - INFO - flashinfer.jit: Prebuilt kernels not found, using JIT backend
[2025-04-10 07:51:42 TP0] Capture cuda graph begin. This can take up to several minutes. avail mem=2.89 GB
Capturing batches (avail_mem=2.89 GB): 0%| | 0/23 [00:00<?, ?it/s]2025-04-10 07:51:42,595 - INFO - flashinfer.jit: Loading JIT ops: batch_decode_with_kv_cache_dtype_q_bf16_dtype_kv_bf16_dtype_o_bf16_dtype_idx_i32_head_dim_qk_128_head_dim_vo_128_posenc_0_use_swa_False_use_logits_cap_False
2025-04-10 07:51:42,624 - INFO - flashinfer.jit: Finished loading JIT ops: batch_decode_with_kv_cache_dtype_q_bf16_dtype_kv_bf16_dtype_o_bf16_dtype_idx_i32_head_dim_qk_128_head_dim_vo_128_posenc_0_use_swa_False_use_logits_cap_False
Capturing batches (avail_mem=1.76 GB): 83%|█████████████████████████████████████████████████████████████████▎ | 19/23 [00:46<00:36, 9.21s/it][rank0]:E0410 07:52:46.205000 255947 torch/_inductor/select_algorithm.py:1905] [18/3] Exception No valid triton configs. OutOfResources: out of resource: shared memory, Required: 110592, Hardware limit: 101376. Reducing block sizes or `num_stages` may help. for benchmark choice TritonTemplateCaller(/tmp/torchinductor_admin2/2v/c2v53oqtsrcythafq3wmf7ttbffclvmwuddkrnvu6qbu34humvb5.py, ACC_TYPE='tl.float32', ALLOW_TF32=False, BLOCK_K=128, BLOCK_M=16, BLOCK_N=128, EVEN_K=True, GROUP_M=8, USE_FAST_ACCUM=False, num_stages=4, num_warps=4)
[rank0]:E0410 07:52:47.507000 255947 torch/_inductor/select_algorithm.py:2134] [18/3] Runtime error during autotuning:
[rank0]:E0410 07:52:47.507000 255947 torch/_inductor/select_algorithm.py:2134] [18/3] No valid triton configs. OutOfResources: out of resource: shared memory, Required: 110592, Hardware limit: 101376. Reducing block sizes or `num_stages` may help..
[rank0]:E0410 07:52:47.507000 255947 torch/_inductor/select_algorithm.py:2134] [18/3] Ignoring this choice.
AUTOTUNE mm(8x4096, 4096x128256)
strides: [4096, 1], [1, 4096]
dtypes: torch.bfloat16, torch.bfloat16
triton_mm_11 0.6508 ms 100.0% ACC_TYPE='tl.float32', ALLOW_TF32=False, BLOCK_K=64, BLOCK_M=16, BLOCK_N=128, EVEN_K=True, GROUP_M=8, USE_FAST_ACCUM=False, num_stages=3, num_warps=4
triton_mm_1 0.6527 ms 99.7% ACC_TYPE='tl.float32', ALLOW_TF32=False, BLOCK_K=128, BLOCK_M=16, BLOCK_N=32, EVEN_K=True, GROUP_M=8, USE_FAST_ACCUM=False, num_stages=2, num_warps=2
triton_mm_8 0.6529 ms 99.7% ACC_TYPE='tl.float32', ALLOW_TF32=False, BLOCK_K=128, BLOCK_M=16, BLOCK_N=64, EVEN_K=True, GROUP_M=8, USE_FAST_ACCUM=False, num_stages=5, num_warps=4
triton_mm_16 0.6529 ms 99.7% ACC_TYPE='tl.float32', ALLOW_TF32=False, BLOCK_K=64, BLOCK_M=16, BLOCK_N=128, EVEN_K=True, GROUP_M=8, USE_FAST_ACCUM=False, num_stages=5, num_warps=8
triton_mm_7 0.6533 ms 99.6% ACC_TYPE='tl.float32', ALLOW_TF32=False, BLOCK_K=64, BLOCK_M=16, BLOCK_N=64, EVEN_K=True, GROUP_M=8, USE_FAST_ACCUM=False, num_stages=3, num_warps=4
triton_mm_4 0.6548 ms 99.4% ACC_TYPE='tl.float32', ALLOW_TF32=False, BLOCK_K=128, BLOCK_M=16, BLOCK_N=32, EVEN_K=True, GROUP_M=8, USE_FAST_ACCUM=False, num_stages=5, num_warps=2
mm 0.6564 ms 99.1%
triton_mm_2 0.8950 ms 72.7% ACC_TYPE='tl.float32', ALLOW_TF32=False, BLOCK_K=32, BLOCK_M=16, BLOCK_N=64, EVEN_K=True, GROUP_M=8, USE_FAST_ACCUM=False, num_stages=5, num_warps=4
triton_mm_3 0.9093 ms 71.6% ACC_TYPE='tl.float32', ALLOW_TF32=False, BLOCK_K=32, BLOCK_M=16, BLOCK_N=32, EVEN_K=True, GROUP_M=8, USE_FAST_ACCUM=False, num_stages=5, num_warps=2
triton_mm_13 0.9093 ms 71.6% ACC_TYPE='tl.float32', ALLOW_TF32=False, BLOCK_K=32, BLOCK_M=16, BLOCK_N=64, EVEN_K=True, GROUP_M=8, USE_FAST_ACCUM=False, num_stages=3, num_warps=4
SingleProcess AUTOTUNE benchmarking takes 0.6529 seconds and 3.4570 seconds precompiling for 18 choices
Capturing batches (avail_mem=1.74 GB): 87%|████████████████████████████████████████████████████████████████████▋ | 20/23 [01:06<00:38, 12.72s/it][rank0]:E0410 07:52:55.281000 255947 torch/_inductor/select_algorithm.py:1905] [5/4_1] Exception No valid triton configs. OutOfResources: out of resource: shared memory, Required: 110592, Hardware limit: 101376. Reducing block sizes or `num_stages` may help. for benchmark choice TritonTemplateCaller(/tmp/torchinductor_admin2/dq/cdqvg3b47v3ynrgzorcj3cxhzuakj73ueg6kiu3b3duuqvyoucuu.py, ACC_TYPE='tl.float32', ALLOW_TF32=False, BLOCK_K=128, BLOCK_M=16, BLOCK_N=128, EVEN_K=True, GROUP_M=8, USE_FAST_ACCUM=False, num_stages=4, num_warps=4)
[rank0]:E0410 07:52:56.331000 255947 torch/_inductor/select_algorithm.py:2134] [5/4_1] Runtime error during autotuning:
[rank0]:E0410 07:52:56.331000 255947 torch/_inductor/select_algorithm.py:2134] [5/4_1] No valid triton configs. OutOfResources: out of resource: shared memory, Required: 110592, Hardware limit: 101376. Reducing block sizes or `num_stages` may help..
[rank0]:E0410 07:52:56.331000 255947 torch/_inductor/select_algorithm.py:2134] [5/4_1] Ignoring this choice.
AUTOTUNE mm(4x4096, 4096x6144)
strides: [4096, 1], [1, 4096]
dtypes: torch.bfloat16, torch.bfloat16
triton_mm_33 0.0342 ms 100.0% ACC_TYPE='tl.float32', ALLOW_TF32=False, BLOCK_K=64, BLOCK_M=16, BLOCK_N=128, EVEN_K=True, GROUP_M=8, USE_FAST_ACCUM=False, num_stages=5, num_warps=8
triton_mm_25 0.0348 ms 98.3% ACC_TYPE='tl.float32', ALLOW_TF32=False, BLOCK_K=128, BLOCK_M=16, BLOCK_N=64, EVEN_K=True, GROUP_M=8, USE_FAST_ACCUM=False, num_stages=5, num_warps=4
triton_mm_21 0.0348 ms 98.3% ACC_TYPE='tl.float32', ALLOW_TF32=False, BLOCK_K=128, BLOCK_M=16, BLOCK_N=32, EVEN_K=True, GROUP_M=8, USE_FAST_ACCUM=False, num_stages=5, num_warps=2
triton_mm_19 0.0411 ms 83.3% ACC_TYPE='tl.float32', ALLOW_TF32=False, BLOCK_K=32, BLOCK_M=16, BLOCK_N=64, EVEN_K=True, GROUP_M=8, USE_FAST_ACCUM=False, num_stages=5, num_warps=4
mm 0.0420 ms 81.5%
triton_mm_18 0.0430 ms 79.5% ACC_TYPE='tl.float32', ALLOW_TF32=False, BLOCK_K=128, BLOCK_M=16, BLOCK_N=32, EVEN_K=True, GROUP_M=8, USE_FAST_ACCUM=False, num_stages=2, num_warps=2
triton_mm_24 0.0430 ms 79.5% ACC_TYPE='tl.float32', ALLOW_TF32=False, BLOCK_K=64, BLOCK_M=16, BLOCK_N=64, EVEN_K=True, GROUP_M=8, USE_FAST_ACCUM=False, num_stages=3, num_warps=4
triton_mm_20 0.0444 ms 77.0% ACC_TYPE='tl.float32', ALLOW_TF32=False, BLOCK_K=32, BLOCK_M=16, BLOCK_N=32, EVEN_K=True, GROUP_M=8, USE_FAST_ACCUM=False, num_stages=5, num_warps=2
triton_mm_28 0.0485 ms 70.5% ACC_TYPE='tl.float32', ALLOW_TF32=False, BLOCK_K=64, BLOCK_M=16, BLOCK_N=128, EVEN_K=True, GROUP_M=8, USE_FAST_ACCUM=False, num_stages=3, num_warps=4
triton_mm_31 0.0518 ms 66.1% ACC_TYPE='tl.float32', ALLOW_TF32=False, BLOCK_K=32, BLOCK_M=16, BLOCK_N=64, EVEN_K=True, GROUP_M=8, USE_FAST_ACCUM=False, num_stages=4, num_warps=4
SingleProcess AUTOTUNE benchmarking takes 0.3201 seconds and 3.5530 seconds precompiling for 18 choices
[rank0]:E0410 07:53:02.558000 255947 torch/_inductor/select_algorithm.py:1905] [15/4] Exception No valid triton configs. OutOfResources: out of resource: shared memory, Required: 110592, Hardware limit: 101376. Reducing block sizes or `num_stages` may help. for benchmark choice TritonTemplateCaller(/tmp/torchinductor_admin2/3k/c3k437xehmkbwej7ef7t5iacnc2xa4usgxigf2wf2lbrv5rdqpmt.py, ACC_TYPE='tl.float32', ALLOW_TF32=False, BLOCK_K=128, BLOCK_M=16, BLOCK_N=128, EVEN_K=True, GROUP_M=8, USE_FAST_ACCUM=False, num_stages=4, num_warps=4)
[rank0]:E0410 07:53:03.597000 255947 torch/_inductor/select_algorithm.py:2134] [15/4] Runtime error during autotuning:
[rank0]:E0410 07:53:03.597000 255947 torch/_inductor/select_algorithm.py:2134] [15/4] No valid triton configs. OutOfResources: out of resource: shared memory, Required: 110592, Hardware limit: 101376. Reducing block sizes or `num_stages` may help..
[rank0]:E0410 07:53:03.597000 255947 torch/_inductor/select_algorithm.py:2134] [15/4] Ignoring this choice.
AUTOTUNE mm(4x4096, 4096x4096)
strides: [4096, 1], [1, 4096]
dtypes: torch.bfloat16, torch.bfloat16
triton_mm_42 0.0239 ms 100.0% ACC_TYPE='tl.float32', ALLOW_TF32=False, BLOCK_K=128, BLOCK_M=16, BLOCK_N=64, EVEN_K=True, GROUP_M=8, USE_FAST_ACCUM=False, num_stages=5, num_warps=4
triton_mm_38 0.0245 ms 97.5% ACC_TYPE='tl.float32', ALLOW_TF32=False, BLOCK_K=128, BLOCK_M=16, BLOCK_N=32, EVEN_K=True, GROUP_M=8, USE_FAST_ACCUM=False, num_stages=5, num_warps=2
triton_mm_50 0.0281 ms 85.3% ACC_TYPE='tl.float32', ALLOW_TF32=False, BLOCK_K=64, BLOCK_M=16, BLOCK_N=128, EVEN_K=True, GROUP_M=8, USE_FAST_ACCUM=False, num_stages=5, num_warps=8
triton_mm_35 0.0410 ms 58.4% ACC_TYPE='tl.float32', ALLOW_TF32=False, BLOCK_K=128, BLOCK_M=16, BLOCK_N=32, EVEN_K=True, GROUP_M=8, USE_FAST_ACCUM=False, num_stages=2, num_warps=2
triton_mm_41 0.0410 ms 58.4% ACC_TYPE='tl.float32', ALLOW_TF32=False, BLOCK_K=64, BLOCK_M=16, BLOCK_N=64, EVEN_K=True, GROUP_M=8, USE_FAST_ACCUM=False, num_stages=3, num_warps=4
mm 0.0420 ms 57.0%
triton_mm_36 0.0423 ms 56.6% ACC_TYPE='tl.float32', ALLOW_TF32=False, BLOCK_K=32, BLOCK_M=16, BLOCK_N=64, EVEN_K=True, GROUP_M=8, USE_FAST_ACCUM=False, num_stages=5, num_warps=4
triton_mm_37 0.0424 ms 56.5% ACC_TYPE='tl.float32', ALLOW_TF32=False, BLOCK_K=32, BLOCK_M=16, BLOCK_N=32, EVEN_K=True, GROUP_M=8, USE_FAST_ACCUM=False, num_stages=5, num_warps=2
triton_mm_45 0.0424 ms 56.5% ACC_TYPE='tl.float32', ALLOW_TF32=False, BLOCK_K=64, BLOCK_M=16, BLOCK_N=128, EVEN_K=True, GROUP_M=8, USE_FAST_ACCUM=False, num_stages=3, num_warps=4
triton_mm_48 0.0444 ms 53.9% ACC_TYPE='tl.float32', ALLOW_TF32=False, BLOCK_K=32, BLOCK_M=16, BLOCK_N=64, EVEN_K=True, GROUP_M=8, USE_FAST_ACCUM=False, num_stages=4, num_warps=4
SingleProcess AUTOTUNE benchmarking takes 0.3096 seconds and 3.4381 seconds precompiling for 18 choices
[rank0]:E0410 07:53:08.888000 255947 torch/_inductor/select_algorithm.py:1905] [16/4] Exception No valid triton configs. OutOfResources: out of resource: shared memory, Required: 110592, Hardware limit: 101376. Reducing block sizes or `num_stages` may help. for benchmark choice TritonTemplateCaller(/tmp/torchinductor_admin2/if/cifharvgs2sarmdz3enrviovyllg7mgznsjr5hs2nmukl35pefhj.py, ACC_TYPE='tl.float32', ALLOW_TF32=False, BLOCK_K=128, BLOCK_M=16, BLOCK_N=128, EVEN_K=True, GROUP_M=8, USE_FAST_ACCUM=False, num_stages=4, num_warps=4)
[rank0]:E0410 07:53:10.461000 255947 torch/_inductor/select_algorithm.py:2134] [16/4] Runtime error during autotuning:
[rank0]:E0410 07:53:10.461000 255947 torch/_inductor/select_algorithm.py:2134] [16/4] No valid triton configs. OutOfResources: out of resource: shared memory, Required: 110592, Hardware limit: 101376. Reducing block sizes or `num_stages` may help..
[rank0]:E0410 07:53:10.461000 255947 torch/_inductor/select_algorithm.py:2134] [16/4] Ignoring this choice.
AUTOTUNE mm(4x4096, 4096x28672)
strides: [4096, 1], [1, 4096]
dtypes: torch.bfloat16, torch.bfloat16
triton_mm_58 0.1489 ms 100.0% ACC_TYPE='tl.float32', ALLOW_TF32=False, BLOCK_K=64, BLOCK_M=16, BLOCK_N=64, EVEN_K=True, GROUP_M=8, USE_FAST_ACCUM=False, num_stages=3, num_warps=4
triton_mm_62 0.1489 ms 100.0% ACC_TYPE='tl.float32', ALLOW_TF32=False, BLOCK_K=64, BLOCK_M=16, BLOCK_N=128, EVEN_K=True, GROUP_M=8, USE_FAST_ACCUM=False, num_stages=3, num_warps=4
triton_mm_67 0.1495 ms 99.6% ACC_TYPE='tl.float32', ALLOW_TF32=False, BLOCK_K=64, BLOCK_M=16, BLOCK_N=128, EVEN_K=True, GROUP_M=8, USE_FAST_ACCUM=False, num_stages=5, num_warps=8
triton_mm_52 0.1509 ms 98.6% ACC_TYPE='tl.float32', ALLOW_TF32=False, BLOCK_K=128, BLOCK_M=16, BLOCK_N=32, EVEN_K=True, GROUP_M=8, USE_FAST_ACCUM=False, num_stages=2, num_warps=2
triton_mm_59 0.1526 ms 97.6% ACC_TYPE='tl.float32', ALLOW_TF32=False, BLOCK_K=128, BLOCK_M=16, BLOCK_N=64, EVEN_K=True, GROUP_M=8, USE_FAST_ACCUM=False, num_stages=5, num_warps=4
triton_mm_55 0.1530 ms 97.3% ACC_TYPE='tl.float32', ALLOW_TF32=False, BLOCK_K=128, BLOCK_M=16, BLOCK_N=32, EVEN_K=True, GROUP_M=8, USE_FAST_ACCUM=False, num_stages=5, num_warps=2
mm 0.1695 ms 87.9%
triton_mm_65 0.1761 ms 84.5% ACC_TYPE='tl.float32', ALLOW_TF32=False, BLOCK_K=32, BLOCK_M=16, BLOCK_N=64, EVEN_K=True, GROUP_M=8, USE_FAST_ACCUM=False, num_stages=4, num_warps=4
triton_mm_53 0.1859 ms 80.1% ACC_TYPE='tl.float32', ALLOW_TF32=False, BLOCK_K=32, BLOCK_M=16, BLOCK_N=64, EVEN_K=True, GROUP_M=8, USE_FAST_ACCUM=False, num_stages=5, num_warps=4
triton_mm_64 0.1879 ms 79.2% ACC_TYPE='tl.float32', ALLOW_TF32=False, BLOCK_K=32, BLOCK_M=16, BLOCK_N=64, EVEN_K=True, GROUP_M=8, USE_FAST_ACCUM=False, num_stages=3, num_warps=4
SingleProcess AUTOTUNE benchmarking takes 0.5421 seconds and 5.2288 seconds precompiling for 18 choices
[rank0]:E0410 07:53:10.590000 255947 torch/_inductor/select_algorithm.py:1905] [16/4] Exception No valid triton configs. OutOfResources: out of resource: shared memory, Required: 110592, Hardware limit: 101376. Reducing block sizes or `num_stages` may help. for benchmark choice TritonTemplateCaller(/tmp/torchinductor_admin2/ei/ceiw3uwk25mdxmun4j6oxjcvx5dl6iewuutgnwxz3kdr4erjmbj7.py, ACC_TYPE='tl.float32', ALLOW_TF32=False, BLOCK_K=128, BLOCK_M=16, BLOCK_N=128, EVEN_K=True, GROUP_M=8, USE_FAST_ACCUM=False, num_stages=4, num_warps=4)
[rank0]:E0410 07:53:10.945000 255947 torch/_inductor/select_algorithm.py:2134] [16/4] Runtime error during autotuning:
[rank0]:E0410 07:53:10.945000 255947 torch/_inductor/select_algorithm.py:2134] [16/4] No valid triton configs. OutOfResources: out of resource: shared memory, Required: 110592, Hardware limit: 101376. Reducing block sizes or `num_stages` may help..
[rank0]:E0410 07:53:10.945000 255947 torch/_inductor/select_algorithm.py:2134] [16/4] Ignoring this choice.
AUTOTUNE mm(4x14336, 14336x4096)
strides: [14336, 1], [1, 14336]
dtypes: torch.bfloat16, torch.bfloat16
triton_mm_76 0.0813 ms 100.0% ACC_TYPE='tl.float32', ALLOW_TF32=False, BLOCK_K=128, BLOCK_M=16, BLOCK_N=64, EVEN_K=True, GROUP_M=8, USE_FAST_ACCUM=False, num_stages=5, num_warps=4
triton_mm_72 0.0814 ms 99.8% ACC_TYPE='tl.float32', ALLOW_TF32=False, BLOCK_K=128, BLOCK_M=16, BLOCK_N=32, EVEN_K=True, GROUP_M=8, USE_FAST_ACCUM=False, num_stages=5, num_warps=2
mm 0.0876 ms 92.8%
triton_mm_84 0.0976 ms 83.3% ACC_TYPE='tl.float32', ALLOW_TF32=False, BLOCK_K=64, BLOCK_M=16, BLOCK_N=128, EVEN_K=True, GROUP_M=8, USE_FAST_ACCUM=False, num_stages=5, num_warps=8
triton_mm_69 0.1255 ms 64.8% ACC_TYPE='tl.float32', ALLOW_TF32=False, BLOCK_K=128, BLOCK_M=16, BLOCK_N=32, EVEN_K=True, GROUP_M=8, USE_FAST_ACCUM=False, num_stages=2, num_warps=2
triton_mm_75 0.1284 ms 63.3% ACC_TYPE='tl.float32', ALLOW_TF32=False, BLOCK_K=64, BLOCK_M=16, BLOCK_N=64, EVEN_K=True, GROUP_M=8, USE_FAST_ACCUM=False, num_stages=3, num_warps=4
triton_mm_79 0.1366 ms 59.5% ACC_TYPE='tl.float32', ALLOW_TF32=False, BLOCK_K=64, BLOCK_M=16, BLOCK_N=128, EVEN_K=True, GROUP_M=8, USE_FAST_ACCUM=False, num_stages=3, num_warps=4
triton_mm_71 0.1368 ms 59.5% ACC_TYPE='tl.float32', ALLOW_TF32=False, BLOCK_K=32, BLOCK_M=16, BLOCK_N=32, EVEN_K=True, GROUP_M=8, USE_FAST_ACCUM=False, num_stages=5, num_warps=2
triton_mm_70 0.1407 ms 57.8% ACC_TYPE='tl.float32', ALLOW_TF32=False, BLOCK_K=32, BLOCK_M=16, BLOCK_N=64, EVEN_K=True, GROUP_M=8, USE_FAST_ACCUM=False, num_stages=5, num_warps=4
triton_mm_82 0.1572 ms 51.7% ACC_TYPE='tl.float32', ALLOW_TF32=False, BLOCK_K=32, BLOCK_M=16, BLOCK_N=64, EVEN_K=True, GROUP_M=8, USE_FAST_ACCUM=False, num_stages=4, num_warps=4
SingleProcess AUTOTUNE benchmarking takes 0.4711 seconds and 0.0004 seconds precompiling for 18 choices
Capturing batches (avail_mem=1.74 GB): 87%|████████████████████████████████████████████████████████████████████▋ | 20/23 [01:32<00:13, 4.62s/it]
[2025-04-10 07:53:14 TP0] Scheduler hit an exception: Traceback (most recent call last):
File "/home/admin2/Projects/sglang/python/sglang/srt/managers/scheduler.py", line 1999, in run_scheduler_process
scheduler = Scheduler(server_args, port_args, gpu_id, tp_rank, dp_rank)
File "/home/admin2/Projects/sglang/python/sglang/srt/managers/scheduler.py", line 249, in __init__
self.tp_worker = TpWorkerClass(
File "/home/admin2/Projects/sglang/python/sglang/srt/managers/tp_worker_overlap_thread.py", line 63, in __init__
self.worker = TpModelWorker(server_args, gpu_id, tp_rank, dp_rank, nccl_port)
File "/home/admin2/Projects/sglang/python/sglang/srt/managers/tp_worker.py", line 74, in __init__
self.model_runner = ModelRunner(
File "/home/admin2/Projects/sglang/python/sglang/srt/model_executor/model_runner.py", line 177, in __init__
self.initialize(min_per_gpu_memory)
File "/home/admin2/Projects/sglang/python/sglang/srt/model_executor/model_runner.py", line 215, in initialize
self.init_cuda_graphs()
File "/home/admin2/Projects/sglang/python/sglang/srt/model_executor/model_runner.py", line 933, in init_cuda_graphs
self.cuda_graph_runner = CudaGraphRunner(self)
File "/home/admin2/Projects/sglang/python/sglang/srt/model_executor/cuda_graph_runner.py", line 267, in __init__
self.capture()
File "/home/admin2/Projects/sglang/python/sglang/srt/model_executor/cuda_graph_runner.py", line 351, in capture
) = self.capture_one_batch_size(bs, forward)
File "/home/admin2/Projects/sglang/python/sglang/srt/model_executor/cuda_graph_runner.py", line 443, in capture_one_batch_size
run_once()
File "/home/admin2/Projects/sglang/python/sglang/srt/model_executor/cuda_graph_runner.py", line 436, in run_once
logits_output = forward(input_ids, forward_batch.positions, forward_batch)
File "/home/admin2/.virtualenvs/compile/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 658, in _fn
return fn(*args, **kwargs)
File "/home/admin2/.virtualenvs/compile/lib/python3.10/site-packages/torch/_dynamo/external_utils.py", line 70, in inner
return fn(*args, **kwargs)
File "/home/admin2/.virtualenvs/compile/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
File "/home/admin2/.virtualenvs/compile/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
File "/home/admin2/Projects/sglang/python/sglang/srt/models/llama.py", line 420, in forward
hidden_states = self.model(
File "/home/admin2/.virtualenvs/compile/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/admin2/.virtualenvs/compile/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
File "/home/admin2/Projects/sglang/python/sglang/srt/models/llama.py", line 309, in forward
hidden_states, residual = layer(
File "/home/admin2/.virtualenvs/compile/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/admin2/.virtualenvs/compile/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
File "/home/admin2/Projects/sglang/python/sglang/srt/models/llama.py", line 239, in forward
def forward(
File "/home/admin2/.virtualenvs/compile/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 850, in _fn
return fn(*args, **kwargs)
File "/home/admin2/.virtualenvs/compile/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1207, in forward
return compiled_fn(full_args)
File "/home/admin2/.virtualenvs/compile/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 331, in runtime_wrapper
all_outs = call_func_at_runtime_with_args(
File "/home/admin2/.virtualenvs/compile/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/utils.py", line 126, in call_func_at_runtime_with_args
out = normalize_as_list(f(args))
File "/home/admin2/.virtualenvs/compile/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 692, in inner_fn
outs = compiled_fn(args)
File "/home/admin2/.virtualenvs/compile/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 498, in wrapper
return compiled_fn(runtime_args)
File "/home/admin2/.virtualenvs/compile/lib/python3.10/site-packages/torch/_inductor/output_code.py", line 558, in __call__
return self.current_callable(inputs)
File "/home/admin2/.virtualenvs/compile/lib/python3.10/site-packages/torch/_inductor/utils.py", line 2441, in run
return model(new_inputs)
File "/tmp/torchinductor_admin2/cn/ccn2ppbcgn45hf2ub63ainr4rid5owhgdhmtnl3ikzbqhw4nuvpz.py", line 149, in call
triton_red_fused__to_copy_add_mean_mul_pow_rsqrt_0.run(arg1_1, arg0_1, arg2_1, buf1, buf2, 4, 4096, stream=stream0)
File "/home/admin2/.virtualenvs/compile/lib/python3.10/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 1014, in run
if launcher.store_cubin and (not benchmark_run or not self.cuda_kernel_saved):
AttributeError: 'NoneType' object has no attribute 'store_cubin'
[2025-04-10 07:53:14] Received sigquit from a child process. It usually means the child failed.
Killed
```
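A possible mitigation sketch while the underlying crash is investigated (this is my assumption, not a verified fix): steer Inductor's GEMM autotuning away from the Triton templates whose configs exceed the 101376-byte shared-memory limit reported in the logs above, e.g. by restricting the autotune backends to ATen.
```python
# Assumption: limiting max-autotune GEMM backends avoids the out-of-resource Triton configs;
# it does not address the underlying 'store_cubin' AttributeError itself.
import torch._inductor.config as inductor_config
inductor_config.max_autotune_gemm_backends = "ATEN"
```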
### Versions
PyTorch version: 2.8.0.dev20250407+cu128
Is debug build: False
CUDA used to build PyTorch: 12.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 12.3.0-1ubuntu1~22.04) 12.3.0
Clang version: Could not collect
CMake version: version 3.31.6
Libc version: glibc-2.35
Python version: 3.10.12 (main, Feb 4 2025, 14:57:36) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.8.0-57-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.8.93
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 5090
Nvidia driver version: 570.124.06
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.7
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 0-63
Vendor ID: AuthenticAMD
Model name: AMD Ryzen Threadripper 3970X 32-Core Processor
CPU family: 23
Model: 49
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 1
Stepping: 0
Frequency boost: enabled
CPU max MHz: 4549,1211
CPU min MHz: 2200,0000
BogoMIPS: 7399.85
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sev sev_es
Virtualization: AMD-V
L1d cache: 1 MiB (32 instances)
L1i cache: 1 MiB (32 instances)
L2 cache: 16 MiB (32 instances)
L3 cache: 128 MiB (8 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-63
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.8.3.14
[pip3] nvidia-cuda-cupti-cu12==12.8.57
[pip3] nvidia-cuda-nvrtc-cu12==12.8.61
[pip3] nvidia-cuda-runtime-cu12==12.8.57
[pip3] nvidia-cudnn-cu12==9.8.0.87
[pip3] nvidia-cufft-cu12==11.3.3.41
[pip3] nvidia-curand-cu12==10.3.9.55
[pip3] nvidia-cusolver-cu12==11.7.2.55
[pip3] nvidia-cusparse-cu12==12.5.7.53
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.26.2
[pip3] nvidia-nvjitlink-cu12==12.8.61
[pip3] nvidia-nvtx-cu12==12.8.55
[pip3] torch==2.8.0.dev20250407+cu128
[pip3] torchao==0.9.0
[pip3] torchaudio==2.6.0.dev20250407+cu128
[pip3] torchvision==0.22.0.dev20250407+cu128
[pip3] triton==3.3.0+git61cb963f
[conda] Could not collect
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @aakhundov
| true
|
2,984,308,251
|
[CI][CUDA][UCC] Update test_c10d_ucc.py - remove xfailIfLinux because it now succeeds
|
nWEIdia
|
open
|
[
"oncall: distributed",
"open source",
"ciflow/trunk",
"topic: not user facing"
] | 2
|
COLLABORATOR
|
```
pytest -v test/distributed/test_c10d_ucc.py -k test_save_load
============================================================================================== test session starts ==============================================================================================
platform linux -- Python 3.12.3, pytest-8.1.1, pluggy-1.5.0 -- /usr/bin/python
cachedir: .pytest_cache
hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase(PosixPath('/opt/pytorch/pytorch/.hypothesis/examples'))
rootdir: /opt/pytorch/pytorch
configfile: pytest.ini
plugins: anyio-4.9.0, hypothesis-6.130.13, flakefinder-1.1.0, rerunfailures-15.0, xdist-3.6.1, xdoctest-1.0.2, typeguard-4.3.0
collected 63 items / 62 deselected / 1 selected
Running 1 items in this shard
test/distributed/test_c10d_ucc.py::DistributedDataParallelTest::test_save_load_checkpoint PASSED [65.2581s] [100%]
================================================================================== 1 passed, 62 deselected in 68.78s (0:01:08)
```
@ptrblck @eqy @tinglvv @atalman @malfet
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
2,984,265,452
|
Turn on for export and add export specific tests
|
Lucaskabela
|
closed
|
[
"module: dynamo",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150978
* #150885
* #151022
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,984,225,742
|
Gracefully handle optree less than minimum version
|
pytorchbot
|
closed
|
[
"open source"
] | 1
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150956
Summary:
- We are saying the minimum version of optree that PyTorch can use is
0.13.0
- If a user imports torch.utils._cxx_pytree, it will raise an
ImportError if optree doesn't exist or exists and is less than the
minimum version.
Fixes https://github.com/pytorch/pytorch/issues/150889. There are
actually two parts to that issue:
1. dtensor imports torch.utils._cxx_pytree, but the optree installed in
the environment might be too old. Instead, raising ImportError in
torch.utils._cxx_pytree solves the issue.
2. We emit an "optree too low version" warning. I've deleted the
warning in favor of the more explicit ImportError.
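To illustrate the ImportError behavior described in the summary above, a minimal sketch of what a caller sees after this change when optree is missing or older than the minimum version (illustrative; the exact error message is not quoted from this PR):
```python
try:
    import torch.utils._cxx_pytree as cxx_pytree
except ImportError as e:
    # Raised when optree is not installed or is older than 0.13.0.
    print("C++ pytree unavailable, falling back to the Python pytree:", e)
```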
Test Plan:
- code reading
| true
|
2,984,216,294
|
[export] check tuple length mismatch for dynamic_shapes spec
|
pianpwk
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: export"
] | 4
|
CONTRIBUTOR
|
Summary: we weren't checking this before.
Test Plan: test_export
Differential Revision: D72761995
| true
|
2,984,214,323
|
Fix `torch.autograd.backward` `inputs` validation
|
ValerianRey
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"release notes: autograd",
"module: dynamo"
] | 23
|
CONTRIBUTOR
|
- Fixes #150883
- Fixes #70504
This is my first PR to pytorch, so please tell me if I'm forgetting anything.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,984,212,598
|
feature: tlparse summary
|
zou3519
|
open
|
[
"module: logging",
"triaged",
"oncall: pt2"
] | 0
|
CONTRIBUTOR
|
I was debugging https://github.com/pytorch/pytorch/issues/150714. I wanted to know "what custom operators does deepseek-v3 x sglang use?"
My [tlparse](https://manifold.edge.x2p.facebook.net/v0/read/tree/logs/.tmpE6JVUu/rank_7/index.html?bucketName=tlparse_reports&apiKey=tlparse_reports-key&withPayload=1&timeoutMsec=10000) for this had roughly 100 different frames.
Each frame's compilation metrics do show which custom operators that frame uses, but I wanted to know which custom operators are used across all 100+ frames, without clicking into each frame's compilation metrics.
So I ended up writing a `grep | sort | uniq` pipeline over the torch_trace directory to get this, but it would have been nice if tlparse just showed a summary of it on the main page.
cc @chauhang @penguinwu
| true
|
2,984,197,278
|
[cutlass backend] Add and fix logs, fix types, and make cutlass generator only generate GEMM
|
henrylhtsang
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 7
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150973
Differential Revision: [D72760205](https://our.internmc.facebook.com/intern/diff/D72760205/)
We hardcoded it to only use GEMM anyway.
This also raises a problem with high instantiation levels: as the instantiation level goes higher (here it is 3333), just listing the configs can already take a long time (here it is >3 minutes).
If we know exactly which configs we care about, we should have a way to generate them without calling the generators. But let's see if we need that.
Using this script:
```
import os
os.environ["TORCH_LOGS"] = "inductor"
import torch
import torch._inductor.config
torch._inductor.config.max_autotune = True
torch._inductor.config.force_disable_caches = True
torch._inductor.config.max_autotune_gemm_backends = "Aten,CUTLASS"
# intentionally use no cutlass ops
torch._inductor.config.cuda.cutlass_max_profiling_configs = 0
torch._inductor.config.cuda.cutlass_instantiation_level = "3333"
def main():
M = 128
dtype = torch.float16
A = torch.randn(M, M, device="cuda", dtype=dtype)
B = torch.randn(M, M, device="cuda", dtype=dtype)
compiled_model = torch.compile(torch.mm)
_ = compiled_model(A, B)
print("done")
if __name__ == "__main__":
main()
```
before, with logs:
```
CUTLASS library generated 7 operations in 235.03 seconds
Got cutlass configs: total number of ops: 4753. Filtering took 10.51 seconds
```
after:
```
CUTLASS library generated 1 operations in 207.39 seconds
Got cutlass configs: total number of ops: 4753. Filtering took 9.53 seconds
```
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,984,191,248
|
Escape hatch: way to dynamically add or remove tags from custom operators
|
zou3519
|
open
|
[
"triaged",
"module: custom-operators",
"oncall: pt2",
"module: pt2-dispatcher",
"internal ramp-up task"
] | 1
|
CONTRIBUTOR
|
This is extremely useful during debugging and as a general workaround when you cannot touch the definition of the custom operator or its usage in a model.
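A purely hypothetical sketch of what such an escape hatch could look like; none of the helpers below exist today, and the names are made up only to illustrate the request:
```python
import torch
# Existing, read-only view of an operator's tags:
print(torch.ops.aten.sin.default.tags)
# Hypothetical escape hatch (does not exist today; names and signatures are placeholders):
# torch.library.add_tag("mylib::my_op", torch.Tag.pt2_compliant_tag)
# torch.library.remove_tag("mylib::my_op", torch.Tag.pt2_compliant_tag)
```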
cc @chauhang @penguinwu @bdhirsh @BoyuanFeng
| true
|
2,984,135,073
|
[map] add inductor support by lowering to while_loop
|
ydwu4
|
open
|
[
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150971
* #151034
* #150962
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,984,128,354
|
[CUDA][TF32] Account for TF32 in `test_alexnet_prefix`
|
eqy
|
closed
|
[
"module: cuda",
"open source",
"Merged",
"module: tf32",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
COLLABORATOR
|
Mainly seems to be an issue on Blackwell with e.g.,
```
Mismatched elements: 1 / 746496 (0.0%)
Greatest absolute difference: 0.005461275577545166 at index (2, 32, 11, 9)
```
cc @ptrblck @msaroufim @zasdfgbnm @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,984,125,894
|
c10d/Store: add queues
|
d4l3k
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"release notes: distributed (c10d)"
] | 8
|
MEMBER
|
This adds queue operations as described in https://github.com/pytorch/pytorch/issues/150943.
This works by adding two new operations, `queue_push` and `queue_pop`. The semantics are designed to be blocking with a timeout. Pushing always succeeds since the queue is unbounded. Popping first calls `wait` until the key is ready and then pops the value from the queue.
This implements queues only for HashStore and TCPStore with libuv; FileStore and the legacy backends are not supported.
`wait` and `check` work for queue operations, though `queue_push` will only wake up the first waiter rather than all of them.
This also has a few cleanups to error types/documentation in related code.
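A minimal usage sketch, assuming the Python bindings mirror the operation names above; the exact signatures and value types are an assumption on my part, not taken from this PR:
```python
from datetime import timedelta
import torch.distributed as dist
store = dist.TCPStore("localhost", 29500, is_master=True, use_libuv=True,
                      timeout=timedelta(seconds=10))
store.queue_push("jobs", "task-1")   # always succeeds; the queue is unbounded
store.queue_push("jobs", "task-2")
print(store.queue_pop("jobs"))       # blocks (up to the store timeout) until a value is available
```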
Example trace:
```
[I409 16:51:43.963833529 TCPStoreLibUvBackend.cpp:829] [c10d - trace] validate magic:1015412686 address:[localhost]:55816
[I409 16:51:43.963845838 TCPStoreLibUvBackend.cpp:842] [c10d - trace] ping nonce:2840795 address:[localhost]:55816
[I409 16:51:43.963902914 TCPStoreLibUvBackend.cpp:911] [c10d - trace] add key:init/ val:1 address:[localhost]:55816
[I409 16:51:43.963939389 TCPStoreLibUvBackend.cpp:977] [c10d - trace] wait key_count:1 keys[0]:init/ address:[localhost]:55816
[I409 16:51:43.963974842 TCPStoreLibUvBackend.cpp:893] [c10d - trace] get key:init/ address:[localhost]:55816
[I409 16:51:43.964071909 TCPStoreLibUvBackend.cpp:1121] [c10d - trace] queue_push key:/test_prefix/test_queue_support address:[localhost]:55816
[I409 16:51:43.964080221 TCPStoreLibUvBackend.cpp:940] [c10d - trace] check key_count:1 keys[0]:/test_prefix/foo address:[localhost]:55816
[I409 16:51:43.964108584 TCPStoreLibUvBackend.cpp:1121] [c10d - trace] queue_push key:/test_prefix/foo address:[localhost]:55816
[I409 16:51:43.964123207 TCPStoreLibUvBackend.cpp:1121] [c10d - trace] queue_push key:/test_prefix/foo address:[localhost]:55816
[I409 16:51:43.964128194 TCPStoreLibUvBackend.cpp:940] [c10d - trace] check key_count:1 keys[0]:/test_prefix/foo address:[localhost]:55816
[I409 16:51:43.964156347 TCPStoreLibUvBackend.cpp:977] [c10d - trace] wait key_count:1 keys[0]:/test_prefix/foo address:[localhost]:55816
[I409 16:51:43.964187493 TCPStoreLibUvBackend.cpp:977] [c10d - trace] wait key_count:1 keys[0]:/test_prefix/foo address:[localhost]:55816
[I409 16:51:43.964217709 TCPStoreLibUvBackend.cpp:1133] [c10d - trace] queue_pop key:/test_prefix/foo address:[localhost]:55816
[I409 16:51:43.964324300 TCPStoreLibUvBackend.cpp:977] [c10d - trace] wait key_count:1 keys[0]:/test_prefix/foo address:[localhost]:55816
[I409 16:51:43.964354495 TCPStoreLibUvBackend.cpp:1133] [c10d - trace] queue_pop key:/test_prefix/foo address:[localhost]:55816
[I409 16:51:43.964416299 TCPStoreLibUvBackend.cpp:940] [c10d - trace] check key_count:1 keys[0]:/test_prefix/foo address:[localhost]:55816
[I409 16:51:43.964458733 TCPStoreLibUvBackend.cpp:977] [c10d - trace] wait key_count:1 keys[0]:/test_prefix/non_existant address:[localhost]:55816
[W409 16:51:43.974516585 socket.cpp:460] [c10d] waitForInput: poll for socket SocketImpl(fd=75, addr=[localhost]:55816, remote=[localhost]:46641) returned 0, likely a timeout
[W409 16:51:43.974559169 socket.cpp:485] [c10d] waitForInput: socket SocketImpl(fd=75, addr=[localhost]:55816, remote=[localhost]:46641) timed out after 10ms
[I409 16:51:43.974600451 TCPStoreLibUvBackend.cpp:1101] [c10d - trace] cancel_wait address:[localhost]:55816
```
Test plan:
```
$ pytest test/distributed/test_store.py -k queue -v -s
test/distributed/test_store.py::FileStoreTest::test_queues SKIPPED [0.4351s] (Store does not support queues)
test/distributed/test_store.py::HashStoreTest::test_queues PASSED [0.0009s]
test/distributed/test_store.py::PrefixFileStoreTest::test_queues SKIPPED [0.0006s] (Store does not support queues)
test/distributed/test_store.py::TCPStoreTest::test_queues SKIPPED [0.0012s] (Store does not support queues)
test/distributed/test_store.py::LibUvTCPStoreTest::test_queues PASSED [0.0014s]
test/distributed/test_store.py::PrefixTCPStoreTest::test_queues PASSED [0.0014s]
```
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab
| true
|
2,984,125,231
|
move set_rotate_method to public namespace
|
XilunWu
|
open
|
[
"oncall: distributed",
"ciflow/inductor",
"module: context parallel",
"release notes: context parallel"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150968
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
2,984,082,159
|
[MPS] `where`: silent incorrectness when cond is not contiguous
|
qqaatw
|
closed
|
[
"triaged",
"module: correctness (silent)",
"module: mps"
] | 4
|
COLLABORATOR
|
### 🐛 Describe the bug
```python
import torch
device = "mps"
diff = torch.tensor([[True, True], [True, True]], dtype=torch.bool)
diff = diff.T
target = torch.tensor([[0, 0], [0, 1]])
rcpu = torch.where(diff, target, 0)
diffmps = diff.to(device)
targetmps = target.to(device)
rmps = torch.where(diffmps, targetmps, 0)
print(rcpu)
print(rmps)
```
```
tensor([[0, 0],
[0, 1]])
tensor([[0, 0],
[0, 0]], device='mps:0')
```
### Versions
Nightly
```
PyTorch version: 2.8.0a0+git00c921c
Is debug build: True
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 13.7.1 (arm64)
GCC version: Could not collect
Clang version: 18.1.5
CMake version: version 4.0.0
Libc version: N/A
Python version: 3.9.13 | packaged by conda-forge | (main, May 27 2022, 17:00:33) [Clang 13.0.1 ] (64-bit runtime)
Python platform: macOS-13.7.1-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: False
CPU:
Apple M1 Max
```
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen
| true
|
2,984,024,907
|
c10d/Store: add clone feature
|
d4l3k
|
closed
|
[
"oncall: distributed",
"Merged",
"Reverted",
"ciflow/trunk",
"release notes: distributed (c10d)",
"ci-no-td"
] | 8
|
MEMBER
|
This adds a new `clone()` method to Store which will return a new Store instance that can be used from a different thread.
This is intended to better support multiple threads with stores such as when ProcessGroupNCCL needs a store to do error propagation.
Related issue: https://github.com/pytorch/pytorch/issues/150943
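A minimal usage sketch, assuming `clone()` takes no arguments as described; the details below are an assumption, not taken from this PR:
```python
import threading
from datetime import timedelta
import torch.distributed as dist
store = dist.TCPStore("localhost", 29501, is_master=True, timeout=timedelta(seconds=10))
def worker():
    local = store.clone()          # per-thread handle instead of sharing `store` across threads
    local.set("from_thread", "ok")
t = threading.Thread(target=worker)
t.start()
t.join()
print(store.get("from_thread"))    # b'ok'
```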
Test plan:
```
pytest test/distributed/test_store.py -k PythonStore
pytest test/distributed/test_store.py -k clone
```
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab
| true
|
2,984,009,422
|
[dynamo] unpack sequence lazily for list extend/deque extendleft
|
williamwen42
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: bug fixes",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150965
Fixes https://github.com/pytorch/pytorch/issues/133063.
We were unpacking generators/iterators eagerly when we should be unpacking them one-by-one.
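A minimal sketch of the kind of pattern this affects (illustrative only, not the exact repro from the linked issue): a generator handed to `list.extend` should now be consumed lazily, element by element.
```python
import torch
@torch.compile(backend="eager")
def f(xs):
    out = []
    out.extend(x + 1 for x in xs)  # generator is consumed one element at a time, not all at once
    return out
print(f([torch.ones(2), torch.ones(3)]))
```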
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,983,997,133
|
Add additional MacOS test runners for MPS
|
skotapati
|
open
|
[
"triaged",
"open source",
"topic: not user facing",
"module: mps",
"ciflow/mps"
] | 11
|
COLLABORATOR
|
Add additional Mac MPS test runners, as part of an effort to eventually add all supported Mac configs
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen
| true
|
2,983,973,843
|
[ued] `torch.compile` yields lower latency when compiling transformer blocks only for ComfyUI GGUF Flux
|
StrongerXi
|
open
|
[
"triaged",
"oncall: pt2",
"empathy-day"
] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Repro is a bit hard since ComfyUI is a GUI. I'll look into this.
### Error logs
_No response_
### Versions
main.
cc @chauhang @penguinwu
| true
|
2,983,972,431
|
[map] always turn on dynamo for map
|
ydwu4
|
closed
|
[
"Merged",
"Reverted",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor",
"keep-going",
"ci-no-td"
] | 9
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150971
* #151034
* __->__ #150962
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,983,971,050
|
[ued] `torch.compile` cause more than 2x slow down with HF diffuser GGUF Auraflow
|
StrongerXi
|
open
|
[
"triaged",
"oncall: pt2",
"empathy-day"
] | 3
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Repro:
```python
import time  # needed for the latency measurement below
import torch
import diffusers  # needed for diffusers.quantizers.gguf.utils.GGUFParameter below
from diffusers import (
    AuraFlowPipeline,
    GGUFQuantizationConfig,
    AuraFlowTransformer2DModel,
)
transformer = AuraFlowTransformer2DModel.from_single_file(
    "https://huggingface.co/city96/AuraFlow-v0.3-gguf/blob/main/aura_flow_0.3-Q2_K.gguf",
    quantization_config=GGUFQuantizationConfig(compute_dtype=torch.bfloat16),
    torch_dtype=torch.bfloat16,
)
pipeline = AuraFlowPipeline.from_pretrained(
    "fal/AuraFlow-v0.3",
    torch_dtype=torch.bfloat16,
    transformer=transformer,
).to("cuda")
torch._dynamo.config.nontraceable_tensor_subclasses.add(
    diffusers.quantizers.gguf.utils.GGUFParameter
)
pipeline.transformer = torch.compile(pipeline.transformer, fullgraph=False)
prompt, width, height = "A cute pony", 256, 256
print("warmup")
pipeline(prompt, width=width, height=height, num_inference_steps=50)
print("benchmark")
start = time.time()
for i in range(5):
    pipeline(prompt, width=width, height=height, num_inference_steps=50)
end = time.time()
print(f"avg latency={(end - start) / 5}s")
```
### Error logs
_No response_
### Versions
bb987492302, Python 3.12
cc @chauhang @penguinwu
| true
|
2,983,970,321
|
DISABLED test_parity__foreach_acos_fastpath_inplace_cuda_complex64 (__main__.TestForeachCUDA)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"skipped",
"module: mta"
] | 4
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_parity__foreach_acos_fastpath_inplace_cuda_complex64&suite=TestForeachCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40274150586).
Over the past 3 hours, it has been determined flaky in 12 workflow(s) with 24 failures and 12 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_parity__foreach_acos_fastpath_inplace_cuda_complex64`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `test_foreach.py`
cc @clee2000 @crcrpar @mcarilli @janeyx99
| true
|
2,983,970,118
|
DISABLED test_linalg_solve_triangular_large_cuda_complex128 (__main__.TestLinalgCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"module: linear algebra",
"skipped"
] | 2
|
NONE
|
Platforms: linux, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_linalg_solve_triangular_large_cuda_complex128&suite=TestLinalgCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40265201646).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 8 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_linalg_solve_triangular_large_cuda_complex128`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/test_linalg.py", line 4370, in test_linalg_solve_triangular_large
for A, B, left, upper, uni in gen_inputs(shape, dtype, device, well_conditioned=True):
File "/var/lib/jenkins/workspace/test/test_linalg.py", line 4293, in _gen_shape_inputs_linalg_triangular_solve
PLU = torch.linalg.lu(make_fullrank(*size_a))
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 4708, in make_fullrank_matrices_with_distinct_singular_values
u, _, vh = torch.linalg.svd(t, full_matrices=False)
RuntimeError: cusolver error: CUSOLVER_STATUS_EXECUTION_FAILED, when calling `cusolverDnZgesvdj( handle, jobz, econ, m, n, reinterpret_cast<cuDoubleComplex*>(A), lda, S, reinterpret_cast<cuDoubleComplex*>(U), ldu, reinterpret_cast<cuDoubleComplex*>(V), ldv, reinterpret_cast<cuDoubleComplex*>(work), lwork, info, params)`. If you keep seeing this error, you may use `torch.backends.cuda.preferred_linalg_library()` to try linear algebra operators with other supported backends. See https://pytorch.org/docs/stable/backends.html#torch.backends.cuda.preferred_linalg_library
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_SLOW=1 PYTORCH_TEST_SKIP_FAST=1 python test/test_linalg.py TestLinalgCUDA.test_linalg_solve_triangular_large_cuda_complex128
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_linalg.py`
cc @clee2000 @jianyuh @nikitaved @pearu @mruberry @walterddr @xwang233 @Lezcano
| true
|
2,983,924,193
|
[graph partition] support graphsafe_run_with_rng_state
|
BoyuanFeng
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Prior to this PR, `rng_state` is in `V.graph.graph_inputs` but not in the read_writes of any IRNode. As a result, it is not identified as a partition input:
```python
def partition_0(args):
primals_2, primals_1 = args
...
buf0 = torch.ops.higher_order.graphsafe_run_with_rng_state(torch.ops.aten.rand.default, [4, 4], dtype=torch.float32, device=device(type='cuda', index=1), pin_memory=False, rng_state=fwd_rng_state_0)
# <----- access fwd_rng_state_0 but it's not an input
...
def call(self, args):
primals_1, primals_2, fwd_rng_state_0 = args
...
partition0_args = [primals_2, primals_1]
(buf2, primals_2, primals_1) = self.partitions[0](partition0_args)
# <---- fwd_rng_state_0 is graph_inputs but is not passed to partitions[0]
...
```
This PR fixes this issue.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,983,877,925
|
[profiler] don't disable CUPTI_LAZY_REINIT for cuda >= 12.6
|
davidberard98
|
closed
|
[
"Merged",
"Reverted",
"ciflow/trunk",
"release notes: profiler",
"module: inductor",
"ciflow/inductor",
"ci-no-td"
] | 8
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150957
Credit to @mgmtea who wrote the initial version of this PR: https://github.com/pytorch/pytorch/pull/146604
Context: CUPTI is the NVIDIA library that Kineto uses for collecting GPU-side info during profiling. The intended usage is to register a callback while you want profiling to occur, and then unregister the callback when you want profiling to stop. But a bug would cause crashes if CUPTI callbacks were de-registered when used with cudagraphs. The workaround was to disable "CUPTI_LAZY_REINIT" and "CUPTI_TEARDOWN" in Kineto - which prevents crashes, but can result in slower execution after profiling has occurred and completed.
This bug is believed to be fixed in CUDA >= 12.6, so this PR qualifies that DISABLE_CUPTI_LAZY_REINIT=1 and CUPTI_TEARDOWN=0 should only be applied if CUDA < 12.6. Additionally, `profiler_allow_cudagraph_cupti_lazy_reinit_cuda12()` is added as an escape hatch so that we can add a killswitch in case we see more crashes related to this.
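For context, a minimal sketch of the kind of version gate this describes, assuming the check keys off `torch.version.cuda` (the actual wiring lives in the profiler/Kineto glue and may differ):
```python
import torch

def _needs_cupti_lazy_reinit_workaround() -> bool:
    # Hedged sketch only: decide whether to keep the old workaround
    # (DISABLE_CUPTI_LAZY_REINIT=1, CUPTI_TEARDOWN=0) based on the CUDA version.
    cuda_version = torch.version.cuda  # e.g. "12.4"; None on non-CUDA builds
    if cuda_version is None:
        return False
    major, minor = (int(part) for part in cuda_version.split(".")[:2])
    # The CUPTI de-registration crash with cudagraphs is believed fixed in CUDA >= 12.6.
    return (major, minor) < (12, 6)
```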
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
Differential Revision: [D72745929](https://our.internmc.facebook.com/intern/diff/D72745929)
| true
|
2,983,877,314
|
Gracefully handle optree less than minimum version
|
zou3519
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"topic: binaries"
] | 5
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150956
Summary:
- The minimum version of optree that PyTorch can use is 0.13.0.
- If a user imports torch.utils._cxx_pytree, it will raise an ImportError if optree is not installed or is older than the minimum version.
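A rough sketch of what such an import-time guard can look like (the actual check in `torch.utils._cxx_pytree` may use different helpers and error text):
```python
# Hedged sketch; pre-release version suffixes are not handled here.
MINIMUM_OPTREE_VERSION = (0, 13, 0)

try:
    import optree
except ImportError as exc:
    raise ImportError("torch.utils._cxx_pytree requires optree >= 0.13.0") from exc

installed = tuple(int(part) for part in optree.__version__.split(".")[:3])
if installed < MINIMUM_OPTREE_VERSION:
    raise ImportError(
        f"torch.utils._cxx_pytree requires optree >= 0.13.0, found {optree.__version__}"
    )
```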
Fixes https://github.com/pytorch/pytorch/issues/150889. There are
actually two parts to that issue:
1. dtensor imports torch.utils._cxx_pytree, but the optree installed in
the environment might be too old. Instead, raising ImportError in
torch.utils._cxx_pytree solves the issue.
2. We emit an "optree too low version" warning. I've deleted the
warning in favor of the more explicit ImportError.
Test Plan:
- code reading
| true
|
2,983,837,988
|
Fix issue in optimized_add: make_optimized should be called on non-None args only
|
laithsakka
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: fx",
"fx",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150955
PR https://github.com/pytorch/pytorch/pull/149665 made a change to optimized_add that is causing an issue internally.
In general, make_optimized should only be called with valid new_args; new_args can become None when an element already exists, and we should also break out of the loop in that case.
Note that I also kept the optimized summation only when both lhs and rhs lengths are <= 2.
This is OK because the optimization is based on the inductive property of adding one symbol at a time;
the [2]+[2] case here serves as the base case (I feel we could also remove it).
Note that keeping it for all sizes, while correct, may not be as efficient (we would do N log(N) insertions);
there is no current justification for that.
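A very rough, hedged sketch of the guard being described (the real code operates on sympy terms and calls make_optimized, which are not reproduced here):
```python
def optimized_add_sketch(lhs_args, rhs_args):
    # Fast path only for the small base case that is kept (<= 2 terms on each side).
    if len(lhs_args) > 2 or len(rhs_args) > 2:
        return None  # caller falls back to the regular, unoptimized addition
    new_args = list(lhs_args)
    for term in rhs_args:
        if term in new_args:
            new_args = None  # element already exists: new_args is no longer valid
            break            # break out of the loop instead of continuing
        new_args.append(term)
    if new_args is None:
        return None  # never hand invalid/None args to make_optimized
    return new_args  # in the real code this is what make_optimized would receive
```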
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
2,983,765,030
|
[dynamo][fsdp] Do not consider fsdp modules as specialized
|
anijain2305
|
open
|
[
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
As Title
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,983,754,063
|
Update triton wheel build, setuptools pin
|
pytorchbot
|
closed
|
[
"open source",
"topic: not user facing"
] | 1
|
COLLABORATOR
|
Observing failure in release workflow:
https://github.com/pytorch/pytorch/actions/runs/14346340202/job/40216804374
```
Traceback (most recent call last):
File "/opt/python/cp311-cp311/lib/python3.11/site-packages/wheel/bdist_wheel.py", line 11, in <module>
from setuptools.command.bdist_wheel import bdist_wheel as bdist_wheel
ModuleNotFoundError: No module named 'setuptools.command.bdist_wheel'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/tmp/tmppwpqef_x/triton/python/setup.py", line 27, in <module>
from wheel.bdist_wheel import bdist_wheel
File "/opt/python/cp311-cp311/lib/python3.11/site-packages/wheel/bdist_wheel.py", line 13, in <module>
raise ImportError(ERROR) from exc
ImportError: The 'wheel.bdist_wheel' module has been removed.
Please update your setuptools to v70.1 or later.
If you're explicitly importing 'wheel.bdist_wheel', please update your import to point to 'setuptools.command.bdist_wheel' instead.
```
| true
|
2,983,749,759
|
Add some autograd producer consumer stream sync tests
|
soulitzer
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151079
* __->__ #150952
Thanks @ngimel and @albanD for some ideas on test cases
cc @majing921201 @gujinghui @guangyey
| true
|
2,983,724,403
|
[Feature Request] Implement complex.pow(2) as complex * complex on GPU
|
kheyer
|
open
|
[
"triaged",
"module: complex",
"enhancement"
] | 1
|
NONE
|
### 🚀 The feature, motivation and pitch
To compute powers of complex numbers on GPU, pytorch [currently uses](https://github.com/pytorch/pytorch/blob/d3a2872c676b1c67ee47170422f247d429e22241/aten/src/ATen/native/cuda/PowKernel.cu#L32) the identity `pow(a, b) = exp(log(a) * b)`.
This can lead to numeric issues. For example:
```
x1=torch.tensor([-5 + 0.j], device='cuda:0')
x1.pow(2)
> tensor([25.0000+4.3711e-06j], device='cuda:0') # imaginary component should be zero
```
While eliminating numeric issues for the general case is not possible, we can avoid numeric issues for the specific case of `complex.pow(2)` by implementing it as `complex * complex`.
```
x1=torch.tensor([-5 + 0.j], device='cuda:0')
x1*x1
> tensor([25.-0.j], device='cuda:0') # imaginary component is correct
```
Note that this special case is already implemented for the [cpu kernel](https://github.com/pytorch/pytorch/blob/d3a2872c676b1c67ee47170422f247d429e22241/aten/src/ATen/native/cpu/PowKernel.cpp#L57)
```
x1=torch.tensor([-5 + 0.j], device='cpu')
x1.pow(2)
> tensor([25.-0.j]) # no numeric issues for complex.pow(2) on CPU
```
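Until a kernel-level change lands, a minimal Python-side workaround is to special-case the exponent at the call site (a sketch, not the proposed kernel fix):
```python
import torch

def complex_pow(x: torch.Tensor, exponent):
    # Avoid the exp(log(a) * b) GPU path for the exact-square case.
    if exponent == 2 and x.is_complex():
        return x * x
    return x.pow(exponent)

x1 = torch.tensor([-5 + 0.j], device="cuda:0")
print(complex_pow(x1, 2))  # tensor([25.-0.j], device='cuda:0')
```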
### Alternatives
_No response_
### Additional context
_No response_
cc @ezyang @anjali411 @dylanbespalko @mruberry @nikitaved @amjames
| true
|
2,983,711,701
|
[ONNX] Migrate DORT to use the new exporter
|
justinchuby
|
open
|
[
"open source",
"release notes: onnx"
] | 1
|
COLLABORATOR
|
Fixes #ISSUE_NUMBER
| true
|
2,983,662,018
|
Fix 32-bit indexing overflows in ReducedPrecisionGemV
|
malfet
|
closed
|
[
"module: cpu",
"Merged",
"ciflow/trunk",
"release notes: linalg_frontend"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150949
By changing `lda` type from `int` to ~~`long`~~ `int64_t`
Add a regression test (but probably restrict it to CPUs, or maybe skip float32 testing on GPUs)
Fixes https://github.com/pytorch/pytorch/issues/150637
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| true
|
2,983,617,659
|
Add real_tensor to the FakeTensor in node.meta["val"]
|
yushangdi
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: export"
] | 7
|
CONTRIBUTOR
|
Summary: We need real_tensor on the FakeTensor in node.meta["val"] in order to aot_compile the draft exported programs. Otherwise, we cannot propagate real tensors even when fake_mode.propagate_real_tensors = True.
This also fixes real tensor propagation in `run_decomposition()`.
Test Plan:
```
buck2 run @mode/dev-nosan caffe2/test:test_export -- -r test_dedup_data_dependent_failure
```
Differential Revision: D72732714
| true
|
2,983,614,783
|
Add complex logaddexp2
|
zklaus
|
open
|
[
"module: cpu",
"open source",
"topic: not user facing"
] | 3
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150947
* #150946
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| true
|
2,983,614,589
|
Add complex logaddexp
|
zklaus
|
open
|
[
"open source",
"topic: not user facing"
] | 3
|
COLLABORATOR
|
This aims to fill a gap in the CUDA coverage for complex dtypes; namely, it adds an implementation of the complex `logaddexp` operator.
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150947
* __->__ #150946
| true
|
2,983,589,839
|
Torch nightly install fails.
|
crinard
|
closed
|
[
"needs reproduction",
"module: binaries",
"triaged"
] | 3
|
NONE
|
### 🐛 Describe the bug
Trying to install PyTorch nightly on Blackwell inside a venv using the command:
```
pip3 install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu128 --no-cache-dir --force-reinstall
```
When doing so, I get the following error:
```
pip3 install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu128 --no-cache-dir --force-reinstall
Looking in indexes: https://download.pytorch.org/whl/nightly/cu128
Collecting torch
Downloading https://download.pytorch.org/whl/nightly/cu128/torch-2.8.0.dev20250409%2Bcu128-cp312-cp312-manylinux_2_28_x86_64.whl.metadata (28 kB)
Collecting torchvision
Downloading https://download.pytorch.org/whl/nightly/cu128/torchvision-0.22.0.dev20250409%2Bcu128-cp312-cp312-linux_x86_64.whl.metadata (6.2 kB)
Collecting torchaudio
Downloading https://download.pytorch.org/whl/nightly/cu128/torchaudio-2.6.0.dev20250409%2Bcu128-cp312-cp312-linux_x86_64.whl.metadata (6.6 kB)
Collecting filelock (from torch)
Downloading https://download.pytorch.org/whl/nightly/filelock-3.16.1-py3-none-any.whl (16 kB)
Collecting typing-extensions>=4.10.0 (from torch)
Downloading https://download.pytorch.org/whl/nightly/typing_extensions-4.12.2-py3-none-any.whl (37 kB)
Collecting setuptools (from torch)
Downloading https://download.pytorch.org/whl/nightly/setuptools-70.2.0-py3-none-any.whl (930 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 930.8/930.8 kB 50.4 MB/s eta 0:00:00
Collecting sympy>=1.13.3 (from torch)
Downloading https://download.pytorch.org/whl/nightly/sympy-1.13.3-py3-none-any.whl (6.2 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 6.2/6.2 MB 96.9 MB/s eta 0:00:00
Collecting networkx (from torch)
Downloading https://download.pytorch.org/whl/nightly/networkx-3.4.2-py3-none-any.whl (1.7 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.7/1.7 MB 111.0 MB/s eta 0:00:00
Collecting jinja2 (from torch)
Downloading https://download.pytorch.org/whl/nightly/jinja2-3.1.4-py3-none-any.whl (133 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 133.3/133.3 kB 282.1 MB/s eta 0:00:00
Collecting fsspec (from torch)
Downloading https://download.pytorch.org/whl/nightly/fsspec-2024.10.0-py3-none-any.whl (179 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 179.6/179.6 kB 257.4 MB/s eta 0:00:00
Collecting nvidia-cuda-nvrtc-cu12==12.8.61 (from torch)
Downloading https://download.pytorch.org/whl/nightly/cu128/nvidia_cuda_nvrtc_cu12-12.8.61-py3-none-manylinux2010_x86_64.manylinux_2_12_x86_64.whl.metadata (1.7 kB)
Collecting nvidia-cuda-runtime-cu12==12.8.57 (from torch)
Downloading https://download.pytorch.org/whl/nightly/cu128/nvidia_cuda_runtime_cu12-12.8.57-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl.metadata (1.7 kB)
Collecting nvidia-cuda-cupti-cu12==12.8.57 (from torch)
Downloading https://download.pytorch.org/whl/nightly/cu128/nvidia_cuda_cupti_cu12-12.8.57-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl.metadata (1.7 kB)
Collecting nvidia-cudnn-cu12==9.8.0.87 (from torch)
Downloading https://download.pytorch.org/whl/nightly/cu128/nvidia_cudnn_cu12-9.8.0.87-py3-none-manylinux_2_27_x86_64.whl.metadata (1.8 kB)
Collecting nvidia-cublas-cu12==12.8.3.14 (from torch)
Downloading https://download.pytorch.org/whl/nightly/cu128/nvidia_cublas_cu12-12.8.3.14-py3-none-manylinux_2_27_x86_64.whl.metadata (1.7 kB)
Collecting nvidia-cufft-cu12==11.3.3.41 (from torch)
Downloading https://download.pytorch.org/whl/nightly/cu128/nvidia_cufft_cu12-11.3.3.41-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl.metadata (1.5 kB)
Collecting nvidia-curand-cu12==10.3.9.55 (from torch)
Downloading https://download.pytorch.org/whl/nightly/cu128/nvidia_curand_cu12-10.3.9.55-py3-none-manylinux_2_27_x86_64.whl.metadata (1.5 kB)
Collecting nvidia-cusolver-cu12==11.7.2.55 (from torch)
Downloading https://download.pytorch.org/whl/nightly/cu128/nvidia_cusolver_cu12-11.7.2.55-py3-none-manylinux_2_27_x86_64.whl.metadata (1.6 kB)
Collecting nvidia-cusparse-cu12==12.5.7.53 (from torch)
Downloading https://download.pytorch.org/whl/nightly/cu128/nvidia_cusparse_cu12-12.5.7.53-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl.metadata (1.6 kB)
Collecting nvidia-cusparselt-cu12==0.6.3 (from torch)
Downloading https://download.pytorch.org/whl/nightly/cu128/nvidia_cusparselt_cu12-0.6.3-py3-none-manylinux2014_x86_64.whl.metadata (6.8 kB)
Collecting nvidia-nccl-cu12==2.26.2 (from torch)
Downloading https://download.pytorch.org/whl/nightly/cu128/nvidia_nccl_cu12-2.26.2-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl.metadata (2.0 kB)
Collecting nvidia-nvtx-cu12==12.8.55 (from torch)
Downloading https://download.pytorch.org/whl/nightly/cu128/nvidia_nvtx_cu12-12.8.55-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl.metadata (1.6 kB)
Collecting nvidia-nvjitlink-cu12==12.8.61 (from torch)
Downloading https://download.pytorch.org/whl/nightly/cu128/nvidia_nvjitlink_cu12-12.8.61-py3-none-manylinux2010_x86_64.manylinux_2_12_x86_64.whl.metadata (1.7 kB)
Collecting nvidia-cufile-cu12==1.13.0.11 (from torch)
Downloading https://download.pytorch.org/whl/nightly/cu128/nvidia_cufile_cu12-1.13.0.11-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl.metadata (1.5 kB)
Collecting pytorch-triton==3.3.0+git96316ce5 (from torch)
Downloading https://download.pytorch.org/whl/nightly/pytorch_triton-3.3.0%2Bgit96316ce5-cp312-cp312-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl.metadata (1.4 kB)
ERROR: THESE PACKAGES DO NOT MATCH THE HASHES FROM THE REQUIREMENTS FILE. If you have updated the package versions, please update the hashes. Otherwise, examine the package contents carefully; someone may have tampered with them.
pytorch-triton==3.3.0+git96316ce5 from https://download.pytorch.org/whl/nightly/pytorch_triton-3.3.0%2Bgit96316ce5-cp312-cp312-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl (from torch):
Expected sha256 e80048137d346a548fec7896d130d3cf43f2f07be2a2be7678e478e9985e63bf
Got df3748a2adc73798728fd39459e3c6ec714149cc1a1f4740cf49ba1c121fd9fa
```
I have tried cleaning cache and upgrading pip, no difference.
### Versions
python collect_env.py
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.2 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: Could not collect
CMake version: version 4.0.0
Libc version: glibc-2.39
Python version: 3.12.3 (main, Feb 4 2025, 14:48:35) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-6.11.0-21-generic-x86_64-with-glibc2.39
Is CUDA available: N/A
CUDA runtime version: 12.8.93
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 5090
Nvidia driver version: 570.124.04
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 14
On-line CPU(s) list: 0-13
Vendor ID: GenuineIntel
Model name: Intel(R) Core(TM) Ultra 5 245K
CPU family: 6
Model: 198
Thread(s) per core: 1
Core(s) per socket: 14
Socket(s): 1
Stepping: 2
CPU(s) scaling MHz: 41%
CPU max MHz: 4800.0000
CPU min MHz: 800.0000
BogoMIPS: 1536.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault intel_ppin ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdt_a rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk avx_vnni lam wbnoinvd dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq rdpid bus_lock_detect movdiri movdir64b fsrm md_clear serialize arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 416 KiB (10 instances)
L1i cache: 640 KiB (10 instances)
L2 cache: 26 MiB (8 instances)
L3 cache: 24 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-13
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.2.4
[pip3] optree==0.15.0
[conda] Could not collect
cc @seemethere @malfet @osalpekar @atalman
| true
|
2,983,522,196
|
Update auto-tuning support for _scaled_grouped_mm
|
alexsamardzic
|
open
|
[
"open source",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 5
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150944
1. Enable strided inputs
2. Implement "2d/2d", "3d/2d" and "3d/3d" combinations of inputs
3. Fix non-TMA load variant
4. Replace experimental_device_tensormap_create2d with _experimental_make_tensor_descriptor
5. Fix cases when group size along K dimension is not multiple of block size along K
6. Implement meta registration
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,983,510,450
|
[RFC][TCPStore] advanced store operations (queues, pub/sub)
|
d4l3k
|
open
|
[
"oncall: distributed",
"feature",
"triaged",
"module: c10d"
] | 0
|
MEMBER
|
TCPStore (and the Store abstraction) currently is a very basic KV store and still provides significant value for doing things like distributed barriers, metadata exchange, etc.
Redis -- a very popular KV store -- has a number of additional operations that allow for making some very complex applications. We want to incorporate some of those features into PyTorch so you can get access to those types of very useful primitives in any PyTorch job.
# Operations
## queue_push/queue_pop
These operations are typically used for distributed work queues. For PyTorch specifically, this has quite a few use cases:
* Distributed dataloader work queues
* batch assignment for async training
* batch assignment with variable world size (torchft)
* Distributed inference work queues
* assigning work to a pool of inference workers
* trainer -> inference workers for RL
* Online training w/ live data coming in
* error propagation
Equivalent redis operations:
* https://redis.io/docs/latest/commands/brpop/
* https://redis.io/docs/latest/commands/lpush/
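A hypothetical usage sketch of the proposed queue API; `queue_push`/`queue_pop` follow the names in this RFC and do not exist yet:
```python
import torch.distributed as dist

# Hypothetical API from this RFC -- queue_push/queue_pop are not implemented today.
store = dist.TCPStore("localhost", 29500, world_size=2, is_master=True, wait_for_workers=False)

# Producer side: enqueue a unit of work.
store.queue_push("dataloader_work", b"shard-0003")

# Consumer side: blocking pop, analogous to redis BRPOP.
item = store.queue_pop("dataloader_work")
```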
## publish/subscribe
This would implement a generic pub/sub mechanism. This could be used for a few things:
* model publishing (metadata notifications)
* online training -> inference workers
* RL updating inference workers to a new checkpoint
* Notifying workers when errors occur, to trigger state dumps, etc.
Equivalent redis operations:
* https://redis.io/docs/latest/commands/publish/
* https://redis.io/docs/latest/commands/subscribe/
## clone
Currently we don't have a way to clone a store -- in ProcessGroupNCCL this means we end up sharing an existing store in a somewhat unsafe way and only using non-blocking operations rather than being able to WAIT on a specific key.
A clone command would return a new thread safe object pointing to the same underlying store.
```py
a = PrefixStore("foo/", TCPStore(...))
b = a.clone()
```
# Performance
Redis is designed similarly to TCPStore as a single-threaded in-memory KV store, so our implementation of these primitives will likely have comparable performance. TCPStore has been benchmarked at ~200k QPS, which should be sufficient for most use cases where these operations would be useful. For tensor exchange, it is recommended to use the ProcessGroups to send that data via send/recv.
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab
| true
|
2,983,468,823
|
all_reduce autograd
|
jinyouzhi
|
open
|
[
"oncall: distributed",
"triaged",
"open source",
"release notes: distributed (c10d)"
] | 3
|
CONTRIBUTOR
|
This adds `all_reduce_autograd` to the functional_collectives library and follows #123599 & #123989, which is motivated by https://github.com/pytorch/pytorch/issues/58005#issuecomment-2670227180.
Test plan:
```
pytest test/distributed/test_functional_api.py -k Autograd
```
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
2,983,450,220
|
[BC-breaking] Set NonStrict as default for export_for_training
|
gmagogsfm
|
closed
|
[
"module: bc-breaking",
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: bc breaking",
"release notes: export"
] | 6
|
CONTRIBUTOR
|
Summary:
- Flip default value of `strict` argument from True to False on torch.export.export_for_training API
- All callsites have been updated to provide this argument explicitly to avoid behavior change.
- If you see any breakages, that means you may have a new callsite that was missed; please set `strict=True` explicitly at the callsite to mitigate.
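For example, a callsite can keep the previous behavior by passing the flag explicitly:
```python
import torch

class M(torch.nn.Module):
    def forward(self, x):
        return x + 1

# Pass strict=True explicitly to preserve the pre-change (strict) behavior.
ep = torch.export.export_for_training(M(), (torch.randn(2),), strict=True)
```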
Test Plan: CI
Differential Revision: D72724975
cc @ezyang @gchanan
| true
|
2,983,440,132
|
[ONNX] Improve dynamic_axes to dynamic_shapes conversion in exporter
|
titaiwangms
|
open
|
[
"module: onnx",
"triaged",
"onnx-triaged"
] | 0
|
COLLABORATOR
|
To improve backward compatibility between torch.onnx.export with dynamo=False (TorchScript-based) and dynamo=True (torch.export-based), dynamic_axes needs to be converted to dynamic_shapes.
The ONNX exporter currently has a naive approach to converting dynamic_axes to dynamic_shapes:
https://github.com/pytorch/pytorch/blob/6fb089f2a2eea75a45ac2340f0e68736524e20bf/torch/onnx/_internal/exporter/_dynamic_shapes.py#L20
To avoid the ordering issue (torch.export dynamic_shapes requires None to mark optional inputs), the implementation uses [model.signature](https://github.com/pytorch/pytorch/blob/6fb089f2a2eea75a45ac2340f0e68736524e20bf/torch/onnx/_internal/exporter/_dynamic_shapes.py#L85-L92) to tuplify inputs and dynamic_shapes, which results in lower coverage.
After https://github.com/pytorch/pytorch/pull/150583, there should be room for improvement: we should be able to unflatten dynamic_axes by following the args/kwargs tree structure.
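For illustration, a minimal sketch of the mapping being discussed, using `torch.export.Dim` (the exporter's internal conversion helper may differ):
```python
import torch
from torch.export import Dim

# TorchScript-exporter style specification
dynamic_axes = {"x": {0: "batch"}, "y": {0: "batch"}}

# torch.export style equivalent the converter needs to produce,
# following the model's args/kwargs structure
batch = Dim("batch")
dynamic_shapes = {"x": {0: batch}, "y": {0: batch}}
```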
| true
|
2,983,332,113
|
[ONNX] Cannot export Depth-Anything-v2 (likely `interpolate_pos_encoding_new` function)
|
FabianSchuetze
|
closed
|
[
"module: onnx",
"triaged",
"oncall: pt2"
] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
I'm trying to export the depth anything model and made a few changes to the codebase. One main problem that I identified was the `interpolate_pos_encoding` function, https://github.com/DepthAnything/Depth-Anything-V2/blob/main/depth_anything_v2/dinov2.py#L179 . I have replaced that with a variant that I can trace:
```
def interpolate_pos_encoding_new(
self,
embeddings: torch.Tensor,
orig_img,
) -> torch.Tensor:
"""
Adapted from hf transformers
"""
num_positions = self.pos_embed.shape[1] - 1
pos_embed = self.pos_embed.float()
class_pos_embed = pos_embed[:, 0]
patch_pos_embed = pos_embed[:, 1:]
dim = embeddings.shape[-1]
patch_size = torch.tensor([14, 14]).to(torch.float32)
orig_hw = torch.tensor(orig_img.shape[2:]).to(torch.float32)
new_size = orig_hw // patch_size
sqrt_num_positions = torch.tensor(num_positions**0.5).to(torch.int64)
patch_pos_embed = patch_pos_embed.reshape(
1, sqrt_num_positions, sqrt_num_positions, dim
)
patch_pos_embed = patch_pos_embed.permute(0, 3, 1, 2)
target_dtype = patch_pos_embed.dtype
val = patch_pos_embed.to(torch.float32)
out_size = torch.cat([torch.tensor([1, dim]), new_size]).to(torch.int64)
if torch.onnx.is_in_onnx_export():
patch_pos_embed = (
torch.onnx.ops.symbolic(
"Resize", # Uses onnx::Resize op
[val, torch.tensor([]), torch.tensor([]), out_size],
{},
dtype=val.dtype,
shape=out_size,
version=1,
)
.to(dtype=target_dtype)
.to(orig_img.device)
)
else:
patch_pos_embed = torch.nn.functional.interpolate(
val,
size=(int(new_size[0].item()), int(new_size[1].item())),
mode="bicubic",
antialias=False,
).to(dtype=target_dtype).to(orig_img.device)
patch_pos_embed = patch_pos_embed.permute(0, 2, 3, 1).view(1, -1, dim)
return torch.cat((class_pos_embed.unsqueeze(0), patch_pos_embed), dim=1)
```
When converting the entire model, I get the following error:
```
RuntimeError: [93m
###################################################################################################
WARNING: 1 issue(s) found during export, and it was not able to soundly produce a graph.
Please follow the instructions to fix the errors.
###################################################################################################
1. Data dependent error.
When exporting, we were unable to evaluate the value of `u0`.
This was encountered 8 times.
This occurred at the following user stacktrace:
File /home/fabian/Documents/work/spiegelball/model-conversion/repos/depth-anything-v2/depth_anything_v2/dinov2.py, lineno 335, in _get_intermediate_layers_not_chunked
File /home/fabian/Documents/work/spiegelball/model-conversion/repos/depth-anything-v2/depth_anything_v2/dinov2.py, lineno 279, in prepare_tokens_with_masks
File /home/fabian/Documents/work/spiegelball/model-conversion/repos/depth-anything-v2/depth_anything_v2/dinov2.py, lineno 214, in interpolate_pos_encoding_new
torch.onnx.ops.symbolic(
Locals:
val: Tensor(shape: torch.Size([1, 384, 37, 37]), stride: (526080, 1, 14208, 384), storage_offset: 0)
out_size: Tensor(shape: torch.Size([4]), stride: (1,), storage_offset: 0)
And the following framework stacktrace:
File /home/fabian/.local/lib/python3.12/site-packages/torch/_ops.py, lineno 756, in __call__
File /home/fabian/.local/lib/python3.12/site-packages/torch/_ops.py, lineno 756, in __call__
return self._op(*args, **kwargs)
As a result, it was specialized to a constant (e.g. `1` in the 1st occurrence), and asserts were inserted into the graph.
Please add `torch._check(...)` to the original code to assert this data-dependent assumption.
Please refer to https://docs.google.com/document/d/1kZ_BbB3JnoLbUZleDT6635dHs88ZVYId8jT-yTFgf3A/edit#heading=h.boi2xurpqa0o for more details.
```
First, I'm not sure this is the error responsible for breaking the conversion, because the ONNX conversion report lists many errors, and the message states both `RuntimeError` and `Warning`. The full log is attached. Also, I extracted the function into a self-contained example, and the conversion worked.
[onnx_export_2025-04-09_17-55-28-774528_conversion.md](https://github.com/user-attachments/files/19670388/onnx_export_2025-04-09_17-55-28-774528_conversion.md)
Does anybody have a suggestion about how to resolve the error?
### Error logs
_No response_
### Versions
Collecting environment information...
PyTorch version: 2.8.0.dev20250326+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.2 LTS (x86_64)
GCC version: (Ubuntu 14.2.0-4ubuntu2~24.04) 14.2.0
Clang version: 19.1.1 (1ubuntu1~24.04.2)
CMake version: version 3.28.3
Libc version: glibc-2.39
Python version: 3.12.3 (main, Feb 4 2025, 14:48:35) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-6.11.0-21-generic-x86_64-with-glibc2.39
Is CUDA available: True
CUDA runtime version: 12.0.140
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA RTX 500 Ada Generation Laptop GPU
Nvidia driver version: 550.120
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 22
On-line CPU(s) list: 0-21
Vendor ID: GenuineIntel
Model name: Intel(R) Core(TM) Ultra 7 155H
CPU family: 6
Model: 170
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 1
Stepping: 4
CPU(s) scaling MHz: 27%
CPU max MHz: 4800.0000
CPU min MHz: 400.0000
BogoMIPS: 5990.40
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb intel_ppin ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq rdpid bus_lock_detect movdiri movdir64b fsrm md_clear serialize arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 544 KiB (14 instances)
L1i cache: 896 KiB (14 instances)
L2 cache: 18 MiB (9 instances)
L3 cache: 24 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-21
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS Not affected; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] fast_pytorch_kmeans==0.2.2
[pip3] flake8==7.1.2
[pip3] mypy==1.15.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.26.2
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] onnx==1.17.0
[pip3] onnxruntime==1.20.1
[pip3] onnxscript==0.2.2
[pip3] pytorch-triton==3.3.0+git96316ce5
[pip3] torch==2.8.0.dev20250326+cu126
[pip3] torchprofile==0.0.4
[pip3] torchvision==0.22.0.dev20250326+cu126
[pip3] triton==3.2.0
[pip3] types-flake8-2020==1.8
[pip3] types-flake8-bugbear==23.9.16
[pip3] types-flake8-builtins==2.2
[pip3] types-flake8-docstrings==1.7
[pip3] types-flake8-plugin-utils==1.3
[pip3] types-flake8-rst-docstrings==0.3
[pip3] types-flake8-simplify==0.21
[pip3] types-flake8-typing-imports==1.15
[pip3] types-mypy-extensions==1.0
[conda] Could not collect
cc @chauhang @penguinwu
| true
|
2,983,292,932
|
Turn optree warning into error
|
atalman
|
open
|
[
"release notes: devx"
] | 2
|
CONTRIBUTOR
|
Related to: https://github.com/pytorch/pytorch/issues/150889
| true
|
2,983,280,432
|
update benchmark result due to <1% regression
|
laithsakka
|
open
|
[
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor",
"ci-no-td"
] | 8
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150937
<img width="1503" alt="Screenshot 2025-04-09 at 9 07 13 AM" src="https://github.com/user-attachments/assets/e16f31b0-c5dc-4dd6-8adb-aac11ed988db" />
PR https://hud.pytorch.org/pr/148104 introduced a <1% regression, which is acceptable, but we have to update the benchmark result to avoid flakiness in the future.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,983,270,831
|
[dynamo] Allow guards to be dropped with custom filter functions.
|
zhxchen17
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"module: dynamo",
"ciflow/inductor",
"release notes: dynamo"
] | 10
|
CONTRIBUTOR
|
Summary: A follow up of https://github.com/pytorch/pytorch/pull/150689.
Test Plan: test_dynamo -k test_guard_filter_fn
Differential Revision: D72722322
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,983,262,608
|
[async TP] Fix handling of case where scatter dim = 0 for 2D output tensor
|
danielvegamyhre
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 4
|
CONTRIBUTOR
|
## Summary of changes
1. Change the assertion to a warning when no all-gather or reduce-scatter patterns are found, and remove the corresponding unit test. It seems some valid TP graphs may not have any pattern matches, from what I can see.
2. Fix wrong variable name being referenced (`A_with_scatter_dim_0` instead of just `A`)
3. Simplify reshaping to target output shape (don't need to recalculate output shape)
4. When the "A" tensor is 2D, so we are doing a 2D x 2D scaled mm, we need to fix our handling of the case where the scatter dim is 0. When the scatter dim is 0 for the 2D scaled mm output shape, this is actually dim 1 in the unreduced stacked partial scaled mm outputs, which have a (logical) shape of `(group_size, M//group_size, N)`. To summarize (see the shape sketch after this list):
- Unreduced stacked partials are of shape `(M, N)`
- We view as `(group size, M//group_size, N)` and reduce along the scatter dim (`group_size` / dim 0).
- Reduced output (`reduced_out`) has shape (M//group_size, N)
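A small shape sketch of the reduction described in item 4 (sizes are illustrative assumptions):
```python
import torch

group_size, M, N = 4, 16, 8
# Unreduced stacked partial scaled-mm outputs, shape (M, N)
stacked_partials = torch.randn(M, N)
# View as (group_size, M // group_size, N) and reduce along dim 0,
# which corresponds to scatter dim 0 of the 2D output.
reduced_out = stacked_partials.view(group_size, M // group_size, N).sum(dim=0)
assert reduced_out.shape == (M // group_size, N)
```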
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|