id | title | user | state | labels | comments | author_association | body | is_title |
|---|---|---|---|---|---|---|---|---|
2,814,784,130 | [dynamo][builtin-skipfiles-cleanup] Remove re | anijain2305 | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 1 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #145559
* #145804
* #145828
* __->__ #145826
* #145753
* #145744
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,814,753,085 | [Inductor-CPU] Codegened flex attention kernels don't appear in profiler profiling results | sanchitintel | closed | [
"oncall: cpu inductor"
] | 1 | COLLABORATOR | ### 🐛 Describe the bug
### Problem statement
Codegened Inductor-CPU Flex attention kernels don't appear in profiling results because they're missing calls to `RECORD_FUNCTION`.
### Steps to reproduce
1. Clear Inductor cache (on Linux, it can be done via command line - with `rm -rf /tmp/torchinductor_$(whoami)/*`)
2. Run `TORCH_LOGS="+inductor" TORCH_COMPILE_DEBUG=1 TORCHINDUCTOR_FREEZING=1 python test/inductor/test_flex_attention.py -v` after enabling this Inductor config in the UT file:
```
import torch._inductor.config as inductor_config
inductor_config.profiler_mark_wrapper_call = True
inductor_config.cpp.enable_kernel_profile = True
```
Optionally, also try adding the following line:
```
inductor_config.cpp.descriptive_names = "inductor_node"
```
3. Either add code to the UT to enable profiling with the PyTorch Profiler and then view the profiling results, or verify another way: to be present in profiling results, the generated C++ kernels must include a `RECORD_FUNCTION` call. So, after the UTs finish running, go to `/tmp/torchinductor_$(whoami)/*` and search for `RECORD_FUNCTION` with `grep -nr RECORD_FUNCTION`. Observation: Flex attention kernels don't include calls to `RECORD_FUNCTION`, but they should.
cc @jianan-gu
### Versions
Main branch commit [0f5a68344aab1edb68ee79c5cb2e0051e10f93f4](https://github.com/pytorch/pytorch/commit/0f5a68344aab1edb68ee79c5cb2e0051e10f93f4) dated Jan 27. | true |
2,814,723,579 | Back out "Revert "pickler for GraphModule (#141659)"" | aorenste | closed | [
"oncall: distributed",
"release notes: distributed (fsdp)",
"fx",
"module: inductor",
"module: dynamo",
"ciflow/inductor"
] | 1 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145824
Original commit changeset: 2de53b3b6592
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,814,609,618 | Fix environment deployment spam | huydhn | closed | [
"module: rocm",
"Merged",
"topic: not user facing",
"test-config/default",
"ciflow/rocm"
] | 3 | CONTRIBUTOR | With https://github.com/pytorch-labs/pytorch-gha-infra/pull/598 in place, the environment can now be removed.
Fixes https://github.com/pytorch/pytorch/issues/145704
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd | true |
2,814,547,384 | Remove unneeded CUDA logic from _create_build_env | cyyever | closed | [
"triaged",
"open source",
"Merged",
"topic: not user facing"
] | 3 | COLLABORATOR | Because FindCUDAToolkit.cmake has that logic.
| true |
2,814,541,264 | add test for capture_dynamic_output_shape_ops=True changing expected output between eager and compiled versions | bobrenjc93 | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 10 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145821
Followup from https://github.com/pytorch/pytorch/issues/130290
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,814,538,013 | Simplify handling of max jobs in CMake builds | cyyever | closed | [
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 6 | COLLABORATOR | Fixes #ISSUE_NUMBER
| true |
2,814,532,025 | Replace distutils.version with copied looseversion | cyyever | open | [
"triaged",
"open source",
"Stale",
"topic: not user facing"
] | 3 | COLLABORATOR | distutils is deprecated and was removed from the standard library in Python 3.12.
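For illustration, a minimal pure-Python sketch (not the actual `looseversion` package) of the lenient comparison that `distutils.version.LooseVersion` provided — numeric fragments compare numerically, alphabetic fragments lexically:

```python
import re

def loose_key(version: str):
    # Tokenize into digit runs and letter runs; "1.5.1" -> ['1', '5', '1'].
    # Tag each token so ints and strs are never compared directly
    # (a TypeError pitfall that LooseVersion itself suffered from).
    tokens = re.findall(r"\d+|[a-zA-Z]+", version)
    return [(0, int(t)) if t.isdigit() else (1, t) for t in tokens]

assert loose_key("1.5.1") < loose_key("1.10")  # 5 < 10 numerically, not "5" > "1" lexically
assert loose_key("3.6") < loose_key("3.7")
```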
| true |
2,814,515,296 | Update cutlass pin from 3.6 to 3.7 | henrylhtsang | closed | [
"topic: not user facing"
] | 1 | CONTRIBUTOR | ghstack-source-id: 1211a4456d2f5fe7871fef29a78f7e9f884a7bcf
Pull Request resolved: https://github.com/pytorch/pytorch/pull/145817
Testing right now
| true |
2,814,514,808 | Update cutlass pin from 3.6 to 3.7 | henrylhtsang | closed | [
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 5 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145817
| true |
2,814,512,759 | [DTensor] Add pointwise ops strategy for `aten.minimum` | wz337 | closed | [
"oncall: distributed",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor",
"module: dtensor"
] | 4 | CONTRIBUTOR | Need it for Shampoo optimizer.
https://github.com/facebookresearch/optimizers/blob/9c5700ad5ee81c28dc565c1a49c4b940da28eb8d/matrix_functions.py#L240-L242
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wconstab @d4l3k @c-p-i-o @tianyu-l @XilunWu | true |
2,814,484,553 | [AOTI] Cache treespec_loads calculation | henryhu6 | closed | [
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 8 | CONTRIBUTOR | Summary: Treespec can be reused instead of being recalculated from its string form on every AOTI module call. Using the cached result saves 0.2ms per module call.
Test Plan:
Before:
{F1974751578}
After:
{F1974751667}
Differential Revision: D68749539
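The caching pattern described above can be sketched in plain Python; the treespec parser here is a hypothetical stand-in, not the actual AOTI API:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def load_treespec(spec_str: str):
    # Stand-in for the expensive string -> treespec deserialization;
    # lru_cache ensures each distinct spec string is parsed only once.
    return tuple(spec_str.split(","))

first = load_treespec("leaf,leaf,dict")
second = load_treespec("leaf,leaf,dict")
assert first is second  # same cached object, no re-parsing on later calls
```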
| true |
2,814,460,881 | [BE][Inductor] Simplify `custom_op` tests | malfet | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 6 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145814
Not sure what the motivation was behind repeating the same function over and over again for different backends.
Changes `test_custom_op_[123]` from accepting separate (but identical) implementations for CPU, CUDA, and XPU to taking just `fn` and `fn_meta` args.
Also tests that it is extendable to MPS.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,814,457,290 | test oss ci whether will run cutlass backend test or not | henrylhtsang | closed | [
"fb-exported",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 2 | CONTRIBUTOR | Differential Revision: D68748965
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,814,453,099 | [cutlass backend] check against arch >= 100 | henrylhtsang | closed | [
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 10 | CONTRIBUTOR | Summary:
Want to add a guard against silent fallback to SM90.
GenerateSM100 was just added 3 days ago. https://github.com/NVIDIA/cutlass/blame/main/python/cutlass_library/generator.py#L8896
It should show up in CUTLASS 3.8 (not pinned yet).
Test Plan: ci
Differential Revision: D68748705
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,814,449,989 | [AsyncMM] re-enable and adapt to cutlass 3.6.0 (#144011) | yifuwang | open | [
"oncall: distributed",
"triaged",
"open source",
"fb-exported",
"release notes: distributed (c10d)"
] | 5 | COLLABORATOR | Summary:
cc H-Huang awgu kwen2501 wanchaol fegin fduwjj wz337 wconstab d4l3k c-p-i-o
imported-using-ghimport
Test Plan: Imported from OSS
Differential Revision: D68734003
Pulled By: yifuwang
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | true |
2,814,408,650 | [export] Add tlparse to draft-export | angelayi | closed | [
"Merged",
"ciflow/trunk",
"release notes: export"
] | 4 | CONTRIBUTOR | Dependent on https://github.com/ezyang/tlparse/pull/87/files | true |
2,814,405,636 | [MPS] Extend `torch.mm`/`torch.bmm` to integral types | malfet | closed | [
"Merged",
"topic: improvements",
"release notes: mps",
"ciflow/mps"
] | 4 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145809
By using the `naive_mm` kernel, but making sure that accumulation is done over int32 for smaller int types (and over float for half and bfloat), as well as adding `naive_bmm`, which follows the same pattern.
Also removes a stale restriction on `torch.dot` (which works fine on MacOS-14/15).
This also enables integer op flavors for:
- `addmv`
- `einsum`
- `inner`
- `linalg.multi_dot`
- `matmul`
- `mv`
- `tensordot` | true |
2,814,405,562 | [MPS] Add `op_math_t` | malfet | closed | [
"Merged",
"topic: not user facing",
"ciflow/mps"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #145809
* __->__ #145808
Similar to `at::opmath_t`, to be used for reductions (and int mms) | true |
2,814,379,746 | DISABLED test_distributed_checkpoint_state_dict_type0_cuda (__main__.TestDistributedCheckpointCUDA) | pytorch-bot[bot] | open | [
"oncall: distributed",
"module: flaky-tests",
"skipped"
] | 7 | NONE | Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_distributed_checkpoint_state_dict_type0_cuda&suite=TestDistributedCheckpointCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/36247589446).
Over the past 3 hours, it has been determined flaky in 6 workflow(s) with 7 failures and 6 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_distributed_checkpoint_state_dict_type0_cuda`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 597, in wrapper
self._join_processes(fn)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 837, in _join_processes
self._check_return_codes(elapsed_time)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 886, in _check_return_codes
raise RuntimeError(error)
RuntimeError: Process 1 exited with error code 10 and exception:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 726, in run_test
getattr(self, test_name)()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 599, in wrapper
fn()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3120, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3120, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 474, in instantiated_test
raise rte
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 454, in instantiated_test
result = test(self, **param_kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 199, in wrapper
return func(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/distributed/checkpoint_utils.py", line 152, in wrapper
func(self, *args, **kwargs)
File "/var/lib/jenkins/workspace/test/distributed/fsdp/test_distributed_checkpoint.py", line 71, in test_distributed_checkpoint
state_dict = model.state_dict()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2226, in state_dict
module.state_dict(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2226, in state_dict
module.state_dict(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2226, in state_dict
module.state_dict(
[Previous line repeated 1 more time]
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2232, in state_dict
hook_result = hook(self, destination, prefix, local_metadata)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_state_dict_utils.py", line 715, in _post_state_dict_hook
processed_state_dict = _post_state_dict_hook_fn[fsdp_state._state_dict_type](
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_state_dict_utils.py", line 433, in _local_post_state_dict_hook
sharded_tensor = init_from_local_shards(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/_shard/sharded_tensor/__init__.py", line 407, in init_from_local_shards
return ShardedTensor._init_from_local_shards(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/_shard/sharded_tensor/api.py", line 755, in _init_from_local_shards
dist.all_gather_object(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/c10d_logger.py", line 81, in wrapper
return func(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py", line 3037, in all_gather_object
input_tensor.resize_(max_object_size)
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate more than 1EB memory.
To execute this test, run the following from the base repo dir:
python test/distributed/fsdp/test_distributed_checkpoint.py TestDistributedCheckpointCUDA.test_distributed_checkpoint_state_dict_type0_cuda
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `distributed/fsdp/test_distributed_checkpoint.py`
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @clee2000 @wdvr | true |
2,814,379,707 | DISABLED test_distributed_checkpoint_state_dict_type0_cuda (__main__.TestDistributedCheckpointCUDA) | pytorch-bot[bot] | closed | [
"oncall: distributed",
"module: flaky-tests",
"skipped"
] | 1 | NONE | Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_distributed_checkpoint_state_dict_type0_cuda&suite=TestDistributedCheckpointCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/36250725055).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 10 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_distributed_checkpoint_state_dict_type0_cuda`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 597, in wrapper
self._join_processes(fn)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 837, in _join_processes
self._check_return_codes(elapsed_time)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 891, in _check_return_codes
raise RuntimeError(
RuntimeError: Process 0 terminated or timed out after 300.09524369239807 seconds
```
</details>
Test file path: `distributed/fsdp/test_distributed_checkpoint.py`
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @clee2000 @wdvr | true |
2,814,379,657 | DISABLED test_return_advanced_contextmanager (__main__.ContextlibContextManagerTests) | pytorch-bot[bot] | closed | [
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 5 | NONE | Platforms: linux, rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_return_advanced_contextmanager&suite=ContextlibContextManagerTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/36249687168).
Over the past 3 hours, it has been determined flaky in 4 workflow(s) with 8 failures and 4 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_return_advanced_contextmanager`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/dynamo/test_ctx_manager.py", line 2400, in test_return_advanced_contextmanager
with self.assertRaises(InternalTorchDynamoError):
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 226, in __exit__
self._raiseFailure("{} not raised".format(exc_name))
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 163, in _raiseFailure
raise self.test_case.failureException(msg)
AssertionError: InternalTorchDynamoError not raised
To execute this test, run the following from the base repo dir:
python test/dynamo/test_ctx_manager.py ContextlibContextManagerTests.test_return_advanced_contextmanager
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `dynamo/test_ctx_manager.py`
cc @clee2000 @wdvr @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,814,372,438 | [dynamo][builtin-skipfiles-cleanup] Remove random | anijain2305 | closed | [
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 1 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145804
* #145876
* #145958
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,814,348,767 | Revert D68278174 | desertfire | closed | [
"fb-exported",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 9 | CONTRIBUTOR | Summary:
This diff reverts D68278174
New failures like T213574743. Need more time to figure things out.
Test Plan: NA
Differential Revision: D68744749
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @chauhang @aakhundov | true |
2,814,339,003 | [export] allow bit shift builtin ops | ColinPeppler | closed | [
"Merged",
"ciflow/trunk",
"ciflow/inductor",
"release notes: export"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #145898
* __->__ #145802
| true |
2,814,337,482 | torch.linalg.eigh fails on CPU | atalman | closed | [
"triaged",
"module: regression",
"module: third_party",
"module: linear algebra"
] | 7 | CONTRIBUTOR | ### 🐛 Describe the bug
Based on this issue https://github.com/pytorch/pytorch/issues/94772 we see a failure on CPU since the PyTorch 2.4.0 release.
Minimum test; requires [fc_layer_tensor.pt.zip](https://github.com/user-attachments/files/18566154/fc_layer_tensor.pt.zip):
```python
import torch
t = torch.load('fc_layer_tensor.pt', weights_only=True, map_location='cpu').flatten()
torch.linalg.eigh(torch.outer(t, t))
```
Output:
```
python3 test5.py
/home/ubuntu/test5.py:12: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
fc_layer.weight.grad = loaded_tensor = torch.load('fc_layer_tensor.pt')
Intel oneMKL ERROR: Parameter 8 was incorrect on entry to SSYEVD.
Traceback (most recent call last):
File "/home/ubuntu/test5.py", line 15, in <module>
evals_adagrad, evecs_adagrad = torch.linalg.eigh(precond_adagrad.cpu())
RuntimeError: false INTERNAL ASSERT FAILED at "../aten/src/ATen/native/BatchLinearAlgebra.cpp":1538, please report a bug to PyTorch. linalg.eigh: Argument 8 has illegal value. Most certainly there is a bug in the implementation calling the backend library.
```
Full test
```python
import torch
from torchvision import datasets, transforms
SEED = 123
torch.manual_seed(SEED)
torch.cuda.manual_seed(SEED)
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
DEVICE = "cuda" if torch.cuda.is_available() else "cpu"
batch_size = 512
num_classes = 10
num_features = 28**2
loss_fn = torch.nn.CrossEntropyLoss()
tforms = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))])
dataset = datasets.MNIST("~/data/", download=False, train=True, transform=tforms)
train_loader = torch.utils.data.DataLoader(dataset, batch_size=batch_size, shuffle=False)
fc_layer = torch.nn.Linear(in_features=num_features, out_features=num_classes, bias=False).to(DEVICE)
for batch_ix, (inputs, targets) in enumerate(train_loader):
inputs, targets = inputs.to(DEVICE), targets.to(DEVICE)
fc_layer.weight.grad = None
logits = fc_layer(inputs.view(inputs.shape[0], -1))
loss = loss_fn(logits, targets)
loss.backward()
vec_grad = torch.flatten(fc_layer.weight.grad)
precond_adagrad = torch.outer(vec_grad, vec_grad)
# CPU computation works fine
evals_adagrad, evecs_adagrad = torch.linalg.eigh(precond_adagrad.cpu())
# But eigh computation on GPU fails
evals_adagrad, evecs_adagrad = torch.linalg.eigh(precond_adagrad)
```
### Versions
2.7.0
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @malfet @jianyuh @nikitaved @pearu @mruberry @walterddr @xwang233 @Lezcano | true |
2,814,337,060 | config: Don't spam warnings about reference type configs | c00w | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 18 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145800
Summary:
https://github.com/pytorch/pytorch/issues/145755
The is_dynamic check for reference types was subtly broken, causing log spam
after it was accessed
Added an explicit type for is_default for reference types to make sure this
behaviour is correct | true |
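A plain-Python sketch of why "is this config still the default?" is subtle for reference (mutable) types, and why an explicit flag helps — the names here are illustrative, not the actual torch config internals:

```python
DEFAULT = []            # a mutable, reference-type default
value = DEFAULT         # the user never assigned this config...
value.append("x")       # ...but something mutated it in place

# Both naive checks still report "unchanged", since it is the same object:
assert value == DEFAULT
assert value is DEFAULT

# Tracking an explicit flag set only on assignment sidesteps the ambiguity:
class ConfigEntry:
    def __init__(self, default):
        self.value = default
        self.is_default = True   # flipped only by an explicit set()

    def set(self, new_value):
        self.value = new_value
        self.is_default = False

entry = ConfigEntry(default=[])
assert entry.is_default
entry.set(["custom"])
assert not entry.is_default
```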
2,814,302,820 | inductor: Test for crash when nn.module has a bad getattr call. | c00w | closed | [
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 2 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145799
Note that I can't get this working, so I'd love some advice on how to write
this test.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,814,298,140 | [will-not-merge] tuning | yifuwang | open | [
"oncall: distributed",
"open source",
"Stale",
"release notes: distributed (c10d)"
] | 2 | COLLABORATOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* (to be filled)
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | true |
2,814,298,074 | [Async-TP] improve algo selection | yifuwang | open | [
"oncall: distributed",
"open source",
"Stale"
] | 3 | COLLABORATOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #145798
* __->__ #145797
* #145796
* #145795
* #145794
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | true |
2,814,297,998 | [Async-TP] _pipelined_multi_all_gather_and_consume reduce overhead | yifuwang | open | [
"oncall: distributed",
"open source",
"Stale"
] | 3 | COLLABORATOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #145798
* #145797
* __->__ #145796
* #145795
* #145794
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | true |
2,814,297,639 | [AsyncMM] preliminary tuning | yifuwang | open | [
"oncall: distributed",
"open source",
"Stale",
"release notes: distributed (c10d)"
] | 2 | COLLABORATOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #145798
* #145797
* #145796
* __->__ #145795
* #145794
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | true |
2,814,297,550 | [Async-TP] Port _fused_all_gather_matmul_native to cpp to reduce launching overhead | yifuwang | open | [
"oncall: distributed",
"open source",
"Stale",
"release notes: distributed (c10d)"
] | 2 | COLLABORATOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #145798
* #145797
* #145796
* #145795
* __->__ #145794
`_fused_all_gather_matmul_native` schedules multiple tasks (e.g., kernel, copy engine transfers, and stream_write_value32_) onto the GPU. Previously, `fused_all_gather_matmul_native` was implemented in Python, and issuing most of these tasks incurred dispatcher overhead. When the problem size is small, the CPU overhead can exceed the GPU’s execution time. While this may be acceptable in workloads where the CPU runs ahead of the GPU, it still isn’t ideal.
This PR reduces CPU overhead by porting `_fused_all_gather_matmul_native` to C++. Specifically, it eliminates dispatcher overhead for:
- `aten.split` (calling `aten.narrow` × `world_size` times)
- `symm_mem::stream_write_value32_` × `world_size` times
<img width="842" alt="image" src="https://github.com/user-attachments/assets/176ebc89-a2e1-4c07-b340-c2d4422def09" />
<img width="455" alt="image" src="https://github.com/user-attachments/assets/3768f0c1-876f-4c66-bd22-89aa80d0889c" />
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | true |
2,814,287,859 | [CI][CUDA][cuSPARSELt] cusparselt 0.6.3 and cu121 related cleanups | nWEIdia | closed | [
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 7 | COLLABORATOR | Make the CI cuSPARSELt installation consistent with the nightly binary.
Remove cu121-related Docker build jobs and inductor runs; update test failures relating to cu121.
cc @atalman @malfet @ptrblck @eqy @tinglvv @Skylion007 @huydhn
Retry of https://github.com/pytorch/pytorch/pull/145696 | true |
2,814,269,328 | Add CUDA 12.8 manywheel x86 Builds to Binaries Matrix | tinglvv | closed | [
"open source",
"Merged",
"ciflow/binaries",
"ciflow/trunk",
"topic: not user facing"
] | 18 | COLLABORATOR | https://github.com/pytorch/pytorch/issues/145570
Adding cuda 12.8.0 x86 builds first
TODO: resolve libtorch build failure and add build in https://github.com/pytorch/pytorch/pull/146084
cc @atalman @malfet @ptrblck @nWEIdia | true |
2,814,258,366 | cpp_wrapper: enable in aarch64 and x86 nightly dashboard performance runs | benjaminglass1 | closed | [
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 4 | COLLABORATOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145791
Adds `cpp_wrapper` mode to the nightly inductor benchmark runs, as well as optionally for manually triggered runs. This is justified by `aot_inductor` already being in those runs.
Additionally, re-enables `aot_inductor` in the nightly aarch64 runs. It was disabled 5 months ago to deal with a performance instability, which has likely gone away at this point. | true |
2,814,233,818 | Move ROCm MI300 jobs to unstable to make CI green | ZainRizvi | closed | [
"module: rocm",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/unstable",
"ciflow/rocm"
] | 6 | CONTRIBUTOR | This is a temporary change to reduce intermittent tests failures. Jobs can be moved back once those machines get better runner isolation.
This also sneaks in a small fix to all the rocm job's build step to be run on Linux Foundation runners (the get-label-type dependency). The inductor-rocm-mi300 workflow already had it, but it was missing in the rocm-mi300 workflow.
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd | true |
2,814,221,537 | Add CUDA 12.8 libtorch image | tinglvv | closed | [
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 5 | COLLABORATOR | https://github.com/pytorch/pytorch/issues/145570
Builds the 12.8 libtorch Docker image and deprecates 12.1 meanwhile.
cc @atalman @ptrblck @nWEIdia
| true |
2,814,210,855 | [MPS] masked_fill_ tiling for large tensors | Isalia20 | closed | [
"open source",
"release notes: mps"
] | 2 | COLLABORATOR | Fixes #143477 | true |
2,814,205,072 | Add ignorable commits on run_test.py to git blame ignore | janeyx99 | closed | [
"Merged",
"topic: not user facing"
] | 4 | CONTRIBUTOR | Chanced upon it while searching through cpp_extension related code. | true |
2,814,167,997 | [dynamo] Properly branch on an unspecialized NN module | StrongerXi | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145786
User-defined NN modules might have their own `__len__` or `__bool__`
methods which Dynamo needs to trace through, so that side effects and/or
reads of buffered writes are properly handled.
This patch removes the special `UnspecializedNNModuleVariable` branch in
Dynamo's branch handling, and lets these cases fall into the
`UserDefinedObjectVariable` branch, which handles the aforementioned
cases correctly.
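As plain-Python background for why branching must go through the user-defined method (this is an illustrative sketch, not Dynamo code): a truthiness test like `while obj:` invokes `__bool__`, and that call can carry side effects a tracer must replay, not skip.

```python
# Plain-Python illustration (no Dynamo involved): branching on an object
# invokes its user-defined __bool__, which here mutates state. A tracer
# that special-cased the object instead of tracing __bool__ would drop
# these side effects.

class Counter:
    def __init__(self):
        self.checks = 0

    def __bool__(self):
        self.checks += 1  # side effect on every truthiness test
        return self.checks < 3

c = Counter()
taken = []
while c:  # each loop-condition check calls Counter.__bool__
    taken.append(c.checks)

print(taken)     # [1, 2]
print(c.checks)  # 3 -- the condition ran three times
```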
Fixes #145284.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,814,163,331 | export serde turn hop's tuple arg into list | ydwu4 | open | [
"oncall: pt2",
"export-triage-review",
"oncall: export"
] | 2 | CONTRIBUTOR | ### 🐛 Describe the bug
See below repro:
```
import io
import torch
class Simple(torch.nn.Module):
def forward(self, ci, a, b):
def cond_fn(i, x, y):
return i > 0
def body_fn(i, x, y):
return i - 1, x + y, y - x
return torch._higher_order_ops.while_loop(cond_fn, body_fn, (ci, a, b))
example_inputs = (
torch.tensor(1),
torch.randn(10, 20),
torch.randn(10, 20),
)
ep = torch.export.export(Simple(), example_inputs)
print(ep)
buffer = io.BytesIO()
torch.export.save(ep, buffer)
buffer.seek(0)
loaded_ep = torch.export.load(buffer)
print(loaded_ep)
```
Before:
```
while_loop = torch.ops.higher_order.while_loop(while_loop_cond_graph_0, while_loop_body_graph_0, (ci, a, b), ()); while_loop_cond_graph_0 = while_loop_body_graph_0 = ci = a = b = None
```
after serde:
```
while_loop = torch.ops.higher_order.while_loop(while_loop_cond_graph_0, while_loop_body_graph_0, [ci, a, b], []); while_loop_cond_graph_0 = while_loop_body_graph_0 = ci = a = b = None
```
### Versions
on master
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo | true |
2,814,153,916 | Run AOT custom extensions tests on Windows | janeyx99 | closed | [
"ciflow/trunk",
"topic: not user facing"
] | 2 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145784
* #145764
| true |
2,814,132,243 | [BE]: Update typing of OrderedSet ancestor | Skylion007 | closed | [
"open source",
"better-engineering",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3 | COLLABORATOR | Now that we are on python 3.9 minimum version we can properly use Generics in the superclass | true |
2,814,127,167 | Behavior Difference with Flex Attention + Sequence Packing | zaptrem | open | [
"triaged",
"oncall: pt2",
"module: higher order operators",
"module: pt2-dispatcher",
"module: flex attention"
] | 1 | NONE | ### 🐛 Describe the bug
### Issue:
I'm observing odd attention map patterns in my flex attention sequence packing runs that aren't there on (what should be) identical F.SDPA runs:
F.SDPA maps at step 24k

Flex Attention maps at step 24k

I created these maps with this function: https://gist.github.com/zaptrem/932adb082755574409e0084e8647757c
It calculates attention manually (so we can see the attention maps), compares the final result to that of Flex Attention to verify the maps are accurate, then plots them.
Here is the image it outputs while running it on my model:

And here is the text it prints:
https://gist.github.com/zaptrem/cd8147ae21b569287dfa841eba519148
### Background:
- I'm training a causal transformer. In order to enable efficient training on variable sequence lengths I decided to add sequence packing with flex attention + BlockMasks.
- I trained two models for comparison: F.SDPA with batch size 44 where each sequence is 512 long, and Flex Attention batch size 22 where each sequence is length 1024 (two 512-long sequences packed together). Causal+document based BlockMasks are applied as specified here: https://gist.github.com/zaptrem/ddf6fb358104dda3866597ba1c34fa40
The losses are similar (flex attention is consistently slightly lower), but the attention masks are not, and though it's hard to tell this early in training, I believe the outputs from the flex attention model are worse.
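For reference, the causal + document masking described above boils down to a per-position predicate. The sketch below is my own pure-Python restatement of that predicate (it is not taken from the linked gist, and `doc_of`/`SEQ_PER_DOC` are made-up helpers), so the logic can be sanity-checked without flex attention or a GPU:

```python
# Hypothetical sketch of a causal + per-document mask predicate of the
# kind flex_attention's block masks encode. Two documents of length 4
# are packed into one sequence of length 8.

SEQ_PER_DOC = 4  # assumed fixed packing for this illustration

def doc_of(idx):
    # maps a position in the packed sequence to its document id
    return idx // SEQ_PER_DOC

def causal_doc_mask(b, h, q_idx, kv_idx):
    # attend only within the same packed document, and only causally
    return doc_of(q_idx) == doc_of(kv_idx) and q_idx >= kv_idx

# position 5 (document 1) may see positions 4 and 5, nothing from doc 0
allowed = [kv for kv in range(8) if causal_doc_mask(0, 0, 5, kv)]
print(allowed)  # [4, 5]
```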
### Versions
poetry run python collect_env.py
Collecting environment information...
PyTorch version: 2.7.0.dev20250127+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.11.11 (main, Dec 4 2024, 08:55:07) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.5.0-1023-aws-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100 80GB HBM3
GPU 1: NVIDIA H100 80GB HBM3
GPU 2: NVIDIA H100 80GB HBM3
GPU 3: NVIDIA H100 80GB HBM3
GPU 4: NVIDIA H100 80GB HBM3
GPU 5: NVIDIA H100 80GB HBM3
GPU 6: NVIDIA H100 80GB HBM3
GPU 7: NVIDIA H100 80GB HBM3
Nvidia driver version: 535.183.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 104
On-line CPU(s) list: 0-103
Vendor ID: GenuineIntel
Model name: Intel Xeon Processor (SapphireRapids)
CPU family: 6
Model: 143
Thread(s) per core: 1
Core(s) per socket: 104
Socket(s): 1
Stepping: 4
BogoMIPS: 5600.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx_vnni avx512_bf16 wbnoinvd arat vnmi avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b fsrm md_clear serialize tsxldtrk avx512_fp16 arch_capabilities
Virtualization: VT-x
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 3.3 MiB (104 instances)
L1i cache: 3.3 MiB (104 instances)
L2 cache: 416 MiB (104 instances)
L3 cache: 16 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-103
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Unknown: No mitigations
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI Syscall hardening, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] audio-diffusion-pytorch==0.1.3
[pip3] ema-pytorch==0.6.5
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] onnxruntime==1.18.1
[pip3] pytorch-lightning==2.4.0
[pip3] pytorch-metric-learning==2.5.0
[pip3] pytorch-triton==3.2.0+gitb2684bf3
[pip3] torch==2.7.0.dev20250127+cu124
[pip3] torch-audiomentations==0.11.1
[pip3] torch-optimi==0.2.1
[pip3] torch-pitch-shift==1.2.4
[pip3] torch-stoi==0.2.3
[pip3] torchaudio==2.6.0.dev20250127+cu124
[pip3] torchcde==0.2.5
[pip3] torchcfm==1.0.5
[pip3] torchdiffeq==0.2.4
[pip3] torchdyn==1.0.6
[pip3] torchmetrics==1.4.0.post0
[pip3] torchsde==0.2.6
[pip3] torchvision==0.22.0.dev20250127+cu124
[pip3] triton==3.1.0
[conda] Could not collect
cc @chauhang @penguinwu @zou3519 @ydwu4 @bdhirsh @yf225 @Chillee @drisspg @yanboliang @BoyuanFeng | true |
2,814,124,686 | [dynamo] Properly prune dead input cell object | StrongerXi | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 6 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145781
This patch models input cell objects as "newly created" rather than
"pre-existing" Python objects (see the added documentation for why this
captures the semantics more accurately).
This enables the `SideEffects.prune_dead_object_new` algorithm to prune
away writes to input cell objects which are no longer relevant; this
didn't happen prior to this patch because we modelled them as
pre-existing objects, which forces us to codegen their attribute
mutations.
Fixes #145564.
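As plain-CPython background on the object being modeled here (a sketch for orientation, not Dynamo's implementation): a "cell object" is what backs a variable that an inner function closes over, reachable via `__closure__`.

```python
# What a CPython "cell object" is: the shared box behind a closed-over
# variable. Writes to such cells are the kind of mutation that
# prune_dead_object_new can drop once nothing can observe them.

def make_counter():
    count = 0  # becomes a cell because bump() closes over it

    def bump():
        nonlocal count
        count += 1
        return count

    return bump

bump = make_counter()
bump()
bump()

cell = bump.__closure__[0]   # the cell object backing `count`
print(type(cell).__name__)   # cell
print(cell.cell_contents)    # 2
```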
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,814,107,379 | Update CUDNN frontend submodule to 1.10.0 | Skylion007 | closed | [
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"release notes: cudnn"
] | 3 | COLLABORATOR | Update to CUDNN 1.10. Most of this is release is about supporting some new APIs needed for Blackwell integration and new features in the corresponding CUDNN version | true |
2,814,104,725 | [CUDNN][CUDNN V8 API] Allow user-specified CUDNN V8 API benchmarking technique | eqy | closed | [
"module: cudnn",
"triaged",
"open source",
"Stale",
"topic: not user facing"
] | 2 | COLLABORATOR | Useful for debugging apparent "regressions" when using cuDNN autotuning ("benchmarking")
cc @csarofeen @ptrblck @xwang233 | true |
2,814,095,969 | NJT support for cat() on the ragged dim | jbschlosser | open | [
"topic: not user facing"
] | 1 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #149892
* __->__ #145778
Requested [here](https://github.com/pytorch/pytorch/issues/118107#issuecomment-2615705795).
There's still a fair amount of work left. TODO:
* Fix the backwards pass (need NJT-specific derivative formula, possibly `narrow()` on the ragged dim)
* Fix data-dependency errors in forward + torch.compile() due to `unbind()` usage | true |
2,814,076,536 | relax assertion to warning for unbacked binding names | avikchaudhuri | closed | [
"fb-exported",
"Merged",
"ciflow/trunk",
"ciflow/inductor",
"release notes: export"
] | 4 | CONTRIBUTOR | Summary:
Quick fix following up on https://github.com/pytorch/pytorch/pull/144894 to unblock internal tests.
Will keep investigating a more principled fix.
Test Plan: Failures in T213563826 now pass
Differential Revision: D68731710
| true |
2,814,076,483 | Update to NCCL 2.25.1 for 12.8 | tinglvv | closed | [
"triaged",
"open source",
"topic: not user facing"
] | 4 | COLLABORATOR | https://github.com/pytorch/pytorch/issues/145570
follow up for https://github.com/pytorch/pytorch/pull/145567/files | true |
2,814,052,976 | Error instead of silent specializing when an unbacked dimension has a value range cardinality of one | bobrenjc93 | closed | [
"triaged",
"oncall: pt2",
"module: dynamic shapes"
] | 0 | CONTRIBUTOR | ```
import torch
torch._dynamo.config.automatic_dynamic_local_pgo = False
@torch.compile()
def fn(x):
return torch.cat([x, torch.ones(5, 5)])
x = torch.ones(5, 5)
torch._dynamo.decorators.mark_unbacked(x, 0)
torch._dynamo.decorators.mark_unbacked(x, 1)
fn(x)
```
Results in
```
L_x_: "f32[u0, 5][5, 1]cpu"
```
even though we explicitly marked x.size()[1] as unbacked.
We should error similar to the constraint tests in mark_dynamic
cc @chauhang @penguinwu @ezyang | true |
2,814,024,160 | [POC] flat_apply HOP | zou3519 | closed | [
"Stale",
"release notes: fx",
"fx"
] | 3 | CONTRIBUTOR | [no-ci]
Fixes #ISSUE_NUMBER
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv | true |
2,813,990,519 | Dynamo performance test benchmarks reuse model state between eager and compiled | xmfan | open | [
"triaged",
"module: benchmark",
"oncall: pt2",
"module: dynamo"
] | 0 | MEMBER | ### 🐛 Describe the bug
See `model`:
https://github.com/pytorch/pytorch/blob/64cd81712ddf867d1c4dc46ba4554d40d6d7d610/benchmarks/dynamo/common.py#L3431-L3470
Later, it is also used by the `speedup_experiment`. This can cause issues with stateful models like convit_base: https://github.com/huggingface/pytorch-image-models/blob/d81da93c1640a504977b0ee494791e5c634ec63c/timm/models/convit.py#L67-L70, or DDP/FSDP wrappers.
If we deepcopy the model, a few cudagraphs perf benchmarks start to fail e.g. convit_base, llama, cm3leon_generate
`RuntimeError: Error: accessing tensor output of CUDAGraphs that has been overwritten by a subsequent run.`
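The hazard described above can be shown in pure Python (a made-up `CachedModel` standing in for modules like convit_base that cache buffers on first use; no torch involved): the eager measurement warms state that the "compiled" measurement then silently observes, and per-run deepcopies restore isolation.

```python
import copy

class CachedModel:
    def __init__(self):
        self.cache = None

    def forward(self, n):
        if self.cache is None:
            self.cache = n          # first call "warms" the cache
        return self.cache

shared = CachedModel()
print(shared.forward(10))           # 10 -- eager run warms the cache
print(shared.forward(99))           # 10 -- second run sees stale state

template = CachedModel()
eager_model = copy.deepcopy(template)
compiled_model = copy.deepcopy(template)
print(eager_model.forward(10))      # 10
print(compiled_model.forward(99))   # 99 -- independent state per run
```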
### Versions
main
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames | true |
2,813,969,497 | Implement serializable getattr support for tensor subclasses | tugsbayasgalan | closed | [
"Merged",
"ciflow/trunk",
"module: dynamo",
"ciflow/inductor",
"release notes: export"
] | 8 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145772
builtins.getattr is not serializable, so we replace it with a custom op that has more refined schema.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
Differential Revision: [D68899421](https://our.internmc.facebook.com/intern/diff/D68899421) | true |
2,813,967,002 | Revert D68232274 | avikchaudhuri | closed | [
"fb-exported",
"ciflow/inductor",
"release notes: export"
] | 4 | CONTRIBUTOR | Summary:
This diff reverts D68232274
It broke multiple tests - T213563826
Test Plan: NA
Differential Revision: D68725051
| true |
2,813,956,992 | [torch][distributed] re-merge NCCLComm::split impl | suo | closed | [
"oncall: distributed",
"fb-exported",
"Stale",
"release notes: distributed (c10d)"
] | 3 | MEMBER | Summary:
These were originally forked between fbcode and oss. This has led to some drift, as bugfixes related to ncclCommSplit + non-blocking never made it to internal. Now we're hitting these bugs in monarch so it would be nice to fix.
Just upstream the forked code and delete the fb-only version.
Test Plan: Unit tests
Differential Revision: D68727854
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | true |
2,813,952,858 | Revert D68232274 | avikchaudhuri | closed | [
"fb-exported",
"ciflow/inductor",
"release notes: export"
] | 3 | CONTRIBUTOR | Summary:
This diff reverts D68232274
It broke multiple tests - T213563826
Test Plan: NA
Differential Revision: D68725051
| true |
2,813,950,210 | Log info for AOTAutogradCache bypasses instead of warning | jamesjwu | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 4 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145768
Fixes #145767
FxGraphCache also logs to info instead of warning so lets do that
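The effect of demoting the message can be sketched with the stdlib `logging` module (the logger name and messages below are made up for illustration): an INFO record passes the logger but is filtered by a WARNING-level handler, so the bypass line disappears from default output.

```python
import io
import logging

log = logging.getLogger("autograd_cache_demo")
log.setLevel(logging.DEBUG)
log.propagate = False

handler = logging.StreamHandler(io.StringIO())
handler.setLevel(logging.WARNING)   # default console: warnings and above
log.addHandler(handler)

log.info("Bypassing autograd cache due to: functional tensor")  # filtered
log.warning("something actually actionable")                    # emitted

printed = handler.stream.getvalue()
print("Bypassing" in printed)   # False
print("actionable" in printed)  # True
```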
| true |
2,813,935,628 | Spammy Aot autograd cache warning | eellison | closed | [] | 0 | CONTRIBUTOR | ### 🐛 Describe the bug
```
with-proxy python benchmarks/dynamo/torchbench.py --backend inductor --device cuda --only \
  basic_gnn_edgecnn --amp --cold-start-latency --print-compilation-time --training --performance 2>&1
```
Gives
```
loading model: 0it [00:02, ?it/s]
cuda train basic_gnn_edgecnn
[WARNING]:Bypassing autograd cache due to: Cannot cache a graph with functional tensor
[WARNING]:Bypassing autograd cache due to: Cannot cache a graph with functional tensor
[WARNING]:Bypassing autograd cache due to: Cannot cache a graph with functional tensor
```
We're not similarly raising a warning with fx graph cache bypasses. I think this is a bit spammy but open to other opinions. It also is not user actionable. cc @ezyang @masnesral @jamesjwu
### Versions
master | true |
2,813,917,063 | [cond] remove warning for unsupported tuple returns | pianpwk | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 4 | CONTRIBUTOR | I guess this is supported now | true |
2,813,916,554 | Add magma cuda build 12.8 | tinglvv | closed | [
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 7 | COLLABORATOR | https://github.com/pytorch/pytorch/issues/145570
cc @atalman @malfet @ptrblck @nWEIdia | true |
2,813,914,356 | Set -DPy_LIMITED_API flag for py_limited_api=True extensions | janeyx99 | closed | [
"Merged",
"ciflow/trunk",
"release notes: python_frontend"
] | 3 | CONTRIBUTOR | This could be BC breaking, because there was a period of time when we use py_limited_api=True but don't enforce the flag, and now that we will start enforcing the flag, people's custom extensions may fail to build.
This is strictly still better behavior, as it is sketchy to claim CPython agnosticism without the flag, but calling this out as potential people yelling at us. Ways to mitigate this risk + reasons this may not be too big a deal:
- People haven't known about py_limited_api for extensions much due to lack of docs from python so usage is low right now
- My upcoming tutorial will have new users of py_limited_api pass this flag, so it'd be a no-op for them.
Test plan:
* Locally I'm confident, as I tried rebuilding ao with this change and it reliably failed (because importing torch/extension.h is a no-no)
* Unit test wise, the normal python_agnostic one I added should work
______
## BC-breaking note - C++ Extensions `py_limited_api=True` is now built with `-DPy_LIMITED_API`
We formally began respecting the `py_limited_api=True` kwarg in 2.6 and stopped linking libtorch_python.so when the flag was specified, as libtorch_python.so does not guarantee using APIs from the stable Python limited API. In 2.7, we go further by specifying the `-DPy_LIMITED_API` flag, which will enforce that the extension is buildable with the limited API. As a result of this enforcement, **custom extensions that set `py_limited_api=True` but do not abide by the limited API may fail to build**. For an example, see #152243.
This is strictly better behavior as it is sketchy to claim CPython agnosticism without enforcing with the flag. If you run into this issue, please ensure that the extension you are building does not use any APIs which are outside of the Python limited API, e.g., `pybind`.
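For orientation, `Py_LIMITED_API` is conventionally defined to a `PY_VERSION_HEX`-style value naming the minimum supported CPython version (e.g. `0x03090000` for a 3.9 floor). The helper below is a small illustrative sketch, not PyTorch code, showing how such a compile flag is composed:

```python
# Sketch: build the -DPy_LIMITED_API=<hex> compile flag from a minimum
# (major, minor) CPython version, encoded like PY_VERSION_HEX
# (major << 24 | minor << 16).

def limited_api_macro(major, minor):
    return f"-DPy_LIMITED_API=0x{(major << 24) | (minor << 16):08x}"

print(limited_api_macro(3, 9))   # -DPy_LIMITED_API=0x03090000
```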
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #145784
* __->__ #145764
| true |
2,813,912,780 | Run inductor perf benchmark on ROCm | huydhn | closed | [
"module: rocm",
"Merged",
"release notes: releng",
"test-config/default",
"ciflow/rocm"
] | 3 | CONTRIBUTOR | This requires https://github.com/pytorch/pytorch/pull/144594. The test run on PT2 dashboard is at https://hud.pytorch.org/benchmark/compilers?dashboard=torchinductor&startTime=Mon%2C%2020%20Jan%202025%2019%3A46%3A14%20GMT&stopTime=Mon%2C%2027%20Jan%202025%2019%3A46%3A14%20GMT&granularity=hour&mode=inference&dtype=bfloat16&deviceName=rocm&lBranch=144594&lCommit=9f5cb037965aa2990b2e4593610bca92526ebb3b&rBranch=144594&rCommit=9f5cb037965aa2990b2e4593610bca92526ebb3b
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd | true |
2,813,903,406 | [export] fix non-strict pre_dispatch exporting while_loop | ydwu4 | closed | [
"Merged",
"ciflow/trunk",
"release notes: fx",
"fx",
"keep-going"
] | 6 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145762
fix https://github.com/pytorch/pytorch/issues/145737.
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv | true |
2,813,888,696 | dynamo cannot trace global op_set .__contains__ | ydwu4 | open | [
"high priority",
"triaged",
"oncall: pt2",
"module: dynamo",
"dynamo-triage-jan2025"
] | 7 | CONTRIBUTOR | ### 🐛 Describe the bug
This gives a graph break:
```python
import torch
op_set = {
torch._C._set_grad_enabled,
torch.amp._enter_autocast,
torch.amp._exit_autocast,
}
def f(x):
if torch.ops.aten.add in op_set:
return x.sin()
return x.cos()
torch.compile(f, fullgraph=True, backend="eager")(torch.randn(3,4))
```
error message:
```
Traceback (most recent call last):
File "/data/users/yidi/pytorch/test.py", line 13, in <module>
torch.compile(f, fullgraph=True, backend="eager")(torch.randn(3,4))
File "/data/users/yidi/pytorch/torch/_dynamo/eval_frame.py", line 566, in _fn
return fn(*args, **kwargs)
File "/data/users/yidi/pytorch/torch/_dynamo/convert_frame.py", line 1400, in __call__
return self._torchdynamo_orig_callable(
File "/data/users/yidi/pytorch/torch/_dynamo/convert_frame.py", line 565, in __call__
return _compile(
File "/data/users/yidi/pytorch/torch/_dynamo/convert_frame.py", line 997, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/data/users/yidi/pytorch/torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
File "/data/users/yidi/pytorch/torch/_dynamo/convert_frame.py", line 726, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/data/users/yidi/pytorch/torch/_dynamo/convert_frame.py", line 760, in _compile_inner
out_code = transform_code_object(code, transform)
File "/data/users/yidi/pytorch/torch/_dynamo/bytecode_transformation.py", line 1404, in transform_code_object
transformations(instructions, code_options)
File "/data/users/yidi/pytorch/torch/_dynamo/convert_frame.py", line 236, in _fn
return fn(*args, **kwargs)
File "/data/users/yidi/pytorch/torch/_dynamo/convert_frame.py", line 680, in transform
tracer.run()
File "/data/users/yidi/pytorch/torch/_dynamo/symbolic_convert.py", line 2914, in run
super().run()
File "/data/users/yidi/pytorch/torch/_dynamo/symbolic_convert.py", line 1084, in run
while self.step():
File "/data/users/yidi/pytorch/torch/_dynamo/symbolic_convert.py", line 994, in step
self.dispatch_table[inst.opcode](self, inst)
File "/data/users/yidi/pytorch/torch/_dynamo/symbolic_convert.py", line 2197, in CONTAINS_OP
self.push(right.call_method(self, "__contains__", [left], {}))
File "/data/users/yidi/pytorch/torch/_dynamo/variables/user_defined.py", line 818, in call_method
return super().call_method(tx, name, args, kwargs)
File "/data/users/yidi/pytorch/torch/_dynamo/variables/base.py", line 413, in call_method
unimplemented(f"call_method {self} {name} {args} {kwargs}")
File "/data/users/yidi/pytorch/torch/_dynamo/exc.py", line 361, in unimplemented
raise Unsupported(msg, case_name=case_name)
torch._dynamo.exc.Unsupported: call_method UserDefinedObjectVariable(set) __contains__ [TorchInGraphFunctionVariable(aten.add)] {}
from user code:
File "/data/users/yidi/pytorch/test.py", line 10, in f
if torch.ops.aten.add in op_set:
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
```
However, when `op_set` is a local variable, it works fine.
### Versions
on master
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @amjames | true |
2,813,877,447 | Performance degradation in scaled_dot_product_attention with attn_mask and sequence length multiples of 16 | alexanderb14 | closed | [
"triaged",
"oncall: pt2",
"module: inductor"
] | 1 | NONE | ### 🐛 Describe the bug
There's a performance degradation when using `scaled_dot_product_attention` with the `attn_mask` argument, when the sequence length is a multiple of 16. This issue can be reproduced using the following code snippet.
**Reproducer Code**
```
import torch
from torch._inductor.runtime.benchmarking import benchmarker
from torch.nn import functional as F
def run(seqlen):
with torch.device("cuda"):
def f(q, k, v, mask):
return F.scaled_dot_product_attention(
q, k, v, attn_mask=mask, dropout_p=0.0
)
f_compiled = torch.compile(f)
# Create inputs
bsz = 32
q = torch.randn(bsz, 16, seqlen, 64, dtype=torch.bfloat16)
k = torch.randn(bsz, 16, seqlen, 64, dtype=torch.bfloat16)
v = torch.randn(bsz, 16, seqlen, 64, dtype=torch.bfloat16)
mask = torch.ones([bsz, 1, seqlen, seqlen], dtype=torch.bool)
inputs = [q, k, v, mask]
# Benchmark
time = benchmarker.benchmark_gpu(lambda: f_compiled(*inputs), warmup=5, rep=50)
return time
for seqlen_start in [1008, 1024, 2048, 4096]:
for offset in range(-1, 2):
seqlen = seqlen_start + offset
torch._dynamo.reset()
time = run(seqlen)
print(seqlen, time)
print()
```
**Output on H100 GPU**
```
1007 1.569983959197998
1008 2.0037760734558105
1009 1.5577600002288818
1023 1.553056001663208
1024 2.000607967376709
1025 1.7111680507659912
2047 6.071455955505371
2048 8.064703941345215
2049 6.349376201629639
4095 23.773408889770508
4096 35.05900955200195
4097 24.331039428710938
```
**Analysis**
The results show that incrementing the sequence length from multiples of 16 (1024, 2048, 4096) to non-multiples of 16 (1025, 2049, 4097) results in up to 1.43x speedup. This is counterintuitive and suggests that the selected/generated kernel for sequence lengths of multiples of 16 could be improved.
**Expected**
I'd expect multiples of 16 to perform equal, or even better than neighboring sizes, because of e.g. better divisibility for tiling.
### Error logs
_No response_
### Versions
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: CentOS Stream 9 (x86_64)
GCC version: (GCC) 11.5.0 20240719 (Red Hat 11.5.0-2)
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.34
Python version: 3.10.15 (main, Oct 3 2024, 07:27:34) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.4.3-0_fbk14_hardened_2601_gcd42476b84e9-x86_64-with-glibc2.34
Is CUDA available: True
CUDA runtime version: 12.1.105
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100
GPU 1: NVIDIA H100
GPU 2: NVIDIA H100
GPU 3: NVIDIA H100
Nvidia driver version: 550.90.07
cuDNN version: Probably one of the following:
/usr/lib64/libcudnn.so.8.8.0
/usr/lib64/libcudnn_adv_infer.so.8.8.0
/usr/lib64/libcudnn_adv_train.so.8.8.0
/usr/lib64/libcudnn_cnn_infer.so.8.8.0
/usr/lib64/libcudnn_cnn_train.so.8.8.0
/usr/lib64/libcudnn_ops_infer.so.8.8.0
/usr/lib64/libcudnn_ops_train.so.8.8.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 184
On-line CPU(s) list: 0-183
Vendor ID: AuthenticAMD
Model name: AMD EPYC 9654 96-Core Processor
CPU family: 25
Model: 17
Thread(s) per core: 1
Core(s) per socket: 184
Socket(s): 1
Stepping: 1
BogoMIPS: 4792.80
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw perfctr_core invpcid_single ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx512_bf16 clzero xsaveerptr wbnoinvd arat npt lbrv nrip_save tsc_scale vmcb_clean pausefilter pfthreshold v_vmsave_vmload vgif avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid fsrm arch_capabilities
Virtualization: AMD-V
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 11.5 MiB (184 instances)
L1i cache: 11.5 MiB (184 instances)
L2 cache: 92 MiB (184 instances)
L3 cache: 2.9 GiB (184 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-183
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable, IBPB: disabled, STIBP: disabled, PBRSB-eIBRS: Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.2.2
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.5.1
[pip3] triton==3.1.0
[conda] Could not collect
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov @eellison @yanboliang @ezyang @BoyuanFeng | true |
2,813,873,581 | [chore][ez] change alloc buffer size from 4000 to 4096 | c-p-i-o | closed | [
"oncall: distributed",
"Merged",
"ciflow/trunk",
"release notes: distributed (c10d)"
] | 4 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145759
* #145757
* #145756
Summary:
Allocations typically happen as a power of 2 anyway.
Change the default alloc size to 4096 to eke out a bit more perf.
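The rationale in one line: a 4000-byte request from a power-of-two allocator still consumes a 4096-byte bucket, so asking for 4096 up front wastes nothing and gains usable space. A tiny sketch of that rounding:

```python
# Round a request up to the power-of-two bucket it would land in anyway.

def next_pow2(n):
    return 1 << (n - 1).bit_length()

print(next_pow2(4000))  # 4096 -- the bucket a 4000-byte alloc occupies
print(next_pow2(4096))  # 4096 -- exact fit, no slack
```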
Test:
unit tests
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k | true |
2,813,873,445 | [bug] fix memory leaks for failed queries | c-p-i-o | closed | [
"oncall: distributed",
"release notes: distributed (c10d)"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* (to be filled)
Summary:
Correctly handle freeing allocated buffers if query does not parse.
This would only happen in atypical scenarios.
1. Call stream.commit() for failure cases. This will free up the buffer
that's allocated in the `read1` command at the beginning.
2. For the default case, free the buffer after calling close().
Test Plan:
Testing using VM's and noted that there are no leaks.
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k | true |
2,813,873,322 | [bug] handle case when remote peer closes connection | c-p-i-o | closed | [
"oncall: distributed",
"Merged",
"ciflow/trunk",
"release notes: distributed (c10d)"
] | 6 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #145759
* __->__ #145757
* #145756
Summary:
In the case where the remote peer closes the connection, nread returns 0. In
this case, we still want to free up the allocated buffer.
Also, reorder the if so that the likely success case (nread > 0) is at
the top of the function with an early return.
Test Plan:
unit tests
Differential Revision: [D68733192](https://our.internmc.facebook.com/intern/diff/D68733192)
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k | true |
2,813,873,136 | [chore] fix new linter | c-p-i-o | closed | [
"oncall: distributed",
"Merged",
"ciflow/trunk",
"release notes: distributed (c10d)"
] | 4 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #145759
* #145757
* __->__ #145756
Summary:
Fix new linter that's complaining when I made changes to this file:
class 'LibUVStoreDaemon' defines a non-default destructor but does not
define a copy constructor, a copy assignment operator, a move
constructor or a move assignment operator
Test Plan:
make lint passes
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
Differential Revision: [D68733191](https://our.internmc.facebook.com/intern/diff/D68733191) | true |
2,813,838,969 | Spam: UserWarning: Skipping serialization of skipfiles_inline_module_allowlist value | ezyang | open | [
"module: logging",
"triaged",
"oncall: pt2"
] | 3 | CONTRIBUTOR | ### 🐛 Describe the bug
When I run
```
import torch
@torch.compile(dynamic=True)
def rmsnorm_without_weight(hidden_states, eps=1e-6, dtype=torch.bfloat16):
variance = hidden_states.to(torch.float32).pow(2).mean(-1, keepdim=True)
hidden_states = hidden_states * torch.rsqrt(variance + eps)
hidden_states = hidden_states.to(dtype)
return hidden_states
torch.empty(1, device='cuda', requires_grad=True).backward()
a = torch.rand((31250, 1, 6144), device='cuda')
for flag in [False, True]:
a_ = a.detach().requires_grad_(flag)
# 1: context trigger
with torch.no_grad():
aa_ = rmsnorm_without_weight(a_)
# 2: requires_grad trigger
aa_ = rmsnorm_without_weight(a_)
if flag:
aa_.sum().backward()
# 3: specify size[0] is 1 and storage_offset >=2
b_ = a[-1:, ...].detach().requires_grad_(flag)
b_ = rmsnorm_without_weight(b_)
if flag:
b_.sum().backward()
# 4: specify size[0] is 1 and w/o storage_offset
b_ = a[:1, ...].detach().requires_grad_(flag)
b_ = rmsnorm_without_weight(b_)
if flag:
b_.sum().backward()
```
with
```
TORCH_TRACE=/tmp/wag python wag.py
```
I get spam:
```
/data/users/ezyang/a/pytorch/torch/utils/_config_module.py:459: UserWarning: Skipping serialization of skipfiles_inline_module_allowlist value
{}
warnings.warn(
/data/users/ezyang/a/pytorch/torch/utils/_config_module.py:459: UserWarning: Skipping serialization of skipfiles_inline_module_allowlist value
{}
warnings.warn(
/data/users/ezyang/a/pytorch/torch/utils/_config_module.py:459: UserWarning: Skipping serialization of skipfiles_inline_module_allowlist value
{}
warnings.warn(
/data/users/ezyang/a/pytorch/torch/utils/_config_module.py:459: UserWarning: Skipping serialization of skipfiles_inline_module_allowlist value
{}
warnings.warn(
/data/users/ezyang/a/pytorch/torch/utils/_config_module.py:459: UserWarning: Skipping serialization of skipfiles_inline_module_allowlist value
{}
warnings.warn(
```
This is pointless and I shouldn't be warning on this.
cc @chauhang @penguinwu @eellison @c00w
### Versions
main | true |
2,813,829,222 | [OSS] Add no dist as an argument to DCP top level apis | ankitageorge | closed | [
"oncall: distributed",
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 5 | CONTRIBUTOR | Summary: No-dist, for a non-distributed checkpoint, was a top level param in the past, but was removed. This was requested back in https://github.com/pytorch/pytorch/issues/125777 and will be needed for our torchtune changes to use DCP
Test Plan: existing tests pass
Differential Revision: D68714246
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @LucasLLC @MeetVadakkanchery @mhorowitz @pradeepfn @ekr0 | true |
2,813,821,316 | [dynamo][builtin-skipfile-cleanup] Remove signal | anijain2305 | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor",
"keep-going"
] | 1 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #145559
* #145804
* #145828
* #145826
* __->__ #145753
* #145744
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,813,819,640 | initialize device when pinning memory on this device, short circuit i… | ngimel | closed | [
"Merged",
"ciflow/trunk",
"release notes: cuda"
] | 8 | COLLABORATOR | …s_pinned if device is not initialized
Do not land
RFC
potential fix for #144687
Now `.is_pinned(device="cuda")` does not initialize device and thus doesn't poison the fork (but it complains about `device` arg being deprecated). To not need `device=` arg we'd need to fix get_accelerator to not initialize device.
cc @malfet, @albanD | true |
2,813,816,802 | [OSS] Update FileSystem methods to properly handle a string argument | ankitageorge | closed | [
"oncall: distributed",
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 6 | CONTRIBUTOR | Summary: When testing, I tried to pass a string argument to the FileSystem class' methods, which is a valid input, but the cast() that converted the string to a path wasn't working as likely expected, causing all the methods to fail with a string arg. Instead of a cast, a proper constructor should be used.
Test Plan: N6475361 methods don't throw an error with a string arg like they were previously
Differential Revision: D68713937
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @LucasLLC @MeetVadakkanchery @mhorowitz @pradeepfn @ekr0 | true |
2,813,812,422 | [dynamo] save/restore system random state more carefully | williamwen42 | closed | [
"Merged",
"Reverted",
"ciflow/trunk",
"topic: bug fixes",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor",
"ci-no-td"
] | 16 | MEMBER | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145750
Reattempt of https://github.com/pytorch/pytorch/pull/145435 since the state of the linked internal diff appears to be messed up.
Note: I have verified that the previously failing internal tests now pass internally.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
Differential Revision: [D68918404](https://our.internmc.facebook.com/intern/diff/D68918404) | true |
2,813,806,978 | [checkpointing] Add support for HuggingFace filesystem to read and write DCP checkpoints | ankitageorge | closed | [
"oncall: distributed",
"fb-exported",
"topic: not user facing"
] | 5 | CONTRIBUTOR | Summary: This diff adds a new filesystem _HfFileSystem, built on top of the existing HfFileSystem. We can't use the existing class that HF has, because we need a few more methods so that it works with our existing save and load APIs. We also add fs as an argument to FileSystemWriter and FileSystemReader so that they can work with other filesystems like this HF one.
Test Plan: N6475167 --> load and save works and writes a DCP checkpoint to huggingface
Differential Revision: D68440250
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @LucasLLC @MeetVadakkanchery @mhorowitz @pradeepfn @ekr0 | true |
2,813,796,905 | Set USE_CUFILE=1 by default and add pypi package to binary build matrix | mikaylagawarecki | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/binaries_wheel"
] | 8 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145748
| true |
2,813,795,919 | [Torch] Extract arange_out resizing logic into a helper function that can be used by other devices | PatriceVignola | closed | [
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 5 | CONTRIBUTOR | Summary: We want to use the resizing implementation for arange_out in other devices (in this case MTIA), to make sure that the computations match and to avoid off-by-one errors.
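The size computation that backends need to agree on is essentially ceil((end - start) / step); a minimal Python sketch (an illustration of the formula, not the ATen helper itself) shows where off-by-one errors can creep in if a backend rounds differently:

```python
import math

def arange_numel(start: float, end: float, step: float) -> int:
    # Number of elements torch.arange(start, end, step) produces:
    # ceil((end - start) / step), clamped at zero for empty ranges.
    return max(0, math.ceil((end - start) / step))

print(arange_numel(0, 10, 3))  # 4 -> [0, 3, 6, 9]; truncating instead of ceiling would give 3
```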
Test Plan: Existing CI tests pass.
Differential Revision: D68694489
| true |
2,813,782,830 | [ATen][CUDA] Implement 128 bit vectorization v2 | Aidyn-A | closed | [
"module: cuda",
"open source",
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"ci-no-td",
"module: core aten"
] | 20 | COLLABORATOR | This is a rebased version of my previous PR #141959.
Description from the original PR:
This PR implements 128-bit vectorization. It improves the performance of contiguous elementwise ops by 4-10% on Hopper H100.
<details>
<summary>The benchmark code used </summary>
```Python
import time
import torch
from torch.profiler import profile, ProfilerActivity
def benchmark(function, dtype=torch.float32, check_numerics=True, print_profile=False):
device = torch.device("cuda")
shapes = []
for p in range(24, 30):
shape = 1<<p
shapes.append(shape)
for shape in shapes:
for _ in range(6):
x = torch.randn(shape, device=device, dtype=dtype)
y = function(x)
if print_profile:
x = torch.randn(shape, device=device, dtype=dtype)
with profile(activities=[ProfilerActivity.CUDA], record_shapes=True) as prof:
y = function(x)
print(prof.key_averages().table(sort_by="cuda_time_total", row_limit=10))
x = torch.randn(shape, device=device, dtype=dtype)
torch.cuda.synchronize()
t1 = time.perf_counter()
for _ in range(6):
y = function(x)
torch.cuda.synchronize()
t2 = time.perf_counter()
perf_time = (t2 - t1) / 6
print(f"{function.__name__}, {dtype}, {shape}, {perf_time}")
if check_numerics:
x_cpu = x.cpu()
y_cpu = function(x_cpu).cuda()
try:
torch.testing.assert_allclose(y_cpu, y)
except AssertionError as error:
print("An exception occurred:", error)
def main():
ops = [
torch.relu,
torch.sigmoid,
torch.tanh,
torch.nn.functional.gelu,
torch.sin,
torch.exp,
]
dtypes = [
torch.float16,
torch.bfloat16,
torch.float32,
]
for op in ops:
for dtype in dtypes:
benchmark(op, dtype=dtype)
torch.cuda.empty_cache()
if __name__ == "__main__":
main()
```
</details>
<details>
<summary> Results </summary>
| op | dtype | size | time after | time before | % improvement |
| ---- | ---- | ---- | ---- | ---- | ---- |
| relu | torch.float16 | 33554432 | 4.84E-05 | 5.06E-05 | 4.66296539127052 |
| relu | torch.float16 | 67108864 | 9.22E-05 | 9.64E-05 | 4.56491432752297 |
| relu | torch.float16 | 134217728 | 0.000180343495837102 | 0.000187981834945579 | 4.23543919508829 |
| relu | torch.float16 | 268435456 | 0.000355071155354381 | 0.000370856161074092 | 4.44558942107169 |
| relu | torch.float16 | 536870912 | 0.000704489842367669 | 0.000736006341564159 | 4.47366268483987 |
| relu | torch.bfloat16 | 16777216 | 3.03E-05 | 3.04E-05 | 0.166504085842689 |
| relu | torch.bfloat16 | 33554432 | 4.89E-05 | 5.06E-05 | 3.45848238875716 |
| relu | torch.bfloat16 | 67108864 | 9.32E-05 | 9.65E-05 | 3.56122651631445 |
| relu | torch.bfloat16 | 134217728 | 0.000180805509444326 | 0.000187998676362137 | 3.97840029317567 |
| relu | torch.bfloat16 | 268435456 | 0.000356242332297067 | 0.000371279485989362 | 4.22104627356745 |
| relu | torch.bfloat16 | 536870912 | 0.000708114336399982 | 0.000736773828975856 | 4.04729732229083 |
| relu | torch.float32 | 16777216 | 5.61E-05 | 5.61E-05 | 0.0442587268354941 |
| relu | torch.float32 | 33554432 | 9.33E-05 | 9.30E-05 | -0.259070913799022 |
| relu | torch.float32 | 67108864 | 0.000181321326332788 | 0.000181289506144822 | -0.0175490597877115 |
| relu | torch.float32 | 134217728 | 0.000356896334172537 | 0.000356570177245885 | -0.0913870206618981 |
| relu | torch.float32 | 268435456 | 0.000709421835684528 | 0.000707465515006334 | -0.275762681635911 |
| relu | torch.float32 | 536870912 | 0.00141372415237129 | 0.00141036518228551 | -0.237597276678471 |
| sigmoid | torch.float16 | 16777216 | 3.10E-05 | 3.16E-05 | 2.10012593866895 |
| sigmoid | torch.float16 | 33554432 | 4.91E-05 | 5.23E-05 | 6.37710600666122 |
| sigmoid | torch.float16 | 67108864 | 9.30E-05 | 0.000100057009452333 | 7.61866144555331 |
| sigmoid | torch.float16 | 134217728 | 0.000180928347011407 | 0.000194982004662355 | 7.76752669390248 |
| sigmoid | torch.float16 | 268435456 | 0.000355658994521946 | 0.00038468533117945 | 8.16128288742412 |
| sigmoid | torch.float16 | 536870912 | 0.000705982849467546 | 0.000764021339515845 | 8.22094900634937 |
| sigmoid | torch.bfloat16 | 16777216 | 3.08E-05 | 3.17E-05 | 2.90965915673149 |
| sigmoid | torch.bfloat16 | 33554432 | 4.87E-05 | 5.24E-05 | 7.63503884668234 |
| sigmoid | torch.bfloat16 | 67108864 | 9.33E-05 | 0.000100019678939134 | 7.21238137428013 |
| sigmoid | torch.bfloat16 | 134217728 | 0.000180786165098349 | 0.000194868014659733 | 7.78922964250206 |
| sigmoid | torch.bfloat16 | 268435456 | 0.000355564659306159 | 0.000384909333661199 | 8.25297835063321 |
| sigmoid | torch.bfloat16 | 536870912 | 0.000705831005082776 | 0.000764102345177283 | 8.2557070566308 |
| sigmoid | torch.float32 | 16777216 | 4.93E-05 | 5.65E-05 | 14.5314136197766 |
| sigmoid | torch.float32 | 33554432 | 9.32E-05 | 9.31E-05 | -0.120169865610833 |
| sigmoid | torch.float32 | 67108864 | 0.000181328505277634 | 0.000180455681402236 | -0.481349512069855 |
| sigmoid | torch.float32 | 134217728 | 0.000357362829769651 | 0.000356093340087682 | -0.35523831137877 |
| sigmoid | torch.float32 | 268435456 | 0.000708921831877281 | 0.000707052337626616 | -0.263709504574663 |
| sigmoid | torch.float32 | 536870912 | 0.00141358317341656 | 0.0014090768333214 | -0.318788464654745 |
| tanh | torch.float16 | 16777216 | 3.03E-05 | 3.03E-05 | -0.0912564658661808 |
| tanh | torch.float16 | 33554432 | 4.90E-05 | 5.07E-05 | 3.46644442974484 |
| tanh | torch.float16 | 67108864 | 9.30E-05 | 9.68E-05 | 3.99871369815531 |
| tanh | torch.float16 | 134217728 | 0.00018052199933057 | 0.000188717152923346 | 4.53969799978138 |
| tanh | torch.float16 | 268435456 | 0.000355684508879979 | 0.000373026006855071 | 4.8755280430115 |
| tanh | torch.float16 | 536870912 | 0.000706660988119741 | 0.000740105014604827 | 4.73268328765002 |
| tanh | torch.bfloat16 | 16777216 | 2.99E-05 | 3.03E-05 | 1.21049563135981 |
| tanh | torch.bfloat16 | 33554432 | 4.89E-05 | 5.06E-05 | 3.48836101041744 |
| tanh | torch.bfloat16 | 67108864 | 9.28E-05 | 9.69E-05 | 4.39944918036626 |
| tanh | torch.bfloat16 | 134217728 | 0.000180710999605556 | 0.000189167990659674 | 4.67984299382829 |
| tanh | torch.bfloat16 | 268435456 | 0.000356062994493792 | 0.000372666652159144 | 4.66312363882606 |
| tanh | torch.bfloat16 | 536870912 | 0.000707100164921333 | 0.000740134331863374 | 4.67178040408393 |
| tanh | torch.float32 | 16777216 | 5.61E-05 | 5.64E-05 | 0.439595755746353 |
| tanh | torch.float32 | 33554432 | 9.31E-05 | 9.31E-05 | 0.00287633090228212 |
| tanh | torch.float32 | 67108864 | 0.000181465332085888 | 0.000180895323865116 | -0.31411411437098 |
| tanh | torch.float32 | 134217728 | 0.000356963835656643 | 0.000356073161431899 | -0.249513854283251 |
| tanh | torch.float32 | 268435456 | 0.000709201170442005 | 0.00070707315656667 | -0.300057862849997 |
| tanh | torch.float32 | 536870912 | 0.00141367283261692 | 0.00141030051357423 | -0.238550176877922 |
| gelu | torch.float16 | 16777216 | 2.73E-05 | 3.17E-05 | 15.921079070745 |
| gelu | torch.float16 | 33554432 | 5.06E-05 | 5.55E-05 | 9.76345374333098 |
| gelu | torch.float16 | 67108864 | 9.65E-05 | 0.000106600326641152 | 10.4308039074712 |
| gelu | torch.float16 | 134217728 | 0.000187776672343413 | 0.000208565829476962 | 11.0712139447915 |
| gelu | torch.float16 | 268435456 | 0.000370216167842348 | 0.000412251994324227 | 11.3544005187205 |
| gelu | torch.float16 | 536870912 | 0.000737301345604161 | 0.000819394170927505 | 11.1342296895002 |
| gelu | torch.bfloat16 | 16777216 | 3.02E-05 | 3.08E-05 | 1.78405479367653 |
| gelu | torch.bfloat16 | 33554432 | 5.13E-05 | 5.69E-05 | 10.9929393318302 |
| gelu | torch.bfloat16 | 67108864 | 9.76E-05 | 0.00010968199543034 | 12.3420807512356 |
| gelu | torch.bfloat16 | 134217728 | 0.000189661824454864 | 0.000214487663470209 | 13.0895287371091 |
| gelu | torch.bfloat16 | 268435456 | 0.000374197009174774 | 0.000423670164309442 | 13.2211519391275 |
| gelu | torch.bfloat16 | 536870912 | 0.000743675006863972 | 0.000842577001700799 | 13.299088166737 |
| gelu | torch.float32 | 16777216 | 5.06E-05 | 5.04E-05 | -0.413385894716413 |
| gelu | torch.float32 | 33554432 | 9.31E-05 | 9.32E-05 | 0.134157041722546 |
| gelu | torch.float32 | 67108864 | 0.000181480175039421 | 0.000180836669945469 | -0.354586992112075 |
| gelu | torch.float32 | 134217728 | 0.000356874331676712 | 0.000356305002545317 | -0.159532104402047 |
| gelu | torch.float32 | 268435456 | 0.000708909006789327 | 0.000706991491218408 | -0.270488250615287 |
| gelu | torch.float32 | 536870912 | 0.00141321367118508 | 0.00140937082081412 | -0.271922813181618 |
| sin | torch.float16 | 16777216 | 3.04E-05 | 3.11E-05 | 2.21834939018859 |
| sin | torch.float16 | 33554432 | 4.85E-05 | 5.23E-05 | 7.72165512511596 |
| sin | torch.float16 | 67108864 | 9.31E-05 | 9.98E-05 | 7.24947099480072 |
| sin | torch.float16 | 134217728 | 0.000180371008658161 | 0.000194791161144773 | 7.99471744039613 |
| sin | torch.float16 | 268435456 | 0.000355454161763191 | 0.000384903668115536 | 8.28503630574026 |
| sin | torch.float16 | 536870912 | 0.000705183832906187 | 0.000764360166310022 | 8.39161799270973 |
| sin | torch.bfloat16 | 16777216 | 3.11E-05 | 3.10E-05 | -0.257677954940036 |
| sin | torch.bfloat16 | 33554432 | 4.89E-05 | 5.24E-05 | 7.34808420323539 |
| sin | torch.bfloat16 | 67108864 | 9.26E-05 | 0.000100248667877167 | 8.22347488801205 |
| sin | torch.bfloat16 | 134217728 | 0.000180674154156198 | 0.00019567032965521 | 8.30012215584937 |
| sin | torch.bfloat16 | 268435456 | 0.000355360486234228 | 0.000386023331278314 | 8.62865913118873 |
| sin | torch.bfloat16 | 536870912 | 0.00070483615854755 | 0.000766805159704139 | 8.79197248964745 |
| sin | torch.float32 | 16777216 | 5.67E-05 | 5.64E-05 | -0.441348534920039 |
| sin | torch.float32 | 33554432 | 9.34E-05 | 9.30E-05 | -0.496458540364117 |
| sin | torch.float32 | 67108864 | 0.000181706990891447 | 0.000180556671693921 | -0.633062708199702 |
| sin | torch.float32 | 134217728 | 0.000356894995396336 | 0.000356046327700218 | -0.237791985616354 |
| sin | torch.float32 | 268435456 | 0.000708777321657787 | 0.000707602652255446 | -0.165731798471427 |
| sin | torch.float32 | 536870912 | 0.00141263716310884 | 0.00140912582476934 | -0.248566187496451 |
| exp | torch.float16 | 16777216 | 3.00E-05 | 3.04E-05 | 1.40099098901014 |
| exp | torch.float16 | 33554432 | 4.86E-05 | 5.03E-05 | 3.44611943643906 |
| exp | torch.float16 | 67108864 | 9.37E-05 | 9.55E-05 | 1.96412400380129 |
| exp | torch.float16 | 134217728 | 0.000180913504057874 | 0.000187193179347863 | 3.47109262113439 |
| exp | torch.float16 | 268435456 | 0.00035607748820136 | 0.000369079003576189 | 3.65131630210701 |
| exp | torch.float16 | 536870912 | 0.000707551507124056 | 0.000732363162872692 | 3.50669251620789 |
| exp | torch.bfloat16 | 16777216 | 2.98E-05 | 3.04E-05 | 1.74345594341654 |
| exp | torch.bfloat16 | 33554432 | 4.88E-05 | 5.04E-05 | 3.40217856534821 |
| exp | torch.bfloat16 | 67108864 | 9.32E-05 | 9.62E-05 | 3.29219958210226 |
| exp | torch.bfloat16 | 134217728 | 0.000180999826019009 | 0.000187239318620414 | 3.44723679499521 |
| exp | torch.bfloat16 | 268435456 | 0.000355944503098726 | 0.000369370992605885 | 3.77207384585864 |
| exp | torch.bfloat16 | 536870912 | 0.000707135167128096 | 0.000733066000975668 | 3.66702648277075 |
| exp | torch.float32 | 16777216 | 4.89E-05 | 5.63E-05 | 15.1245314346532 |
| exp | torch.float32 | 33554432 | 9.34E-05 | 9.31E-05 | -0.259945454477446 |
| exp | torch.float32 | 67108864 | 0.000181152504713585 | 0.000180474346658836 | -0.374357536939058 |
| exp | torch.float32 | 134217728 | 0.000356771342922002 | 0.000355627329554409 | -0.3206573034212 |
| exp | torch.float32 | 268435456 | 0.000708404501589636 | 0.00070713268360123 | -0.179532736671163 |
| exp | torch.float32 | 536870912 | 0.00141283582585553 | 0.00140944866385932 | -0.23974208002295 |
</details>
cc @msaroufim @ptrblck @eqy @manuelcandales @SherlockNoMad @angelayi | true |
2,813,758,428 | Add nitpick warning that aoti_torch/c/shim.h is ABI stable | ezyang | closed | [
"Merged",
"topic: not user facing"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145745
Signed-off-by: Edward Z. Yang <ezyang@meta.com> | true |
2,813,740,994 | [dynamo][builtin-skiplist-cleanup] Remove weakref | anijain2305 | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor",
"keep-going"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #145559
* #145804
* #145828
* #145826
* #145753
* __->__ #145744
WeakKeyDictionary already works very nicely with the UserDefinedObject Variable Tracker.
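The semantics dynamo has to preserve here are just those of `weakref.WeakKeyDictionary`: lookups are keyed on user-defined object identity, and entries vanish once the key is collected, so the tracker must not hold strong references to keys. A minimal sketch of that behavior:

```python
import gc
import weakref

class Config:
    pass

cache = weakref.WeakKeyDictionary()
cfg = Config()
cache[cfg] = "compiled-artifact"
assert cache[cfg] == "compiled-artifact"

# Entries disappear once the key is garbage collected.
del cfg
gc.collect()
print(len(cache))  # 0
```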
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,813,737,711 | TorchScript run_method fails from 2.5.0 onward on Ubuntu | blace-ai-transfer | open | [
"oncall: jit",
"module: deadlock"
] | 4 | NONE | ### 🐛 Describe the bug
Using https://download.pytorch.org/libtorch/cu118/libtorch-shared-with-deps-2.5.1%2Bcu118.zip or https://download.pytorch.org/libtorch/cu118/libtorch-shared-with-deps-2.5.0%2Bcu118.zip the following code does not return. Console prints the "run..." but never seems to finish execution of the `run_method` line.
```cpp
TEST(MLLib, JITForward) {
// Here we use compile on a small TorchScript snippet.
auto identity_module = torch::jit::compile(R"JIT(
def forward(x):
return x
)JIT");
// Run forward
std::cout << "run..." << std::endl;
auto output = identity_module->run_method("forward", torch::ones({2, 3}));
auto t = output.toTensor();
std::cout << "Output size: " << t.sizes() << std::endl;
std::cout << "Output:\n" << t << std::endl;
}
```
### Versions
libtorch 2.5.0 / 2.5.1
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel | true |
2,813,704,938 | [aarch64] Rebuild everything with ArmPL | psaab | closed | [
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 12 | CONTRIBUTOR | Summary: Rebuild everything that used OpenBLAS with ArmPL
Test Plan: CI, prod test
Reviewed By: Nicoshev
Differential Revision: D68219559
| true |
2,813,650,365 | [BE]: Update Cutlass submodule to 3.8 candidate for SM100+ support | Skylion007 | closed | [
"open source",
"better-engineering",
"ciflow/trunk",
"release notes: cuda",
"topic: not user facing"
] | 17 | COLLABORATOR | Update CUTLASS submodule to 3.8 candidate for preliminary Blackwell support, without this PyTorch will not compile various CUTLASS kernels properly for Blackwell. | true |
2,813,638,034 | [MPS] Fix `c0::metal::log_gamma` correctness on M4 | malfet | closed | [
"Merged",
"topic: not user facing",
"ciflow/mps"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145740
To work around a bug where the `abs` method call seems to be ignored before calling log, which can be reproduced by running the following code (submitted as FB16415011)
```swift
import Metal
func run_shader<T: BinaryFloatingPoint> (library: MTLLibrary, kernel_name: String, type: T.Type, nelem: Int = 16) {
guard let mfunc = library.makeFunction(name: kernel_name) else { fatalError("Can't find function") }
let device = library.device
guard let queue = device.makeCommandQueue() else { fatalError("Can't make queue") }
guard let cmdBuffer = queue.makeCommandBuffer() else { fatalError("Can't make command buffer") }
guard let computeEncoder = cmdBuffer.makeComputeCommandEncoder() else { fatalError("Can't make compute encoder") }
guard let ibuf = device.makeBuffer(length:nelem * MemoryLayout<T>.size, options: [.storageModeShared]) else { fatalError("Can't alloc") }
let ibuf_data = ibuf.contents().assumingMemoryBound(to: T.self)
for i in 0..<nelem {
ibuf_data[i] = T(sin(Float(2 + i)))
}
guard let obuf = device.makeBuffer(length:nelem * MemoryLayout<T>.size, options: [.storageModeShared]) else { fatalError("Can't alloc") }
let obuf_data = obuf.contents().assumingMemoryBound(to: T.self)
computeEncoder.setComputePipelineState(try! device.makeComputePipelineState(function: mfunc))
computeEncoder.setBuffer(obuf, offset:0, index: 0)
computeEncoder.setBuffer(ibuf, offset:0, index: 1)
computeEncoder.dispatchThreads(MTLSizeMake(nelem, 1, 1), threadsPerThreadgroup:MTLSizeMake(nelem, 1, 1))
computeEncoder.endEncoding()
cmdBuffer.commit()
cmdBuffer.waitUntilCompleted()
print("Results for \(String(describing: T.self)):", terminator: " ")
for i in 0..<nelem {
print(obuf_data[i], terminator: " ")
}
print()
}
let shader_source = """
#include <metal_stdlib>
template<typename T>
float foo(T x) {
const auto abs_x = ::metal::abs(static_cast<float>(x));
auto rc = ::metal::log(abs_x);
return rc - ::metal::log(::metal::abs(abs_x * ::metal::sinpi(abs_x)));
}
kernel void half_kernel(
device half* out_ptr0,
constant half* in_ptr0,
uint xindex [[thread_position_in_grid]]
) {
auto inp = in_ptr0[xindex];
auto out = foo(inp);
out_ptr0[xindex] = static_cast<half>(out);
}
kernel void float_kernel(
device float* out_ptr0,
constant float* in_ptr0,
uint xindex [[thread_position_in_grid]]
) {
auto inp = in_ptr0[xindex];
auto out = foo(inp);
out_ptr0[xindex] = static_cast<float>(out);
}
"""
let options = MTLCompileOptions()
options.mathMode = .safe
options.mathFloatingPointFunctions = .precise
guard let device = MTLCopyAllDevices().first else { fatalError("Not Metal device found") }
let library = try! device.makeLibrary(source:shader_source, options:options)
run_shader(library:library, kernel_name:"half_kernel", type: Float16.self)
run_shader(library:library, kernel_name:"float_kernel", type: Float.self)
``` | true |
2,813,499,812 | No broadcasting by default. | YagaoDirac | open | [
"triaged",
"needs research",
"module: python frontend"
] | 5 | NONE | ### 🚀 The feature, motivation and pitch
I actually ran into a shape bug, but PyTorch assumed it was intentional broadcasting, and I wasted some time (hours, but still acceptable).
Then I decided to print the shapes and denote them in the variable names, like g_reshaped__batch_1_output.
I would prefer no broadcasting by default; when I need it, I would write a line telling PyTorch to do it.
The variable names will still be very long in my code, though.

### Alternatives
The default behavior now is the alternative.
Another alternative is like, specify a batch dimention for everywhere, so pytorch knows the xth dim is batch, and it never touches the batch dim.
### Additional context
It's ok if you guys decide not to make this change.
The only problem I can recall is that when I use MSELoss, or any loss that automatically reduces the output to a scalar, I always have to print the shape to make sure it's correct.
My personal convention is an explicit batch dim everywhere, asserted in every function. It makes all my test code carry one extra layer of [], but it makes me much more confident about my code.
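A concrete instance of the silent bug described above: subtracting a (4,) target from a (4, 1) prediction broadcasts to (4, 4) instead of raising. Today the only opt-out is an explicit shape assert, sketched below (an illustration of the workaround, not a proposed API):

```python
import torch

pred = torch.zeros(4, 1)
target = torch.arange(4.0)

diff = pred - target          # silently broadcasts
print(diff.shape)             # torch.Size([4, 4]) -- neither (4,) nor (4, 1)

def strict_sub(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    # Opt out of broadcasting: require identical shapes.
    assert a.shape == b.shape, f"shape mismatch: {a.shape} vs {b.shape}"
    return a - b
```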
cc @albanD | true |
2,813,484,099 | [Docs] Make comm handle wait & is_completed docs more clear for multi-stream | Edenzzzz | open | [
"triaged",
"open source",
"Stale",
"release notes: distributed (c10d)"
] | 5 | NONE | Fixes #145713
Looks like an earlier fix on `wait()` (https://github.com/pytorch/pytorch/pull/143305) has been pushed but is not reflected in the main branch, so feel free to revert my change on that part.
Made corresponding clarifications on `is_completed`.
cc @awgu @wconstab | true |
2,813,470,095 | while_loop fails to export when strict=False | desertfire | closed | [
"oncall: pt2",
"export-triaged",
"oncall: export"
] | 1 | CONTRIBUTOR | Repro:
Apply the following patch, then run `python test/export/test_export.py -k test_while_loop_simple` and the test fails when `strict=False`
```
diff --git a/test/export/test_export.py b/test/export/test_export.py
index 9b070141ac3..22096a68a1b 100755
--- a/test/export/test_export.py
+++ b/test/export/test_export.py
@@ -66,6 +66,8 @@ from torch.testing._internal.common_utils import (
IS_MACOS,
IS_SANDCASTLE,
IS_WINDOWS,
+ instantiate_parametrized_tests,
+ parametrize,
run_tests,
skipIfCrossRef,
skipIfXpu,
@@ -12162,6 +12164,26 @@ class TestExportCustomClass(TorchTestCase):
"torch.ops.aten.upsample_bilinear2d.vec", 1, exactly=True
).run(ep.graph_module.code)
+ @parametrize("strict", [False, True])
+ def test_while_loop_simple(self, strict):
+ class Simple(torch.nn.Module):
+ def forward(self, ci, a, b):
+ def cond_fn(i, x, y):
+ return i > 0
+
+ def body_fn(i, x, y):
+ return i - 1, x + y, y - x
+
+ return torch._higher_order_ops.while_loop(cond_fn, body_fn, [ci, a, b])
+
+ example_inputs = (
+ torch.tensor(1),
+ torch.randn(10, 20),
+ torch.randn(10, 20),
+ )
+ ep = export(Simple(), example_inputs, strict=strict)
+
+instantiate_parametrized_tests(TestExportCustomClass)
if __name__ == "__main__":
run_tests()
```
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 | true |
2,813,435,813 | [MPS][BE] Use convenience methods to set args | malfet | closed | [
"Merged",
"topic: not user facing",
"release notes: mps",
"ciflow/mps"
] | 3 | CONTRIBUTOR | It's better to call `mtl_setArgs` rather than setting arguments one by one, with the risk of making a typo
Also, all interactions with MTLCommandBuffer must be serialized, which is commonly done using dispatch queues
| true |
2,813,403,617 | [MPSInductor] Compiled lgamma produces different results on M2 vs M4 for float16 | malfet | closed | [
"triaged",
"module: mps",
"oncall: pt2"
] | 3 | CONTRIBUTOR | ### 🐛 Describe the bug
Running this on my M4
```
% python -c "import torch; print(torch.compile(torch.ops.aten.lgamma)(torch.arange(128, device='mps', dtype=torch.half).sin()))"
```
results in tensor full of NaNs
```
tensor([ inf, 1.1414e-01, 5.9540e-02, 1.8916e+00, nan, nan,
nan, 3.1567e-01, 6.2943e-03, 7.6611e-01, nan, inf,
nan, 7.4658e-01, 5.4245e-03, 3.2495e-01, nan, nan,
nan, 1.8281e+00, 5.6671e-02, 1.1841e-01, nan, nan,
nan, nan, 1.8970e-01, 2.6672e-02, 1.2031e+00, nan,
nan, nan, 4.7778e-01, 0.0000e+00, 5.1709e-01, nan,
nan, nan, 1.1084e+00, 2.1942e-02, 2.0862e-01, nan,
nan, nan, 4.0234e+00, 1.0565e-01, 6.5002e-02, 2.0312e+00,
nan, nan, nan, 2.9834e-01, 7.7553e-03, 8.0713e-01,
nan, nan, nan, 7.0850e-01, 4.2725e-03, 3.4399e-01,
nan, nan, nan, 1.7119e+00, 5.1727e-02, 1.2708e-01,
nan, nan, nan, nan, 1.7798e-01, 3.0197e-02,
1.2715e+00, nan, nan, nan, 4.5288e-01, 2.8205e-04,
5.4492e-01, nan, nan, nan, 1.0508e+00, 1.9165e-02,
2.2168e-01, nan, nan, nan, 3.3223e+00, 9.8267e-02,
7.0923e-02, 2.1914e+00, nan, nan, nan, 2.8198e-01,
9.8114e-03, 8.5059e-01, nan, nan, nan, 6.7285e-01,
3.1242e-03, 3.6377e-01, nan, nan, nan, 1.6074e+00,
4.6844e-02, 1.3611e-01, nan, nan, nan, nan,
1.6663e-01, 3.4088e-02, 1.3457e+00, nan, nan, nan,
4.2896e-01, 5.6458e-04, 5.7471e-01, nan, nan, nan,
9.9561e-01, 1.6403e-02], device='mps:0', dtype=torch.float16)
```
while on M1 it's more reasonable
```
tensor([ inf, 1.1414e-01, 5.9540e-02, 1.8916e+00, 1.5957e+00, 3.2129e+00,
1.5117e+00, 3.1567e-01, 6.2943e-03, 7.6611e-01, 1.2725e+00, inf,
1.2705e+00, 7.4658e-01, 5.4245e-03, 3.2495e-01, 1.4912e+00, 3.2734e+00,
1.5781e+00, 1.8281e+00, 5.6671e-02, 1.1841e-01, 4.7344e+00, 1.9688e+00,
2.4141e+00, 2.1152e+00, 1.8970e-01, 2.6672e-02, 1.2031e+00, 1.3857e+00,
4.4102e+00, 1.3105e+00, 4.7778e-01, 0.0000e+00, 5.1709e-01, 1.2910e+00,
4.7969e+00, 1.3564e+00, 1.1084e+00, 2.1942e-02, 2.0862e-01, 1.9551e+00,
2.5273e+00, 1.8896e+00, 4.0234e+00, 1.0565e-01, 6.5002e-02, 2.0312e+00,
1.6309e+00, 3.0938e+00, 1.5547e+00, 2.9834e-01, 7.7553e-03, 8.0713e-01,
1.2793e+00, 7.6250e+00, 1.2666e+00, 7.0850e-01, 4.2725e-03, 3.4399e-01,
1.4541e+00, 3.4062e+00, 1.5459e+00, 1.7119e+00, 5.1727e-02, 1.2708e-01,
3.6445e+00, 2.0234e+00, 2.3398e+00, 2.2422e+00, 1.7798e-01, 3.0197e-02,
1.2715e+00, 1.4082e+00, 4.2305e+00, 1.3271e+00, 4.5288e-01, 2.8205e-04,
5.4492e-01, 1.2812e+00, 5.0625e+00, 1.3389e+00, 1.0508e+00, 1.9165e-02,
2.2168e-01, 1.8672e+00, 2.6094e+00, 1.8418e+00, 3.3223e+00, 9.8267e-02,
7.0923e-02, 2.1914e+00, 1.6689e+00, 2.9863e+00, 1.6035e+00, 2.8198e-01,
9.8114e-03, 8.5059e-01, 1.2871e+00, 6.9336e+00, 1.2656e+00, 6.7285e-01,
3.1242e-03, 3.6377e-01, 1.4229e+00, 3.5430e+00, 1.5137e+00, 1.6074e+00,
4.6844e-02, 1.3611e-01, 3.1445e+00, 2.0820e+00, 2.2715e+00, 2.3965e+00,
1.6663e-01, 3.4088e-02, 1.3457e+00, 1.4326e+00, 4.0234e+00, 1.3457e+00,
4.2896e-01, 5.6458e-04, 5.7471e-01, 1.2744e+00, 5.4297e+00, 1.3232e+00,
9.9561e-01, 1.6403e-02], device='mps:0', dtype=torch.float16)
```
### Versions
nightly
cc @kulinseth @albanD @DenisVieriu97 @jhavukainen @chauhang @penguinwu | true |
2,813,365,866 | Workaround for unsupported return of multiple tensors for `torch.cond` in a model intended for torch.onnx.export(dynamo=true,...) | ionymikler | closed | [
"module: onnx",
"triaged",
"oncall: pt2",
"module: higher order operators",
"oncall: export"
] | 14 | NONE | ### 🐛 Describe the bug
Trying to `onnx.export` an `nn.Module` with a conditional in its computational graph, in essence similar to this example:
```py
import torch
class Wrapper(torch.nn.Module):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.cond_model = CondModel()
def forward(self, x):
nt = self.cond_model(x)
return nt
class CondModel(torch.nn.Module):
def forward(self, x):
def true_fn(x,z):
x = x + 1.0
z = z * 0.0
return x,z
def false_fn(x,z):
x = x - 1.0
z = z * 1.0
return x,z
z = torch.rand(x.shape)
nt = torch.cond(x.sum() > 0, true_fn, false_fn, [x,z])
return nt
```
As per [the documentation](https://pytorch.org/docs/2.6/cond.html#torch._higher_order_ops.cond.cond), the return from `torch.cond` must be a single tensor. Is there a dirty workaround that allows getting multiple tensors back from the return?
I tried using nested tensors:
```py
def true_fn(x,z):
x = x + 1.0
z = z * 0.0
nt = torch.nested.nested_tensor([x,z], layout=torch.jagged)
return nt
```
But compile fails at validation of the `.shape` of the return tensors (`.shape` in `NestedTensors` [loses precise meaning](https://pytorch.org/docs/2.6/nested.html#size)):
```pytb
torch._dynamo.exc.Unsupported: Expect branches to return tensors with same metadata but find pair[0] differ in 'shape: torch.Size([2, s1]) vs torch.Size([2, s2])', 'stride: (s1, 1) vs (s2, 1)', where lhs is TensorMetadata(shape=torch.Size([2, s1]), dtype=torch.float32, requires_grad=False, stride=(s1, 1), memory_format=None, is_quantized=False, qparams={}) and rhs is TensorMetadata(shape=torch.Size([2, s2]), dtype=torch.float32, requires_grad=False, stride=(s2, 1), memory_format=None, is_quantized=False, qparams={})
```
<details>
**<summary>Full traceback here</summary>**
```pytb
Traceback (most recent call last):
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_dynamo/variables/higher_order_ops.py", line 55, in graph_break_as_hard_error
return fn(*args, **kwargs)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_dynamo/variables/higher_order_ops.py", line 906, in call_function
unimplemented(
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_dynamo/exc.py", line 356, in unimplemented
raise Unsupported(msg, case_name=case_name)
torch._dynamo.exc.Unsupported: Expect branches to return tensors with same metadata but find pair[0] differ in 'shape: torch.Size([2, s1]) vs torch.Size([2, s2])', 'stride: (s1, 1) vs (s2, 1)', where lhs is TensorMetadata(shape=torch.Size([2, s1]), dtype=torch.float32, requires_grad=False, stride=(s1, 1), memory_format=None, is_quantized=False, qparams={}) and rhs is TensorMetadata(shape=torch.Size([2, s2]), dtype=torch.float32, requires_grad=False, stride=(s2, 1), memory_format=None, is_quantized=False, qparams={})
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/iony/DTU/f24/thesis/code/early_exit_vit/simple/example_conditional.py", line 34, in <module>
result = model(input_tensor)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
File "/home/iony/DTU/f24/thesis/code/early_exit_vit/simple/example_conditional.py", line 9, in forward
nt = self.cond_model(x)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
File "/home/iony/DTU/f24/thesis/code/early_exit_vit/simple/example_conditional.py", line 28, in forward
nt = torch.cond(x.sum() > 0, true_fn, false_fn, [x,z])
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_higher_order_ops/cond.py", line 201, in cond
return torch.compile(_cond_op_wrapper, backend=backend, fullgraph=True)(
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_dynamo/eval_frame.py", line 576, in _fn
return fn(*args, **kwargs)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 1406, in __call__
return self._torchdynamo_orig_callable(
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 566, in __call__
return _compile(
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 1006, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 734, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 769, in _compile_inner
out_code = transform_code_object(code, transform)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_dynamo/bytecode_transformation.py", line 1402, in transform_code_object
transformations(instructions, code_options)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 237, in _fn
return fn(*args, **kwargs)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 681, in transform
tracer.run()
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 2906, in run
super().run()
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1076, in run
while self.step():
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 986, in step
self.dispatch_table[inst.opcode](self, inst)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 683, in wrapper
return inner_fn(self, inst)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1763, in CALL_FUNCTION_EX
self.call_function(fn, argsvars.items, kwargsvars)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 921, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_dynamo/variables/higher_order_ops.py", line 58, in graph_break_as_hard_error
raise UncapturedHigherOrderOpError(reason + msg) from e
torch._dynamo.exc.UncapturedHigherOrderOpError: Cond doesn't work unless it is captured completely with torch.compile. Scroll up to find out what causes the graph break.
from user code:
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_higher_order_ops/cond.py", line 193, in _cond_op_wrapper
return cond_op(*args, **kwargs)
```
</details>
Is the feature not implemented, even in a nightly? Or is there another workaround that might work if I only intend to run inference?
### Error logs
_No response_
### Versions
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] onnx==1.17.0
[pip3] onnxruntime==1.19.2
[pip3] onnxscript==0.1.0.dev20241226
[pip3] torch==2.6.0.dev20241226+cu124
cc @chauhang @penguinwu @zou3519 @ydwu4 @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @bdhirsh @yf225 | true |
2,813,335,428 | "AssertionError: Guard check failed" on PyTorch nightlies after 2025-01-22 when running torchao tests | vkuzo | closed | [
"high priority",
"triaged",
"oncall: pt2",
"module: dynamo"
] | 10 | CONTRIBUTOR | ### 🐛 Describe the bug
Running the following command introduces an assertion error on recent versions of PyTorch nightlies:
```
// run a test in torchao repo
// test source code: https://github.com/pytorch/ao/blob/main/test/prototype/test_smoothquant.py
> pytest test/prototype/test_smoothquant.py -s -x
...
if not output_graph.export:
if not self.guard_manager.check(output_graph.local_scope):
reasons = get_guard_fail_reason_helper(
self.guard_manager, # type: ignore[arg-type]
output_graph.local_scope,
CompileContext.current_compile_id(),
)
> raise AssertionError(f"Guard check failed: {reasons}")
E AssertionError: Guard check failed: 0/0: ___check_metadata_139942440334048_c0/0
E
E
E You can suppress this exception and fall back to eager by setting:
E import torch._dynamo
E torch._dynamo.config.suppress_errors = True
```
Full output: https://gist.github.com/vkuzo/b50ac59aa0936fd072250d7544aaa2a9
The nightly from `20250122` does not have this failure, but the nightlies from `20250123` and after do.
### Versions
https://gist.github.com/vkuzo/ddcaa43948a96cf63e58fb71fbdcbf68
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @amjames | true |
2,813,279,758 | add pt2 callbacks for backward pass and prevent duplicate callbacks | burak-turk | closed | [
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"module: dynamo",
"ciflow/inductor"
] | 6 | CONTRIBUTOR | Summary: This change adds callbacks for lazy backward compilation while preventing duplicate callbacks from being fired.
Differential Revision: D68577593
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,813,256,319 | [Performance] `tensordot` has substantial overhead | randolf-scholz | open | [
"module: performance",
"triaged",
"module: linear algebra"
] | 11 | CONTRIBUTOR | ### 🐛 Describe the bug
`torch.tensordot(A, B)` is ~50% slower than `torch.dot(A.flatten(), B.flatten())` for matrices of size ≤ 512 on GPU, and on CPU it is even slower than `torch.trace(A.T @ B)` until n = 1024.


## Why this matters
A common mistake is to compute a quantity expressed in a math textbook / paper like $\text{trace}(A^T B)$ naively how it is written, which is $O(n^3)$, when it could be computed as $\langle A \mid B \rangle_{F} = \sum_{ij} A_{ij}B_{ij}$ which is $O(n^2)$. (The former computes all off-diagonal entries of the matrix product, which are irrelevant for the trace).
This would make a good lint rule, but the most natural implementation of the Frobenius inner product, which is `tensordot`, has this huge overhead and so should be currently avoided for small matrices.
## Expectation
- `tensordot(A, B)` should not be substantially slower than `torch.dot(A.flatten(), B.flatten())`.
- `tensordot(A, B)` should usually be faster than `torch.sum(A*B)`, as the latter cannot take advantage of fused multiply-add instructions.
## References:
- https://github.com/numpy/numpy/issues/25713
- https://github.com/astral-sh/ruff/issues/9664
- Code: https://gist.github.com/randolf-scholz/9b166ff6b112820d123037448369a0b7
### Versions
<details>
Collecting environment information...
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.31.2
Libc version: glibc-2.35
Python version: 3.12.8 (main, Dec 4 2024, 08:54:12) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.8.0-51-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.8.61
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3060
Nvidia driver version: 570.86.10
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.7
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Vendor ID: GenuineIntel
Model name: 11th Gen Intel(R) Core(TM) i7-11850H @ 2.50GHz
CPU family: 6
Model: 141
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 1
Stepping: 1
CPU max MHz: 4800,0000
CPU min MHz: 800,0000
BogoMIPS: 4992.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l2 cdp_l2 ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req vnmi avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq rdpid movdiri movdir64b fsrm avx512_vp2intersect md_clear ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 384 KiB (8 instances)
L1i cache: 256 KiB (8 instances)
L2 cache: 10 MiB (8 instances)
L3 cache: 24 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-15
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy==1.14.1
[pip3] mypy-extensions==1.0.0
[pip3] numpy==2.2.2
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.5.1
[pip3] torchinfo==1.8.0
[pip3] triton==3.1.0
[conda] Could not collect
</details>
cc @msaroufim @jianyuh @nikitaved @pearu @mruberry @walterddr @xwang233 @Lezcano | true |
2,813,142,448 | [export] Fail fast on pytorch with `aoti_load_package` | bhack | open | [
"oncall: pt2",
"export-triaged",
"oncall: export"
] | 4 | CONTRIBUTOR | ### 🐛 Describe the bug
When we use `aoti_compile_and_package` on a GPU machine and then call `aoti_load_package` on another instance without a GPU, it fails internally at:
https://github.com/triton-lang/triton/blob/main/python/triton/runtime/driver.py#L8
The resulting message is ambiguous:
`raise RuntimeError(f"{len(actives)} active drivers ({actives}). There should only be one.")`
Can we fail fast and safely in PyTorch with a more user-friendly message instead of surfacing Triton internals?
### Versions
nightly
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 | true |
2,813,139,536 | KL div between t-distribution | moghadas76 | open | [
"module: distributions",
"triaged"
] | 2 | NONE | ### 🐛 Describe the bug
KL divergence between Student's t-distributions is not implemented:
```python
target_distribution = StudentT(
df=df,
loc=target,
scale=torch.ones_like(target) * scale.mean() # Using mean scale for target
)
# Compute KL divergence
kl_div = torch.distributions.kl_divergence(pred_distribution, target_distribution)
```
raises a `NotImplementedError`.

[reference](https://arxiv.org/abs/1701.05638)
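Until an analytic kernel lands, one hedged workaround is to register a custom estimator via `torch.distributions.kl.register_kl`; the Monte Carlo approach and the sample count below are choices of this sketch, not PyTorch defaults:

```python
import torch
from torch.distributions import StudentT
from torch.distributions.kl import kl_divergence, register_kl

@register_kl(StudentT, StudentT)
def _kl_studentt_studentt(p, q):
    # Monte Carlo estimate of KL(p || q) = E_{x ~ p}[log p(x) - log q(x)].
    # Unbiased but noisy; raise the sample count for lower variance.
    x = p.rsample((10_000,))
    return (p.log_prob(x) - q.log_prob(x)).mean(0)

torch.manual_seed(0)
p = StudentT(df=5.0, loc=0.0, scale=1.0)
q = StudentT(df=5.0, loc=1.0, scale=1.0)
kl = kl_divergence(p, q)  # now dispatches to the estimator above
```

This plugs into the existing `torch.distributions.kl_divergence` dispatch, so downstream code using that entry point keeps working unchanged.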
### Versions
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.12.4 | packaged by Anaconda, Inc. | (main, Jun 18 2024, 15:12:24) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.8.0-51-generic-x86_64-with-glibc2.35
Is CUDA available: N/A
CUDA runtime version: 11.5.119
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 4090
GPU 1: NVIDIA GeForce RTX 4090
Nvidia driver version: 555.42.02
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Vendor ID: GenuineIntel
Model name: 13th Gen Intel(R) Core(TM) i9-13900F
CPU family: 6
Model: 183
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 1
Stepping: 1
CPU max MHz: 5600.0000
CPU min MHz: 800.0000
BogoMIPS: 3993.60
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 896 KiB (24 instances)
L1i cache: 1.3 MiB (24 instances)
L2 cache: 32 MiB (12 instances)
L3 cache: 36 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-31
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.1.0
[conda] blas 1.0 mkl
[conda] cuda-cudart 12.1.105 0 nvidia
[conda] cuda-cupti 12.1.105 0 nvidia
[conda] cuda-libraries 12.1.0 0 nvidia
[conda] cuda-nvrtc 12.1.105 0 nvidia
[conda] cuda-nvtx 12.1.105 0 nvidia
[conda] cuda-opencl 12.3.52 0 nvidia
[conda] cuda-runtime 12.1.0 0 nvidia
[conda] easy-torch 1.3.2 pypi_0 pypi
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] libcublas 12.1.0.26 0 nvidia
[conda] libcufft 11.0.2.4 0 nvidia
[conda] libcurand 10.3.4.52 0 nvidia
[conda] libcusolver 11.4.4.55 0 nvidia
[conda] libcusparse 12.0.2.55 0 nvidia
[conda] libjpeg-turbo 2.0.0 h9bf148f_0 pytorch
[conda] libnvjitlink 12.1.105 0 nvidia
[conda] mkl 2023.1.0 h213fc3f_46343
[conda] mkl-service 2.4.0 py311h5eee18b_1
[conda] mkl_fft 1.3.8 py311h5eee18b_0
[conda] mkl_random 1.2.4 py311hdb19cb5_0
[conda] numpy 1.24.4 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.1.3.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cudnn-cu12 8.9.2.26 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.0.2.54 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.2.106 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.4.5.107 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.1.0.106 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.20.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.1.105 pypi_0 pypi
[conda] pytorch-cuda 12.1 ha16c6d3_5 pytorch
[conda] pytorch-forecasting 1.2.0 pypi_0 pypi
[conda] pytorch-lightning 2.2.0 pypi_0 pypi
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torch 2.3.0 pypi_0 pypi
[conda] torch-cluster 1.6.3+pt23cu121 pypi_0 pypi
[conda] torch-geometric 2.4.0 pypi_0 pypi
[conda] torch-scatter 2.1.2+pt23cu121 pypi_0 pypi
[conda] torch-sparse 0.6.18+pt23cu121 pypi_0 pypi
[conda] torch-spline-conv 1.2.2+pt23cu121 pypi_0 pypi
[conda] torch-summary 1.4.5 pypi_0 pypi
[conda] torchaudio 2.3.0 pypi_0 pypi
[conda] torchinfo 1.8.0 pypi_0 pypi
[conda] torchmetrics 1.3.0.post0 pypi_0 pypi
[conda] torchsummary 1.5.1 pypi_0 pypi
[conda] torchvision 0.18.0 pypi_0 pypi
[conda] triton 2.3.0 pypi_0 pypi
cc @fritzo @neerajprad @alicanb @nikitaved | true |
2,812,788,078 | [ATen][Native][CUDA][SCALED_MM] limit f8f8bf16 rowwise scaled matmul to sm_90 | Aidyn-A | closed | [
"module: cuda",
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"matrix multiplication",
"module: float8",
"module: core aten"
] | 6 | COLLABORATOR | The CUTLASS-based kernel for f8f8bf16 rowwise scaled matmul is specific to Hopper devices. It is not reusable on newer devices without modification. This PR adds a guard restricting this matmul to sm_90. Once a kernel for newer devices exists, the guard may be removed.
cc @ptrblck @msaroufim @eqy @yanbing-j @vkuzo @albanD @kadeng @penguinwu @manuelcandales @SherlockNoMad @angelayi | true |
2,812,586,788 | [NT] torch.cat for nested tensors | FloCF | closed | [] | 1 | NONE | ### 🚀 The feature, motivation and pitch
Nested tensors are a great way to handle a batch of sequences with different lengths, which is often the case in NLP or ViTs for images of very different resolutions.
Often, you add a `cls_token` to the input tensor as the first token, which currently (AFAIK) is not straightforward with nested tensors.
Ideally, I would expect something like the following to work:
```py
#nt = nested tensor of shape (4, *seq_len, 256)
cls_token = torch.randn(1, 256).expand(nt.size(0), 1, -1)
# adding the cls_token to all sequences in the batch
out = torch.cat((cls_token, nt), dim=1)
```
### Alternatives
Currently, I would have to do a workaround like this, where I am unsure if the backward pass is preserved correctly:
```py
cls_token = torch.randn(1, 256)
concatenated_tensors = [torch.cat((cls_token, t), dim=0) for t in nt.unbind()]
torch.nested.as_nested_tensor(concatenated_tensors)
```
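As for the backward-pass concern: the per-sequence `torch.cat` in the workaround is ordinary autograd-tracked code, which can be checked directly. This is a minimal sketch that only validates the concatenation step; whether `as_nested_tensor` additionally preserves history can depend on layout and version:

```python
import torch

cls_token = torch.randn(1, 256, requires_grad=True)
seqs = [torch.randn(n, 256) for n in (3, 5, 7)]  # ragged sequence lengths

# Same per-sequence concatenation as in the workaround above.
out = [torch.cat((cls_token, t), dim=0) for t in seqs]

loss = sum(o.sum() for o in out)
loss.backward()
# cls_token takes part in every concatenation, so its grad accumulates one
# contribution per sequence (here: 3 sequences -> grad of 3.0 everywhere).
```

So gradients do flow through the list-comprehension concatenation itself.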
### Additional context
torch== 2.5.1
And keep up the awesome work you do for PyTorch and NT! 👍
cc @jbschlosser @cpuhrsch | true |