| id (int64) | title (string) | user (string) | state (string) | labels (list) | comments (int64) | author_association (string) | body (string) | is_title (bool) |
|---|---|---|---|---|---|---|---|---|
2,964,096,425
|
Raise error/warning when calling collectives with tensors of different dtypes.
|
gkroiz
|
closed
|
[
"oncall: distributed"
] | 3
|
NONE
|
### 🚀 The feature, motivation and pitch
I've noticed that collectives with tensors of different data types will cause hangs. This behavior makes sense, and I think this issue should only happen when there is a logical error in user code. However, it could be helpful if some warning/error were raised when this happens, to help with debugging. Maybe this already exists?
### Alternatives
_No response_
### Additional context
```python
import torch
import torch.distributed as dist


def init_distributed():
    dist.init_process_group(backend="nccl")
    torch.cuda.set_device(dist.get_rank())


def main():
    init_distributed()
    rank = dist.get_rank()
    print(f"Rank: {rank}, World size: {dist.get_world_size()}")
    if rank % 2 != 0:
        t = torch.tensor([1], dtype=torch.float32).cuda()
    else:
        t = torch.tensor([2], dtype=torch.float32).cuda()
        # Uncomment line below to trigger hang
        # t = torch.tensor([2], dtype=torch.bfloat16).cuda()
    dist.all_reduce(t)
    torch.cuda.synchronize()
    print(f"Rank: {rank}, t: {t.item()}")


if __name__ == "__main__":
    main()
```
The example script above reproduces the hanging behavior when `# t = torch.tensor([2], dtype=torch.bfloat16).cuda()` is uncommented (run with `torchrun`).
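In the meantime, a user-side guard is possible; the helper below is a hypothetical sketch (not an existing PyTorch API) that gathers every rank's dtype with `all_gather_object` and raises before issuing the mismatched collective, at the cost of one extra collective per call:
```python
import torch
import torch.distributed as dist


def checked_all_reduce(t: torch.Tensor) -> None:
    # Gather every rank's dtype as a string and fail fast on mismatch,
    # instead of letting the collective hang.
    dtypes = [None] * dist.get_world_size()
    dist.all_gather_object(dtypes, str(t.dtype))
    if len(set(dtypes)) != 1:
        raise RuntimeError(f"Mismatched dtypes across ranks: {dtypes}")
    dist.all_reduce(t)
```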
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,964,095,892
|
pytorch pip install instructions: always include the cuda index
|
stas00
|
open
|
[
"module: docs",
"oncall: releng",
"triaged",
"needs design"
] | 4
|
CONTRIBUTOR
|
I'd like to propose that the default CUDA install command include the explicit CUDA version target, i.e. instead of:
pip3 install torch torchvision torchaudio
this:
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu124
This is because the current way, while "neat", doesn't follow the principle of least surprise - someone putting `pip3 install torch torchvision torchaudio` in a script where it has to be cuda-12.4 will break their build as soon as pytorch upgrades the default to a higher CUDA version. Granted, an advanced user is likely to figure out that they have to add `--index-url https://download.pytorch.org/whl/cu124` themselves to future-proof their build system, but many won't know that. Also, I'm aware that pytorch will bring whatever cuda version libs it needs, but this is for the use case where a specific cuda build is wanted and not just the latest.
The other way to do it is to rename the default install command w/o index to "CUDA default" and have the CUDA 12.4 option point to `--index-url https://download.pytorch.org/whl/cu124`.
That way it's clear to the user that if they just want the defaults, they can use the command w/o index url, but if they want a specific cuda version they have to use the indexed version.
A third option is to list both commands, like so:
pip3 install torch torchvision torchaudio
# if you want to future-proof your build script to cuda-12.4, use this instead:
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu124
tag: @seemethere
cc @svekars @sekyondaMeta @AlannaBurke
| true
|
2,964,083,110
|
[WIP][dynamic shapes] guard_or_false rewrite for fake_impls.py:infer_size, compute_contiguous
|
pianpwk
|
open
|
[] | 2
|
CONTRIBUTOR
|
Fixes #ISSUE_NUMBER
| true
|
2,964,068,197
|
UNSTABLE pull / linux-jammy-xpu-2025.0-py3.9 / build
|
malfet
|
closed
|
[
"module: ci",
"triaged",
"unstable"
] | 2
|
CONTRIBUTOR
|
It started to fail after a PR that did not touch anything related to XPUs was landed; see https://hud.pytorch.org/hud/pytorch/pytorch/783f045c4f26cb9d11789d80f68a86854dfad9f9/1?per_page=50&name_filter=xpu&mergeLF=true
cc @seemethere @pytorch/pytorch-dev-infra
| true
|
2,964,029,625
|
[dynamo] Lazily import fsdp-related modules
|
anijain2305
|
open
|
[
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"module: dynamo",
"ciflow/inductor"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150689
* __->__ #150429
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,964,020,921
|
[export] Strict-export fails with list of modules
|
angelayi
|
open
|
[
"oncall: pt2",
"oncall: export"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
([x-post from internal](https://fb.workplace.com/groups/pytorch.edge.users/permalink/1732340610969558/))
```python
def test_list_model(self):
    class A(torch.nn.Module):
        def __init__(self):
            super().__init__()
            self.a = torch.nn.Parameter(torch.ones(3, 3))

        def forward(self, x):
            return x + self.a

    class M(torch.nn.Module):
        def __init__(self):
            super().__init__()
            self.models = [A(), A()]

        def forward(self, x):
            for m in self.models:
                x = m(x)
            return x

    inp = (torch.ones(3, 3),)
    M()(*inp)
    ep = torch.export.export(M(), inp, strict=True)
    print(ep)
```
Strict-export fails with:
```
File "/data/users/angelayi/pytorch/torch/export/_trace.py", line 1981, in _export_for_training
export_artifact = export_func(
File "/data/users/angelayi/pytorch/torch/export/_trace.py", line 1475, in _strict_export
_replace_param_buffer_names(param_buffer_table, export_graph_signature)
File "/data/users/angelayi/pytorch/torch/export/_trace.py", line 271, in _replace_param_buffer_names
spec.target = param_buffer_table[spec.target]
KeyError: 'L__self___models_0_a
```
Non-strict considers the parameters as constant tensors, which I believe is the correct approach, because that parameter is not in the state dict of the eager module:
```
ExportedProgram:
class GraphModule(torch.nn.Module):
def forward(self, c_lifted_tensor_0: "f32[3, 3]", c_lifted_tensor_1: "f32[3, 3]", x: "f32[3, 3]"):
# File: /data/users/angelayi/pytorch/moo.py:325 in forward, code: return x + self.a
add: "f32[3, 3]" = torch.ops.aten.add.Tensor(x, c_lifted_tensor_0); x = c_lifted_tensor_0 = None
add_1: "f32[3, 3]" = torch.ops.aten.add.Tensor(add, c_lifted_tensor_1); add = c_lifted_tensor_1 = None
return (add_1,)
Graph signature:
# inputs
c_lifted_tensor_0: CONSTANT_TENSOR target='lifted_tensor_0'
c_lifted_tensor_1: CONSTANT_TENSOR target='lifted_tensor_1'
x: USER_INPUT
# outputs
add_1: USER_OUTPUT
```
I feel like the proper way to write the code is to use torch.nn.ModuleList or torch.nn.Sequential.
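For reference, a minimal sketch of that suggested rewrite using `torch.nn.ModuleList` (it reuses the `A` module from the repro above, so the parameters get registered and show up in the module's state dict):
```python
import torch


class M(torch.nn.Module):
    def __init__(self):
        super().__init__()
        # ModuleList registers the submodules, so their parameters appear in
        # M().state_dict() and both strict and non-strict export see them.
        self.models = torch.nn.ModuleList([A(), A()])

    def forward(self, x):
        for m in self.models:
            x = m(x)
        return x
```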
### Versions
main
| true
|
2,964,013,459
|
[BE] Move all lint runner to 24.04
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing"
] | 5
|
CONTRIBUTOR
|
As Ubuntu 20.04 reached EOL on Apr 1st, see https://github.com/actions/runner-images/issues/11101
It also forced the Python version to stay at 3.8.
Delete all linux-20.04 runners from lintrunner.yml.
| true
|
2,963,987,119
|
clang-format aten/src/ATen/cpu/vec/*.h
|
swolchok
|
closed
|
[
"module: cpu",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150568
* #150380
* __->__ #150426
I got a complaint about indentation on #150380. Make the machines fix it for us.
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| true
|
2,963,985,001
|
clang-format aten/src/ATen/cpu/vec/*.h
|
swolchok
|
closed
|
[
"module: cpu"
] | 4
|
CONTRIBUTOR
|
I got a complaint about indentation on #150380. Make the machines fix it for us.
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| true
|
2,963,983,966
|
[ez] Remove dead lite interpreter CI code
|
clee2000
|
closed
|
[
"Merged",
"topic: not user facing"
] | 4
|
CONTRIBUTOR
|
There are no lite-interpreter build environments in CI
I assume every mac build is arm64
| true
|
2,963,970,643
|
Make CompileEventLogger more defensive w.r.t to AOTAutogradCache and FXGraphCache
|
jamesjwu
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"module: dynamo",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150423
This PR makes it so that we don't crash due to logging if we invoke AOTAutogradCache/FXGraphCache without using dynamo. This is preparation for supporting certain vLLM use cases where they store graph modules and have special handling in conjunction with the caches.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,963,957,515
|
Test self hosted GPU runner
|
zhe-thoughts
|
open
|
[
"triaged",
"open source",
"topic: not user facing",
"ciflow/periodic"
] | 2
|
NONE
|
This is for experimenting with hosting GitHub runners on NVIDIA-managed hardware.
| true
|
2,963,903,833
|
Support tuning of _scaled_grouped_mm
|
bertmaher
|
closed
|
[
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"module: dynamo",
"ciflow/inductor",
"ci-no-td"
] | 21
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150421
This includes the default aten implementation, as well as a Triton
implementation imported from FBGEMM
(https://github.com/pytorch/FBGEMM/blob/main/fbgemm_gpu/experimental/gemm/triton_gemm/grouped_gemm.py)
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,963,881,484
|
UNSTABLE inductor-unittest / linux-jammy-cpu-py3.12-gcc11-inductor-halide / build
|
seemethere
|
closed
|
[
"module: ci",
"triaged",
"oncall: pt2",
"module: inductor",
"unstable",
"topic: inductor halide backend"
] | 3
|
MEMBER
|
This is actually a 2 part failure:
Part 1
* The actual job `linux-jammy-cpu-py3.12-gcc11-inductor-halide` is failing because the docker image is attempting to be rebuilt on a `c5.2xlarge` ([link](https://github.com/pytorch/pytorch/actions/runs/14191050396/job/39755627529))
* This is causing a timeout because the `c5.2xlarge` doesn't actually have enough compute to be able to compile the CUDA stuff
Part 2 (actual failure)
* The docker-build job for this halide configuration is failing (on a larger runner) due to a minimum cmake version bump: ([link]( https://github.com/pytorch/pytorch/actions/runs/14140877129/job/39621933172#step:6:66338))
```
> [final 12/30] RUN if [ -n "yes" ]; then bash ./install_halide.sh; fi:
652.1 Compatibility with CMake < 3.5 has been removed from CMake.
652.1
652.1 Update the VERSION argument <min> value. Or, use the <min>...<max> syntax
652.1 to tell CMake that the project requires at least <min> but has been updated
652.1 to work with policies introduced by <max> or earlier.
652.1
652.1 Or, add -DCMAKE_POLICY_VERSION_MINIMUM=3.5 to try configuring anyway.
652.1
652.1
652.1 -- Configuring incomplete, errors occurred!
```
### What should we do?
We should most likely fix the CMake failure first, since even if we resolve the timeout issue the CMake failure will still persist in the Halide step.
cc @malfet @pytorch/pytorch-dev-infra @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @aakhundov
| true
|
2,963,856,466
|
Add stride + dtype to autotune results
|
PaulZhang12
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150419
Add stride/dtype info to autotune gemm results. New output header:
`AUTOTUNE mm(1024x1024, 1024x7680)`
`strides: [1, 1024], [7680, 1]`
`dtypes: torch.bfloat16, torch.bfloat16`
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
Differential Revision: [D72253313](https://our.internmc.facebook.com/intern/diff/D72253313)
| true
|
2,963,853,936
|
ci: Use cache / progress when local docker build
|
seemethere
|
closed
|
[
"topic: not user facing"
] | 6
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150418
It's a bit annoying to try and work on these locally when the cache / progress isn't being used, so let's set it up so that those flags are only applied when running in CI directly.
`${CI}` is a default environment variable that's defined by actions
itself.
See https://docs.github.com/en/actions/writing-workflows/choosing-what-your-workflow-does/store-information-in-variables#default-environment-variables
Signed-off-by: Eli Uriegas <eliuriegas@meta.com>
| true
|
2,963,849,380
|
DISABLED test_foreach_copy_with_multi_dtypes__foreach_copy_cuda_uint8 (__main__.TestForeachCUDA)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"skipped",
"module: mta"
] | 5
|
NONE
|
Platforms: linux, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_foreach_copy_with_multi_dtypes__foreach_copy_cuda_uint8&suite=TestForeachCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/39760851170).
Over the past 3 hours, it has been determined flaky in 6 workflow(s) with 12 failures and 6 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_foreach_copy_with_multi_dtypes__foreach_copy_cuda_uint8`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1159, in test_wrapper
return test(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/mock.py", line 1833, in _inner
return f(*args, **kw)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1420, in only_fn
return fn(slf, *args, **kwargs)
File "/var/lib/jenkins/workspace/test/test_foreach.py", line 1349, in test_foreach_copy_with_multi_dtypes
out = foreach_copy_(
File "/var/lib/jenkins/workspace/test/test_foreach.py", line 91, in __call__
assert mta_called == (expect_fastpath and (not zero_size)), (
AssertionError: mta_called=False, expect_fastpath=True, zero_size=False, self.func.__name__='_foreach_copy_', keys=('aten::_foreach_copy_', 'Unrecognized', 'cudaLaunchKernel', 'Lazy Function Loading', 'cudaDeviceSynchronize')
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3153, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3153, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 454, in instantiated_test
result = test(self, **param_kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1612, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1171, in test_wrapper
raise e_tracked from e
Exception: Caused by sample input at index 0: SampleInput(input=TensorList[Tensor[size=(20, 20), device="cuda:0", dtype=torch.uint8], Tensor[size=(19, 19), device="cuda:0", dtype=torch.uint8], Tensor[size=(18, 18), device="cuda:0", dtype=torch.uint8], Tensor[size=(17, 17), device="cuda:0", dtype=torch.uint8], Tensor[size=(16, 16), device="cuda:0", dtype=torch.uint8], Tensor[size=(15, 15), device="cuda:0", dtype=torch.uint8], Tensor[size=(14, 14), device="cuda:0", dtype=torch.uint8], Tensor[size=(13, 13), device="cuda:0", dtype=torch.uint8], Tensor[size=(12, 12), device="cuda:0", dtype=torch.uint8], Tensor[size=(11, 11), device="cuda:0", dtype=torch.uint8], Tensor[size=(10, 10), device="cuda:0", dtype=torch.uint8], Tensor[size=(9, 9), device="cuda:0", dtype=torch.uint8], Tensor[size=(8, 8), device="cuda:0", dtype=torch.uint8], Tensor[size=(7, 7), device="cuda:0", dtype=torch.uint8], Tensor[size=(6, 6), device="cuda:0", dtype=torch.uint8], Tensor[size=(5, 5), device="cuda:0", dtype=torch.uint8], Tensor[size=(4, 4), device="cuda:0", dtype=torch.uint8], Tensor[size=(3, 3), device="cuda:0", dtype=torch.uint8], Tensor[size=(2, 2), device="cuda:0", dtype=torch.uint8], Tensor[size=(1, 1), device="cuda:0", dtype=torch.uint8]], args=(TensorList[Tensor[size=(20, 20), device="cuda:0", dtype=torch.uint8], Tensor[size=(19, 19), device="cuda:0", dtype=torch.uint8], Tensor[size=(18, 18), device="cuda:0", dtype=torch.uint8], Tensor[size=(17, 17), device="cuda:0", dtype=torch.uint8], Tensor[size=(16, 16), device="cuda:0", dtype=torch.uint8], Tensor[size=(15, 15), device="cuda:0", dtype=torch.uint8], Tensor[size=(14, 14), device="cuda:0", dtype=torch.uint8], Tensor[size=(13, 13), device="cuda:0", dtype=torch.uint8], Tensor[size=(12, 12), device="cuda:0", dtype=torch.uint8], Tensor[size=(11, 11), device="cuda:0", dtype=torch.uint8], Tensor[size=(10, 10), device="cuda:0", dtype=torch.uint8], Tensor[size=(9, 9), device="cuda:0", dtype=torch.uint8], Tensor[size=(8, 8), device="cuda:0", dtype=torch.uint8], Tensor[size=(7, 7), device="cuda:0", dtype=torch.uint8], Tensor[size=(6, 6), device="cuda:0", dtype=torch.uint8], Tensor[size=(5, 5), device="cuda:0", dtype=torch.uint8], Tensor[size=(4, 4), device="cuda:0", dtype=torch.uint8], Tensor[size=(3, 3), device="cuda:0", dtype=torch.uint8], Tensor[size=(2, 2), device="cuda:0", dtype=torch.uint8], Tensor[size=(1, 1), device="cuda:0", dtype=torch.uint8]]), kwargs={}, broadcasts_input=False, name='')
To execute this test, run the following from the base repo dir:
PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=0 python test/test_foreach.py TestForeachCUDA.test_foreach_copy_with_multi_dtypes__foreach_copy_cuda_uint8
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_foreach.py`
cc @clee2000 @crcrpar @mcarilli @janeyx99
| true
|
2,963,787,976
|
`torch.backends.mkldnn.flags()` CM should not warn
|
pytorchbot
|
closed
|
[
"module: cpu",
"module: mkldnn",
"open source",
"ciflow/linux-aarch64"
] | 1
|
COLLABORATOR
|
By returning `None` rather than `False` from `THPModule_allowTF32OneDNN` when USE_XPU is not defined.
Added a regression test.
Fixes https://github.com/pytorch/pytorch/issues/149829
| true
|
2,963,779,297
|
[Inductor] Fix scaled_mm template migration missing endif block
|
PaulZhang12
|
open
|
[
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150415
* #150045
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
Differential Revision: [D72254740](https://our.internmc.facebook.com/intern/diff/D72254740)
| true
|
2,963,762,482
|
Fix for CVE-2024-7804 needed
|
MateiLipan
|
closed
|
[] | 1
|
NONE
|
Cross posting from https://github.com/pytorch/pytorch/issues/149044#issuecomment-2757233290
@ZainRizvi let me know if this is the right place
Hi team,
Didn't know where it is best to write this, but I would suggest scheduling [CVE-2024-7804](https://www.cve.org/CVERecord?id=CVE-2024-7804) for a future release. Even though the description says <=2.3.1, I see that `_InternalRPCPickler.deserialize` has not been patched yet; the code is the same in 2.3.1 and 2.6.0.
2.3.1: https://github.com/pytorch/pytorch/blob/v2.3.1/torch/distributed/rpc/internal.py#L146
2.6.0: https://github.com/pytorch/pytorch/blob/v2.6.0/torch/distributed/rpc/internal.py#L148
An exact description of the vulnerability can be found here: https://huntr.com/bounties/0e870eeb-f924-4054-8fac-d926b1fb7259
### Versions
2.3.x - 2.6.x
| true
|
2,963,681,991
|
Disable -Werror for s390x test module compilation
|
AlekseiNikiforovIBM
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/s390"
] | 3
|
COLLABORATOR
|
This change should make nightly testsuite green again for s390x.
| true
|
2,963,606,447
|
[MPSInductor] Fix neg for unsigned types
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150412
* #150386
By more-or-less copy-n-pasting the fix from https://github.com/pytorch/pytorch/pull/94035
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,963,563,561
|
Test layout_opt_default set to 0
|
atalman
|
open
|
[
"ciflow/periodic",
"module: inductor",
"ciflow/inductor",
"keep-going",
"ci-no-td",
"ciflow/inductor-periodic"
] | 2
|
CONTRIBUTOR
|
Fixes #ISSUE_NUMBER
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,963,381,544
|
Add torch._scaled_mm for CPU
|
yanbing-j
|
closed
|
[
"module: cpu",
"open source",
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ciflow/slow",
"ciflow/rocm",
"ci-no-td",
"ciflow/inductor-rocm",
"ciflow/rocm-mi300"
] | 15
|
COLLABORATOR
|
This PR is a duplicate of https://github.com/pytorch/pytorch/pull/139975.
This PR adds `torch._scaled_mm` for the CPU backend.
`_scaled_mm_out_cpu` and `_scaled_mm_cpu` are newly added and included in the `torch._scaled_mm` CPU dispatch. We also add `_scaled_mm_out_cpu_emulated` as a fallback function for when the current platform cannot run FP8 matmul using oneDNN. This PR also updates the various FP8-related UTs to support CPU tests.
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @voznesenskym @penguinwu @EikanWang @Guobing-Chen @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,963,319,009
|
Update torch-xpu-ops commit pin
|
xytintel
|
closed
|
[
"open source",
"topic: not user facing",
"ciflow/xpu"
] | 9
|
CONTRIBUTOR
|
Update the torch-xpu-ops commit to [11320f39484d1887870f24172c4803392491a76c](https://github.com/intel/torch-xpu-ops/commit/11320f39484d1887870f24172c4803392491a76c), including:
- Move torch-xpu-ops commit pin from release/2.7 to main branch.
- ~~Bugfix of building error relating to XCCL. (Since https://github.com/pytorch/pytorch/pull/148590 has not landed, we decided to revert the XCCL changes to avoid breaking our CI.)~~
- Fixes #150001 by removing pre-CXX11 ABI logic from build script for XPU
- Resolves new added UT failures relating to https://github.com/pytorch/pytorch/pull/145241
- Fixes https://github.com/pytorch/pytorch/issues/150430
| true
|
2,962,954,897
|
Tensors with the same shape, all contiguous, but with different strides
|
Xiang-cd
|
open
|
[
"triaged"
] | 7
|
NONE
|
### 🐛 Describe the bug
```python
import torch

base = './'
q = torch.load(f'{base}/problemq2.pt')
k = torch.load(f'{base}/problemk2.pt')
v = torch.load(f'{base}/problemv2.pt')
o = torch.load(f'{base}/problemo2.pt')
print(q.stride(), k.stride(),v.stride(),o.stride())
print(q.shape, k.shape, v.shape, o.shape)
print(q.is_contiguous(), k.is_contiguous(),v.is_contiguous(),o.is_contiguous())
```
output is:
```
(256000, 256000, 64, 1) (256000, 256000, 64, 1) (512000, 256000, 64, 1) (256000, 256000, 64, 1)
torch.Size([1, 2, 4000, 64]) torch.Size([1, 2, 4000, 64]) torch.Size([1, 2, 4000, 64]) torch.Size([1, 2, 4000, 64])
True True True True
```
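For what it's worth, this can be reproduced without the `.pt` files: the contiguity check skips dimensions of size 1, so tensors of this shape can record different strides for the leading dimension and still all report `is_contiguous() == True`. A minimal sketch (the `as_strided` call is only an assumption used to mimic the strides of the loaded tensors):
```python
import torch

x = torch.randn(1, 2, 4000, 64)
# Canonical contiguous strides for this shape.
print(x.stride())          # (512000, 256000, 64, 1)

# Same shape, but a different stride recorded for the size-1 leading dim.
y = x.as_strided(x.size(), (256000, 256000, 64, 1))
print(y.stride())          # (256000, 256000, 64, 1)

# Both report contiguous: the contiguity check ignores size-1 dimensions,
# since their stride never contributes to any element's offset.
print(x.is_contiguous(), y.is_contiguous())  # True True
```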
### Versions
Collecting environment information...
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.26.4
Libc version: glibc-2.31
Python version: 3.11.5 (main, Sep 11 2023, 13:54:46) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.4.0-135-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A800-SXM4-80GB
GPU 1: NVIDIA A800-SXM4-80GB
GPU 2: NVIDIA A800-SXM4-80GB
GPU 3: NVIDIA A800-SXM4-80GB
GPU 4: NVIDIA A800-SXM4-80GB
GPU 5: NVIDIA A800-SXM4-80GB
GPU 6: NVIDIA A800-SXM4-80GB
GPU 7: NVIDIA A800-SXM4-80GB
Nvidia driver version: 545.23.08
cuDNN version: Probably one of the following:
/usr/local/cuda-11.4/targets/x86_64-linux/lib/libcudnn.so.8.2.4
/usr/local/cuda-11.4/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.2.4
/usr/local/cuda-11.4/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.2.4
/usr/local/cuda-11.4/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.2.4
/usr/local/cuda-11.4/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.2.4
/usr/local/cuda-11.4/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.2.4
/usr/local/cuda-11.4/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.2.4
/usr/local/cuda-12.1/targets/x86_64-linux/lib/libcudnn.so.8
/usr/local/cuda-12.1/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8
/usr/local/cuda-12.1/targets/x86_64-linux/lib/libcudnn_adv_train.so.8
/usr/local/cuda-12.1/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8
/usr/local/cuda-12.1/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8
/usr/local/cuda-12.1/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8
/usr/local/cuda-12.1/targets/x86_64-linux/lib/libcudnn_ops_train.so.8
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 57 bits virtual
CPU(s): 128
On-line CPU(s) list: 0-127
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 106
Model name: Intel(R) Xeon(R) Platinum 8350C CPU @ 2.60GHz
Stepping: 6
Frequency boost: enabled
CPU MHz: 3100.006
CPU max MHz: 2601.0000
CPU min MHz: 800.0000
BogoMIPS: 5200.00
Virtualization: VT-x
L1d cache: 3 MiB
L1i cache: 2 MiB
L2 cache: 80 MiB
L3 cache: 96 MiB
NUMA node0 CPU(s): 0-31,64-95
NUMA node1 CPU(s): 32-63,96-127
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Vulnerable
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable, IBPB: disabled, STIBP: disabled, PBRSB-eIBRS: Vulnerable
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 invpcid_single ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq rdpid md_clear pconfig flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] numpy==2.1.0
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] onnx==1.17.0
[pip3] onnxruntime==1.20.1
[pip3] onnxsim==0.4.36
[pip3] torch==2.6.0
[pip3] torchvision==0.21.0
[pip3] triton==3.2.0
[conda] numpy 2.1.0 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] torch 2.6.0 pypi_0 pypi
[conda] torchvision 0.21.0 pypi_0 pypi
[conda] triton 3.2.0 pypi_0 pypi
| true
|
2,962,896,411
|
DISABLED test_foreach_copy_with_multi_dtypes__foreach_copy_cuda_int8 (__main__.TestForeachCUDA)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"skipped",
"module: mta"
] | 7
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_foreach_copy_with_multi_dtypes__foreach_copy_cuda_int8&suite=TestForeachCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/39748509020).
Over the past 3 hours, it has been determined flaky in 4 workflow(s) with 8 failures and 4 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_foreach_copy_with_multi_dtypes__foreach_copy_cuda_int8`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `test_foreach.py`
cc @clee2000 @crcrpar @mcarilli @janeyx99
| true
|
2,962,896,405
|
DISABLED test_matrix_rank_basic_cuda_float32 (__main__.TestLinalgCUDA)
|
pytorch-bot[bot]
|
open
|
[
"module: rocm",
"triaged",
"module: flaky-tests",
"module: linear algebra",
"skipped"
] | 7
|
NONE
|
Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_matrix_rank_basic_cuda_float32&suite=TestLinalgCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/39743051664).
Over the past 3 hours, it has been determined flaky in 6 workflow(s) with 6 failures and 6 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_matrix_rank_basic_cuda_float32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/test_linalg.py", line 3699, in test_matrix_rank_basic
self.assertEqual(matrix_rank(a, hermitian=True).item(), 10)
torch._C._LinAlgError: linalg.eigh: The algorithm failed to converge because the input matrix is ill-conditioned or has too many repeated eigenvalues (error code: 9).
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/test_linalg.py TestLinalgCUDA.test_matrix_rank_basic_cuda_float32
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_linalg.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @jianyuh @nikitaved @pearu @mruberry @walterddr @xwang233 @Lezcano
| true
|
2,962,872,771
|
Generalize compile collective to avoid cuda-bias
|
Chao1Han
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"module: dynamo",
"ciflow/inductor",
"release notes: dynamo",
"ciflow/xpu"
] | 15
|
CONTRIBUTOR
|
Fixes https://github.com/intel/torch-xpu-ops/issues/1527
Let the combination of `compile` and `collective` support more devices.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames @zhangxiaoli73 @kwen2501 @guangyey
| true
|
2,962,861,927
|
Running `LazyModuleMixin` example throw errors
|
zeshengzong
|
closed
|
[
"module: nn",
"triaged"
] | 0
|
CONTRIBUTOR
|
### 📚 The doc issue
Running the example in the [LazyModuleMixin](https://pytorch.org/docs/stable/generated/torch.nn.modules.lazy.LazyModuleMixin.html) doc gives errors like this:
```python
class LazyMLP(torch.nn.Module):
    def __init__(self) -> None:
        super().__init__()
        self.fc1 = torch.nn.LazyLinear(10)
        self.relu1 = torch.nn.ReLU()
        self.fc2 = torch.nn.LazyLinear(1)
        self.relu2 = torch.nn.ReLU()

    def forward(self, input):
        x = self.relu1(self.fc1(input))
        y = self.relu2(self.fc2(x))
        return y

# constructs a network with lazy modules
lazy_mlp = LazyMLP()
# transforms the network's device and dtype
# NOTE: these transforms can and should be applied after construction and before any 'dry runs'
lazy_mlp = lazy_mlp.cuda()
lazy_mlp
# performs a dry run to initialize the network's lazy modules
lazy_mlp(torch.ones(10,10).cuda())
# after initialization, LazyLinear modules become regular Linear modules
lazy_mlp
# attaches an optimizer, since parameters can now be used as usual
optim = torch.optim.SGD(mlp.parameters(), lr=0.01)

NameError                                 Traceback (most recent call last)
Cell In[14], line 23
     21 lazy_mlp
     22 # attaches an optimizer, since parameters can now be used as usual
---> 23 optim = torch.optim.SGD(mlp.parameters(), lr=0.01)

NameError: name 'mlp' is not defined
```
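Presumably the documented example intended to reference `lazy_mlp` rather than `mlp`; a corrected last line would be (assumption, since `mlp` is never defined in the example):
```python
optim = torch.optim.SGD(lazy_mlp.parameters(), lr=0.01)
```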
Separately, the example output in the doc differs from the actual result.
Example (from the doc):
```python
>>> lazy_mlp = LazyMLP()
>>> # The state dict shows the uninitialized parameters
>>> lazy_mlp.state_dict()
OrderedDict([('fc1.weight', Uninitialized parameter),
('fc1.bias',
tensor([-1.8832e+25, 4.5636e-41, -1.8832e+25, 4.5636e-41, -6.1598e-30,
4.5637e-41, -1.8788e+22, 4.5636e-41, -2.0042e-31, 4.5637e-41])),
('fc2.weight', Uninitialized parameter),
('fc2.bias', tensor([0.0019]))])
```
Actual:
```python
In [16]: lazy_mlp = LazyMLP()
...: # The state dict shows the uninitialized parameters
...: lazy_mlp.state_dict()
Out[16]:
OrderedDict([('fc1.weight', <UninitializedParameter>),
('fc1.bias', <UninitializedParameter>),
('fc2.weight', <UninitializedParameter>),
('fc2.bias', <UninitializedParameter>)])
```
### Suggest a potential alternative/fix
_No response_
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki
| true
|
2,962,787,847
|
[Attention] Always pad in preprocess_mask to avoid recompilations
|
ChuanqiXu9
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo"
] | 10
|
CONTRIBUTOR
|
Motivation: for the following script:
```
# demo.py
import torch
import json
from transformers import BertModel, BertConfig
CONFIG = """
{
"architectures": [
"BertForMaskedLM"
],
"attention_probs_dropout_prob": 0.1,
"gradient_checkpointing": false,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"initializer_range": 0.02,
"intermediate_size": 3072,
"layer_norm_eps": 1e-12,
"max_position_embeddings": 512,
"model_type": "bert",
"num_attention_heads": 12,
"num_hidden_layers": 12,
"pad_token_id": 0,
"position_embedding_type": "absolute",
"transformers_version": "4.6.0.dev0",
"type_vocab_size": 2,
"use_cache": true,
"vocab_size": 30522
}
"""
config = json.loads(CONFIG)
bloom_config = BertConfig(**config)
model = BertModel(bloom_config).half().cuda()
torch.compiler.reset()
torch.cuda.empty_cache()
compiled_fn = torch.compile(model)
vocab_size = 30522
for b in range(1, 3):
    for s in range(1, 10):
        print(f"🚀 {b} {s}")
        input_ids = torch.randint(0, vocab_size, (b, s)).cuda()
        attention_mask = torch.ones(b, s).cuda()
        with torch.no_grad():
            out = compiled_fn(input_ids, attention_mask).last_hidden_state
```
when we run it with:
```
time TORCH_LOGS=recompiles python demo.py
```
We can see there are 7 recompilations, and it takes 2 mins (fresh build) or 1 min (cached build) on my machine.
One root cause of the recompilations is that there are guards checking the alignment of the inputs (see the patch), so there are unexpected recompilations for the `(1, 4)`, `(1, 8)`, `(2, 4)` and `(2, 8)` inputs.
In this patch, we always pad the inputs when we don't know their shape at compilation time, to avoid the guards on alignment. It is fine to always pad the tensor; it won't change the semantics.
Now there are only 3 recompilations, and it takes 1 min (fresh build) and 17s (cached build) on my machine.
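As an illustration of the padding idea (the helper name and the alignment value of 8 are assumptions for this sketch, not the exact constants used in the patch):
```python
import torch
import torch.nn.functional as F


def pad_last_dim(mask: torch.Tensor, align: int = 8) -> torch.Tensor:
    # Pad the sequence dimension up to the next multiple of `align` so the
    # compiled kernel never has to guard on the raw (unpadded) length.
    pad = (-mask.shape[-1]) % align
    return F.pad(mask, (0, pad)) if pad else mask
```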
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,962,782,805
|
Feature Request: Add minLSTM and minGRU Modules
|
constatza
|
open
|
[
"module: nn",
"module: rnn",
"triaged",
"module: python frontend"
] | 0
|
NONE
|
### Feature, motivation, pitch
**Summary:**
Implement minimal versions of LSTM and GRU (minLSTM and minGRU) in PyTorch. These modules simplify traditional RNNs by removing hidden state dependencies and non-linearities, enabling parallel training via a parallel scan algorithm.
**Motivation:**
- **Speed:** Achieves up to 175× (minGRU) and 235× (minLSTM) training speed improvements on long sequences by avoiding sequential BPTT.
- **Efficiency:** Uses fewer parameters, reducing memory footprint.
- **Performance:** Empirically comparable to Transformers (see [Feng et al., 2024](https://arxiv.org/abs/2410.01201)).
Are there any thoughts on officially supporting these modules in PyTorch?
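For concreteness, here is a minimal sequential-mode sketch of minGRU following Feng et al., 2024 (names and shapes are illustrative; the paper replaces this Python loop with a parallel scan during training):
```python
import torch
import torch.nn as nn


class MinGRU(nn.Module):
    """Sequential reference of minGRU: h_t = (1 - z_t) * h_{t-1} + z_t * h_tilde_t,
    where z_t and h_tilde_t depend only on x_t (no dependence on h_{t-1})."""

    def __init__(self, input_size: int, hidden_size: int):
        super().__init__()
        self.linear_z = nn.Linear(input_size, hidden_size)
        self.linear_h = nn.Linear(input_size, hidden_size)

    def forward(self, x: torch.Tensor, h: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, input_size), h: (batch, hidden_size)
        outputs = []
        for t in range(x.shape[1]):
            z = torch.sigmoid(self.linear_z(x[:, t]))
            h_tilde = self.linear_h(x[:, t])
            h = (1 - z) * h + z * h_tilde
            outputs.append(h)
        return torch.stack(outputs, dim=1)  # (batch, seq, hidden_size)
```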
### Alternatives
There is an unofficial implementation here:
- [lucidrains/minGRU-pytorch](https://github.com/lucidrains/minGRU-pytorch)
### Additional context
_No response_
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki
| true
|
2,962,770,497
|
AssertionError: found no DeviceMesh from dtensor args for c10d.broadcast_.default!
|
KylinC
|
closed
|
[
"oncall: distributed",
"module: dtensor"
] | 9
|
NONE
|
### 🐛 Describe the bug
my bash script:
```
CUDA_VISIBLE_DEVICES=3,4,5,6 nohup accelerate launch --config_file /archive/share/cql/LLM-FoR-ALL/mini_vlm/accelerate_config.yaml /archive/share/cql/LLM-FoR-ALL/mini_vlm/qwen25vl_sft.py > /archive/share/cql/LLM-FoR-ALL/mini_vlm/logs/output_sft.log 2>&1 &
```
accelerate_config.yaml:
```
compute_environment: LOCAL_MACHINE
main_process_port: 6000
debug: false
deepspeed_config:
gradient_accumulation_steps: 16
gradient_clipping: 1.0
distributed_type: DEEPSPEED
downcast_bf16: 'no'
enable_cpu_affinity: false
machine_rank: 0
main_training_function: main
mixed_precision: bf16
num_machines: 1
num_processes: 4 # set according to the number of GPUs
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false
```
```
[rank0]: assert mesh is not None, f"found no DeviceMesh from dtensor args for {op_call}!"
[rank0]: AssertionError: found no DeviceMesh from dtensor args for c10d.broadcast_.default!
Traceback (most recent call last):
File "/archive/share/cql/LLM-FoR-ALL/mini_vlm/qwen25vl_sft.py", line 93, in <module>
trainer.train()
File "/archive/share/cql/envs/xdit1/lib/python3.10/site-packages/trl/trainer/sft_trainer.py", line 434, in train
output = super().train(*args, **kwargs)
File "/archive/share/cql/envs/xdit1/lib/python3.10/site-packages/transformers/trainer.py", line 2245, in train
return inner_training_loop(
File "/archive/share/cql/envs/xdit1/lib/python3.10/site-packages/transformers/trainer.py", line 2369, in _inner_training_loop
model, self.optimizer = self.accelerator.prepare(self.model, self.optimizer)
File "/archive/share/cql/envs/xdit1/lib/python3.10/site-packages/accelerate/accelerator.py", line 1323, in prepare
result = self._prepare_deepspeed(*args)
File "/archive/share/cql/envs/xdit1/lib/python3.10/site-packages/accelerate/accelerator.py", line 1842, in _prepare_deepspeed
engine, optimizer, _, lr_scheduler = ds_initialize(**kwargs)
File "/archive/share/cql/envs/xdit1/lib/python3.10/site-packages/deepspeed/__init__.py", line 193, in initialize
engine = DeepSpeedEngine(args=args,
File "/archive/share/cql/envs/xdit1/lib/python3.10/site-packages/deepspeed/runtime/engine.py", line 269, in __init__
self._configure_distributed_model(model)
File "/archive/share/cql/envs/xdit1/lib/python3.10/site-packages/deepspeed/runtime/engine.py", line 1201, in _configure_distributed_model
self._broadcast_model()
File "/archive/share/cql/envs/xdit1/lib/python3.10/site-packages/deepspeed/runtime/engine.py", line 1120, in _broadcast_model
dist.broadcast(p.data, groups._get_broadcast_src_rank(), group=self.seq_data_parallel_group)
File "/archive/share/cql/envs/xdit1/lib/python3.10/site-packages/deepspeed/comm/comm.py", line 117, in log_wrapper
return func(*args, **kwargs)
File "/archive/share/cql/envs/xdit1/lib/python3.10/site-packages/deepspeed/comm/comm.py", line 224, in broadcast
return cdb.broadcast(tensor=tensor, src=src, group=group, async_op=async_op)
File "/archive/share/cql/envs/xdit1/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 632, in _fn
return fn(*args, **kwargs)
File "/archive/share/cql/envs/xdit1/lib/python3.10/site-packages/deepspeed/comm/torch.py", line 200, in broadcast
return torch.distributed.broadcast(tensor=tensor, src=src, group=group, async_op=async_op)
File "/archive/share/cql/envs/xdit1/lib/python3.10/site-packages/torch/distributed/c10d_logger.py", line 83, in wrapper
return func(*args, **kwargs)
File "/archive/share/cql/envs/xdit1/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py", line 2421, in broadcast
work = group.broadcast([tensor], opts)
File "/archive/share/cql/envs/xdit1/lib/python3.10/site-packages/torch/_compile.py", line 32, in inner
return disable_fn(*args, **kwargs)
File "/archive/share/cql/envs/xdit1/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 632, in _fn
return fn(*args, **kwargs)
File "/archive/share/cql/envs/xdit1/lib/python3.10/site-packages/torch/distributed/tensor/_api.py", line 340, in __torch_dispatch__
return DTensor._op_dispatcher.dispatch(
File "/archive/share/cql/envs/xdit1/lib/python3.10/site-packages/torch/distributed/tensor/_dispatch.py", line 166, in dispatch
op_info = self.unwrap_to_op_info(op_call, args, kwargs)
File "/archive/share/cql/envs/xdit1/lib/python3.10/site-packages/torch/distributed/tensor/_dispatch.py", line 399, in unwrap_to_op_info
assert mesh is not None, f"found no DeviceMesh from dtensor args for {op_call}!"
AssertionError: found no DeviceMesh from dtensor args for c10d.broadcast_.default!
```
### Versions
Collecting environment information...
PyTorch version: 2.6.0.dev20241112+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: 10.0.0-4ubuntu1
CMake version: version 3.29.3
Libc version: glibc-2.31
Python version: 3.9.21 | packaged by conda-forge | (main, Dec 5 2024, 13:51:40) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-5.15.0-122-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-PCIE-40GB
GPU 1: NVIDIA A100-PCIE-40GB
GPU 2: NVIDIA A100-PCIE-40GB
GPU 3: NVIDIA A100-PCIE-40GB
GPU 4: NVIDIA A100-PCIE-40GB
GPU 5: NVIDIA A100-PCIE-40GB
GPU 6: NVIDIA A100-PCIE-40GB
Nvidia driver version: 535.183.01
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.7
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn.so.8.0.4
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.0.4
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.0.4
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.0.4
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.0.4
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.0.4
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.0.4
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 43 bits physical, 48 bits virtual
CPU(s): 256
On-line CPU(s) list: 0-255
Thread(s) per core: 2
Core(s) per socket: 64
Socket(s): 2
NUMA node(s): 2
Vendor ID: AuthenticAMD
CPU family: 23
Model: 49
Model name: AMD EPYC 7742 64-Core Processor
Stepping: 0
Frequency boost: enabled
CPU MHz: 3202.724
CPU max MHz: 2250.0000
CPU min MHz: 1500.0000
BogoMIPS: 4500.12
Virtualization: AMD-V
L1d cache: 4 MiB
L1i cache: 4 MiB
L2 cache: 64 MiB
L3 cache: 512 MiB
NUMA node0 CPU(s): 0-63,128-191
NUMA node1 CPU(s): 64-127,192-255
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Vulnerable
Vulnerability Spec rstack overflow: Vulnerable, no microcode
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable; IBPB: disabled; STIBP: disabled; PBRSB-eIBRS: Not affected; BHI: Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sme sev sev_es
Versions of relevant libraries:
[pip3] numpy==2.0.2
[pip3] nvidia-cublas-cu12==12.1.3.1
[pip3] nvidia-cuda-cupti-cu12==12.1.105
[pip3] nvidia-cuda-nvrtc-cu12==12.1.105
[pip3] nvidia-cuda-runtime-cu12==12.1.105
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.0.2.54
[pip3] nvidia-curand-cu12==10.3.2.106
[pip3] nvidia-cusolver-cu12==11.4.5.107
[pip3] nvidia-cusparse-cu12==12.1.0.106
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.8.93
[pip3] nvidia-nvtx-cu12==12.1.105
[pip3] pytorch-triton==3.1.0+cf34004b8a
[pip3] torch==2.6.0.dev20241112+cu121
[pip3] torchaudio==2.5.0.dev20241112+cu121
[pip3] torchvision==0.20.0.dev20241112+cu121
[pip3] triton==3.0.0
[conda] numpy 2.0.2 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.1.3.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.0.2.54 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.2.106 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.4.5.107 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.1.0.106 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.8.93 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.1.105 pypi_0 pypi
[conda] pytorch-triton 3.1.0+cf34004b8a pypi_0 pypi
[conda] torch 2.6.0.dev20241112+cu121 pypi_0 pypi
[conda] torchaudio 2.5.0.dev20241112+cu121 pypi_0 pypi
[conda] torchvision 0.20.0.dev20241112+cu121 pypi_0 pypi
[conda] triton 3.0.0 pypi_0 pypi
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @tianyu-l @XilunWu
| true
|
2,962,760,362
|
RuntimeError when exporting large model to ONNX due to 2GiB protobuf limit
|
byrcoder
|
closed
|
[] | 5
|
NONE
|
### 🐛 Describe the bug
When exporting videoseal model (https://github.com/facebookresearch/videoseal/) to ONNX format, I encountered the following error:
`RuntimeError: The serialized model is larger than the 2GiB limit imposed by the protobuf library...`
**The sample code**
video_model = videoseal.load("videoseal")
video_model = video_model.to(device)
video_model.eval()
video_model.train(False)
output_path = "output/videoseal.onnx"
torch.onnx.export(
    model,
    (video, masks, msgs, is_video),
    f=output_path,
    input_names=input_names,
    output_names=output_names,
    do_constant_folding=True,
)
**Error**
<img width="1517" alt="Image" src="https://github.com/user-attachments/assets/0e6cca83-62ed-4cb0-af10-ddb8c28ff4a7" />
**Environment**
- PyTorch Version: 2.5.0 / 2.5.1(Both tried)
- OS: Centos
- CUDA: 12.1
- Python: 3.10
### Versions
Can someone help me?
| true
|
2,962,648,889
|
torch wheels are unusable if CUDA RPMs are installed on the system (was Import error in nvidia/cuda:12.6.3-cudnn-devel-rockylinux9)
|
hzhangxyz
|
open
|
[
"module: binaries",
"module: cuda",
"triaged",
"module: third_party",
"has workaround"
] | 8
|
NONE
|
### 🐛 Describe the bug
```python
import torch
```
### Versions
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Rocky Linux 9.5 (Blue Onyx) (x86_64)
GCC version: (GCC) 11.5.0 20240719 (Red Hat 11.5.0-5)
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.34
Python version: 3.12.5 (main, Dec 3 2024, 00:00:00) [GCC 11.5.0 20240719 (Red Hat 11.5.0-2)] (64-bit runtime)
Python platform: Linux-6.8.0-55-generic-x86_64-with-glibc2.34
Is CUDA available: N/A
CUDA runtime version: 12.6.85
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Probably one of the following:
/usr/lib64/libcudnn.so.9.8.0
/usr/lib64/libcudnn_adv.so.9.8.0
/usr/lib64/libcudnn_cnn.so.9.8.0
/usr/lib64/libcudnn_engines_precompiled.so.9.8.0
/usr/lib64/libcudnn_engines_runtime_compiled.so.9.8.0
/usr/lib64/libcudnn_graph.so.9.8.0
/usr/lib64/libcudnn_heuristic.so.9.8.0
/usr/lib64/libcudnn_ops.so.9.8.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 0-63
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 6226R CPU @ 2.90GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 2
Stepping: 7
CPU(s) scaling MHz: 57%
CPU max MHz: 3900.0000
CPU min MHz: 1200.0000
BogoMIPS: 5800.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts vnmi pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 1 MiB (32 instances)
L1i cache: 1 MiB (32 instances)
L2 cache: 32 MiB (32 instances)
L3 cache: 44 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-15,32-47
NUMA node1 CPU(s): 16-31,48-63
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.6.0
[pip3] triton==3.2.0
[conda] Could not collect
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @seemethere @malfet @osalpekar @atalman @ptrblck @eqy
| true
|
2,962,474,171
|
[Reland] Launch kernel on current stream & remove `record_stream` entirely
|
kwen2501
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"release notes: distributed (c10d)",
"ci-no-td"
] | 13
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150398
Relanding #148590 due to merge conflict.
This PR has multiple changes to `ProcessGroupNCCL` (which unfortunately are related):
1. When async_op=False, we directly launch the collective on "current" stream, instead of a trampoline stream and join back.
- Resolves #147729
- Resolves #146881
- Also saves two event syncs (which have overhead in case of HIP) and one pybind when we call `work.wait()` in distributed_c10d.py on behalf of user.
2. Entirely remove `record_stream` and use CPU-side stashing for managing tensor lifetime against recycling.
- Resolves #147168
3. Remove tensor life management when async_op=False; only use it when async_op=True.
4. To guard against user not calling `work.wait()`, we ask watchdog to unstash tensors after detecting completion of collectives, to prevent us from holding reference to tensors forever. This is a safety net, rather than a service guarantee, see discussion [here](https://github.com/pytorch/pytorch/issues/147168#issuecomment-2660142460).
5. Profiles in async_op=False mode will look different -- collective kernels show up on the same line as compute kernels.
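A minimal usage sketch of the two modes described above (plain c10d API, not code from this PR):
```python
import torch
import torch.distributed as dist

# assumes init_process_group() has been called and a CUDA device is set
t = torch.ones(8, device="cuda")

# async_op=False: the collective now runs on the current stream directly
dist.all_reduce(t)

# async_op=True: returns a Work handle; calling wait() lets the process group
# release its stashed reference to the tensor
work = dist.all_reduce(t, async_op=True)
work.wait()
```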
Joint work with @cenzhaometa who wants to remove the event sync overhead.
Squashed contents:
* [ptd][nccl] use current-stream as nccl-stream under async=False mode (#147820)
PTD current workflow:
- PTD creates its own dedicated `ncclStream` for comm operation
- it will first add a dependency on current-stream (typically the compute stream) to ensure tensors are ready before invoking collective
Such stream synchronization becomes expensive in the inference world (CPU overhead: 70us vs GPU kernel time: 160us).
This diff:
- async=False [default], will use current-stream as nccl-stream and avoid the stream-sync overhead
- async=True, will retain existing logic: create new nccl-stream, let it wait on current-stream to ensure tensors are ready
- pass down async from c10d down to NCCL-PG
this helps shave off 50% CPU overhead **(70us -> 35us)**, which reduce total CPU/GPU from **230us to 195us by 15%**
* [PGNCCL] Make avoid-record-stream default
* [c10d] Add asyncOp argument to Ops
* Change python side wait
* Pass asyncOp at ProcessGroup level
* Watchdog unstashing tensors as a safety net
* Stash tensors for reduce_scatter_v and all_gather_v
Pull Request approved: https://github.com/pytorch/pytorch/pull/149753
* [c10d] Move unstashing from watchdog to main thread
Pull Request approved: https://github.com/pytorch/pytorch/pull/150079
* [PGNCCL][BE] Merge mutex into TensorShelf for encapsulation
Pull Request approved: https://github.com/pytorch/pytorch/pull/150130
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
Differential Revision: D72224314
| true
|
2,962,391,038
|
update get start xpu document for v2.7
|
ZhaoqiongZ
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: docs",
"release notes: xpu"
] | 14
|
CONTRIBUTOR
|
Update the Getting Started with XPU document for v2.7.
| true
|
2,962,366,887
|
Compare device name of profiler dynamically
|
elpis-furiosa
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: PrivateUse1"
] | 6
|
CONTRIBUTOR
|
Compare self.use_device of torch.autograd.profiler.profiler with _get_privateuse1_backend_name(), since privateuse1 backend can be renamed.
cc @NmomoN @mengpenghui @fwenguang @cdzhan @1274085042 @PHLens @albanD
| true
|
2,962,339,582
|
[Doc] Update CMAKE_PREFIX_PATH for XPU windows README
|
pytorchbot
|
closed
|
[
"open source",
"topic: not user facing"
] | 1
|
COLLABORATOR
|
We found that `pip install cmake` and `conda install cmake` have different behavior.
The reason is that the pip-installed one doesn't find the corresponding libs under the conda env, so we need to set `CMAKE_PREFIX_PATH` for alignment.
cc @svekars @sekyondaMeta @AlannaBurke @gujinghui @EikanWang @fengyuan14 @guangyey
| true
|
2,962,305,674
|
Bugfix in backward of linalg.eigh
|
podgorskiy
|
open
|
[
"fb-exported",
"topic: not user facing"
] | 12
|
NONE
|
Summary:
Bugfix in backward of linalg.eigh
The expression `VhgV = VhgV - at::matmul(V.mH(), V * at::real(diag_VhgV).unsqueeze(-2))` was simplified to `VhgV = 0.5 *(vhgv - vhgv.T)` for Hermitian matrices, which does not seem to be correct.
I do not understand where `VhgV = 0.5 *(vhgv - vhgv.T)` came from, but the generic variant `VhgV = VhgV - at::matmul(V.mH(), V * at::real(diag_VhgV).unsqueeze(-2))` must work for all cases, and it yields a different result.
See https://www.internalfb.com/intern/anp/view?id=6872083 for more detail.
The aforementioned simplification was introduced in D33530149.
I also do not know why the tests pass with this bug, or how the tests work. My speculation is that the bug does not make the gradients completely wrong; it only messes up the component due to normalization of the eigenvectors to norm 1. Also, depending on how exactly the tests are run, it is possible that finite differences do not produce an adequate numerical derivative, because there is no unique eigenvalue decomposition and a variation of the input matrix may produce jumps in the output values.
Test Plan:
CI + Test example:
```
import torch as th

# Test data
A = th.as_tensor(
[
[0.7124138474464417, 1.7507718801498413, 1.3207206726074219],
[1.7507718801498413, -0.9052235484123230, 1.3285166025161743],
[1.3207206726074219, 1.3285166025161743, 0.9041665196418762],
]
).requires_grad_()
# Some arbitrary gradients
gL = th.as_tensor([-0.5, -0.4, -0.3])
gV = th.as_tensor(
[
[-1.2920498847961426, -0.8850541710853577, -1.0448201894760132],
[2.084738254547119, -0.3660943806171417, 1.0761417150497437],
[0.3967193365097046, -0.7547367811203003, 0.9107910990715027],
]
)
# Do eigenval decomposition
A.grad = None
L, V = th.linalg.eigh(A)
L.backward(gL, retain_graph=True)
V.backward(gV)
A.grad
```
Currently this produces:
```
tensor([[-0.7728, 0.0490, 0.0705],
[ 0.0490, -0.3912, 0.0465],
[ 0.0705, 0.0465, -0.0360]])
```
However the expected values:
```
vhgv = V.T @ gV
m = vhgv - V.T @ (V * th.diag(vhgv)[..., None, :])
F = 1.0/(L[..., None, :] - L[..., None])
F[0,0] = 1
F[1,1] = 1
F[2,2] = 1
gA = V @ (th.diag(gL) + F * m) @ V.T
print(gA)
tensor([[-0.7728, 0.1188, -0.0907],
[-0.0208, -0.3912, 0.3343],
[ 0.2317, -0.2413, -0.0360]], grad_fn=<MmBackward0>)
```
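As a side note, a hedged sketch of one way to check the backward numerically (not part of this diff; it assumes well-separated eigenvalues, since degenerate spectra make finite differences unreliable, as noted above):
```python
import torch

def f(x):
    sym = 0.5 * (x + x.mT)  # keep the input symmetric
    return torch.linalg.eigh(sym)

# double precision and well-separated eigenvalues for a stable finite-difference check
A = torch.tensor([[2.0, 0.1, 0.0],
                  [0.1, -1.0, 0.2],
                  [0.0, 0.2, 0.5]], dtype=torch.float64, requires_grad=True)
print(torch.autograd.gradcheck(f, (A,)))
```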
Differential Revision: D72221098
| true
|
2,962,234,702
|
[DTensor] Fix compute_local_shape_and_global_offset for uneven sharding
|
wconstab
|
closed
|
[
"oncall: distributed",
"release notes: distributed (fsdp)",
"ciflow/inductor",
"release notes: distributed (checkpoint)"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150393
* #150146
* #148894
This fix is needed for cases where distributed checkpointing (DCP) is used to
save a local state dict. That's because DCP relies on the local-shape / global-offset
for each rank being correct to save files that can be correctly
resharded (or indeed loaded at all).
(If saving a 'full state dict' instead of a local one, DCP would convert
to 'full tensors' before saving, and that logic got fixed in the
previous PR in this stack.)
Also add a util `_explicit_order_placements` which converts a list of
placements with StridedSharding into a list of placements with only
regular sharding, with the order shuffled such that it is equivalent.
| true
|
2,962,151,208
|
DISABLED test_foreach_copy_with_multi_dtypes__foreach_copy_cuda_int64 (__main__.TestForeachCUDA)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"skipped",
"module: mta"
] | 7
|
NONE
|
Platforms: linux, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_foreach_copy_with_multi_dtypes__foreach_copy_cuda_int64&suite=TestForeachCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/39737123964).
Over the past 3 hours, it has been determined flaky in 10 workflow(s) with 20 failures and 10 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_foreach_copy_with_multi_dtypes__foreach_copy_cuda_int64`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `test_foreach.py`
cc @clee2000 @crcrpar @mcarilli @janeyx99
| true
|
2,962,141,813
|
Add new dependencies for gen_pyi.py
|
FFFrog
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 6
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150391
As the title stated.
When we update some functions in _torch_docs.py or _tensor_docs.py and execute some commands (like ``python setup.py develop``) to install the latest version, the description of the function we just changed is not updated.
| true
|
2,962,137,551
|
On SM89, Triton is not supported as Inductor GEMM backend?
|
henrylhtsang
|
closed
|
[
"triaged",
"oncall: pt2",
"module: inductor",
"upstream triton"
] | 7
|
CONTRIBUTOR
|
### 🐛 Describe the bug
repro 1:
1. Get a SM89 machine
2. run
```
TORCHINDUCTOR_AUTOTUNE_FALLBACK_TO_ATEN=0 pytest -v test/inductor/test_benchmark_fusion.py -k test_avoid_register_spilling_cuda
TORCHINDUCTOR_AUTOTUNE_FALLBACK_TO_ATEN=0 pytest -v test/inductor/test_torchinductor.py -k test_linear_dynamic_maxautotune_cuda
TORCHINDUCTOR_AUTOTUNE_FALLBACK_TO_ATEN=0 python test/inductor/test_aot_inductor.py -k test_addmm_multiple_dynamic_cuda
```
repro 2:
1. Checkout https://github.com/pytorch/pytorch/pull/148622
2. Check the errors of test_avoid_register_spilling_cuda and test_linear_dynamic_maxautotune_cuda
error:
```
FAILED [0.1854s] inductor/test_torchinductor.py::GPUTests::test_linear_dynamic_maxautotune_cuda - torch._inductor.exc.InductorError: LoweringException: NoValidChoicesError: No choices to select, please consider adding ATEN into max_autotune_gemm_backends config (defined in torch/_inductor/config.py) to allow at least one choice.
```
Unfortunately I don't have an SM89 machine to repro further.
### Versions
trunk
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @aakhundov @bertmaher @int3 @davidberard98 @nmacchioni @embg @peterbell10
| true
|
2,962,118,211
|
Enabling xpu in OffsetBasedRNGTracker .
|
pytorchbot
|
closed
|
[
"oncall: distributed",
"open source",
"ciflow/inductor"
] | 1
|
COLLABORATOR
|
Else torch.distributed breaks on xpu devices.
Fixes #ISSUE_NUMBER
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,962,113,991
|
double free or corruption (out) in torch.as_strided_scatter
|
qiqicliff
|
open
|
[
"module: crash",
"module: error checking",
"triaged",
"topic: fuzzer"
] | 3
|
NONE
|
### 🐛 Describe the bug
Fuzzing of the edge cases in api torch.as_strided_scatter as below:
```
import torch
input = torch.randn(4, 4)
src = torch.full((2, 2), 10.0)
size = 2, 2
stride = 2, 1
storage_offset = 9223372036854775807
torch.as_strided_scatter(input, src, size, stride, storage_offset)
```
## output
crashed with
```
double free or corruption (out)
Aborted (core dumped)
```
## version
<=2.6.0
### Versions
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.12.9 | packaged by conda-forge | (main, Mar 4 2025, 22:48:41) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-5.15.0-106-generic-x86_64-with-glibc2.35
Is CUDA available: N/A
CUDA runtime version: 11.5.119
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 2080 Ti
Nvidia driver version: 550.54.15
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 48
On-line CPU(s) list: 0-47
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz
CPU family: 6
Model: 79
Thread(s) per core: 2
Core(s) per socket: 12
Socket(s): 2
Stepping: 1
CPU max MHz: 2900.0000
CPU min MHz: 1200.0000
BogoMIPS: 4399.99
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single pti intel_ppin ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm rdt_a rdseed adx smap intel_pt xsaveopt cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts md_clear flush_l1d
Virtualization: VT-x
L1d cache: 768 KiB (24 instances)
L1i cache: 768 KiB (24 instances)
L2 cache: 6 MiB (24 instances)
L3 cache: 60 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-11,24-35
NUMA node1 CPU(s): 12-23,36-47
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; IBRS_FW; STIBP conditional; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; Clear CPU buffers; SMT vulnerable
Versions of relevant libraries:
[pip3] numpy==2.2.4
[pip3] torch==2.6.0
[pip3] triton==3.2.0
[conda] numpy 2.2.4 pypi_0 pypi
[conda] torch 2.6.0 pypi_0 pypi
[conda] triton 3.2.0 pypi_0 pypi
cc @malfet
| true
|
2,962,112,065
|
Inconsistent results with PyTorch DeepLabV3 model even after fixing random seeds
|
wwwwwly
|
closed
|
[] | 2
|
NONE
|
I encountered non-deterministic behavior when using PyTorch's DeepLabV3 model with pretrained weights. Despite fixing all random seeds, repeated executions still produce different results.
Code for fixing random seeds and model implementation are as follows.
```python
import torch
import torch.nn as nn
import numpy as np
import random
import os
def seed_fixed(seed=2025):
random.seed(seed)
os.environ["PYTHONHASHSEED"] = str(seed)
np.random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed(seed)
torch.cuda.manual_seed_all(seed) # for multi-GPU
torch.backends.cudnn.benchmark = False
torch.backends.cudnn.deterministic = True
seed_fixed()
from torchvision.models.segmentation import deeplabv3_resnet50
from torchvision.models.segmentation import DeepLabV3 as DLV3
class DeepLabV3(nn.Module):
def __init__(self, in_channels=9, out_channels=1):
super().__init__()
self.in_channels = in_channels
self.out_channels = out_channels
self.model = deeplabv3_resnet50(pretrained=True)
self.model: DLV3
def process_model(self):
self.model.backbone.conv1 = nn.Conv2d(
self.in_channels, 64, kernel_size=7, stride=2, padding=3, bias=False
)
self.model.classifier[-1] = nn.Conv2d(
256, self.out_channels, kernel_size=1, stride=1
)
self.model.aux_classifier[-1] = nn.Conv2d(
256, self.out_channels, kernel_size=1, stride=1
)
def freeze(self, flag=True):
if flag:
self.model.requires_grad_(False)
self.model.backbone.conv1.requires_grad_(True)
self.model.classifier[-1].requires_grad_(True)
self.model.aux_classifier[-1].requires_grad_(True)
else:
self.model.requires_grad_(True)
def forward(self, input):
if self.training:
return self.model(input)
else:
return self.model(input)["out"]
if __name__ == "__main__":
pass
```
When training with fixed random seeds, I can reproduce results using other models (e.g., UNet and FCN), but with DeepLabV3, I get inconsistent outcomes between runs.
Images 1-2: Reproducible outputs from FCN and UNet with fixed seeds. Image 3: Non-deterministic behavior in DeepLabV3.



**Python environment:**
python 3.9.20 + pytorch 1.11.0 + cudatoolkit 11.3.1 + torchvision 0.12.0 + numpy 1.24.3
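For completeness, a minimal sketch of stricter determinism settings beyond seeding (standard PyTorch APIs; the `warn_only` flag may need a recent release, so treat this as an untested suggestion for this setup):
```python
import os
import torch

# cuBLAS needs a fixed workspace config to be reproducible on CUDA >= 10.2
os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"

# Warn (or error, with warn_only=False) on any op that lacks a deterministic
# implementation, e.g. some interpolation/pooling backward kernels
torch.use_deterministic_algorithms(True, warn_only=True)
```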
| true
|
2,962,109,718
|
[MPSInductor] torch.complex128 is unsupported on MPS
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing",
"ciflow/mps",
"module: inductor",
"module: dynamo",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150412
* __->__ #150386
Same as torch.float64
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,962,055,151
|
Fix device description of torch.asarray to avoid ambiguity
|
FFFrog
|
closed
|
[
"open source",
"release notes: python_frontend"
] | 2
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150391
* __->__ #150385
As the title stated.
Related Issue:
https://github.com/pytorch/pytorch/issues/150199
| true
|
2,962,047,677
|
Add `mse_loss_backward_out` type promotion
|
zeshengzong
|
open
|
[
"triaged",
"open source",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Fixes #94086, following #94621
## Test Result
```bash
pytest test/test_nn.py -k test_mse_loss_backward_promotion
```

| true
|
2,962,047,222
|
bound sympy accuracy
|
avikchaudhuri
|
open
|
[
"module: cpu",
"fb-exported",
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"ci-no-td"
] | 13
|
CONTRIBUTOR
|
Differential Revision: D72215735
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| true
|
2,962,030,412
|
[MPS] Test bf16 perf of few unary and binary ops
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing",
"ciflow/mps"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150386
* __->__ #150382
| true
|
2,962,030,077
|
"RuntimeError: makeDeviceForHostname(): unsupported gloo device" with nightly torch 2.8
|
AznamirWoW
|
open
|
[
"high priority",
"triage review",
"oncall: distributed",
"triaged",
"module: regression"
] | 10
|
NONE
|
### 🐛 Describe the bug
Nightly 2.8 torch results in an error during attempt to init a distributed training
```python
import sys
import os
import torch.distributed as dist
from random import randint
import torch
os.environ["USE_LIBUV"] = "0" if sys.platform == "win32" else "1"
os.environ["MASTER_ADDR"] = "localhost"
os.environ["MASTER_PORT"] = str(randint(20000, 55555))
device = torch.device("cuda")
n_gpus = 1
rank = 0
dist.init_process_group(
backend="gloo" if sys.platform == "win32" or device.type != "cuda" else "nccl",
init_method="env://",
world_size=n_gpus if device.type == "cuda" else 1,
rank=rank if device.type == "cuda" else 0,
)
print("done")
```
```
Traceback (most recent call last):
  File "T:\test.py", line 16, in <module>
    dist.init_process_group(
  File "X:\torch\venv\Lib\site-packages\torch\distributed\c10d_logger.py", line 81, in wrapper
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "X:\torch\venv\Lib\site-packages\torch\distributed\c10d_logger.py", line 95, in wrapper
    func_return = func(*args, **kwargs)
                  ^^^^^^^^^^^^^^^^^^^^^
  File "X:\torch\venv\Lib\site-packages\torch\distributed\distributed_c10d.py", line 1724, in init_process_group
    default_pg, _ = _new_process_group_helper(
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "X:\torch\venv\Lib\site-packages\torch\distributed\distributed_c10d.py", line 1949, in _new_process_group_helper
    backend_class = ProcessGroupGloo(
                    ^^^^^^^^^^^^^^^^^
RuntimeError: makeDeviceForHostname(): unsupported gloo device
```
### Versions
installed torch versions
torch-2.8.0.dev20250327+cu128 torchaudio-2.6.0.dev20250331+cu128 torchvision-0.22.0.dev20250331+cu128
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @kwen2501 @c-p-i-o
| true
|
2,961,997,310
|
Make at::vec::Vectorized ops work with scalars
|
swolchok
|
closed
|
[
"module: cpu",
"Merged",
"release notes: cpp"
] | 9
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150568
* __->__ #150380
I noticed that I couldn't use `vec::Vectorized` operations with scalars, even though there is an implicit conversion from `T` to `vec::Vectorized<T>`, so I made it work.
Test Plan: Added tests. Reverted vec_base.h, left the new tests in place, and confirmed that new tests don't compile in that state.
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| true
|
2,961,982,550
|
aten_mm_info counters not being logged properly in `_compile_fx_inner`
|
exclamaforte
|
closed
|
[
"triaged",
"oncall: pt2",
"module: inductor"
] | 3
|
CONTRIBUTOR
|
### 🐛 Describe the bug
This line fails:
https://github.com/pytorch/pytorch/blob/main/torch/_inductor/compile_fx.py#L879
when the counter name is `aten._int_mm_{m}_{n}_{k}` and others:
https://github.com/pytorch/pytorch/blob/main/torch/_inductor/kernel/mm.py#L717
From this PR:
https://github.com/pytorch/pytorch/pull/148800?fbclid=IwZXh0bgNhZW0CMTEAAR1R1wq0kLr2y5Hc0cOYIAuUiCPgUOor-OblpAdX0BC7QVEJpQmgjf4QV_I_aem_MeiGPbAybvW_fOomGNotZw
@YUNQIUGUO
### Error logs
_No response_
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @aakhundov
| true
|
2,961,961,345
|
Not generating runtime checks when the number of inputs is large
|
yushangdi
|
closed
|
[
"fb-exported",
"module: inductor",
"ciflow/inductor",
"release notes: export"
] | 3
|
CONTRIBUTOR
|
Summary: If the number of inputs/outputs is large, we don't generate the `check_inputs_outputs` check unless the `aot_inductor.compile_wrapper_with_O0` flag is set. If the environment variable `AOTI_RUNTIME_CHECK_INPUTS` is set while the input checks were not generated, we error out and say you have to compile again with the `aot_inductor.compile_wrapper_with_O0` flag set.
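A hedged sketch of how these two knobs might be toggled (the config attribute path is an assumption inferred from the flag name above, not verified against this diff):
```python
import os
import torch

# request the generated input/output checks at runtime
os.environ["AOTI_RUNTIME_CHECK_INPUTS"] = "1"

# keep the (potentially huge) check_inputs_outputs wrapper compilable at -O0
# (assumed attribute path for the flag named in the summary)
torch._inductor.config.aot_inductor.compile_wrapper_with_O0 = True
```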
Test Plan:
```
buck2 run mode/dev-nosan sigmoid/inference/ts_migration:pt2i_readiness_main -- --model_id 522655924 --test_suite ads_dsnn_prod --mode test_sub_module
```
Differential Revision: D72211775
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,961,947,382
|
[CI] Skip test_copy_large_tensor on M2-15 runners
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing",
"ciflow/mps"
] | 3
|
CONTRIBUTOR
|
They have more than 12GB of memory, but maybe running this test causes OOM in CI
| true
|
2,961,889,701
|
[dynamic shapes] stop writing Max(*, 1) for strides
|
pianpwk
|
open
|
[
"release notes: export"
] | 1
|
CONTRIBUTOR
|
When handling strides, avoid generating Max(u0, 1) expressions if we can, since they'll be hard to deal with once we move away from guard_size_oblivious.
Looking at what the code was before sym_max was introduced (https://github.com/pytorch/pytorch/pull/94400 in `_prims_common/__init__.py`), this change seems appropriate.
| true
|
2,961,888,927
|
[re_build] Get output from stdout and stderr in local and remote execution and better error msg for too big to optimize
|
yushangdi
|
closed
|
[
"fb-exported",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 9
|
CONTRIBUTOR
|
Summary:
In aoti, we make a better error message for the `too big to optimize` error by suggesting the `aot_inductor.compile_wrapper_opt_level = 'O0'` flag.
Test Plan:
result example:
```
error: Function _ZN5torch12aot_inductorL22__check_inputs_outputsEPP16AtenTensorOpaqueS3_ is too big to optimize [-Werror,-Wignored-optimization-argument]
2 warnings and 1 error generated.
The runtime check __check_inputs_outputs() is too big to optimize. Please use torch._inductor.config.aot_inductor.compile_wrapper_opt_level = 'O0' flag.
```
Differential Revision: D72208338
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,961,871,649
|
bf16 grouped gemm
|
ngimel
|
closed
|
[
"module: cuda",
"Merged",
"topic: not user facing"
] | 4
|
COLLABORATOR
|
Enabled bf16 grouped gemm with an API similar to _scaled_group_gemm, except without scale and fast accum arguments. All transpose variants are enabled, unlike scaled gemm. Ideally we'd factor out a lot more code from scaled gemm, currently there's a lot of repetition between scaled and non-scaled versions. I factored out only a helper kernel that prepares arguments.
cc @ptrblck @msaroufim @eqy @jbschlosser, this can be used as an impl for NJT gemm, but it supports only bf16 currently, and I didn't do a lot of perf tuning.
| true
|
2,961,870,345
|
[AMD] [TRITON] [INDUCTOR] Add tl.assume to enable bufferops on AMD
|
njriasan
|
closed
|
[
"module: rocm",
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 9
|
CONTRIBUTOR
|
Summary: Update the GEMM template to include the necessary `tl.assume` annotations to enable bufferops with AMD.
Test Plan: Tested manually with a simple matmul run through torch.compile(f, mode="max-autotune") and the environment variables TRITON_ALWAYS_COMPILE=1 AMDGCN_ENABLE_DUMP=1 AMDGCN_USE_BUFFER_OPS=1.
Inspecting the generated AMDGCN, all loads/stores use bufferops.
Note: Since Inductor loads constants for many of the shape values, assumes are generally not needed for the stride/shape information, but pid calculations are generally a gap in Triton's inference capability.
Differential Revision: D71922698
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,961,856,469
|
[dtensor][tp] add a ParallelStyle PrepareModuleInputOutput
|
tianyu-l
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"ciflow/inductor",
"release notes: distributed (dtensor)"
] | 6
|
CONTRIBUTOR
|
Needed this class because `parallelize_module` takes a dict, which doesn't allow `PrepareModuleInput` and `PrepareModuleOutput` to be applied at the same time.
The `PrepareModuleInputOutput` in this PR initializes two variables `prepare_module_input` and `prepare_module_output` and uses them to process module / inputs / outputs.
I had another implementation which put all code in `PrepareModuleInputOutput` and let `PrepareModuleInput` and `PrepareModuleOutput` inherit the monolithic `PrepareModuleInputOutput`. But it is
1. less clean
2. conceptually abusing inheritance because `PrepareModuleInput` shouldn't be able to access class methods of `PrepareModuleOutput` and vice versa
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,961,828,501
|
Dynamic shapes doesn't work with kwargs
|
tugsbayasgalan
|
open
|
[
"oncall: pt2",
"oncall: export"
] | 2
|
CONTRIBUTOR
|
### 🐛 Describe the bug
```
def test_dynamic_shapes_kwargs(self):
    class Foo(torch.nn.Module):
        def forward(self, *, x, y, **kwargs):
            z = kwargs["z"]
            return x.sum() + y.sum() + z.sum()

    inputs = {"x": torch.randn(4, 4), "y": torch.randn(4, 4), "z": torch.rand(4, 4)}

    def convert_to_spec(x):
        shapes = {}
        for ix in range(len(x.shape)):
            shapes[ix] = torch.export.Dim.AUTO
        return shapes

    from torch.utils._pytree import tree_map_only
    dynamic_shapes = tree_map_only(torch.Tensor, convert_to_spec, inputs)
    export(Foo(), (), inputs, dynamic_shapes=dynamic_shapes, strict=False)
```
This errors with:
```
torch._dynamo.exc.UserError: When `dynamic_shapes` is specified as a dict, its top-level keys must be the arg names ['x', 'y', 'kwargs'] of `inputs`, but here they are ['x', 'y', 'z']. Alternatively, you could also ignore arg names entirely and specify `dynamic_shapes` as a list/tuple matching `inputs`. For more information about this error, see: https://pytorch.org/docs/main/generated/exportdb/index.html#dynamic-shapes-validation
```
This shows up for rag and xlnet models in huggingface
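Based purely on the error message, a possible workaround sketch (untested, and it reuses `Foo` and `inputs` from the repro above) is to nest the entries for `**kwargs` under a top-level `"kwargs"` key:
```python
# hypothetical restructuring of dynamic_shapes; Foo and inputs come from the repro above
dynamic_shapes = {
    "x": {0: torch.export.Dim.AUTO, 1: torch.export.Dim.AUTO},
    "y": {0: torch.export.Dim.AUTO, 1: torch.export.Dim.AUTO},
    "kwargs": {"z": {0: torch.export.Dim.AUTO, 1: torch.export.Dim.AUTO}},
}
export(Foo(), (), inputs, dynamic_shapes=dynamic_shapes, strict=False)
```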
### Versions
main
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @angelayi @suo @ydwu4
| true
|
2,961,828,485
|
[Profiler] Fix Empty C Call Queue
|
sraikund16
|
closed
|
[
"fb-exported",
"Merged",
"Reverted",
"ciflow/trunk",
"release notes: profiler",
"topic: bug fixes",
"ci-no-td"
] | 20
|
CONTRIBUTOR
|
Summary:
My commandeer of https://github.com/pytorch/pytorch/pull/150102
Based on the description of the PR, it seems that we need to add C calls for each starting Python event with a callable, such that when the tracing exits we will have a matching enter for any given exit. It adds some unnecessary events at worst but prevents segfaults/failures. My PR just cleans up some refcount impl and logging.
Contributors: @arjun-choudhry
Test Plan: Ran resnet test internally. Will check CI and ask reviewers to make sure it resolves their issues.
Differential Revision: D72207570
| true
|
2,961,821,620
|
[hop schema] add gen_schema support for invoke_subgraph
|
ydwu4
|
closed
|
[
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150369
* #149688
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,961,808,651
|
Support `copy` kwarg in `torch.reshape()` following Python array API standard
|
leofang
|
open
|
[
"triaged",
"module: python array api"
] | 1
|
CONTRIBUTOR
|
### 🚀 The feature, motivation and pitch
As title. Currently (as of PyTorch 2.6.0), this is not yet supported.
- `torch.reshape`: https://pytorch.org/docs/2.6/generated/torch.reshape.html#torch-reshape
- `array_api.reshape`: https://data-apis.org/array-api/2024.12/API_specification/generated/array_api.reshape.html#reshape
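For illustration, a small sketch of the current behavior, where the caller has no way to require or forbid a copy:
```python
import torch

x = torch.arange(6)
y = torch.reshape(x, (2, 3))   # may return a view; here it shares storage with x
y[0, 0] = 100
print(x[0])                    # tensor(100) -- no `copy=` kwarg to control this today
```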
### Alternatives
_No response_
### Additional context
_No response_
cc @mruberry @rgommers @asmeurer @AnirudhDagar @asi1024 @emcastillo @kmaehashi
| true
|
2,961,797,801
|
[ONNX] decomp does not preserve custom CompositeImplicitAutograd ops
|
borisfom
|
closed
|
[
"module: onnx",
"triaged"
] | 39
|
CONTRIBUTOR
|
### 🚀 The feature, motivation and pitch
I recently was forced to implement some composite custom ops with CompositeImplicitAutograd on top of another, more general custom op, for ONNX/TRT export purposes (the original op was used for both forward and backward and therefore had a sequence output type that neither onnxruntime-extensions nor TRT can handle).
The idea was to use custom_translation_table with those composite ops for export.
Now it looks like I would need to implement some additional machinery based on DecompSkip to make that work - as the composite ops are being decomposed during ONNX export.
Can that be handled automatically?
I mean - if I do specify a custom_translation_table, it should be safe to assume the best course of action is to keep the custom ops specified in the table out of decomposition, so that the translation would actually happen.
@xadupre @justinchuby
### Alternatives
_No response_
### Additional context
_No response_
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4
| true
|
2,961,766,903
|
`torch.vdot()` returns zero when input tensors have complex data type
|
canturk
|
closed
|
[
"needs reproduction",
"triaged",
"module: macos",
"module: linear algebra",
"module: intel"
] | 4
|
NONE
|
### 🐛 Describe the bug
`torch.vdot()` returns zero when the input tensors have a complex data type:
```
>>> torch.vdot(torch.tensor([2, 3]), torch.tensor([2, 1])) # input arguments are real tensors
tensor(7)
>>>
>>> a = torch.tensor([1 +2j, 3 - 1j])
>>> b = torch.tensor([2 +1j, 4 - 0j])
>>> torch.vdot(a, b) # input arguments a & b are tensors with complex data type
tensor(0.+0.j)
```
In the preceding code segment, the first `vdot` returns the expected result because the input tensors are real. However, in the second `vdot` the outcome was supposed to be `tensor(16+1j)` but it is `tensor(0.+0j)`.
### Versions
Collecting environment information...
PyTorch version: 2.5.1
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 15.3.2 (x86_64)
GCC version: Could not collect
Clang version: 16.0.0 (clang-1600.0.26.6)
CMake version: Could not collect
Libc version: N/A
Python version: 3.12.7 | packaged by conda-forge | (main, Oct 4 2024, 15:55:29) [Clang 17.0.6 ] (64-bit runtime)
Python platform: macOS-15.3.2-x86_64-i386-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Intel(R) Core(TM) i5-8259U CPU @ 2.30GHz
Versions of relevant libraries:
[pip3] flake8==7.1.2
[pip3] mypy==1.15.0
[pip3] mypy_extensions==1.0.0
[pip3] numpy==2.1.3
[pip3] numpydoc==1.8.0
[pip3] torch==2.5.1
[pip3] torchvision==0.20.1
[conda] libtorch 2.5.1 cpu_openblas_h17d8791_2
[conda] nomkl 1.0 h5ca1d4c_0 conda-forge
[conda] numpy 2.1.3 py312hfc93d17_0 conda-forge
[conda] numpydoc 1.8.0 pyhd8ed1ab_1 conda-forge
[conda] pytorch 2.5.1 cpu_openblas_py312h180b29c_2
[conda] torchvision 0.20.1 cpu_py312_h462eaf5_6 conda-forge
cc @malfet @albanD @jianyuh @nikitaved @pearu @mruberry @walterddr @xwang233 @Lezcano @frank-wei @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| true
|
2,961,746,493
|
Build MacOS CI with MKLDNN
|
malfet
|
open
|
[
"ciflow/trunk",
"release notes: build",
"topic: improvements",
"module: inductor",
"ciflow/inductor"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150365
To reduce divergence between aarch64 and MacOS builds
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,961,745,422
|
AOTI doesn't error if dynamic shape gets specialized during lowering
|
angelayi
|
open
|
[
"triaged",
"oncall: pt2",
"export-triaged",
"oncall: export",
"module: aotinductor"
] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
In some passes, Inductor is able to further specialize a dynamic shape. However, AOTI does not error on this (during lowering or at runtime). Example:
```python
def test_aoti_specialization(self):
from torch._inductor.pattern_matcher import (
fwd_only,
PatternMatcherPass,
register_replacement,
)
import torch
def pattern1(x) -> torch.Tensor:
return x.sin()
def replacement1(x) -> torch.Tensor:
return x.cos()
patterns = PatternMatcherPass()
inputs = [torch.randn(6, 5),]
register_replacement(pattern1, replacement1, inputs, fwd_only, patterns)
def custom_pass(graph: torch.fx.Graph):
nodes = [n for n in graph.nodes]
if nodes[0].meta['val'].shape[0] > 5:
# replace sin with cos
patterns.apply(graph)
torch._inductor.config.post_grad_custom_post_pass = custom_pass
class M(torch.nn.Module):
def forward(self, x):
return x.sin()
ep = torch.export.export(M(), (torch.randn(6, 5),), dynamic_shapes=({0: Dim.DYNAMIC},))
print(ep)
path = torch._inductor.aoti_compile_and_package(ep)
compiled_m = torch._inductor.aoti_load_package(path)
inp = torch.randn(6, 5)
self.assertTrue(torch.allclose(compiled_m(inp), inp.cos()))
inp = torch.randn(3, 5)
self.assertTrue(torch.allclose(compiled_m(inp), inp.sin())) # this fails
```
```
===== BEFORE PRE GRAD =====
class GraphModule(torch.nn.Module):
def forward(self, x: "f32[s35, 5]"):
# File: /data/users/angelayi/pytorch/moo.py:347 in forward, code: return x.sin()
sin: "f32[s35, 5]" = torch.ops.aten.sin.default(x); x = None
return (sin,)
{s35: VR[2, int_oo]}
===== AFTER POST GRAD =====
class <lambda>(torch.nn.Module):
def forward(self):
arg0_1: "f32[s35, 5]";
arg0_1, = fx_pytree.tree_flatten_spec([], self._in_spec)
# No stacktrace found for following nodes
cos_default: "f32[s35, 5]" = torch.ops.aten.cos.default(arg0_1); arg0_1 = None
return (cos_default,)
{s35: VR[6, int_oo]}
```
Maybe we should error while lowering if the shape changes?
I tried `AOT_INDUCTOR_DEBUG_COMPILE=1` but strangely that didn't seem to actually error when running it, even though the [cpp code](https://www.internalfb.com/phabricator/paste/view/P1771873612?lines=478-484) has the correct check (maybe I did something wrong):
```cpp
if (arg0_1_size[0] < 6) {
std::stringstream ss;
ss << "input_handles[0]: dim value is too small at 0, "
<< "expected it to be >= 6, " << "but got: "
<< arg0_1_size[0] << "\n";
throw std::runtime_error(ss.str());
}
```
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @suo @ydwu4 @desertfire @chenyang78 @yushangdi @benjaminglass1 @zou3519
### Versions
main
| true
|
2,961,744,407
|
Fix typo
|
malfet
|
closed
|
[
"oncall: distributed",
"Merged",
"release notes: distributed (c10d)"
] | 3
|
CONTRIBUTOR
|
Fixes https://github.com/pytorch/pytorch/issues/150339
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,961,742,870
|
[BE] Get rid of cross-compile and x86 build options for Mac
|
malfet
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150365
* __->__ #150362
As both cross-compilation and x86 builds have been removed a while back
Remove stale TODO about building with OpenMP support
| true
|
2,961,732,263
|
[ROCm] cmake 4 workaround for hiprtc
|
pytorchbot
|
closed
|
[
"module: rocm",
"open source",
"ciflow/rocm"
] | 1
|
COLLABORATOR
|
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,961,730,249
|
CUTLASS backend updates: Instantiation level, long compilation and long autotuning time
|
henrylhtsang
|
open
|
[
"triaged",
"oncall: pt2",
"module: inductor"
] | 8
|
CONTRIBUTOR
|
### 🚀 The feature, motivation and pitch
This document is intended to be a status update for the CUTLASS backend, including a brief summary of its prospects and outstanding issues. The focus is on H100.
# The Good
As it is known, CUTLASS can outperform Aten and Triton meaningfully (5% - 10%+) on many shapes with exhaustive search, via tuning the [instantiation level](https://github.com/pytorch/pytorch/blob/main/torch/_inductor/config.py#L1358). This is even the case when we use [Triton with persistent TMA](https://github.com/pytorch/pytorch/blob/main/torch/_inductor/config.py#L1167).
For example, using the [benchmark](https://github.com/pytorch/pytorch/pull/148347) for CUTLASS backend, we can benchmark Aten, Triton and CUTLASS for any particular shape.
```
Experiment group: mm (2040x3584, 3584x3584) torch.bfloat16
+-----------------------+-------------------+----------------------+---------------------+
| name | forward_time (us) | compilation_time (s) | perf_over_aten (%) |
+-----------------------+-------------------+----------------------+---------------------+
| aten | 88.22282403707504 | 5.648318733088672 | NA |
| triton | 91.12168848514557 | 10.730457949917763 | 3.2858440882059026 |
| triton_persistent_tma | 90.57512879371643 | 8.790873480960727 | 2.6663222157259985 |
| cutlass_lvl_default | 85.57070791721344 | 94.51847390504554 | -3.0061564553261966 |
| cutlass_lvl_9999 | 79.19035851955414 | 5753.81738591427 | -10.238241199040592 |
+-----------------------+-------------------+----------------------+---------------------+
```
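For reference, a rough sketch of how such a run can be set up (the instantiation-level option name is an assumption based on the config link above; treat it as illustrative rather than the exact API):
```python
import torch
import torch._inductor.config as inductor_config

# restrict GEMM autotuning to the CUTLASS backend (plus ATEN as a fallback)
inductor_config.max_autotune_gemm_backends = "ATEN,CUTLASS"
# assumed knob controlling how many CUTLASS configs get emitted
inductor_config.cuda.cutlass_instantiation_level = "0"

def f(a, b):
    return a @ b

compiled = torch.compile(f, mode="max-autotune")
out = compiled(torch.randn(2040, 3584, device="cuda", dtype=torch.bfloat16),
               torch.randn(3584, 3584, device="cuda", dtype=torch.bfloat16))
```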
On top of that, @mlazos is working on Epilogue fusion, which might bring further perf gains for the CUTLASS backend.
# The Bad
CUTLASS backend still has some gaps that prevent us from evaluating its performance. Even at a GEMM level, benchmarking multiple shapes and instantiation levels is not yet seamless and requires significant handholding.
### Experiment blocking problems
* C++ compile errors only get resolved in the autotuning stage, which results in 100x (from 0.1s to 10s) autotuning time for those kernels. A possible solution is to label these kernels as disabled in the precompilation stage. See [issue](https://github.com/pytorch/pytorch/issues/148122). @coconutruben is working on a fix.
* FlexibleLayout not supported. This can limit the portion of GEMMs that can use CUTLASS
### Missing features / side issues
* addmm has accuracy issues. See [issue](https://github.com/NVIDIA/cutlass/issues/2147)
* addmm dynamic support was not enabled. WIP [PR](https://github.com/pytorch/pytorch/pull/148234)
* bmm lacks dynamic support
* Some configs give C++ compile errors. We should try to prune them a bit. Also see [issue](https://github.com/NVIDIA/cutlass/issues/2133) for ways to fix some compilation errors
* Autotune in subproc is not supported for addmm. Tracking in [test](https://github.com/pytorch/pytorch/blob/main/test/inductor/test_cutlass_backend.py#L168)
* FP32 GEMMs are not supported. Tracking in [issue](https://github.com/pytorch/pytorch/issues/145952)
# Next set of blockers
Before I go into the blockers and ideas for solutions, I want to mention that CUTLASS 4 was recently announced, so that can change all the plans.
The next major blockers are **long kernel compilation and long autotuning time**.
## Long kernel compilation
CUTLASS is notorious for its lengthy compilation time, taking approximately 25 seconds to compile a single kernel. Inductor has infrastructure to parallelize compilation up to the number of CPUs; however, it can still take around an hour to process all 10k+ configurations at the max instantiation level for each dtype and layout combination.
There are ways to get around this problem:
* Model-agnostic warm-up of the kernels / Global cache for the kernels, since the kernels only depend on CUDA version and CUTLASS hash
* Prune the configs significantly, see Autotuning below
## Long autotuning time
Autotuning matters more to the overall PT2 compilation compared to kernel compilations, as we only need to compile kernels once per model but each shape requires its own autotuning. At max instantiation level, we have over 14k configs times 3 swizzles to tune. It takes roughly 0.12 seconds to benchmark a kernel, so autotuning a single shape can take up to 1.5 hours.
There are some low-hanging fruits that can help reduce autotuning time. For example, currently we have three swizzle options to choose from for each compiled kernel, which effectively triples the autotuning time. This can be improved by a two-stage autotuning process (see [discussions](https://github.com/pytorch/pytorch/pull/147224)):
1. Reduce the number of kernels by 100 times without considering swizzles
2. Autotune for the optimal kernel and swizzle combination
Another approach is to manually prepare a list of 100s of reasonably performant configs for each model and use it to restrict the autotuning search space, via [CUTLASS allowlist and denylist regex](https://github.com/pytorch/pytorch/blob/main/torch/_inductor/config.py#L1335-L1350).
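A sketch of such a restriction (the option names here are assumptions inferred from the config link above):
```python
import torch._inductor.config as inductor_config

# assumed option names: keep only kernels whose names match the allowlist,
# and drop any that match the denylist
inductor_config.cuda.cutlass_op_allowlist_regex = "128x128"
inductor_config.cuda.cutlass_op_denylist_regex = "pingpong"
```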
cross-post: https://fb.workplace.com/groups/257735836456307/posts/862302765999608/
### Alternatives
_No response_
### Additional context
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov @jcaip @alexsamardzic
| true
|
2,961,711,084
|
[PP] Update 1f1b cooldown None steps
|
H-Huang
|
open
|
[
"oncall: distributed",
"release notes: distributed (pipeline)"
] | 1
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151249
* #151248
* __->__ #150359
* #150347
This shouldn't make a difference to the schedule runtime since `None` ops are skipped, but helps for readability and visualization. We previously added `None` after each step in 1f1b during cooldown which is not necessary.
Previous:

New:

cc @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,961,699,718
|
`torch.backends.mkldnn.flags()` CM should not warn
|
malfet
|
closed
|
[
"module: cpu",
"module: mkldnn",
"Merged",
"ciflow/trunk",
"release notes: python_frontend",
"topic: bug fixes",
"ciflow/linux-aarch64"
] | 7
|
CONTRIBUTOR
|
By returning `None` rather than `False` from `THPModule_allowTF32OneDNN` when USE_XPU is not defined
Added regression test
Fixes https://github.com/pytorch/pytorch/issues/149829
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @gujinghui @PenghuiCheng @jianyuh @min-jean-cho @yanbing-j @Guobing-Chen @Xia-Weiwen @snadampal
| true
|
2,961,686,142
|
[dtensor] add op support for select_backward and slice_backward
|
tianyu-l
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor",
"release notes: distributed (dtensor)"
] | 6
|
CONTRIBUTOR
|
Inheriting and rebasing @awgu 's PR https://github.com/pytorch/pytorch/pull/149071
- fixed an issue for `select_backward` and an issue for `slice_backward`
- removed `_experimental_ops.py` as it becomes empty
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,961,684,263
|
[fr] Add logger config for flight record in PGNCCL
|
fduwjj
|
closed
|
[
"oncall: distributed",
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: distributed (c10d)"
] | 6
|
CONTRIBUTOR
|
Summary: We want to move from scuba-based direct logging to logger-config-based logging. Most changes are internal, but we need to change the exception to exception_msg.
Test Plan: Following https://www.internalfb.com/wiki/Server_Logging/Getting_Started_with_Logging/Onboarding_Existing_Scribe-Based_Logging_(Alpha)/ to test it.
Differential Revision: D72198171
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,961,677,747
|
support nested compile when inner compile is inside of __torch_dispatch__
|
bdhirsh
|
open
|
[
"module: dynamo",
"ciflow/inductor",
"release notes: AO frontend"
] | 1
|
CONTRIBUTOR
|
Fixes https://github.com/pytorch/pytorch/issues/150262
At a high level, the idea is that anywhere we do fake tensor prop inside of compile, there is a risk that this fake prop can recursively invoke compilation. This can happen if we are doing fake prop on a tensor subclass whose __torch_dispatch__ uses torch.compile.
Two things to call out:
(1) I took @williamwen42 's advice and used `set_stance("force_eager")`, to ensure that dynamo doesn't turn back on
(2) I needed to use it both in dynamo's fake prop, and in AOTAutograd compilation. I also needed to set `error_on_nested_fx_trace`, since otherwise dynamo will raise an error when it sees it is running from an `fx_trace` context. We could theoretically avoid this if we properly **turned off** dynamo, but `set_stance()` still leaves dynamo on.
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150355
* #150302
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,961,676,303
|
[DTensor][tp] fix errors in FSDP+TP checkpointing test
|
XilunWu
|
closed
|
[
"oncall: distributed",
"better-engineering",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/periodic",
"release notes: distributed (checkpoint)"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150354
## Summary
remove the `tp_parallelize_plan` assignment that accidentally rewrites the previous assignments in `test_fsdp_dsd.py`.
## Test
`pytest test/distributed/checkpoint/fsdp/test_fsdp_dsd.py`
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,961,669,791
|
Memory leak base tests for compile
|
IvanKobzarev
|
open
|
[
"topic: not user facing"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150353
| true
|
2,961,645,371
|
Revert "[PGNCCL] Launch kernel on current stream & remove `record_stream` entirely (#148590)
|
atalman
|
closed
|
[
"oncall: distributed",
"release notes: distributed (c10d)"
] | 1
|
CONTRIBUTOR
|
This reverts commit ef6296e7f20d744a0cfed81cab573d60204e7626.
Reverting this since it's reverted on trunk
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,961,644,693
|
test enummeta
|
Sunnie912
|
open
|
[
"fb-exported",
"topic: not user facing"
] | 2
|
CONTRIBUTOR
|
Differential Revision: D72196352
| true
|
2,961,643,098
|
DISABLED test_foreach_copy_with_multi_dtypes__foreach_copy_cuda_int32 (__main__.TestForeachCUDA)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"skipped",
"module: mta"
] | 8
|
NONE
|
Platforms: linux, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_foreach_copy_with_multi_dtypes__foreach_copy_cuda_int32&suite=TestForeachCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/39713980635).
Over the past 3 hours, it has been determined flaky in 4 workflow(s) with 8 failures and 4 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_foreach_copy_with_multi_dtypes__foreach_copy_cuda_int32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1159, in test_wrapper
return test(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/mock.py", line 1833, in _inner
return f(*args, **kw)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1420, in only_fn
return fn(slf, *args, **kwargs)
File "/var/lib/jenkins/workspace/test/test_foreach.py", line 1349, in test_foreach_copy_with_multi_dtypes
out = foreach_copy_(
File "/var/lib/jenkins/workspace/test/test_foreach.py", line 91, in __call__
assert mta_called == (expect_fastpath and (not zero_size)), (
AssertionError: mta_called=False, expect_fastpath=True, zero_size=False, self.func.__name__='_foreach_copy_', keys=('aten::_foreach_copy_', 'Unrecognized', 'cudaLaunchKernel', 'Lazy Function Loading', 'cudaDeviceSynchronize')
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3153, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3153, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 454, in instantiated_test
result = test(self, **param_kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1612, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1171, in test_wrapper
raise e_tracked from e
Exception: Caused by sample input at index 0: SampleInput(input=TensorList[Tensor[size=(20, 20), device="cuda:0", dtype=torch.int32], Tensor[size=(19, 19), device="cuda:0", dtype=torch.int32], Tensor[size=(18, 18), device="cuda:0", dtype=torch.int32], Tensor[size=(17, 17), device="cuda:0", dtype=torch.int32], Tensor[size=(16, 16), device="cuda:0", dtype=torch.int32], Tensor[size=(15, 15), device="cuda:0", dtype=torch.int32], Tensor[size=(14, 14), device="cuda:0", dtype=torch.int32], Tensor[size=(13, 13), device="cuda:0", dtype=torch.int32], Tensor[size=(12, 12), device="cuda:0", dtype=torch.int32], Tensor[size=(11, 11), device="cuda:0", dtype=torch.int32], Tensor[size=(10, 10), device="cuda:0", dtype=torch.int32], Tensor[size=(9, 9), device="cuda:0", dtype=torch.int32], Tensor[size=(8, 8), device="cuda:0", dtype=torch.int32], Tensor[size=(7, 7), device="cuda:0", dtype=torch.int32], Tensor[size=(6, 6), device="cuda:0", dtype=torch.int32], Tensor[size=(5, 5), device="cuda:0", dtype=torch.int32], Tensor[size=(4, 4), device="cuda:0", dtype=torch.int32], Tensor[size=(3, 3), device="cuda:0", dtype=torch.int32], Tensor[size=(2, 2), device="cuda:0", dtype=torch.int32], Tensor[size=(1, 1), device="cuda:0", dtype=torch.int32]], args=(TensorList[Tensor[size=(20, 20), device="cuda:0", dtype=torch.int32], Tensor[size=(19, 19), device="cuda:0", dtype=torch.int32], Tensor[size=(18, 18), device="cuda:0", dtype=torch.int32], Tensor[size=(17, 17), device="cuda:0", dtype=torch.int32], Tensor[size=(16, 16), device="cuda:0", dtype=torch.int32], Tensor[size=(15, 15), device="cuda:0", dtype=torch.int32], Tensor[size=(14, 14), device="cuda:0", dtype=torch.int32], Tensor[size=(13, 13), device="cuda:0", dtype=torch.int32], Tensor[size=(12, 12), device="cuda:0", dtype=torch.int32], Tensor[size=(11, 11), device="cuda:0", dtype=torch.int32], Tensor[size=(10, 10), device="cuda:0", dtype=torch.int32], Tensor[size=(9, 9), device="cuda:0", dtype=torch.int32], Tensor[size=(8, 8), device="cuda:0", dtype=torch.int32], Tensor[size=(7, 7), device="cuda:0", dtype=torch.int32], Tensor[size=(6, 6), device="cuda:0", dtype=torch.int32], Tensor[size=(5, 5), device="cuda:0", dtype=torch.int32], Tensor[size=(4, 4), device="cuda:0", dtype=torch.int32], Tensor[size=(3, 3), device="cuda:0", dtype=torch.int32], Tensor[size=(2, 2), device="cuda:0", dtype=torch.int32], Tensor[size=(1, 1), device="cuda:0", dtype=torch.int32]]), kwargs={}, broadcasts_input=False, name='')
To execute this test, run the following from the base repo dir:
PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=0 python test/test_foreach.py TestForeachCUDA.test_foreach_copy_with_multi_dtypes__foreach_copy_cuda_int32
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_foreach.py`
cc @clee2000 @crcrpar @mcarilli @janeyx99
| true
|
2,961,639,251
|
[DO NOT REVIEW] Update _fsdp_param_group.py
|
Ritesh1905
|
open
|
[
"oncall: distributed",
"release notes: distributed (fsdp)",
"ciflow/inductor"
] | 1
|
CONTRIBUTOR
|
Assert that `all_reduce_event` is None only if it's not a CPU-based device.
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,961,627,788
|
[ROCm] update test buffer fudge factor for hipblaslt
|
ethanwee1
|
closed
|
[
"oncall: distributed",
"module: rocm",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/periodic",
"ciflow/rocm"
] | 6
|
CONTRIBUTOR
|
The default workspace for hipblaslt is larger than for cublas/cublaslt which requires a slight increase to the buffer needed.
Forward-fix for #150227 that broke ROCm distributed tests but wasn't part of initial CI signal.
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,961,619,309
|
[PP] Add schedule visualizer
|
H-Huang
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"release notes: distributed (pipeline)",
"module: pipelining"
] | 6
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151249
* #151248
* #150359
* __->__ #150347
Added a new private file (`_schedule_visualizer.py`) with helper methods that can be used to visualize the operations of a schedule and plot them with matplotlib.
InterleavedZeroBubble(pp_group=4, microbatches=8):

cc @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,961,598,859
|
[Cutlass] Integrate EVT codegen into 3x gemm template
|
mlazos
|
closed
|
[
"Merged",
"ciflow/trunk",
"module: inductor",
"ciflow/inductor",
"release notes: inductor"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150910
* #150909
* #150908
* #150907
* #150906
* #150905
* #150904
* #150903
* __->__ #150346
* #150345
* #150344
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,961,598,751
|
[Cutlass] Codegen for EVT Epilogue
|
mlazos
|
closed
|
[
"Merged",
"ciflow/trunk",
"module: inductor",
"ciflow/inductor",
"release notes: inductor"
] | 9
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150910
* #150909
* #150908
* #150907
* #150906
* #150905
* #150904
* #150903
* #150346
* __->__ #150345
* #150344
Previously merged:
* #150344
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,961,598,673
|
[Cutlass] Import cutlass python API for EVT
|
mlazos
|
closed
|
[
"Merged",
"ciflow/trunk",
"module: inductor",
"ciflow/inductor",
"release notes: inductor"
] | 7
|
CONTRIBUTOR
|
This imports the pieces of the cutlass Python API that are needed for Python EVT tracing. It builds on the existing import machinery for cutlass_library. Once EVT tracing has been added to cutlass_library (expected later this year), this can be removed.
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150910
* #150909
* #150908
* #150907
* #150906
* #150905
* #150904
* #150903
* #150346
* #150345
* __->__ #150344
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,961,584,322
|
[ez][inductor][tests] Skip triton backend only for CPU tests
|
henrylhtsang
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 9
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150343
Motivation: to unblock https://github.com/pytorch/pytorch/pull/148622
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,961,538,008
|
Torch trace doesn't respect @torch.jit.ignore on torch.nn.Module forward method
|
bpottersta
|
open
|
[
"oncall: jit"
] | 0
|
NONE
|
### 🐛 Describe the bug
Tracing a class derived from torch.nn.Module doesn't respect the @torch.jit.ignore decorator on the forward method.
There is good reason to want a class that derives from torch.nn.Module but does NOT have its forward method traced; see the example below.
The forward method is valid and useful when called from Python, as are the other features of torch.nn.Module (autograd, registration of child modules that also implement torch.nn.Module), but not when traced in this example, since tracing discards the control flow. The methods invoked in forward() can still be exported and called, with the control flow implemented in C++ instead.
```
import torch
class Foo(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.register_buffer('_n', torch.tensor(0), persistent=True)
        self.register_parameter('_w', torch.nn.Parameter(torch.ones((2, 1))))

    @torch.jit.export
    def transform(self, x) -> torch.Tensor:
        return x @ self._w

    @torch.jit.export
    def is_odd(self):
        return (self._n % 2) == 1

    @torch.jit.ignore
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if torch._C._get_tracing_state():  # Shouldn't need this to avoid torch.jit.trace complaining about the following control flow & graph diff
            return x
        self._n += 1
        if self.is_odd().all():
            return self.transform(x)
        return x


obj = Foo()
torch.jit.save(
    torch.jit.trace(
        obj,
        example_inputs=(torch.tensor([1.0, 2.0]),),
    ),
    'module.pt')
# ^^^ Warns the below message. Torch should have ignored the forward method
# TracerWarning: Output nr 1. of the traced function does not match the corresponding output of the Python function. Detailed error:
# The values for attribute 'shape' do not match: torch.Size([2]) != torch.Size([1]).
# _check_trace(
```
Reference to a tangentially related, but since resolved issue: https://github.com/pytorch/pytorch/issues/24314
### Versions
[pip3] torch==2.3.0
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| true
|
2,961,536,810
|
[dynamo] add reason field to torch.compiler.disable
|
williamwen42
|
closed
|
[
"Merged",
"ciflow/trunk",
"module: dynamo",
"ciflow/inductor",
"release notes: dynamo",
"module: compile ux"
] | 6
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150440
* __->__ #150341
Implements https://github.com/pytorch/pytorch/issues/146445
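A hedged sketch of how the new field might be used (the keyword name `reason` is assumed from the PR title, not confirmed here):
```python
import torch

# `reason` is the assumed name of the new field; it documents why the helper
# is excluded from compilation (useful in graph-break/skip diagnostics).
@torch.compiler.disable(reason="data-dependent control flow; not worth compiling")
def helper(x):
    if x.sum() > 0:
        return x * 2
    return x - 1

@torch.compile
def model(x):
    return helper(x) + 1

print(model(torch.randn(4)))
```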
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,961,511,691
|
[AOTI] Skip test_buffer_mutation_and_force_mmap_weights for fbcode
|
desertfire
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 4
|
CONTRIBUTOR
|
Summary: Skip due to an older ideep version
Differential Revision: D72190746
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,961,505,496
|
[typo] pecify -> specify
|
bobcao3
|
closed
|
[
"oncall: distributed",
"module: docs",
"triaged",
"actionable"
] | 0
|
NONE
|
https://github.com/pytorch/pytorch/blob/80b7f6b70426ae329b1c99a7efb863835d1de0cb/torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp#L4753
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @svekars @sekyondaMeta @AlannaBurke
| true
|
2,961,503,950
|
Copy native runtime code to OSS.
|
zhxchen17
|
open
|
[
"ciflow/trunk",
"topic: not user facing"
] | 14
|
CONTRIBUTOR
|
Summary:
# High level context
Torch native runtime (codename: Sigmoid) is a new feature we're upstreaming to libtorch. The native runtime executes a graph in the torch.export() format directly, and the entire runtime is written in C++, so there is no Python dependency for running it.
The open-sourcing part of the work is a follow-up to our previous discussion in the doc https://docs.google.com/document/d/1KzZvJAscqKZdNaAmM3Is1A-VVgXSUclu-3Luk8z_m0k/edit?tab=t.0#heading=h.raww8l713hqp.
Since we have a very small C++ API surface (mostly ModelRunner) and a small codebase, we prefer to copy the entire directory over in one diff, to minimize the complications of syncing multiple diffs and running redundant CI tests.
# Code layout
There are mainly two parts to sigmoid: a C++ part and a Python part.
- Currently all the C++ code lives under `torch/csrc/nativert`. All functions/classes are placed under the C++ namespace `torch::nativert`.
- Python code is put under `torch/export/experimental/`. The Python code is needed for packaging torch.export() artifacts into a *PT2 Archive*, which is later loaded by the C++ runtime.
# Supported Platforms
Native Runtime is designed to work across platforms, meaning we support Linux, Windows, and Mac by default.
As one advanced feature, the AOTI delegate will only work on Linux due to limitations of Inductor.
# Core API
Frontend
```Python
ep = torch.export.export(model, args, kwargs)
from torch.export.experimental.package import package_model
from torch.export.experimental.package.pt2_archive import PT2ArchiveWriter
with open(FILENAME, "wb") as f:
    with PT2ArchiveWriter(f) as archive_writer:
        package_model(
            ep,
            MODEL_NAME,
            archive_writer,
        )
```
Backend
```c++
#include <torch/csrc/nativert/ModelRunner.h>
using torch::nativert::BaseRuntimeConfig;
using torch::nativert::ExecutorType;
using torch::nativert::Placement;
using torch::nativert::ModelRunner;
int main() {
  ExecutorType executor_type = ExecutorType::INTERPRETER;
  BaseRuntimeConfig runtime_config; // can be set to default value most of the time.
  Placement placement; // can be set to default value most of the time.
  ModelRunner model_runner(FILENAME, MODEL_NAME, executor_type, runtime_config, placement);
  // args and kwargs should have the same PyTree spec as the example inputs from Python
  std::vector<c10::IValue> args = ...;
  std::unordered_map<std::string, c10::IValue> kwargs = ...;
  c10::IValue result = model_runner.run(args, kwargs); // safe to be called from multiple threads
  return 0;
}
```
Therefore here is the list of types we expose as TORCH_API:
- ModelRunner (and its base class ModelRunnerBase) (from torch/csrc/nativert/ModelRunner.h)
- The main execution engine and entry point for an exported graph.
- BaseRuntimeConfig (from torch/csrc/nativert/executor/ModelRunnerBase.h)
- Centralized type to store configs/knobs.
- ExecutorType (from torch/csrc/nativert/executor/ModelRunnerBase.h)
- The delegate executor type to be used for execution. Currently supported: 1. plain interpreter (default), 2. AOTInductor, 3. MTIA delegate.
- Placement (from torch/csrc/nativert/executor/Placement.h)
- Typically not used. Sometimes useful for adjusting the device of weights during model loading.
# Test Plan
We already have a comprehensive test suite bundled with this diff.
The basic idea is that we automatically generate tests based on the current unittests of torch export (https://fburl.com/pikxf3xi). There are 340 different model definitions from torch export's test suite, and sigmoid passes 98% of them.
To run the unittests of sigmoid:
```
pytest test/export/test_nativert.py
```
In addition to the unittests we bundle in OSS, we also have many tests running inside fbcode, but we will stick with export's unittests in OSS for a while.
# Shared utils from c10/aten
In fact, I believe we have very little code duplicated with libtorch's current codebase. The reason is that sigmoid was developed to be tightly integrated with libtorch from the beginning, and we took every opportunity to reuse libtorch from day 1. Examples include:
- c10::IValue
- c10::Synchronized
- C10_LIKELY and TORCH_CHECK macros
- c10::FastMap
- ATen/core/function_schema
- ATen/core/dispatch/Dispatcher
For the sake of development speed, we made a best effort to reuse libtorch as much as we could and to maintain the graph execution engine on top of it.
Certainly there are some exceptions: if something is not stable enough in the core to be reused, we implement it on our own. The best example is `torch/csrc/nativert/executor/AOTInductorModelImpl.cpp`, which could be refactored to merge with at::AOTIModelContainerRunner in the future (since this part is also in active development in core, and the compiler team has control over both of them).
| true
|
2,961,497,028
|
[cuDNN][SDPA] Loosen constraints for GQA for cuDNN Attention
|
eqy
|
closed
|
[
"module: cudnn",
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: sdpa"
] | 9
|
COLLABORATOR
|
cuDNN attention doesn't require the key and value tensors to have the same number of heads.
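For context, a minimal GQA sketch using the public SDPA entry point (whether the cuDNN backend is actually selected depends on the hardware, build, and backend priority, so treat that part as an assumption):
```python
import torch
import torch.nn.functional as F

# Grouped-query attention: 8 query heads share 2 key/value heads.
q = torch.randn(1, 8, 128, 64, device="cuda", dtype=torch.float16)
k = torch.randn(1, 2, 128, 64, device="cuda", dtype=torch.float16)
v = torch.randn(1, 2, 128, 64, device="cuda", dtype=torch.float16)

out = F.scaled_dot_product_attention(q, k, v, enable_gqa=True)
print(out.shape)  # torch.Size([1, 8, 128, 64])
```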
cc @csarofeen @ptrblck @xwang233
| true
|
2,961,484,530
|
[parallelize_module] uneven sharding + use_local_output breaks
|
wconstab
|
open
|
[
"oncall: distributed",
"triaged",
"module: dtensor"
] | 4
|
CONTRIBUTOR
|
Rowwise/ColwiseParallel strategies both default to 'use_local_output=True'.
If a linear layer has an uneven size (e.g. 5) on the sharded dimension, bad things happen.
1) The first linear (input projection) will produce an output of global size (5,), with shards of (3,) and (2,) on the 2 ranks.
2) The second linear (output projection) will receive a raw tensor of size (3,) or (2,) depending on the rank, convert it to a DTensor with Shard(-1) as specified by RowwiseParallel, and then redistribute it to Replicate with size (6,) or (4,) depending on the rank. During the mm operator, a shape mismatch occurs, since the tensor shape is indeed wrong.
e.g. on rank0
```
RuntimeError: a and b must have same reduction dim, but got [1, 6] X [5, 5].
```
I encountered this in the middle of debugging another uneven-sharding bug, and am filing this issue as a placeholder to come back to later. But I'm also wondering if folks can contribute context/motivation for the 'use_local_output' feature.
It seems like keeping the output in DTensor form is simpler and safer, but probably breaks some assumptions somewhere?
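For reference, a minimal repro sketch distilled from the description above (assumes a 2-GPU mesh; untested):
```python
import torch
import torch.nn as nn
from torch.distributed.device_mesh import init_device_mesh
from torch.distributed.tensor.parallel import (
    ColwiseParallel,
    RowwiseParallel,
    parallelize_module,
)

# Launch with: torchrun --nproc-per-node=2 repro.py
mesh = init_device_mesh("cuda", (2,))

# Hidden dim 5 shards unevenly across 2 ranks as (3,) and (2,).
model = nn.Sequential(
    nn.Linear(5, 5, bias=False),
    nn.Linear(5, 5, bias=False),
).cuda()

# Both styles default to use_local_output=True, which is what trips the bug.
parallelize_module(model, mesh, {"0": ColwiseParallel(), "1": RowwiseParallel()})

x = torch.randn(1, 5, device="cuda")
out = model(x)  # expected: reduction-dim mismatch like the error above
```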
Thanks to @bdhirsh for help figuring this out.
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @d4l3k @c-p-i-o @tianyu-l @XilunWu @fmassa
| true
|
2,961,477,097
|
[PGNCCL][BE] Merge mutex into TensorShelf for encapsulation
|
kwen2501
|
closed
|
[
"oncall: distributed",
"release notes: distributed (c10d)"
] | 2
|
CONTRIBUTOR
|
ghstack-source-id: e4b48e5473af4c7fbc227e63948633a33b1c7a59
Pull Request resolved: https://github.com/pytorch/pytorch/pull/150130
(cherry picked from commit 783c3c823ef261cf00d33568966357cd97909cd6)
Fix 3 of 3 for https://github.com/pytorch/pytorch/pull/148590
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,961,475,116
|
[c10d] Move unstashing from watchdog to main thread
|
kwen2501
|
closed
|
[
"oncall: distributed",
"release notes: distributed (c10d)"
] | 2
|
CONTRIBUTOR
|
ghstack-source-id: 2a00866ec975f1beac417b4c9e7829baebabe843
Pull Request resolved: https://github.com/pytorch/pytorch/pull/150079
(cherry picked from commit 27b79263d78594e466d578fa88b570be2dd626ae)
Fix 2 of 3 for #148590
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|