| id (int64, 2.74B-3.05B) | title (string, 1-255 chars) | user (string, 2-26 chars) | state (string, 2 classes) | labels (list, 0-24 items) | comments (int64, 0-206) | author_association (string, 4 classes) | body (string, 7-62.5k chars, nullable ⌀) | is_title (bool, 1 class) |
|---|---|---|---|---|---|---|---|---|
2,945,404,232
|
add loop mm benchmark
|
laithsakka
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #149910
* __->__ #149932
results:
compile time instruction count for iteration 4 is 67947323682
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,945,379,474
|
[Minimizer] Better debugging message
|
sweetStreet
|
open
|
[
"fb-exported",
"release notes: fx",
"fx"
] | 6
|
NONE
|
Summary:
This diff improves the Minimizer's report messages:
1. Fix Minimizer block mode to handle the case where no culprits are found between the start and end indices, preventing an out-of-range exception at https://fburl.com/code/r9w0xurj
2. Instead of directly converting the list to a set, which loses the order of the nodes, print the nodes deduplicated while preserving their original order (see the sketch below)
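A minimal sketch of the order-preserving deduplication (illustrative only, not the actual Minimizer code):
```python
def dedup_preserving_order(nodes):
    # dict preserves insertion order (Python 3.7+), so this removes duplicates
    # without reordering the nodes the way set() would
    return list(dict.fromkeys(nodes))

print(dedup_preserving_order(["add", "mul", "add", "relu"]))  # ['add', 'mul', 'relu']
```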
Test Plan:
```
MODEL_ID=698748927_64
MODEL_ENTITY_ID=${MODEL_ID%_*}
WORK_DIR=$HOME/${MODEL_ID}
NET=local
BATCH_IDX=2
MODE=block
buck2 run @//mode/opt mtia/accuracy/dbg:mtia_minimizer_runner -- \
--mode ${MODE} \
--model_path ${WORK_DIR}/data_tp/${MODEL_ID}.predictor.precompute.mix.fbia.${NET} \
--snapshot_path $HOME/minimizer \
--ref_io_path ${WORK_DIR}/ref_io/mtia_${NET}_input_ \
--save_submodule=True \
--use_torch_export=True \
--batch_idx=${BATCH_IDX} \
--report_path=${WORK_DIR}/minimizer_${MODE}_${NET}.log |& tee $HOME/${MODEL_ID}.minimizer.torchexport.${MODE}.${NET}.log
```
Differential Revision: D71294641
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
2,945,359,632
|
[ROCm][TunableOp] TunableOp Context Manager for unit tests
|
naromero77amd
|
closed
|
[
"module: rocm",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/rocm-mi300"
] | 3
|
COLLABORATOR
|
This PR is cleanup only. There are no feature changes or bug fixes.
We create a TunableOp context manager that handles setup and cleanup, and re-write the TunableOp unit tests in terms of it. This ultimately reduces the amount of copy-pasted code.
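A minimal sketch of the pattern (hypothetical helper name; the `torch.cuda.tunable` knobs it toggles are an assumption about what the real manager wraps):
```python
import contextlib
import torch

@contextlib.contextmanager
def tunableop_enabled():
    # Save the current TunableOp state, enable it for the test body, then restore it,
    # so each unit test cleans up after itself without copy-pasted setup/teardown code.
    prev = torch.cuda.tunable.is_enabled()
    torch.cuda.tunable.enable(True)
    try:
        yield
    finally:
        torch.cuda.tunable.enable(prev)
```
Tests then read as `with tunableop_enabled(): run_matmul_case()`, and the restore runs even if the test fails.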
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang
| true
|
2,945,345,172
|
[ued][whisper][dynamo] Graph break on cached_property
|
anijain2305
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamo"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
```
torch._dynamo.exc.Unsupported: 'inline in skipfiles: cached_property.__get__ | __get__ /home/lsakka/.conda/envs/user-empathy/lib/python3.11/functools.py, skipped according trace_rules.lookup SKIP_DIRS'
from user code:
File "/home/lsakka/whisper/whisper/decoding.py", line 40, in torch_dynamo_resume_in_detect_language_at_35
or tokenizer.language_token not in tokenizer.sot_sequence
```
Model doc - https://docs.google.com/document/d/1282EbgtIM2eKillT_7r-p6T-ugX1rq-t6s6uVloWEDc/edit?tab=t.0
### Error logs
_No response_
### Versions
NA
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
| true
|
2,945,337,747
|
[MPS/Inductor] Add support for chebyshev_polynomial_t.
|
dcci
|
closed
|
[
"Merged",
"topic: not user facing",
"module: mps",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 3
|
MEMBER
|
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,945,321,480
|
add weight 2D tensor for xpu
|
sunjiweiswift
|
open
|
[
"triaged",
"open source",
"topic: not user facing"
] | 3
|
NONE
|
The Intel XPU kernel uses a 2D int4 weight.
| true
|
2,945,317,598
|
[Intel GPU] trigger tf32 no-gpu warn only when setting true
|
ZhiweiYan-96
|
closed
|
[
"open source",
"ciflow/trunk",
"topic: not user facing",
"ciflow/xpu"
] | 21
|
COLLABORATOR
|
Fix issue #149829
# Detail
During the `torch.export` initialization stage, the context variable of `torch.backends.mkldnn` is initialized in the function `_ignore_backend_decomps` in `torch/export/_trace.py`.
It is wrong to trigger the no-GPU warning when setting the value to `False` in a CPU-only environment. The right behavior is to raise the warning only when the user tries to turn it on while no GPU is present.
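A minimal sketch of the intended behavior (hypothetical setter, not the actual backend code):
```python
import warnings

def set_allow_tf32(value: bool, gpu_available: bool) -> None:
    # Warn only when the user turns the flag on without a GPU;
    # setting it to False on a CPU-only machine stays silent.
    if value and not gpu_available:
        warnings.warn("TF32 was requested but no GPU is available; the setting has no effect.")
```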
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149926
| true
|
2,945,273,469
|
Replace c10::guts::is_fundamental with std::is_fundamental
|
cyyever
|
open
|
[
"triaged",
"open source",
"topic: not user facing"
] | 2
|
COLLABORATOR
|
c10::guts::is_fundamental was introduced as a workaround for an MSVC bug affecting at::Half.
| true
|
2,945,175,106
|
Delegate torch.accelerator.device_count to torch.xxx.device_count for multi-process usage
|
guangyey
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"release notes: python_frontend",
"module: accelerator"
] | 6
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149924
* #147507
# Motivation
Adapt `torch.accelerator.device_count` for multi-process usage. For example, `torch.cuda.device_count` avoids poisoning fork, so `torch.accelerator.device_count` should meet the same requirement.
Now that `torch.get_device_module(device).device_count` supports this, `torch.accelerator.device_count` should align with this behavior as well.
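An illustrative sketch of the delegation (assumed wiring, not the actual `torch.accelerator` implementation; relies on `torch.accelerator.current_accelerator` and `torch.get_device_module` from recent PyTorch):
```python
import torch

def accelerator_device_count() -> int:
    # Delegate to the active backend module so its fork-safe counting logic
    # (e.g. torch.cuda.device_count) is reused instead of duplicated.
    acc = torch.accelerator.current_accelerator()
    if acc is None:
        return 0
    return torch.get_device_module(acc).device_count()
```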
cc @albanD @EikanWang
| true
|
2,945,163,889
|
Enable move warnings for torch targets
|
cyyever
|
closed
|
[
"oncall: jit",
"open source",
"Merged",
"NNC",
"ciflow/trunk",
"release notes: jit",
"ciflow/periodic"
] | 6
|
COLLABORATOR
|
This PR enables more move warnings for torch targets and fixes some code.
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| true
|
2,945,160,284
|
[CI][BE] Update other actions
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #149919
* __->__ #149922
* #149918
* #149917
Discovered by actionlint-1.7.7:
- `actions/checkout@v3`->`actions/checkout@v4`
- `actions/setup-python@v4` -> `actions/setup-python@v5`
| true
|
2,945,147,607
|
[ued][whisper][dynamo] Graph break - setattr on class object
|
anijain2305
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamo"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
```
File "/home/lsakka/.conda/envs/user-empathy/lib/python3.11/site-packages/torch/_dynamo/exc.py", line 317, in unimplemented
raise Unsupported(msg, case_name=case_name)
torch._dynamo.exc.Unsupported: builtin: setattr [<class 'torch._dynamo.variables.user_defined.UserDefinedClassVariable'>, <class 'torch._dynamo.variables.constant.ConstantVariable'>, <class 'torch._dynamo.variables.constant.ConstantVariable'>] False
File "/home/lsakka/.conda/envs/user-empathy/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1699, in register_forward_hook
handle = RemovableHandle(
File "/home/lsakka/.conda/envs/user-empathy/lib/python3.11/site-packages/torch/utils/hooks.py", line 27, in __init__
RemovableHandle.next_id += 1
```
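A minimal standalone sketch of the same pattern (my own repro, not taken from the report; class and function names are made up):
```python
import torch

class Handle:
    next_id = 0  # class-level counter, mirroring RemovableHandle.next_id

@torch.compile(backend="eager", fullgraph=True)
def f(x):
    Handle.next_id += 1  # setattr on a class object; reported above as unsupported
    return x + 1

f(torch.ones(3))  # on affected versions this raises torch._dynamo.exc.Unsupported
```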
Graph break - https://manifold.edge.x2p.facebook.net/v0/read/tree/logs/lsakka/5e9dc55a-a909-408a-ae3c-2466eb6a7d75/custom/-_21_0_0/dynamo_graph_break_reason_124.txt?bucketName=tlparse_reports&apiKey=tlparse_reports-key&withPayload=1&timeoutMsec=10000
Full tlparse - [https://manifold.edge.x2p.facebook.net/v0/read/tree/logs/lsakka/5e9dc55a-a909-408a-ae3c-2466eb6a7d75/custom/index.html?bucketName=tlparse_reports&apiKey=tlparse_reports-key&withPayload=1&timeoutMsec=10000#[30/1]](https://manifold.edge.x2p.facebook.net/v0/read/tree/logs/lsakka/5e9dc55a-a909-408a-ae3c-2466eb6a7d75/custom/index.html?bucketName=tlparse_reports&apiKey=tlparse_reports-key&withPayload=1&timeoutMsec=10000#%5B30/1%5D)
Full doc - [docs.google.com/document/d/1282EbgtIM2eKillT_7r-p6T-ugX1rq-t6s6uVloWEDc/edit?tab=t.0](https://docs.google.com/document/d/1282EbgtIM2eKillT_7r-p6T-ugX1rq-t6s6uVloWEDc/edit?tab=t.0)
### Error logs
_No response_
### Versions
NA
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
| true
|
2,945,143,417
|
[ued][whisper][dynamo] Graph break on an unsupported dict key
|
anijain2305
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamo"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
```
raph break in user code at /home/lsakka/.conda/envs/user-empathy/lib/python3.11/site-packages/regex/regex.py:503
Reason: Unsupported: Dict key of type <class 'torch._dynamo.variables.lazy.LazyVariableTracker'>. Key: TupleVariable(length=6)
User code traceback:
File "/home/lsakka/whisper/whisper/decoding.py", line 430, in apply
logits[:, self.tokenizer.encode(" ") + [self.tokenizer.eot]] = -np.inf
File "/home/lsakka/whisper/whisper/tokenizer.py", line 162, in encode
return self.encoding.encode(text, **kwargs)
File "/home/lsakka/.conda/envs/user-empathy/lib/python3.11/site-packages/tiktoken/core.py", line 120, in encode
if match := _special_token_regex(disallowed_special).search(text):
File "/home/lsakka/.conda/envs/user-empathy/lib/python3.11/site-packages/torch/_dynamo/polyfills/__init__.py", line 160, in getattr_and_trace
return fn(*args[2:], **kwargs)
File "/home/lsakka/.conda/envs/user-empathy/lib/python3.11/site-packages/tiktoken/core.py", line 428, in _special_token_regex
return regex.compile(f"({inner})")
File "/home/lsakka/.conda/envs/user-empathy/lib/python3.11/site-packages/regex/regex.py", line 353, in compile
return _compile(pattern, flags, ignore_unused, kwargs, cache_pattern)
File "/home/lsakka/.conda/envs/user-empathy/lib/python3.11/site-packages/regex/regex.py", line 503, in _compile
return _cache[pattern_key]
```
Graph break - https://manifold.edge.x2p.facebook.net/v0/read/tree/logs/lsakka/5e9dc55a-a909-408a-ae3c-2466eb6a7d75/custom/-_26_0_0/dynamo_graph_break_reason_140.txt?bucketName=tlparse_reports&apiKey=tlparse_reports-key&withPayload=1&timeoutMsec=10000
Full tlparse - https://manifold.edge.x2p.facebook.net/v0/read/tree/logs/lsakka/5e9dc55a-a909-408a-ae3c-2466eb6a7d75/custom/index.html?bucketName=tlparse_reports&apiKey=tlparse_reports-key&withPayload=1&timeoutMsec=10000#[30/1]
Full doc - https://docs.google.com/document/d/1282EbgtIM2eKillT_7r-p6T-ugX1rq-t6s6uVloWEDc/edit?tab=t.0
### Error logs
_No response_
### Versions
NA
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
| true
|
2,945,135,804
|
[BE][CI] Update actionlint to 1.7.7
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149919
* #149922
* #149918
* #149917
- Fix the anti-pattern started by https://github.com/pytorch/pytorch/pull/81922, when x86 actionlint binaries were placed in the Linux-arm64 folder
- Fix remaining lint violations, namely
```
>>> Lint for .github/workflows/_linux-test.yml:
Error (ACTIONLINT) [expression]
property "workspace" is not defined in object type {arch: string; debug:
string; environment: string; name: string; os: string; temp: string;
tool_cache: string}
446 | if: failure() && steps.install-nvidia-driver.outcome && steps.install-nvidia-driver.outcome != 'skipped'
447 | shell: bash
448 | env:
>>> 449 | RUNNER_WORKSPACE: ${{ runner.workspace }}
450 | run: |
451 | set +e
452 | set -x
>>> Lint for .github/workflows/create_release.yml:
Error (ACTIONLINT) [deprecated-commands]
workflow command "set-output" was deprecated. use `echo "{name}={value}"
>> $GITHUB_OUTPUT` instead: https://docs.github.com/en/actions/using-
workflows/workflow-commands-for-github-actions
80 | path: ${{ env.PT_RELEASE_FILE }}
81 | - name: Set output
82 | id: release_name
>>> 83 | run: echo "::set-output name=pt_release_name::${{ env.PT_RELEASE_NAME }}.tar.gz"
84 |
85 | upload_source_code_to_s3:
86 | if: ${{ github.repository == 'pytorch/pytorch' && github.event_name == 'push' && startsWith(github.ref, 'refs/tags/v') && contains(github.ref, 'rc') }}
>>> Lint for .github/workflows/target-determination-indexer.yml:
Error (ACTIONLINT) [shellcheck]
shellcheck reported issue in this script: SC2086:info:3:3: Double quote to
prevent globbing and word splitting
98 | DOCKER_IMAGE: ${{ steps.calculate-docker-image.outputs.docker-image }}
99 | GITHUB_RUN_ID: ${{ github.run_id }}
100 | AWS_DEFAULT_REGION: us-east-1
>>> 101 | run: |
102 | # detached container should get cleaned up by teardown_ec2_linux
103 | container_name=$(docker run \
104 | ${GPU_FLAG:-} \
```
| true
|
2,945,135,718
|
[BE][CI] Update configure-aws-credential to v4
|
malfet
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #149919
* #149922
* __->__ #149918
* #149917
Prerequisite for update to actionlint-1.7.7
| true
|
2,945,135,645
|
[BE] Add Mac ARM64 actionlint binary
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #149919
* #149922
* #149918
* __->__ #149917
Downloaded from https://github.com/rhysd/actionlint/releases/tag/v1.6.21
| true
|
2,945,106,410
|
Enable XPU distributed test for PT2.8
|
daisyden
|
open
|
[
"oncall: distributed",
"open source",
"release notes: distributed (fsdp)",
"module: inductor",
"module: dynamo"
] | 3
|
NONE
|
Fixes #ISSUE_NUMBER
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov @kwen2501 @c-p-i-o
| true
|
2,944,963,660
|
Change to default backend
|
drisspg
|
open
|
[
"topic: not user facing"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149915
| true
|
2,944,956,529
|
[Test] Add simple MPS op benchmarks
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing",
"ciflow/mps"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149914
Lots of benchmark tests have been posted in PRs, but they might get lost over time.
So let's create a benchmark and populate it with results (preferably from a run on a CI machine).
| true
|
2,944,909,501
|
support scalar tensor for functional all_gather
|
yuguo68
|
open
|
[
"oncall: distributed",
"release notes: distributed (c10d)",
"ciflow/inductor"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149913
* #149912
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,944,909,421
|
add a util function _make_all_gather_out_tensor to reduce code duplication
|
yuguo68
|
open
|
[
"oncall: distributed",
"topic: not user facing"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #149913
* __->__ #149912
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,944,860,475
|
[dynamo] `torch.compile` doesn't respect `GradTrackingTensor`'s data attribute mutation check
|
StrongerXi
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamo"
] | 3
|
CONTRIBUTOR
|
### 🐛 Describe the bug
This is a bug exposed after #149482 opened up tensor attribute mutation for newly constructed tensor objects inside a `torch.compile` region. Specifically, the PR causes the following test to fail:
```
$ PYTORCH_TEST_WITH_DYNAMO=1 python test/functorch/test_ops.py TestOperatorsCPU.test_data_write_errors_under_transform_cpu
```
CI log:
```
2025-03-24T22:14:06.1660180Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/utils.py:3131: FutureWarning: We've integrated functorch into PyTorch. As the final step of the integration, `functorch.grad` is deprecated as of PyTorch 2.0 and will be deleted in a future version of PyTorch >= 2.3. Please use `torch.func.grad` instead; see the PyTorch 2.0 release notes and/or the `torch.func` migration guide for more details https://pytorch.org/docs/main/func.migrating.html
2025-03-24T22:14:06.1662222Z lambda: run_node(tx.output, node, args, kwargs, nnmodule)
2025-03-24T22:14:06.1664295Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/module.py:1762: FutureWarning: We've integrated functorch into PyTorch. As the final step of the integration, `functorch.grad` is deprecated as of PyTorch 2.0 and will be deleted in a future version of PyTorch >= 2.3. Please use `torch.func.grad` instead; see the PyTorch 2.0 release notes and/or the `torch.func` migration guide for more details https://pytorch.org/docs/main/func.migrating.html
2025-03-24T22:14:06.1666303Z return forward_call(*args, **kwargs)
2025-03-24T22:14:06.1666803Z _________ TestOperatorsCPU.test_data_write_errors_under_transform_cpu __________
2025-03-24T22:14:06.1667322Z Traceback (most recent call last):
2025-03-24T22:14:06.1667990Z File "/var/lib/jenkins/workspace/test/functorch/test_ops.py", line 2944, in test_data_write_errors_under_transform
2025-03-24T22:14:06.1668719Z with self.assertRaisesRegex(RuntimeError, msg):
2025-03-24T22:14:06.1669564Z File "/var/lib/jenkins/workspace/test/functorch/test_ops.py", line 2944, in torch_dynamo_resume_in_test_data_write_errors_under_transform_at_2944
2025-03-24T22:14:06.1670412Z with self.assertRaisesRegex(RuntimeError, msg):
2025-03-24T22:14:06.1670810Z ~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^
2025-03-24T22:14:06.1671325Z File "/opt/conda/envs/py_3.13/lib/python3.13/unittest/case.py", line 263, in __exit__
2025-03-24T22:14:06.1671904Z self._raiseFailure("{} not raised".format(exc_name))
2025-03-24T22:14:06.1672316Z ~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-03-24T22:14:06.1672869Z File "/opt/conda/envs/py_3.13/lib/python3.13/unittest/case.py", line 200, in _raiseFailure
2025-03-24T22:14:06.1673454Z raise self.test_case.failureException(msg)
2025-03-24T22:14:06.1673851Z AssertionError: RuntimeError not raised
2025-03-24T22:14:06.1674088Z
2025-03-24T22:14:06.1674300Z To execute this test, run the following from the base repo dir:
2025-03-24T22:14:06.1675079Z PYTORCH_TEST_WITH_DYNAMO=1 python test/functorch/test_ops.py TestOperatorsCPU.test_data_write_errors_under_transform_cpu
```
Here's a repro on main, without #149482:
```python
import torch
def test():
@torch.compile(fullgraph=True, backend="eager")
def f(x, y):
x.data = y
return x + 1
with torch._functorch.eager_transforms.grad_increment_nesting():
x = torch.ones(5)
y = torch.ones(5)
res = f(x, y)
print(res)
test()
# GradTrackingTensor(lvl=1, value=
# tensor([2., 2., 2., 2., 2.])
# )
# In eager this (expectedly) fails with:
# x.data = y
# ^^^^^^
# RuntimeError: mutating directly with `.data` inside functorch transform is not allowed.
```
### Error logs
_No response_
### Versions
main 1b08aaea, python 3.12
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
| true
|
2,944,833,904
|
cache loaded python modules
|
laithsakka
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149910
* #149932
I am splitting caching the loading of modules from caching the codegen, since it is trivial and much easier.
Module loading is 50% of the cost and codegen is the other 50% of maybe_append_choice on the full-graph model, which is 40% of total compile time.
<img width="434" alt="Screenshot 2025-03-24 at 4 35 12 PM" src="https://github.com/user-attachments/assets/aa851c6a-bde9-43f8-b12d-e439504ef62c" />
Running the mm_loop benchmark:
before this change: 67947323682
after this change: 25845073249
That is 2.6X faster.
It seems the cache used to be there and then got dropped; I added a benchmark so it won't be dropped again by mistake.
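A minimal sketch of the caching idea (illustrative only, not the actual Inductor change):
```python
import functools
import importlib.util

@functools.lru_cache(maxsize=None)
def load_generated_module(key: str, path: str):
    # Load a generated Python module from disk once per (key, path); repeated
    # autotuning calls reuse the already-imported module instead of re-exec'ing it.
    spec = importlib.util.spec_from_file_location(key, path)
    mod = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(mod)
    return mod
```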
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,944,821,618
|
[ued][kokoro] RNN/LSTMS do not work with torch.compile
|
anijain2305
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamo"
] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
As title.
### Error logs
_No response_
### Versions
NA
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
| true
|
2,944,802,836
|
[ued][ChatTTS][guards] Too many recompilations
|
anijain2305
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamo"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
<img width="1755" alt="Image" src="https://github.com/user-attachments/assets/dcd5d85d-bce4-4657-a35f-ae9f6365fa7d" />
Some notes
* The dispatch key failure seems to be something we should look at.
* Is there a way to avoid the %8 guard?
* Can we use mark_dynamic to handle other recompiles?
We will have to work with the ChatTTS authors to incorporate some of these changes.
### Error logs
_No response_
### Versions
NA
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
| true
|
2,944,795,204
|
[ued][chatTTS][dynamo] Graph break on x.transpose_
|
anijain2305
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamo"
] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
This might be a fundamental graph break. So, we might need to suggest a workaround.
<img width="1109" alt="Image" src="https://github.com/user-attachments/assets/57050dff-7c69-4c84-91ec-87f45c3be4de" />
Full tlparse - [https://manifold.edge.x2p.facebook.net/v0/read/tree/logs/.tmpZEpwwY/index.html?bucketName=tlparse_reports&apiKey=tlparse_reports-key&withPayload=1&timeoutMsec=10000#[18/0]](https://manifold.edge.x2p.facebook.net/v0/read/tree/logs/.tmpZEpwwY/index.html?bucketName=tlparse_reports&apiKey=tlparse_reports-key&withPayload=1&timeoutMsec=10000#%5B18/0%5D)
User doc - [docs.google.com/document/d/19cVx0Vhr0042Rfrw2-Xc7Y2OtS8bwc-NUXRG5ACcd2Q/edit?tab=t.0](https://docs.google.com/document/d/19cVx0Vhr0042Rfrw2-Xc7Y2OtS8bwc-NUXRG5ACcd2Q/edit?tab=t.0)
### Error logs
_No response_
### Versions
NA
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
| true
|
2,944,789,684
|
[ued][chatTTS][dynamo] graph break on should_compile_partial_graph=False
|
anijain2305
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamo"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
tlparse link - https://manifold.edge.x2p.facebook.net/v0/read/tree/logs/.tmpZEpwwY/-_15_0_0/dynamo_graph_break_reason_162.txt?bucketName=tlparse_reports&apiKey=tlparse_reports-key&withPayload=1&timeoutMsec=10000
Full tlparse - https://manifold.edge.x2p.facebook.net/v0/read/tree/logs/.tmpZEpwwY/index.html?bucketName=tlparse_reports&apiKey=tlparse_reports-key&withPayload=1&timeoutMsec=10000#[18/0]
User doc - https://docs.google.com/document/d/19cVx0Vhr0042Rfrw2-Xc7Y2OtS8bwc-NUXRG5ACcd2Q/edit?tab=t.0
### Error logs
_No response_
### Versions
NA
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
| true
|
2,944,787,571
|
[Inductor] track block shape of intermediary variables
|
eellison
|
open
|
[
"triaged",
"oncall: pt2",
"module: inductor"
] | 2
|
CONTRIBUTOR
|
### 🚀 The feature, motivation and pitch
During codegen we track the dtype and value range of each intermediary variable we emit in triton. See [CSEVariable](https://github.com/pytorch/pytorch/blob/23855391f1a17f7145885b5ef977547a70819505/torch/_inductor/codegen/common.py#L1669-L1680).
Dtype was recently added in https://github.com/pytorch/pytorch/pull/136778 by @arui-meta and subsequently iterated on in PRs like https://github.com/pytorch/pytorch/pull/141495 and https://github.com/pytorch/pytorch/pull/140057.
While dtypes are a bit finicky to get right, shapes are very easy to track in triton. More or less each operator broadcasts its inputs, reductions remove reduction dims, and then there are a few remaining ops.
@kundaMwiza recently [had a use case for shapes in a PR](https://github.com/pytorch/pytorch/pull/148679#issue-2900758922): `Ideally the shape of the input would be an attribute of a TritonCSEVariable via shape propagation`.
Similarly, I [ran into a bug in prologue fusion](https://github.com/pytorch/pytorch/pull/147008/files#diff-73b89475038a5b4705da805f1217783883fb90398ee1164995db392fc4a342c1R773-R775) where I now need to add possibly extraneous broadcasts because in particular cases of loading a constant index we return a different shape.
I'm sure other future changes will run into needing shapes, and after adding we'll discover other places in the codebase we can simplify.
### Alternatives
_No response_
### Additional context
_No response_
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,944,757,872
|
[cuDNN][SDPA] cuDNN SDPA supports `head_dim <= 256` on `sm90` and `sm100` as of `9.5.1+`
|
eqy
|
closed
|
[
"module: cudnn",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: sdpa",
"Blackwell"
] | 3
|
COLLABORATOR
|
gqa check PR will go next...
cc @csarofeen @ptrblck @xwang233
| true
|
2,944,756,827
|
Fix non-strict export doesn't turn on dynamo for hop
|
ydwu4
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149903
Somehow torch._dynamo.is_compiling was changed to torch.compiler.is_compiling(), which also checks whether we're exporting. This was not caught by CI because we don't have an export test for scan.
Changed to torch.compiler.is_dynamo_compiling and added a test.
Edit: piggybacking the re-tracing support in this PR; related code is in combine_fn_is_normalized.
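For reference, a small sketch of the distinction between the two predicates (both are public `torch.compiler` APIs; the export interaction described above is this PR's claim):
```python
import torch

def combine(x):
    # is_compiling() is True while Dynamo traces and also during export;
    # is_dynamo_compiling() is True only while Dynamo itself is tracing.
    if torch.compiler.is_dynamo_compiling():
        return x + 1
    return x - 1
```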
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,944,749,520
|
[ROCm] build magma rocm and upload tarball
|
jeffdaily
|
closed
|
[
"module: rocm",
"open source",
"Merged",
"Reverted",
"release notes: releng",
"ciflow/rocm",
"ci-no-td"
] | 8
|
COLLABORATOR
|
This will improve docker image build times by not having to rebuild magma rocm for unrelated changes. This PR is step 1 of 2. The next step is a second PR to modify the docker image builds to use the magma tarball that this PR will produce.
cc @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,944,733,767
|
[ONNX] Supporting different opset versions for torchlib registry
|
shubhambhokare1
|
closed
|
[
"module: onnx",
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"release notes: onnx",
"topic: new features"
] | 15
|
COLLABORATOR
|
- Allows opset_version to determine which ONNX decomposition to choose (see the sketch below)
- Adds a cleanup function to modify the registry after it is built
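A minimal sketch of an opset-aware lookup (hypothetical structure, not the torchlib registry itself): register each decomposition under the opset it was introduced in, then pick the newest one not exceeding the target opset.
```python
from collections import defaultdict

class OpsetRegistry:
    def __init__(self):
        self._impls = defaultdict(dict)  # op name -> {opset introduced: function}

    def register(self, op: str, since_opset: int, fn):
        self._impls[op][since_opset] = fn

    def lookup(self, op: str, target_opset: int):
        candidates = [v for v in self._impls[op] if v <= target_opset]
        if not candidates:
            raise KeyError(f"no decomposition of {op} for opset {target_opset}")
        return self._impls[op][max(candidates)]
```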
| true
|
2,944,716,409
|
[CI] Add MacOS-M2-15 as MPS test target on trunk
|
malfet
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Now that we have runners allocated by AWS
| true
|
2,944,669,413
|
[WIP] no normalizations abstractions
|
laithsakka
|
open
|
[
"module: inductor",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149899
* #149267
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,944,663,202
|
canary basic normalization
|
bobrenjc93
|
closed
|
[
"module: dynamo",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149898
* #149415
This change was motivated by an internal use case (https://fb.workplace.com/groups/1553867532149891/?multi_permalinks=1708481206688522&comment_id=1711739699696006&notif_id=1742399826944239&notif_t=work_feedback_reaction_generic&ref=notif) where we were producing different intermediate node names for the exact same code. This normalization pass does an alpha renaming of intermediate variables so that more isomorphic graphs now result in the same Dynamo output graph.
We do a normalization pass that effectively ensures that the name indexes increase monotonically. This typically already happens, but in some cases, such as in HOPs, the invariant could be broken without normalization. Below we show an example where cond previously would have jumped from getitem_3 to getitem_2, but with normalization correctly uses getitem_4 after getitem_3.
We've run this on the same model internally and confirmed that with this change we now get a cache hit.
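An illustrative toy renamer over plain strings (not the Dynamo pass) showing the alpha-renaming idea:
```python
import re
from collections import defaultdict

def normalize_node_names(names):
    # Re-index each base name in graph order so the numeric suffixes
    # increase monotonically, independent of how they were first assigned.
    counters = defaultdict(int)
    out = []
    for name in names:
        base = re.sub(r"_\d+$", "", name)
        counters[base] += 1
        out.append(f"{base}_{counters[base]}")
    return out

print(normalize_node_names(["getitem_1", "getitem_3", "getitem_2"]))
# ['getitem_1', 'getitem_2', 'getitem_3']
```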
| true
|
2,944,654,850
|
[ca] support anomaly mode nan checks with different semantics than eager
|
xmfan
|
closed
|
[
"Merged",
"module: inductor",
"module: dynamo",
"ciflow/inductor",
"release notes: dynamo",
"module: compiled autograd"
] | 2
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150073
* #150074
* #149987
* __->__ #149897
see note in code
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,944,653,832
|
canary to not do max
|
bobrenjc93
|
closed
|
[
"module: dynamo",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #149898
* __->__ #149896
* #149415
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,944,642,077
|
[Dynamo] Cannot instantiate class if `__getattribute__` is defined
|
guilhermeleobas
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamo"
] | 1
|
COLLABORATOR
|
### 🐛 Describe the bug
Reproducer:
```python
import torch
class Foo():
def __init__(self, a):
self.a = a
def __getattribute__(self, name):
return super().__getattribute__(name)
@torch.compile(backend="eager", fullgraph=True)
def fn(t):
f = Foo(3)
return t.sin()
t = torch.randn(2)
fn(t)
```
<details>
<summary>Dynamo stacktrace</summary>
```
$ TORCHDYNAMO_VERBOSE=1 python a.py
I0324 19:40:23.309881 92498 torch/_dynamo/utils.py:1603] [0/0] ChromiumEventLogger initialized with id e5bf5521-7ed8-4bf6-9405-834b985e28a3
V0324 19:40:23.310215 92498 torch/_dynamo/convert_frame.py:1003] [0/0] torchdynamo start compiling fn /home/guilhermeleobas/git/pytorch/a.py:14, stack (elided 4 frames):
V0324 19:40:23.310215 92498 torch/_dynamo/convert_frame.py:1003] [0/0] File "/home/guilhermeleobas/git/pytorch/a.py", line 20, in <module>
V0324 19:40:23.310215 92498 torch/_dynamo/convert_frame.py:1003] [0/0] fn(t)
V0324 19:40:23.310215 92498 torch/_dynamo/convert_frame.py:1003] [0/0]
I0324 19:40:23.310710 92498 torch/_dynamo/symbolic_convert.py:3326] [0/0] Step 1: torchdynamo start tracing fn /home/guilhermeleobas/git/pytorch/a.py:14
I0324 19:40:23.310885 92498 torch/fx/experimental/symbolic_shapes.py:3334] [0/0] create_env
V0324 19:40:23.312625 92498 torch/_dynamo/symbolic_convert.py:1216] [0/0] [__trace_source] TRACE starts_line /home/guilhermeleobas/git/pytorch/a.py:16 in fn (fn)
V0324 19:40:23.312625 92498 torch/_dynamo/symbolic_convert.py:1216] [0/0] [__trace_source] f = Foo(3)
V0324 19:40:23.313200 92498 torch/_dynamo/symbolic_convert.py:1239] [0/0] [__trace_bytecode] TRACE LOAD_GLOBAL Foo []
V0324 19:40:23.313836 92498 torch/_dynamo/symbolic_convert.py:1239] [0/0] [__trace_bytecode] TRACE LOAD_CONST 3 [UserDefinedClassVariable(<class '__main__.Foo'>)]
V0324 19:40:23.313934 92498 torch/_dynamo/symbolic_convert.py:1239] [0/0] [__trace_bytecode] TRACE CALL_FUNCTION 1 [UserDefinedClassVariable(<class '__main__.Foo'>), ConstantVariable(int: 3)]
V0324 19:40:23.314103 92498 torch/_dynamo/symbolic_convert.py:1257] [0/0] empty checkpoint
I0324 19:40:23.314406 92498 torch/_dynamo/convert_frame.py:1121] [0/0] run_gc_after_compile: running gc
Traceback (most recent call last):
File "/home/guilhermeleobas/git/pytorch/a.py", line 20, in <module>
fn(t)
File "/home/guilhermeleobas/git/pytorch/torch/_dynamo/eval_frame.py", line 655, in _fn
return fn(*args, **kwargs)
File "/home/guilhermeleobas/git/pytorch/torch/_dynamo/convert_frame.py", line 1432, in __call__
return self._torchdynamo_orig_callable(
File "/home/guilhermeleobas/git/pytorch/torch/_dynamo/convert_frame.py", line 598, in __call__
return _compile(
File "/home/guilhermeleobas/git/pytorch/torch/_dynamo/convert_frame.py", line 1059, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/home/guilhermeleobas/git/pytorch/torch/_utils_internal.py", line 97, in wrapper_function
return function(*args, **kwargs)
File "/home/guilhermeleobas/git/pytorch/torch/_dynamo/convert_frame.py", line 761, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/home/guilhermeleobas/git/pytorch/torch/_dynamo/convert_frame.py", line 797, in _compile_inner
out_code = transform_code_object(code, transform)
File "/home/guilhermeleobas/git/pytorch/torch/_dynamo/bytecode_transformation.py", line 1422, in transform_code_object
transformations(instructions, code_options)
File "/home/guilhermeleobas/git/pytorch/torch/_dynamo/convert_frame.py", line 257, in _fn
return fn(*args, **kwargs)
File "/home/guilhermeleobas/git/pytorch/torch/_dynamo/convert_frame.py", line 715, in transform
tracer.run()
File "/home/guilhermeleobas/git/pytorch/torch/_dynamo/symbolic_convert.py", line 3502, in run
super().run()
File "/home/guilhermeleobas/git/pytorch/torch/_dynamo/symbolic_convert.py", line 1337, in run
while self.step():
File "/home/guilhermeleobas/git/pytorch/torch/_dynamo/symbolic_convert.py", line 1246, in step
self.dispatch_table[inst.opcode](self, inst)
File "/home/guilhermeleobas/git/pytorch/torch/_dynamo/symbolic_convert.py", line 819, in wrapper
return inner_fn(self, inst)
File "/home/guilhermeleobas/git/pytorch/torch/_dynamo/symbolic_convert.py", line 2168, in CALL_FUNCTION
self.call_function(fn, args, {})
File "/home/guilhermeleobas/git/pytorch/torch/_dynamo/symbolic_convert.py", line 1170, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "/home/guilhermeleobas/git/pytorch/torch/_dynamo/variables/user_defined.py", line 696, in call_function
return super().call_function(tx, args, kwargs)
File "/home/guilhermeleobas/git/pytorch/torch/_dynamo/variables/base.py", line 429, in call_function
unimplemented_v2(
File "/home/guilhermeleobas/git/pytorch/torch/_dynamo/exc.py", line 517, in unimplemented_v2
raise Unsupported(msg)
torch._dynamo.exc.Unsupported: Unsupported function call
Explanation: Dynamo does not know how to trace the function `<class '__main__.Foo'>`
Hint: Avoid calling `<class '__main__.Foo'>` in your code.
Hint: Please report an issue to PyTorch.
Developer debug context: call_function UserDefinedClassVariable(<class '__main__.Foo'>) [ConstantVariable(int: 3)] {}
from user code:
File "/home/guilhermeleobas/git/pytorch/a.py", line 16, in fn
f = Foo(3)
I0324 19:40:23.317632 92498 torch/_dynamo/eval_frame.py:475] TorchDynamo attempted to trace the following frames: [
I0324 19:40:23.317632 92498 torch/_dynamo/eval_frame.py:475] * fn /home/guilhermeleobas/git/pytorch/a.py:14
I0324 19:40:23.317632 92498 torch/_dynamo/eval_frame.py:475] ]
I0324 19:40:23.317845 92498 torch/_dynamo/utils.py:765] TorchDynamo compilation metrics:
I0324 19:40:23.317845 92498 torch/_dynamo/utils.py:765] Function, Runtimes (s)
I0324 19:40:23.317845 92498 torch/_dynamo/utils.py:765] _compile.compile_inner, 0.0040
I0324 19:40:23.317845 92498 torch/_dynamo/utils.py:765] gc, 0.0003
V0324 19:40:23.317926 92498 torch/fx/experimental/symbolic_shapes.py:166] lru_cache_stats constrain_symbol_range: CacheInfo(hits=0, misses=0, maxsize=None, currsize=0)
V0324 19:40:23.318009 92498 torch/fx/experimental/symbolic_shapes.py:166] lru_cache_stats defer_runtime_assert: CacheInfo(hits=0, misses=0, maxsize=256, currsize=0)
V0324 19:40:23.318079 92498 torch/fx/experimental/symbolic_shapes.py:166] lru_cache_stats evaluate_expr: CacheInfo(hits=0, misses=0, maxsize=256, currsize=0)
V0324 19:40:23.318154 92498 torch/fx/experimental/symbolic_shapes.py:166] lru_cache_stats _simplify_floor_div: CacheInfo(hits=0, misses=0, maxsize=None, currsize=0)
V0324 19:40:23.318225 92498 torch/fx/experimental/symbolic_shapes.py:166] lru_cache_stats _maybe_guard_rel: CacheInfo(hits=0, misses=0, maxsize=256, currsize=0)
V0324 19:40:23.318298 92498 torch/fx/experimental/symbolic_shapes.py:166] lru_cache_stats _find: CacheInfo(hits=0, misses=0, maxsize=None, currsize=0)
V0324 19:40:23.318367 92498 torch/fx/experimental/symbolic_shapes.py:166] lru_cache_stats has_hint: CacheInfo(hits=0, misses=0, maxsize=256, currsize=0)
V0324 19:40:23.318434 92498 torch/fx/experimental/symbolic_shapes.py:166] lru_cache_stats size_hint: CacheInfo(hits=0, misses=0, maxsize=256, currsize=0)
V0324 19:40:23.318506 92498 torch/fx/experimental/symbolic_shapes.py:166] lru_cache_stats simplify: CacheInfo(hits=0, misses=0, maxsize=None, currsize=0)
V0324 19:40:23.318575 92498 torch/fx/experimental/symbolic_shapes.py:166] lru_cache_stats _update_divisible: CacheInfo(hits=0, misses=0, maxsize=None, currsize=0)
V0324 19:40:23.318641 92498 torch/fx/experimental/symbolic_shapes.py:166] lru_cache_stats replace: CacheInfo(hits=0, misses=0, maxsize=None, currsize=0)
V0324 19:40:23.318706 92498 torch/fx/experimental/symbolic_shapes.py:166] lru_cache_stats _maybe_evaluate_static: CacheInfo(hits=0, misses=0, maxsize=None, currsize=0)
V0324 19:40:23.318772 92498 torch/fx/experimental/symbolic_shapes.py:166] lru_cache_stats get_implications: CacheInfo(hits=0, misses=0, maxsize=None, currsize=0)
V0324 19:40:23.318838 92498 torch/fx/experimental/symbolic_shapes.py:166] lru_cache_stats get_axioms: CacheInfo(hits=0, misses=0, maxsize=None, currsize=0)
V0324 19:40:23.318912 92498 torch/fx/experimental/symbolic_shapes.py:166] lru_cache_stats _maybe_evaluate_static_worker: CacheInfo(hits=0, misses=0, maxsize=None, currsize=0)
V0324 19:40:23.318977 92498 torch/fx/experimental/symbolic_shapes.py:166] lru_cache_stats safe_expand: CacheInfo(hits=0, misses=0, maxsize=256, currsize=0)
```
</details>
### Versions
PyTorch main branch
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
| true
|
2,944,631,630
|
[ued][f5-tts][dynamo] `torch.compile` changes state dict
|
anijain2305
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamo"
] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Applying torch.compile to a model changes the state dict, breaking the loading of existing state-dict checkpoints.
This issue is to figure out how to instruct users to avoid this problem.
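A commonly used workaround sketch, assuming the usual cause is the `_orig_mod.` key prefix added by the compiled-module wrapper (the issue itself does not confirm that is what users hit):
```python
import torch

model = torch.nn.Linear(4, 4)
compiled = torch.compile(model)

# The wrapper prefixes every state-dict key with "_orig_mod.", so strip the prefix
# (or save from the original module) to keep checkpoints loadable by uncompiled models.
state = {k.removeprefix("_orig_mod."): v for k, v in compiled.state_dict().items()}
torch.save(state, "checkpoint.pt")
model.load_state_dict(state)  # loads cleanly into the uncompiled module
```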
### Error logs
_No response_
### Versions
N/A
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
| true
|
2,944,599,187
|
DRAFT: Add TMA opt for concat function target hopper and blackwell arch
|
Mengran-nvidia
|
open
|
[
"triaged",
"open source",
"release notes: cuda"
] | 18
|
NONE
|
Optimize the torch.cat() function targeting the Hopper and Blackwell architectures by leveraging TMA.
TODO: add logic to support concatenation along different dims in the tma_fast version, and adjust some configurations slightly to achieve peak performance.
| true
|
2,944,567,485
|
[draft] Add support in Flex for non-contiguous NJT
|
ani300
|
open
|
[
"open source",
"module: nestedtensor",
"release notes: nested tensor"
] | 1
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149892
* #145778
Signed-off-by: Antoni Viros i Martin <aviros@ibm.com>
cc @cpuhrsch @jbschlosser @bhosmer @drisspg @soulitzer @davidberard98 @YuqingJ
| true
|
2,944,564,039
|
[export] refactor _Dim into Dim
|
pianpwk
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: onnx",
"fx",
"ciflow/inductor"
] | 9
|
CONTRIBUTOR
|
Summary: forward fix T218515233
Test Plan: test_export
Differential Revision: D71769231
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
2,944,561,841
|
Fix autotune pool shutdown
|
masnesral
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149890
* #149700
Summary: A couple follow-ups noted in review from https://github.com/pytorch/pytorch/pull/149700:
1. Make sure we correctly signal _all_ subprocesses to shut down, even in the case where some processes are currently benchmarking.
2. Change how the pool singleton is created (see the sketch below). That also allows us to fully initialize the object in the ctor and remove a bunch of asserts.
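An illustrative sketch of the described singleton pattern (not the Inductor code): create the pool lazily through a classmethod so the constructor can fully initialize every field.
```python
import threading
from concurrent.futures import ProcessPoolExecutor
from typing import Optional

class BenchmarkPool:
    _instance: Optional["BenchmarkPool"] = None
    _lock = threading.Lock()

    def __init__(self, workers: int) -> None:
        # Everything is initialized here, so later code needs no "is it set up yet?" asserts.
        self.executor = ProcessPoolExecutor(max_workers=workers)

    @classmethod
    def get(cls, workers: int = 4) -> "BenchmarkPool":
        with cls._lock:
            if cls._instance is None:
                cls._instance = cls(workers)
            return cls._instance

    def shutdown(self) -> None:
        # Signal all subprocesses to exit, even if some are mid-benchmark.
        self.executor.shutdown(wait=False, cancel_futures=True)
```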
Test Plan: existing unit tests
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,944,560,722
|
[windows] Linker receives wrong (non existent path) (solution path included)
|
loscrossos
|
closed
|
[
"module: windows",
"triaged",
"oncall: pt2"
] | 4
|
NONE
|
### 🐛 Describe the bug
@xuhancn @shunting314
This is a follow-up bug to https://github.com/pytorch/pytorch/issues/149310#issuecomment-2745707169
I just tested the change in https://github.com/pytorch/pytorch/commit/bc1b8730a45e659dca83ec83995c17d4eec9c869, since torch was built on nightly yesterday but torchaudio got built a day later: the original bug is solved, but a new (similar) one came up.
The fix in https://github.com/pytorch/pytorch/commit/bc1b8730a45e659dca83ec83995c17d4eec9c869 does fix the original symptom and the code advances further now, but the same problem is still there at some other point that I cannot pinpoint exactly.
As described in my original post, I had already fixed the first symptom in the same manner as https://github.com/pytorch/pytorch/commit/bc1b8730a45e659dca83ec83995c17d4eec9c869, but then got further errors down the path.
My observations: in the error stack the cl.exe command shows C:/Program Files/Python310/Include without quotes, but that might be just cosmetic, since the original error was that it could not find "python.h", which is in that folder, and cl obviously finds it now.
Now it cannot find the file 'python310.lib' (still, that folder should be in quotes).
The linker receives this option, which points to a libs folder inside the Windows virtual environment:
` /link /LIBPATH:c:/code/.env/Scripts/libs`
but that folder does not exist and **never**(!) exists under the "Scripts" folder.
The missing file **does** exist under
`C:\Program Files\Python310\libs`
As a temporary proof-of-concept fix, I hardcoded my directory `C:\Program Files\Python310\libs` in place of `c:/code/.env/Scripts/libs` and can confirm that after fixing that directory the library works as intended and the issue is fully solved; the code compiles fully. Of course, a proper fix should derive a properly identified library path (which seems to be the same issue as in the previous bug); a sketch of one way to derive it follows.
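A sketch of how the path could be derived (my assumption of a possible fix, not the actual PyTorch change): in a Windows virtual environment, `pythonXY.lib` lives under the base interpreter's `libs` folder, not under `Scripts`.
```python
import os
import sys
import sysconfig

def python_libs_dir() -> str:
    # Prefer the interpreter's reported LIBDIR if there is one; on Windows this is
    # usually unset, so fall back to <base_prefix>\libs, e.g. C:\Program Files\Python310\libs.
    libdir = sysconfig.get_config_var("LIBDIR")
    if libdir and os.path.isdir(libdir):
        return libdir
    return os.path.join(sys.base_prefix, "libs")

print(python_libs_dir())
```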
```
File "c:\code\.env\lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
File "C:\Code\test.py", line 279, in generate
logits = decode_one_token(input_ids, inference_params, cfg_scale, allow_cudagraphs=cg)
File "c:\code\.env\lib\site-packages\torch\_dynamo\eval_frame.py", line 663, in _fn
raise e.remove_dynamo_frames() from None # see TORCHDYNAMO_VERBOSE=1
File "c:\code\.env\lib\site-packages\torch\_dynamo\eval_frame.py", line 655, in _fn
return fn(*args, **kwargs)
File "c:\code\.env\lib\site-packages\torch\_dynamo\convert_frame.py", line 1453, in __call__
return self._torchdynamo_orig_callable(
File "c:\code\.env\lib\site-packages\torch\_dynamo\convert_frame.py", line 1234, in __call__
result = self._inner_convert(
File "c:\code\.env\lib\site-packages\torch\_dynamo\convert_frame.py", line 619, in __call__
return _compile(
File "c:\code\.env\lib\site-packages\torch\_dynamo\convert_frame.py", line 1080, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "c:\code\.env\lib\site-packages\torch\_utils_internal.py", line 97, in wrapper_function
return function(*args, **kwargs)
File "c:\code\.env\lib\site-packages\torch\_dynamo\convert_frame.py", line 782, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "c:\code\.env\lib\site-packages\torch\_dynamo\convert_frame.py", line 818, in _compile_inner
out_code = transform_code_object(code, transform)
File "c:\code\.env\lib\site-packages\torch\_dynamo\bytecode_transformation.py", line 1422, in transform_code_object
transformations(instructions, code_options)
File "c:\code\.env\lib\site-packages\torch\_dynamo\convert_frame.py", line 264, in _fn
return fn(*args, **kwargs)
File "c:\code\.env\lib\site-packages\torch\_dynamo\convert_frame.py", line 736, in transform
tracer.run()
File "c:\code\.env\lib\site-packages\torch\_dynamo\symbolic_convert.py", line 3502, in run
super().run()
File "c:\code\.env\lib\site-packages\torch\_dynamo\symbolic_convert.py", line 1337, in run
while self.step():
File "c:\code\.env\lib\site-packages\torch\_dynamo\symbolic_convert.py", line 1246, in step
self.dispatch_table[inst.opcode](self, inst)
File "c:\code\.env\lib\site-packages\torch\_dynamo\symbolic_convert.py", line 711, in inner
jump_graph_break(self, inst, value)
File "c:\code\.env\lib\site-packages\torch\_dynamo\symbolic_convert.py", line 613, in jump_graph_break
self.output.compile_subgraph(
File "c:\code\.env\lib\site-packages\torch\_dynamo\output_graph.py", line 1179, in compile_subgraph
self.compile_and_call_fx_graph(
File "c:\code\.env\lib\site-packages\torch\_dynamo\output_graph.py", line 1437, in compile_and_call_fx_graph
compiled_fn = self.call_user_compiler(gm)
File "c:\code\.env\lib\site-packages\torch\_dynamo\output_graph.py", line 1487, in call_user_compiler
return self._call_user_compiler(gm)
File "c:\code\.env\lib\site-packages\torch\_dynamo\output_graph.py", line 1519, in _call_user_compiler
compiled_fn = compiler_fn(gm, self.example_inputs())
File "c:\code\.env\lib\site-packages\torch\_dynamo\repro\after_dynamo.py", line 150, in __call__
compiled_gm = compiler_fn(gm, example_inputs)
File "c:\code\.env\lib\site-packages\torch\__init__.py", line 2357, in __call__
return compile_fx(model_, inputs_, config_patches=self.config)
File "c:\code\.env\lib\site-packages\torch\_inductor\compile_fx.py", line 2152, in compile_fx
raise e.remove_dynamo_frames() from None # see TORCHDYNAMO_VERBOSE=1
File "c:\code\.env\lib\site-packages\torch\_inductor\compile_fx.py", line 2140, in compile_fx
return aot_autograd(
File "c:\code\.env\lib\site-packages\torch\_dynamo\backends\common.py", line 101, in __call__
cg = aot_module_simplified(gm, example_inputs, **self.kwargs)
File "c:\code\.env\lib\site-packages\torch\_functorch\aot_autograd.py", line 1163, in aot_module_simplified
compiled_fn = AOTAutogradCache.load(
File "c:\code\.env\lib\site-packages\torch\_functorch\_aot_autograd\autograd_cache.py", line 775, in load
compiled_fn = dispatch_and_compile()
File "c:\code\.env\lib\site-packages\torch\_functorch\aot_autograd.py", line 1148, in dispatch_and_compile
compiled_fn, _ = create_aot_dispatcher_function(
File "c:\code\.env\lib\site-packages\torch\_functorch\aot_autograd.py", line 573, in create_aot_dispatcher_function
return _create_aot_dispatcher_function(
File "c:\code\.env\lib\site-packages\torch\_functorch\aot_autograd.py", line 823, in _create_aot_dispatcher_function
compiled_fn, fw_metadata = compiler_fn(
File "c:\code\.env\lib\site-packages\torch\_functorch\_aot_autograd\jit_compile_runtime_wrappers.py", line 219, in aot_dispatch_base
compiled_fw = compiler(fw_module, updated_flat_args)
File "c:\code\.env\lib\site-packages\torch\_functorch\aot_autograd.py", line 482, in __call__
return self.compiler_fn(gm, example_inputs)
File "c:\code\.env\lib\site-packages\torch\_inductor\compile_fx.py", line 1987, in fw_compiler_base
return inner_compile(
File "c:\code\.env\lib\site-packages\torch\_inductor\compile_fx.py", line 639, in compile_fx_inner
return wrap_compiler_debug(_compile_fx_inner, compiler_name="inductor")(
File "c:\code\.env\lib\site-packages\torch\_dynamo\repro\after_aot.py", line 124, in debug_wrapper
inner_compiled_fn = compiler_fn(gm, example_inputs)
File "c:\code\.env\lib\site-packages\torch\_inductor\compile_fx.py", line 771, in _compile_fx_inner
raise InductorError(e, currentframe()).with_traceback(
File "c:\code\.env\lib\site-packages\torch\_inductor\compile_fx.py", line 756, in _compile_fx_inner
mb_compiled_graph = fx_codegen_and_compile(
File "c:\code\.env\lib\site-packages\torch\_inductor\compile_fx.py", line 1338, in fx_codegen_and_compile
return scheme.codegen_and_compile(gm, example_inputs, inputs_to_check, graph_kwargs)
File "c:\code\.env\lib\site-packages\torch\_inductor\compile_fx.py", line 1226, in codegen_and_compile
compiled_module = graph.compile_to_module()
File "c:\code\.env\lib\site-packages\torch\_inductor\graph.py", line 2085, in compile_to_module
return self._compile_to_module()
File "c:\code\.env\lib\site-packages\torch\_inductor\graph.py", line 2132, in _compile_to_module
mod = PyCodeCache.load_by_key_path(
File "c:\code\.env\lib\site-packages\torch\_inductor\codecache.py", line 2865, in load_by_key_path
mod = _reload_python_module(key, path, set_sys_modules=in_toplevel)
File "c:\code\.env\lib\site-packages\torch\_inductor\runtime\compile_tasks.py", line 31, in _reload_python_module
exec(code, mod.__dict__, mod.__dict__)
File "C:\Users\user\AppData\Local\Temp\torchinductor_user\co\ccofzf4homesdglfshhvzoib3sk572c2qe3j7w47ljfc7da6uf22.py", line 31, in <module>
cpp_fused_eq_0 = async_compile.cpp_pybinding(['const int64_t*', 'bool*'], '''
File "c:\code\.env\lib\site-packages\torch\_inductor\async_compile.py", line 377, in cpp_pybinding
return CppPythonBindingsCodeCache.load_pybinding(argtypes, source_code)
File "c:\code\.env\lib\site-packages\torch\_inductor\codecache.py", line 2359, in load_pybinding
return cls.load_pybinding_async(*args, **kwargs)()
File "c:\code\.env\lib\site-packages\torch\_inductor\codecache.py", line 2351, in future
result = get_result()
File "c:\code\.env\lib\site-packages\torch\_inductor\codecache.py", line 2160, in load_fn
result = worker_fn()
File "c:\code\.env\lib\site-packages\torch\_inductor\codecache.py", line 2188, in _worker_compile_cpp
cpp_builder.build()
File "c:\code\.env\lib\site-packages\torch\_inductor\cpp_builder.py", line 1695, in build
run_compile_cmd(build_cmd, cwd=_build_tmp_dir)
File "c:\code\.env\lib\site-packages\torch\_inductor\cpp_builder.py", line 366, in run_compile_cmd
_run_compile_cmd(cmd_line, cwd, write_stdout_to)
File "c:\code\.env\lib\site-packages\torch\_inductor\cpp_builder.py", line 359, in _run_compile_cmd
raise exc.CppCompileError(cmd, output) from e
torch._inductor.exc.InductorError: CppCompileError: C++ compile error
Command:
cl /I C:/Program Files/Python310/Include /I c:/code/.env/lib/site-packages/torch/include /I c:/code/.env/lib/site-packages/torch/include/torch/csrc/api/include /D TORCH_INDUCTOR_CPP_WRAPPER /D STANDALONE_TORCH_HEADER /D C10_USING_CUSTOM_GENERATED_MACROS /DLL /MD /O2 /std:c++20 /wd4819 /wd4251 /wd4244 /wd4267 /wd4275 /wd4018 /wd4190 /wd4624 /wd4067 /wd4068 /EHsc /openmp /openmp:experimental C:/Users/user/AppData/Local/Temp/torchinductor_user/fg/cfgj7oz2j4knn5qsq6yipz6dktpi36ow5v7baghkjngcj4deiqc3.cpp /FeC:/Users/user/AppData/Local/Temp/torchinductor_user/fg/cfgj7oz2j4knn5qsq6yipz6dktpi36ow5v7baghkjngcj4deiqc3.pyd /LD /link /LIBPATH:c:/code/.env/Scripts/libs /LIBPATH:c:/code/.env/lib/site-packages/torch/lib torch.lib torch_cpu.lib torch_python.lib sleef.lib
Output:
Microsoft (R) C/C++ Optimizing Compiler Version 19.43.34809 for x86
Copyright (C) Microsoft Corporation. All rights reserved.
cl : Command line warning D9025 : overriding '/openmp' with '/openmp:experimental'
cfgj7oz2j4knn5qsq6yipz6dktpi36ow5v7baghkjngcj4deiqc3.cpp
Microsoft (R) Incremental Linker Version 14.43.34809.0
Copyright (C) Microsoft Corporation. All rights reserved.
/out:C:/Users/user/AppData/Local/Temp/torchinductor_user/fg/cfgj7oz2j4knn5qsq6yipz6dktpi36ow5v7baghkjngcj4deiqc3.pyd
/dll
/implib:C:/Users/user/AppData/Local/Temp/torchinductor_user/fg/cfgj7oz2j4knn5qsq6yipz6dktpi36ow5v7baghkjngcj4deiqc3.lib
/LIBPATH:c:/code/.env/Scripts/libs
/LIBPATH:c:/code/.env/lib/site-packages/torch/lib
torch.lib
torch_cpu.lib
torch_python.lib
sleef.lib
cfgj7oz2j4knn5qsq6yipz6dktpi36ow5v7baghkjngcj4deiqc3.obj
LINK : fatal error LNK1104: cannot open file 'python310.lib'
```
### Versions
tested in https://github.com/pytorch/pytorch/commit/bc1b8730a45e659dca83ec83995c17d4eec9c869
cc @peterjc123 @mszhanyi @skyline75489 @nbcsm @iremyux @Blackhex @chauhang @penguinwu @malfet @seemethere
| true
|
2,944,545,391
|
[Build] Remove pre-CXX11 ABI logic from build script
|
malfet
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: build",
"topic: bc breaking",
"topic: improvements"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149888
Only keep one in check_binary_symbols to make sure there are no pre-CXX11 ABI symbols in the library
| true
|
2,944,545,267
|
[CD] Check that nightly x86 binaries are build with gcc-11
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #149888
* __->__ #149887
Though they should have been built with gcc-14, per https://github.com/pypa/manylinux?tab=readme-ov-file#manylinux_2_28-almalinux-8-based
| true
|
2,944,490,870
|
TransformerDecoder produces identical outputs regardless of input
|
nicolacalzone
|
closed
|
[
"module: nn",
"triaged"
] | 1
|
NONE
|
### 🐛 Describe the bug
I'm building a decoder-only transformer using TransformerDecoder and TransformerDecoderLayer classes from the PyTorch library.
The model consistently produces the same output tensor regardless of the input, and all rows in the output are identical.
### My environment:
```
Python 3.10.12
torch 2.5.1
Ubuntu 22.04
```
### Minimal Reproduction Code
```python
class DecoderOnlyTransformer(pl.LightningModule):
def __init__(
self,
vocab_size,
max_seq_length,
d_model=512,
nhead=8,
num_decoder_layers=6,
dim_feedforward=2048,
dropout=0.1,
learning_rate=5e-4,
optimizer_type="adam"
):
super().__init__()
self.save_hyperparameters()
self.vocab_size = vocab_size
self.d_model = d_model
self.learning_rate = learning_rate
self.optimizer_type = optimizer_type
# Model components
self.embeddings = nn.Embedding(vocab_size, d_model)
self.pos_embeddings = PositionalEmbedding(d_model, max_seq_length)
self.single_decoder_layer = nn.TransformerDecoderLayer(d_model=d_model,
nhead=nhead,
dropout=dropout,
dim_feedforward=dim_feedforward,
activation="gelu")
self.stack_decoder_layers = nn.TransformerDecoder(self.single_decoder_layer, num_decoder_layers)
self.output_projection = nn.Linear(d_model, vocab_size)
def forward(self, tgt, tgt_mask=None, memory_mask=None,
tgt_key_padding_mask=None, memory_key_padding_mask=None) -> torch.Tensor:
tgt = self.embeddings(tgt)
tgt = self.pos_embeddings(tgt)
# Generate causal mask if not provided
if tgt_mask is None:
seq_len = tgt.size(0)
tgt_mask = nn.Transformer.generate_square_subsequent_mask(seq_len).to(tgt.device)
memory = torch.zeros(1, tgt.size(1), self.embeddings.embedding_dim).to(tgt.device)
output = self.stack_decoder_layers(
tgt,
memory,
tgt_mask=tgt_mask,
memory_mask=memory_mask,
tgt_key_padding_mask=tgt_key_padding_mask,
memory_key_padding_mask=memory_key_padding_mask
)
return output
```
For any input, the model produces an output tensor where:
- All positions in the sequence have identical values
- The output is always the same regardless of input
Output:
```python
tensor([[[ 0.3030, 0.2662, 0.7048, ..., 0.7949, -0.4710, -0.2264]],
[[ 0.3030, 0.2662, 0.7048, ..., 0.7949, -0.4710, -0.2264]],
[[ 0.3030, 0.2662, 0.7048, ..., 0.7949, -0.4710, -0.2264]],
...,
[[ 0.3030, 0.2662, 0.7048, ..., 0.7949, -0.4710, -0.2264]],
[[ 0.3030, 0.2662, 0.7048, ..., 0.7949, -0.4710, -0.2264]],
[[ 0.3030, 0.2662, 0.7048, ..., 0.7949, -0.4710, -0.2264]]],
device='cuda:0')
```
### Expected Behavior
The decoder should produce different outputs for different inputs, with variations across sequence positions.
### Additional Context
- I'm following the PyTorch documentation for the TransformerDecoder implementation
- The issue persists regardless of input sequence or mask configuration
- The PositionalEmbedding implementation is standard (sinusoidal)
### Questions
- Is there something wrong with how I'm initializing or using the TransformerDecoder?
- Could the zero-initialized memory be causing this behavior? (I set it like that because there is no encoder, and from the documentation I understand that memory is the encoder's output; see the quick diagnostic sketch below.)
- Are there known issues with the TransformerDecoder implementation?
Any help diagnosing this would be greatly appreciated!
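For the second question above, one quick diagnostic (a minimal sketch that reuses the module defined above, not a proposed fix) is to compare the decoder output for zero memory vs. random memory; if the output only varies once the memory is non-zero, the constant memory is implicated:
```python
import torch
import torch.nn as nn

# Diagnostic sketch only: `model` is an instance of DecoderOnlyTransformer from above.
model.eval()
with torch.no_grad():
    tgt = torch.randint(0, model.vocab_size, (32, 4))  # (seq_len, batch) token ids
    emb = model.pos_embeddings(model.embeddings(tgt))
    mask = nn.Transformer.generate_square_subsequent_mask(emb.size(0))
    zero_mem = torch.zeros(1, emb.size(1), model.d_model)
    rand_mem = torch.randn(1, emb.size(1), model.d_model)
    out_zero = model.stack_decoder_layers(emb, zero_mem, tgt_mask=mask)
    out_rand = model.stack_decoder_layers(emb, rand_mem, tgt_mask=mask)
    # If out_zero is (near-)constant across positions but out_rand is not, the zero memory is implicated.
    print(out_zero.std(dim=0).mean(), out_rand.std(dim=0).mean())
```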
### Versions
Python 3.10.12
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki
| true
|
2,944,487,821
|
Add smoke test to validate pypi env version vs torch complied and installed versions of nccl and cudnn
|
atalman
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Follow-up after the NCCL update to validate both the cuDNN and NCCL versions in the nightly and release pipelines.
Tested on a local dev machine; sample output below.
Success:
```
Found matching cudnn. Torch: 9.5.1 PyPI 9.5.1.17
Found matching nccl. Torch: 2.25.1 PyPI 2.25.1
```
Failure:
```
Traceback (most recent call last):
File "test1.py", line 29, in <module>
compare_pypi_to_torch_versions("nccl", find_pypi_package_version("nvidia-nccl"), torch_nccl_version)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ec2-user/test1.py", line 24, in compare_pypi_to_torch_versions
raise RuntimeError(
f"Wrong {package} version. Torch: {torch_version} PyPI: {pypi_version}"
)
RuntimeError: Wrong nccl version. Torch: 2.25.1 PyPI: 2.26.2
```
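For reference, a minimal sketch of the comparison logic, using the helper names that appear in the traceback above (the actual smoke test may differ in detail):
```python
from importlib import metadata

import torch


def find_pypi_package_version(prefix: str) -> str:
    # Return the installed version of the first pip package whose name starts with `prefix`.
    for dist in metadata.distributions():
        if (dist.metadata["Name"] or "").startswith(prefix):
            return dist.version
    raise RuntimeError(f"No installed package matching {prefix}")


def compare_pypi_to_torch_versions(package: str, pypi_version: str, torch_version: str) -> None:
    # Accept e.g. torch cudnn "9.5.1" vs. PyPI "9.5.1.17" via a prefix match.
    if not pypi_version.startswith(torch_version):
        raise RuntimeError(f"Wrong {package} version. Torch: {torch_version} PyPI: {pypi_version}")
    print(f"Found matching {package}. Torch: {torch_version} PyPI {pypi_version}")


torch_nccl_version = ".".join(map(str, torch.cuda.nccl.version()))
compare_pypi_to_torch_versions("nccl", find_pypi_package_version("nvidia-nccl"), torch_nccl_version)
```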
| true
|
2,944,482,017
|
Update SGD documentation to match implementation
|
dscamiss
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"release notes: optim"
] | 14
|
CONTRIBUTOR
|
Fixes #149476
This PR updates the pseudocode description of the SGD optimizer to better match the implementation.
Updated pseudocode:

| true
|
2,944,457,013
|
[ROCm] missing AT_CUDA_CHECK for cub and SoftMax
|
ethanwee1
|
closed
|
[
"module: rocm",
"open source",
"Merged",
"ciflow/trunk",
"release notes: cuda",
"topic: not user facing",
"ciflow/rocm"
] | 3
|
CONTRIBUTOR
|
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,944,442,719
|
[Bugfix] Add handling for buffer overrides
|
Lucaskabela
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 5
|
CONTRIBUTOR
|
Fixes #139167
This PR:
* Uses `named_buffers` to mark buffers as static
* Checks that `named_buffers` is of the expected type (callable returning an iterator) before trying to iterate over it; if not, we skip this pass
These changes fix the previous errors that caused dynamo to crash (as shown in the issue above).
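A minimal sketch of the defensive check (hypothetical helper; the actual dynamo pass differs in detail):
```python
import torch
import torch._dynamo


def maybe_mark_static_buffers(module: torch.nn.Module) -> None:
    # Sketch only: skip the pass entirely if `named_buffers` was overridden
    # with something that is not callable or does not return an iterable.
    named_buffers = getattr(module, "named_buffers", None)
    if not callable(named_buffers):
        return
    buffers = named_buffers()
    if not hasattr(buffers, "__iter__"):
        return
    for _name, buf in buffers:
        if isinstance(buf, torch.Tensor):
            torch._dynamo.mark_static_address(buf)
```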
### Unit Test
```
python test/dynamo/test_buffers_override.py
```
Results in:
```
.
----------------------------------------------------------------------
Ran 2 tests in 5.344s
OK
```
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,944,420,703
|
Dynamo `as_python_constant()` infinite recursion
|
StrongerXi
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamo"
] | 3
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Repro:
```
import torch
@torch.compile(fullgraph=True, backend='eager')
def f(x):
l = []
l.append(l)
return l, x + 1
print(f(torch.ones(5)))
```
The reason is pretty simple: our `as_python_constant` implementations never took cycles into consideration:
https://github.com/pytorch/pytorch/blob/1b08aaeafe93393a7bd34f91381ad40cb463bf8f/torch/_dynamo/variables/lists.py#L95-L96
And if user code returns the cyclic list, we'd reconstruct it in `PyCodegen` and end up calling `is_python_constant` which calls `as_python_constant`:
https://github.com/pytorch/pytorch/blob/1b08aaeafe93393a7bd34f91381ad40cb463bf8f/torch/_dynamo/codegen.py#L264-L265
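A fix would presumably need to thread a memo table through the recursion, roughly along these lines (a rough sketch only, not the actual dynamo code):
```python
def as_python_constant(self, memo=None):
    # Sketch: key the memo by id() so a cyclic list reuses the object being
    # built instead of recursing forever.
    if memo is None:
        memo = {}
    if id(self) in memo:
        return memo[id(self)]
    result = []
    memo[id(self)] = result
    result.extend(x.as_python_constant(memo) for x in self.items)
    return result
```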
### Error logs
```
Traceback (most recent call last):
File "/home/ryanguo99/pt/pytorch/torch/_dynamo/eval_frame.py", line 655, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/ryanguo99/pt/pytorch/torch/_dynamo/convert_frame.py", line 1453, in __call__
return self._torchdynamo_orig_callable(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ryanguo99/pt/pytorch/torch/_dynamo/convert_frame.py", line 619, in __call__
return _compile(
^^^^^^^^^
File "/home/ryanguo99/pt/pytorch/torch/_dynamo/convert_frame.py", line 1131, in _compile
raise InternalTorchDynamoError(
File "/home/ryanguo99/pt/pytorch/torch/_dynamo/convert_frame.py", line 1080, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ryanguo99/pt/pytorch/torch/_utils_internal.py", line 97, in wrapper_function
return function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ryanguo99/pt/pytorch/torch/_dynamo/convert_frame.py", line 782, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ryanguo99/pt/pytorch/torch/_dynamo/convert_frame.py", line 818, in _compile_inner
out_code = transform_code_object(code, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ryanguo99/pt/pytorch/torch/_dynamo/bytecode_transformation.py", line 1422, in transform_code_object
transformations(instructions, code_options)
File "/home/ryanguo99/pt/pytorch/torch/_dynamo/convert_frame.py", line 264, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/ryanguo99/pt/pytorch/torch/_dynamo/convert_frame.py", line 736, in transform
tracer.run()
File "/home/ryanguo99/pt/pytorch/torch/_dynamo/symbolic_convert.py", line 3502, in run
super().run()
File "/home/ryanguo99/pt/pytorch/torch/_dynamo/symbolic_convert.py", line 1337, in run
while self.step():
^^^^^^^^^^^
File "/home/ryanguo99/pt/pytorch/torch/_dynamo/symbolic_convert.py", line 1246, in step
self.dispatch_table[inst.opcode](self, inst)
File "/home/ryanguo99/pt/pytorch/torch/_dynamo/symbolic_convert.py", line 3703, in RETURN_VALUE
self._return(inst)
File "/home/ryanguo99/pt/pytorch/torch/_dynamo/symbolic_convert.py", line 3688, in _return
self.output.compile_subgraph(
File "/home/ryanguo99/pt/pytorch/torch/_dynamo/output_graph.py", line 1163, in compile_subgraph
self.codegen_suffix(tx, stack_values, pass1)
File "/home/ryanguo99/pt/pytorch/torch/_dynamo/output_graph.py", line 1236, in codegen_suffix
cg.restore_stack(stack_values, value_from_source=not tx.export)
File "/home/ryanguo99/pt/pytorch/torch/_dynamo/codegen.py", line 101, in restore_stack
self.foreach(stack_values)
File "/home/ryanguo99/pt/pytorch/torch/_dynamo/codegen.py", line 382, in foreach
self(i)
File "/home/ryanguo99/pt/pytorch/torch/_dynamo/codegen.py", line 264, in __call__
if value.is_python_constant() and is_safe_constant(value.as_python_constant()):
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ryanguo99/pt/pytorch/torch/_dynamo/variables/base.py", line 357, in is_python_constant
self.as_python_constant()
File "/home/ryanguo99/pt/pytorch/torch/_dynamo/variables/lists.py", line 96, in as_python_constant
return self.python_type()([x.as_python_constant() for x in self.items])
^^^^^^^^^^^^^^^^^^^^^^
File "/home/ryanguo99/pt/pytorch/torch/_dynamo/variables/lists.py", line 96, in as_python_constant
return self.python_type()([x.as_python_constant() for x in self.items])
^^^^^^^^^^^^^^^^^^^^^^
File "/home/ryanguo99/pt/pytorch/torch/_dynamo/variables/lists.py", line 96, in as_python_constant
return self.python_type()([x.as_python_constant() for x in self.items])
^^^^^^^^^^^^^^^^^^^^^^
[Previous line repeated 975 more times]
torch._dynamo.exc.InternalTorchDynamoError: RecursionError: maximum recursion depth exceeded
from user code:
File "/home/ryanguo99/pt/scratch/cycle.py", line 7, in f
return l, x + 1
Set TORCHDYNAMO_VERBOSE=1 for the internal stack trace (please do this especially if you're reporting a bug to PyTorch). For even more developer context, set TORCH_LOGS="+dynamo"
```
### Versions
Main 1b08aaeafe9, python 3.12
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
| true
|
2,944,416,235
|
from_blob does not recognize device
|
brccabral
|
closed
|
[
"module: cpp",
"triaged"
] | 1
|
NONE
|
### 🐛 Describe the bug
In my case this does not work
```cpp
std::array<int, 3> values={1,2,3};
auto ten = torch::from_blob(values.data(), {values.size()}, torch::kCUDA);
```
but this does
```cpp
auto ten2 = torch::from_blob(values.data(), {values.size()});
ten2 = ten2.to(torch::kCUDA);
```
Other issues I think are related: #23859, #71978, #49814
### Versions
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.2 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: 18.1.3 (1ubuntu1)
CMake version: version 3.28.3
Libc version: glibc-2.39
Python version: 3.12.3 (main, Feb 4 2025, 14:48:35) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-6.8.0-55-generic-x86_64-with-glibc2.39
Is CUDA available: True
CUDA runtime version: 12.0.140
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce GTX 1650
Nvidia driver version: 570.124.06
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 12
On-line CPU(s) list: 0-11
Vendor ID: GenuineIntel
Model name: Intel(R) Core(TM) i7-9750H CPU @ 2.60GHz
CPU family: 6
Model: 158
Thread(s) per core: 2
Core(s) per socket: 6
Socket(s): 1
Stepping: 10
CPU(s) scaling MHz: 20%
CPU max MHz: 4500.0000
CPU min MHz: 800.0000
BogoMIPS: 5199.98
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb pti ssbd ibrs ibpb stibp tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp vnmi md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 192 KiB (6 instances)
L1i cache: 192 KiB (6 instances)
L2 cache: 1.5 MiB (6 instances)
L3 cache: 12 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-11
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS; IBPB conditional; STIBP conditional; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Mitigation; Microcode
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] No relevant packages
[conda] Could not collect
cc @jbschlosser
| true
|
2,944,382,339
|
Inductor Pattern Matcher's register_replacement function only works with functional `search_fn`s
|
zou3519
|
open
|
[
"triaged",
"oncall: pt2",
"module: inductor"
] | 4
|
CONTRIBUTOR
|
It does an implicit DCE [here](https://github.com/pytorch/pytorch/blob/main/torch/_inductor/pattern_matcher.py#L2005), where it constructs a graph structure and only keeps nodes that are "reachable" from the outputs.
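A toy illustration of why this bites non-functional `search_fn`s (hypothetical example, not the inductor API itself): a mutation whose result is not reachable from the returned value is silently dropped from the traced pattern.
```python
import torch


def functional_search_fn(x, y):
    z = torch.add(x, y)      # reachable from the output -> kept by the implicit DCE
    return torch.relu(z)


def non_functional_search_fn(x, running_sum):
    out = torch.relu(x)
    running_sum.add_(out)    # side effect on an input; not reachable from `out`
    return out               # so the add_ node is dropped from the pattern
```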
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @aakhundov
| true
|
2,944,380,565
|
Fix atomic operation compatibility for ARMv8-A (Raspberry Pi 4) by adjusting compilation flags
|
pytorchbot
|
closed
|
[
"open source"
] | 1
|
COLLABORATOR
|
**Issue:**
* The ldaddal instruction is an AArch64 atomic operation available from ARMv8.1-A onwards.
* Raspberry Pi 4 (Cortex-A72) is ARMv8-A, which does not support ldaddal, leading to failures when running PyTorch built with march=armv8.2-a+sve
* This led to an issue when running PyTorch on ARMv8-A (Raspberry Pi 4), as unsupported atomic operations were generated.
**Fix:**
* Updated the build flags to explicitly use **-march=armv8-a+sve**, ensuring GCC and clang promote it correctly, resolving compatibility issues with ARMv8 while still working correctly for SVE as before.
* This ensures that PyTorch builds correctly for ARMv8-A platforms (e.g., Raspberry Pi 4) while still enabling SVE for supported hardware.
Test plan:
- Allocate `a1.4xlarge` on AWS
- Run following script using wheel produced by this PR
```python
import torch
def f(x):
return x.sin() + x.cos()
print(torch.__version__)
f_c = torch.jit.script(f)
```
- Observe no crash
```
$ python3 foo.py
2.7.0.dev20250313+cpu
```
- Observe crash with 2.6.0
```
$ python3 foo.py
2.6.0+cpu
Illegal instruction (core dumped)
```
Fixes #146792
cc @malfet @snadampal @milpuz01
| true
|
2,944,329,205
|
(Maybe unnecessary) FunctionCtx appears in dynamo graph in the presence of custom autograd functions
|
jamesjwu
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamo"
] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
FunctionCtx appears in dynamo graphs with custom autograd functions, but is immediately set to None (as far as I can tell, this happens every single time).
@zou3519 mentioned that this is not expected, and dynamo shouldn't be including these in the graph. Filing this issue to track this. It's unclear to me what effect this has on dynamo.
### Error logs
See unit tests
```
test/dynamo/test_aot_autograd_cache.py -k test_custom_autograd
```
Graph produced by Dynamo:
```
class GraphModule(torch.nn.Module):
def forward(self, L_a_: "f32[5][1]cuda:0"):
l_a_ = L_a_
# File: /data/users/jjwu/fbsource/buck-out/v2/gen/fbcode/2dec47c3d7463205/caffe2/test/dynamo/__test_aot_autograd_cache__/test_aot_autograd_cache#link-tree/caffe2/test/dynamo/test_aot_autograd_cache.py:424 in fn, code: return MyAutogradFunction.apply(a)
function_ctx = torch.autograd.function.FunctionCtx(); function_ctx = None
fwd_body_0 = self.fwd_body_0
bwd_body_0 = self.bwd_body_0
autograd_function_apply: "f32[5][1]cuda:0" = torch.ops.higher_order.autograd_function_apply(fwd_body_0, bwd_body_0, l_a_, args_tensor_mask = [True], non_differentiable_idx = []); fwd_body_0 = bwd_body_0 = l_a_ = None
return (autograd_function_apply,)
class fwd_body_0(torch.nn.Module):
def forward(self, ctx : torch.autograd.function.Function, x: "f32[5][1]cuda:0"):
# No stacktrace found for following nodes
_set_grad_enabled = torch._C._set_grad_enabled(False); _set_grad_enabled = None
# File: /data/users/jjwu/fbsource/buck-out/v2/gen/fbcode/2dec47c3d7463205/caffe2/test/dynamo/__test_aot_autograd_cache__/test_aot_autograd_cache#link-tree/caffe2/test/dynamo/test_aot_autograd_cache.py:413 in forward, code: y = x.sin()
y: "f32[5][1]cuda:0" = x.sin()
# File: /data/users/jjwu/fbsource/buck-out/v2/gen/fbcode/2dec47c3d7463205/caffe2/test/dynamo/__test_aot_autograd_cache__/test_aot_autograd_cache#link-tree/caffe2/test/dynamo/test_aot_autograd_cache.py:415 in forward, code: ctx.foo = x.cos()
cos: "f32[5][1]cuda:0" = x.cos(); x = None
# No stacktrace found for following nodes
_set_grad_enabled_1 = torch._C._set_grad_enabled(True); _set_grad_enabled_1 = None
return (y, [y, cos])
class bwd_body_0(torch.nn.Module):
def forward(self, ctx : torch.autograd.function.Function, grad_output: "f32[5][1]cuda:0", y: "f32[5][1]cuda:0", cos: "f32[5][1]cuda:0"):
# No stacktrace found for following nodes
_set_grad_enabled = torch._C._set_grad_enabled(False); _set_grad_enabled = None
# File: /data/users/jjwu/fbsource/buck-out/v2/gen/fbcode/2dec47c3d7463205/caffe2/test/dynamo/__test_aot_autograd_cache__/test_aot_autograd_cache#link-tree/caffe2/test/dynamo/test_aot_autograd_cache.py:421 in backward, code: return grad_output * result + ctx.foo * grad_output
mul: "f32[5][1]cuda:0" = grad_output * y; y = None
mul_1: "f32[5][1]cuda:0" = cos * grad_output; cos = grad_output = None
add: "f32[5][1]cuda:0" = mul + mul_1; mul = mul_1 = None
# No stacktrace found for following nodes
_set_grad_enabled_1 = torch._C._set_grad_enabled(True); _set_grad_enabled_1 = None
return add
```
tlparse: https://manifold.edge.x2p.facebook.net/v0/read/tree/logs/.tmp4K1lgR/index.html?bucketName=tlparse_reports&apiKey=tlparse_reports-key&withPayload=1&timeoutMsec=10000
### Versions
PyTorch version: 2.8.0a0+gite081e38
Is debug build: False
CUDA used to build PyTorch: 12.0
ROCM used to build PyTorch: N/A
OS: CentOS Stream 9 (x86_64)
GCC version: (GCC) 11.5.0 20240719 (Red Hat 11.5.0-5)
Clang version: Could not collect
CMake version: version 3.30.5
Libc version: glibc-2.34
Python version: 3.10.15 (main, Oct 3 2024, 07:27:34) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.4.3-0_fbk15_zion_2630_gf27365f948db-x86_64-with-glibc2.34
Is CUDA available: True
CUDA runtime version: 12.0.140
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA PG509-210
GPU 1: NVIDIA PG509-210
GPU 2: NVIDIA PG509-210
GPU 3: NVIDIA PG509-210
GPU 4: NVIDIA PG509-210
GPU 5: NVIDIA PG509-210
GPU 6: NVIDIA PG509-210
GPU 7: NVIDIA PG509-210
Nvidia driver version: 535.154.05
cuDNN version: Probably one of the following:
/usr/lib64/libcudnn.so.8.8.0
/usr/lib64/libcudnn.so.9.5.0
/usr/lib64/libcudnn_adv.so.9.5.0
/usr/lib64/libcudnn_adv_infer.so.8.8.0
/usr/lib64/libcudnn_adv_train.so.8.8.0
/usr/lib64/libcudnn_cnn.so.9.5.0
/usr/lib64/libcudnn_cnn_infer.so.8.8.0
/usr/lib64/libcudnn_cnn_train.so.8.8.0
/usr/lib64/libcudnn_engines_precompiled.so.9.5.0
/usr/lib64/libcudnn_engines_runtime_compiled.so.9.5.0
/usr/lib64/libcudnn_graph.so.9.5.0
/usr/lib64/libcudnn_heuristic.so.9.5.0
/usr/lib64/libcudnn_ops.so.9.5.0
/usr/lib64/libcudnn_ops_infer.so.8.8.0
/usr/lib64/libcudnn_ops_train.so.8.8.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: False
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 192
On-line CPU(s) list: 0-191
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8339HC CPU @ 1.80GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 4
Stepping: 11
Frequency boost: enabled
CPU(s) scaling MHz: 100%
CPU max MHz: 1801.0000
CPU min MHz: 800.0000
BogoMIPS: 3600.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local avx512_bf16 dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req vnmi pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 3 MiB (96 instances)
L1i cache: 3 MiB (96 instances)
L2 cache: 96 MiB (96 instances)
L3 cache: 132 MiB (4 instances)
NUMA node(s): 4
NUMA node0 CPU(s): 0-23,96-119
NUMA node1 CPU(s): 24-47,120-143
NUMA node2 CPU(s): 48-71,144-167
NUMA node3 CPU(s): 72-95,168-191
Vulnerability Gather data sampling: Vulnerable: No microcode
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Vulnerable: eIBRS with unprivileged eBPF
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] bert_pytorch==0.0.1a4
[pip3] flake8==6.1.0
[pip3] flake8-bugbear==23.3.23
[pip3] flake8-comprehensions==3.15.0
[pip3] flake8-executable==2.1.3
[pip3] flake8-logging-format==0.9.0
[pip3] flake8-pyi==23.3.1
[pip3] flake8-simplify==0.19.3
[pip3] functorch==1.14.0a0+b71aa0b
[pip3] mypy==1.14.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] onnx==1.17.0
[pip3] optree==0.13.0
[pip3] pytorch-labs-segment-anything-fast==0.2
[pip3] pytorch-triton==3.3.0+git96316ce5
[pip3] torch==2.8.0a0+gite081e38
[pip3] torch_geometric==2.4.0
[pip3] torchao==0.6.1
[pip3] torchaudio==2.6.0a0+2709b65
[pip3] torchdata==0.10.0a0+b255542
[pip3] torchmetrics==1.0.3
[pip3] torchmultimodal==0.1.0b0
[pip3] torchrec==1.1.0a0+d2ed744
[pip3] torchtext==0.17.0a0+1d4ce73
[pip3] torchvision==0.22.0a0+947722a
[conda] bert-pytorch 0.0.1a4 dev_0 <develop>
[conda] blas 1.0 mkl
[conda] functorch 1.14.0a0+b71aa0b pypi_0 pypi
[conda] magma-cuda116 2.6.1 1 pytorch
[conda] mkl 2023.1.0 h213fc3f_46344
[conda] mkl-include 2023.1.0 h06a4308_46344
[conda] mkl-service 2.4.0 py310h5eee18b_1
[conda] mkl_fft 1.3.10 py310h5eee18b_0
[conda] mkl_random 1.2.7 py310h1128e8f_0
[conda] numpy 1.26.4 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] optree 0.13.0 pypi_0 pypi
[conda] pytorch-labs-segment-anything-fast 0.2 pypi_0 pypi
[conda] pytorch-triton 3.3.0+git96316ce5 pypi_0 pypi
[conda] torch 2.8.0a0+gite081e38 dev_0 <develop>
[conda] torch-geometric 2.4.0 pypi_0 pypi
[conda] torchao 0.6.1 pypi_0 pypi
[conda] torchaudio 2.6.0a0+2709b65 dev_0 <develop>
[conda] torchdata 0.10.0a0+b255542 pypi_0 pypi
[conda] torchfix 0.4.0 pypi_0 pypi
[conda] torchmetrics 1.0.3 pypi_0 pypi
[conda] torchmultimodal 0.1.0b0 pypi_0 pypi
[conda] torchrec 1.1.0a0+d2ed744 dev_0 <develop>
[conda] torchtext 0.17.0a0+1d4ce73 dev_0 <develop>
[conda] torchvision 0.22.0a0+947722a dev_0 <develop>
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
| true
|
2,944,326,712
|
[Async TP] Activations not cleared after backward when reduce_scatter_tensor saved for backward by per op SAC
|
danielvegamyhre
|
closed
|
[
"oncall: distributed",
"module: activation checkpointing",
"module: autograd"
] | 8
|
CONTRIBUTOR
|
### 🐛 Describe the bug
## Context
This is a follow up to the discussion here: https://github.com/pytorch/torchtitan/pull/965#issuecomment-2744476861
## Summary
After using the partitioner change which removes saved collective results which are not actually used for backward (https://github.com/pytorch/pytorch/pull/149652) and then adding back `reduce_scatter_tensor` to the [list of ops to save](https://github.com/pytorch/torchtitan/blob/f3943ddf7a9d5584a2bd3fdf9be3f1816a85bd7f/torchtitan/models/llama/parallelize_llama.py#L226) in per op SAC in torchtitan, I confirmed it works (i.e., doesn't crash) - this is awesome!
However, there is now a new problem to grapple with.
If we add back `reduce_scatter_tensor` to the torchtitan save list for per op SAC, the reduce scatter nodes in the graph now have 2 users instead of 1.
The async TP pattern matcher for finding subgraphs to fuse into fused_matmul_reduce_scatter nodes only matches when reduce_scatter has only 1 user (CallFunction defaults to _users=1): https://github.com/pytorch/pytorch/blob/1e159db57c611b98a531341927b2d01f39383f7a/torch/_inductor/fx_passes/micro_pipeline_tp.py#L225-L230
This means that the reduce_scatters saved for backward are not fused, because those nodes now have 2 users. Since the pattern doesn't match, no fusion occurs for those nodes.
To solve this, I tried adding some additional patterns to match single user and multi user reduce scatters: https://github.com/pytorch/pytorch/pull/149875
The fusion does indeed occur now, but there is something occurring that looks almost like a memory leak - memory usage increases every step until OOM around step 20 (see logs: https://www.internalfb.com/phabricator/paste/view/P1765166464)
To me this indicates something like activations never getting cleared after backward, so I looked at the memory snapshot and this is indeed the case: I can see tensors allocated during the forward passes that are never freed through the rest of the training run:
<img width="1451" alt="Image" src="https://github.com/user-attachments/assets/3b5c7131-fee4-4a7c-9635-22603cecf912" />
### Versions
Pytorch: `scatter-dim` branch (https://github.com/pytorch/pytorch/pull/149247) with @bdhirsh's PR patched in https://github.com/pytorch/pytorch/pull/149652
torchtitan: main@HEAD, with one change - saving reduce scatter tensor in per op SAC.
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @soulitzer @ezyang @albanD @gqchen @pearu @nikitaved @Varal7 @xmfan
| true
|
2,944,321,405
|
[Async TP] Fuse matmul-reduce-scatters when reduce scatters have multiple users, and save fused node for backward instead of reduce_scatter node
|
danielvegamyhre
|
closed
|
[
"oncall: distributed",
"release notes: distributed (pipeline)",
"module: inductor",
"ciflow/inductor"
] | 1
|
CONTRIBUTOR
|
Fixes #149876
## Stack
- [previous PR in stack] https://github.com/pytorch/pytorch/pull/149247
## TL;DR
This PR implements support in async TP for saving the reduce-scatter result for backward, which previously would break the torchtitan AC policies: no AC, per op SAC, and per layer SAC.
## Context
In torchtitan's LLama3 per op SAC policy, we want to save the output of `reduce_scatter` ops for backward, which is useful for TP. The reduce_scatter op is also saved for No AC (since all activations are saved) and per layer SAC (since we save the activations for N full layers, which do contain reduce-scatters for TP).
However, doing this causes incompatibility with Async TP for the AC policies above, for 2 reasons:
1) The graph pattern matching specifically only matches on reduce scatter nodes with 1 user, but reduce_scatter nodes saved for backwards will have 2 users (the 2nd one being the return/output node, which saves it for backward).
2) The subgraph replacement logic which replaces the users of the `wait_tensor` after the reduce-scatter with the new fused node has no mechanism to save the fused_node for backward instead of the reduce-scatter node. This means we cannot directly replace the subgraph, since we can't delete nodes which still have users (in this case, the output node is still using the reduce-scatter node).
To fix this, we do 2 things:
1) Add additional pattern matching logic to also match reduce-scatter nodes with 2 users, so we also perform fusion when reduce-scatter is saved for backward.
2) When replacing the subgraph with the fused node, detect if the reduce-scatter was saved for backward, and if so, save the result of the fused node for backward instead. This enables us to properly erase the subgraph and prevent the memory leak which occurred in #149876
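Roughly, the mechanism in (2) amounts to redirecting the graph output from the reduce-scatter node to the fused node before erasing the subgraph; a simplified FX-level sketch (the real pass in micro_pipeline_tp.py is more involved):
```python
import torch.fx as fx


def save_fused_node_for_backward(graph: fx.Graph, reduce_scatter_node: fx.Node, fused_node: fx.Node) -> None:
    # Sketch only: if the reduce-scatter result is among the graph outputs
    # (i.e. saved for backward), save the fused node's result instead.
    output_node = next(n for n in graph.nodes if n.op == "output")
    if reduce_scatter_node in output_node.args[0]:
        output_node.replace_input_with(reduce_scatter_node, fused_node)
    # With no remaining users, the reduce-scatter node can now be erased.
    if len(reduce_scatter_node.users) == 0:
        graph.erase_node(reduce_scatter_node)
```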
## Other changes
- Continue to throw an error if we don't find any candidate all-gathers or reduce-scatters for fusion (since TP should have both) but DON'T throw an error if we don't fuse any matmul-reduce-scatters. This is because I've found there are actually valid graphs where we do fuse reduce scatters in the forward graph but not the backward graph (in the backward pass there are reduce-scatters but the producer op is an "add" not a mm/scaled_mm).
## Test plan
1. All unit tests are passing
2. Visualized the graphs and verified the fusion is occurring properly.
3. Verified via manual torchtitan runs there is no memory leak / OOM occurring anymore.
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,944,294,045
|
ci/docker: use NCCL 2.26.2-1
|
pytorchbot
|
closed
|
[
"open source",
"topic: not user facing"
] | 1
|
COLLABORATOR
|
Related to #149153
This updates some build scripts to hopefully fix the nightly builds which are somehow building against nccl 2.25.1 and using 2.26.2 from pip.
Test plan:
After merging, rerun the nightly Linux jobs and validate that the NCCL version matches.
| true
|
2,944,291,943
|
add bobren and laithsakka as ds owners
|
bobrenjc93
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149873
| true
|
2,944,252,269
|
Do all lazy imports for torch.compile in one place?
|
zou3519
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamo"
] | 2
|
CONTRIBUTOR
|
When benchmarking torch.compile with warm start, I noticed 2s of time in the backend before pre-grad passes were called. Upon further investigation I discovered this is just the time of lazy imports.
Lazy imports can distort profiles and hide problems, especially when torch.compile behavior changes on the first iteration vs next iterations.
Strawman: put all of the lazy imports for torch.compile into one function (named "lazy_imports"), call it from somewhere (maybe on the first torch.compile call...), and ensure that it shows up on profiles, aptly named.
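A strawman sketch of what that could look like (hypothetical function and module choices, not an existing torch API):
```python
def _torch_compile_lazy_imports() -> None:
    # Hypothetical: perform the imports torch.compile would otherwise do lazily,
    # so their cost shows up under one clearly named profiler frame.
    import torch._functorch.aot_autograd  # noqa: F401
    import torch._inductor.compile_fx  # noqa: F401
```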
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
| true
|
2,944,248,714
|
Add release branch push triggers to inductor-rocm-mi300.yml
|
pytorchbot
|
closed
|
[
"module: rocm",
"open source",
"topic: not user facing",
"ciflow/rocm"
] | 1
|
COLLABORATOR
|
In a similar vein to https://github.com/pytorch/pytorch/pull/149517
When we added the rocm-mi300.yml earlier this year, we had lower capacity and we were just pipecleaning the workflow, so we set the trigger to only respond to pushes to the main branch. But now we have more stability as well as capacity, and we would really like to ensure that the release branch is being tested on MI300s as well.
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,944,217,919
|
Use variadic length tuple for `torch.masked.DimOrDims`
|
ringohoffman
|
closed
|
[
"open source",
"Merged",
"module: masked operators",
"ciflow/trunk",
"release notes: python_frontend"
] | 14
|
CONTRIBUTOR
|
`tuple[int]` means only a tuple of length 1, which is not what was intended.
```python
loss = torch.masked.mean(loss, mask=mask, dim=(-1, -2)) # Argument of type "tuple[Literal[-1], Literal[-2]]" cannot be assigned to parameter "dim" of type "DimOrDims"
```
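For reference, a variadic-length tuple annotation such as `tuple[int, ...]` accepts tuples of any length; a minimal sketch of the intended alias (the exact definition in `torch.masked` may differ):
```python
from typing import Optional, Union

# `tuple[int, ...]` matches tuples of any length, unlike `tuple[int]` (length exactly 1).
DimOrDims = Optional[Union[int, tuple[int, ...], list[int]]]
```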
| true
|
2,944,215,050
|
ProcessGroupGloo: support reduce_scatter + update support chart
|
d4l3k
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"release notes: distributed (c10d)"
] | 6
|
MEMBER
|
This adds a `reduce_scatter` implementation for ProcessGroupGloo. This is a pretty naive implementation as it does 1 allreduce per rank, but it may be useful for testing in FSDP etc. There was an existing implementation of reduce_scatter_tensor/reduce_scatter_tensor_coalesced that is very similar but requires a fixed tensor size per rank.
If users find these functions to be too slow we can address them as issues arise.
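Conceptually, the naive approach is equivalent to the following Python-level sketch (the actual implementation lives in C++ inside ProcessGroupGloo and differs in detail):
```python
import torch
import torch.distributed as dist


def naive_reduce_scatter(output: torch.Tensor, input_list: list[torch.Tensor], group=None) -> None:
    # One allreduce per rank; each rank keeps only its own reduced chunk.
    rank = dist.get_rank(group)
    for i, chunk in enumerate(input_list):
        reduced = chunk.clone()
        dist.all_reduce(reduced, group=group)
        if i == rank:
            output.copy_(reduced)
```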
Gloo now supports all major distributed operations. Quite a few of these were added by @rohan-varma and @yifuwang but they didn't update the support chart. We also have `CUDAWork` variants of most operations so those were also added to the chart.
Test plan:
```
pytest -v test/distributed/test_c10d_gloo.py -k reduce_scatter
```
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @c-p-i-o
| true
|
2,944,201,377
|
[ROCm] fix uninitialized warning in BFloat16.h
|
ethanwee1
|
closed
|
[
"module: rocm",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/rocm"
] | 3
|
CONTRIBUTOR
|
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,944,195,319
|
Fix cusparseLt.so preload without nvidia directory
|
keith
|
closed
|
[
"triaged",
"open source",
"release notes: build",
"topic: bug fixes"
] | 4
|
NONE
|
Since 2b241a8206843f43f0568b7b65473ebb593c4740, the `nvidia`
subdirectory existing is not enough to skip the rest of this logic since
other paths are now considered below.
| true
|
2,944,190,402
|
[MPS] tril op not handling infs correctly
|
Isalia20
|
closed
|
[
"open source",
"Merged",
"topic: bug fixes",
"module: mps",
"release notes: mps",
"ciflow/mps"
] | 13
|
COLLABORATOR
|
Fixes #149813
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen
| true
|
2,944,069,497
|
Allow rebuild of triton on workflow_dispatch
|
atalman
|
closed
|
[
"Merged",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Allows rebuilding triton from main.
The latest triton build failed: https://github.com/pytorch/pytorch/actions/runs/13984299781/job/39298288914
The PR that caused the failure was reverted: https://github.com/pytorch/pytorch/pull/148419
We need to rebuild triton now.
| true
|
2,944,045,757
|
[ONNX] Clean up the diagnostics module
|
justinchuby
|
closed
|
[
"module: onnx",
"open source",
"Merged",
"Reverted",
"ciflow/trunk",
"release notes: onnx",
"topic: not user facing",
"skip-pr-sanity-checks",
"suppress-bc-linter",
"ci-no-td"
] | 11
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149864
Remove the diagnostics/SARIF module from the ONNX exporter because it is obsolete and unused.
cc @albanD
| true
|
2,944,030,253
|
cd: Restore windows release builds for libtorch
|
seemethere
|
closed
|
[
"Merged",
"ciflow/binaries",
"ciflow/trunk",
"topic: not user facing"
] | 5
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149863
These were accidentally deleted in the refactor of DEVTOOLSET +
cxx11abi.
This happened because the `build_environment` variable wasn't aware of the `build_variant` for libtorch and subsequently overwrote the original file twice, leaving the last written as the actual workflow (which in this case was the debug builds).
One thing this has made me curious about is whether we actually need `debug` builds for Windows at all. We don't release them for Linux, and I'd bet that they have low download numbers anyway, so maybe it makes sense to cut them.
Adds a build_variant parameter to the dataclass so that we can extend
these easily in the future if we want.
Signed-off-by: Eli Uriegas <eliuriegas@meta.com>
| true
|
2,944,027,080
|
[Inductor UT][Break XPU] Apply CUDA tolerances changes on XPU that introduced by #144579.
|
etaf
|
closed
|
[
"open source",
"Merged",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 4
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150830
* __->__ #149862
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,943,906,210
|
Configure `cuda.cmake` to ensure consistent behavior downstream
|
jeongseok-meta
|
open
|
[] | 11
|
CONTRIBUTOR
|
The current implementation of `cuda.cmake` relies on the option variable `USE_SYSTEM_NVTX`:
https://github.com/pytorch/pytorch/blob/9bae904cb47b2d896f4653f751f0526379823606/cmake/public/cuda.cmake#L173-L177
which is only set during the PyTorch build process:
https://github.com/pytorch/pytorch/blob/9bae904cb47b2d896f4653f751f0526379823606/CMakeLists.txt#L466
This can cause issues when the file is installed as part of a package, as downstream users have no control over this variable:
https://github.com/pytorch/pytorch/blob/9bae904cb47b2d896f4653f751f0526379823606/CMakeLists.txt#L1300-L1311
When `find_package(Torch CONFIG)` is called downstream, `cuda.cmake` is transitively included, but the value of `USE_SYSTEM_NVTX` is not propagated. This can lead to inconsistent behavior, as reported in https://github.com/pytorch/pytorch/issues/139108 and https://github.com/pytorch/pytorch/issues/147220. In one case, a package manager had to apply a patch to hardcode `USE_SYSTEM_NVTX=TRUE` to make it work with system nvtx3: https://github.com/conda-forge/pytorch-cpu-feedstock/pull/377
To address this issue, I propose that we either decouple `cuda.cmake` from option variables defined during the PyTorch build or configure the file with the chosen option when building PyTorch. This PR presents a minimal solution by configuring `cuda.cmake` to ensure consistent behavior downstream.
Alternative solutions are welcome!
| true
|
2,943,748,727
|
[MTIA] [Triton] Set codename of MTIA device in triton heuristics
|
PatriceVignola
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 9
|
CONTRIBUTOR
|
Summary: Triton-MTIA expects the codename of the device as the arch when querying the module map, not the compute capability. This diff gets rid of the following error: `No libdevice is provided for arch (0, 0)`
Test Plan: CI
Reviewed By: Myrthan
Differential Revision: D70072095
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,943,674,859
|
DISABLED test_binary_op_with_scalar_self_support__foreach_pow_is_fastpath_True_cuda_int8 (__main__.TestForeachCUDA)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"skipped",
"module: mta"
] | 5
|
NONE
|
Platforms: linux, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_binary_op_with_scalar_self_support__foreach_pow_is_fastpath_True_cuda_int8&suite=TestForeachCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/39281885935).
Over the past 3 hours, it has been determined flaky in 4 workflow(s) with 8 failures and 4 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_binary_op_with_scalar_self_support__foreach_pow_is_fastpath_True_cuda_int8`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1159, in test_wrapper
return test(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/mock.py", line 1833, in _inner
return f(*args, **kw)
File "/var/lib/jenkins/workspace/test/test_foreach.py", line 327, in test_binary_op_with_scalar_self_support
self._binary_test(
File "/var/lib/jenkins/workspace/test/test_foreach.py", line 263, in _binary_test
actual = op(inputs, self.is_cuda, is_fastpath)
File "/var/lib/jenkins/workspace/test/test_foreach.py", line 90, in __call__
assert mta_called == (expect_fastpath and (not zero_size)), (
AssertionError: mta_called=False, expect_fastpath=True, zero_size=False, self.func.__name__='_foreach_pow', keys=('aten::_foreach_pow', 'Unrecognized', 'aten::empty_strided', 'cudaLaunchKernel', 'Lazy Function Loading', 'cudaDeviceSynchronize')
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3153, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3153, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3153, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 454, in instantiated_test
result = test(self, **param_kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1612, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1171, in test_wrapper
raise e_tracked from e
Exception: Caused by sample input at index 1: SampleInput(input=TensorList[Tensor[size=(20, 20), device="cuda:0", dtype=torch.int8], Tensor[size=(19, 19), device="cuda:0", dtype=torch.int8], Tensor[size=(18, 18), device="cuda:0", dtype=torch.int8], Tensor[size=(17, 17), device="cuda:0", dtype=torch.int8], Tensor[size=(16, 16), device="cuda:0", dtype=torch.int8], Tensor[size=(15, 15), device="cuda:0", dtype=torch.int8], Tensor[size=(14, 14), device="cuda:0", dtype=torch.int8], Tensor[size=(13, 13), device="cuda:0", dtype=torch.int8], Tensor[size=(12, 12), device="cuda:0", dtype=torch.int8], Tensor[size=(11, 11), device="cuda:0", dtype=torch.int8], Tensor[size=(10, 10), device="cuda:0", dtype=torch.int8], Tensor[size=(9, 9), device="cuda:0", dtype=torch.int8], Tensor[size=(8, 8), device="cuda:0", dtype=torch.int8], Tensor[size=(7, 7), device="cuda:0", dtype=torch.int8], Tensor[size=(6, 6), device="cuda:0", dtype=torch.int8], Tensor[size=(5, 5), device="cuda:0", dtype=torch.int8], Tensor[size=(4, 4), device="cuda:0", dtype=torch.int8], Tensor[size=(3, 3), device="cuda:0", dtype=torch.int8], Tensor[size=(2, 2), device="cuda:0", dtype=torch.int8], Tensor[size=(1, 1), device="cuda:0", dtype=torch.int8]], args=(10), kwargs={}, broadcasts_input=False, name='')
To execute this test, run the following from the base repo dir:
PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/test_foreach.py TestForeachCUDA.test_binary_op_with_scalar_self_support__foreach_pow_is_fastpath_True_cuda_int8
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_foreach.py`
cc @clee2000 @crcrpar @mcarilli @janeyx99
| true
|
2,943,674,718
|
DISABLED test_binary_op_with_scalar_self_support__foreach_pow_is_fastpath_True_cuda_uint8 (__main__.TestForeachCUDA)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"skipped",
"module: mta"
] | 5
|
NONE
|
Platforms: linux, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_binary_op_with_scalar_self_support__foreach_pow_is_fastpath_True_cuda_uint8&suite=TestForeachCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/39280293295).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 6 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_binary_op_with_scalar_self_support__foreach_pow_is_fastpath_True_cuda_uint8`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `test_foreach.py`
cc @clee2000 @crcrpar @mcarilli @janeyx99
| true
|
2,943,589,643
|
SDPA (`EFFICIENT_ATTENTION`) slower than torch.compile decomposition on `tf32`
|
abdulfatir
|
open
|
[
"triaged",
"oncall: pt2",
"module: sdpa"
] | 18
|
NONE
|
### 🐛 Describe the bug
**EDIT**: The core issue appears to be the use of `tf32`. Please see my comment: https://github.com/pytorch/pytorch/issues/149857#issuecomment-2753969061
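For reference, the standard tf32 toggles involved in this kind of comparison (whether and how they explain the gap is the subject of the linked comment):
```python
import torch

# TensorFloat-32 settings for matmuls and cuDNN; on A100 these affect matmul and
# attention throughput and can change SDPA-vs-manual comparisons.
torch.backends.cuda.matmul.allow_tf32 = True
torch.backends.cudnn.allow_tf32 = True
```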
I am running into an odd issue where SDPA efficient attention results in slower end-to-end training times compared to a compiled manual attention implementation. For the dummy MWE below, I am observing the following on a single A100 GPU.
| Attention | Est. Runtime (tqdm) | Memory Usage |
|--------|--------|--------|
| SDPA | 18h45m | 8655 MB |
| Manual | 17h48m | 16927 MB |
I see memory improvements with SDPA (expected) but the runtime becomes worse (unexpected). Note that the runtime regression in my actual codebase is much more dramatic.
<details>
<summary>Click to view code</summary>
```py
import torch
import torch.nn as nn
from einops import rearrange
from torch.nn.attention import SDPBackend, sdpa_kernel
from torch.nn.functional import scaled_dot_product_attention
from tqdm.auto import tqdm
class Attention(nn.Module):
def __init__(self, use_sdpa_attn: bool = False, dropout: float = 0.0):
super().__init__()
self.d_model = 512
self.key_value_proj_dim = 64
self.n_heads = 8
self.dropout = dropout
self.inner_dim = self.n_heads * self.key_value_proj_dim
self.use_sdpa_attn = use_sdpa_attn
self.q = nn.Linear(self.d_model, self.inner_dim, bias=False)
self.k = nn.Linear(self.d_model, self.inner_dim, bias=False)
self.v = nn.Linear(self.d_model, self.inner_dim, bias=False)
self.o = nn.Linear(self.inner_dim, self.d_model, bias=False)
def forward(
self,
hidden_states: torch.Tensor,
mask: torch.Tensor,
encoder_states: torch.Tensor | None = None,
):
batch_size = hidden_states.shape[0]
if encoder_states is None:
# Self Attention
query_states = self.q(hidden_states)
key_states = self.k(hidden_states)
value_states = self.v(hidden_states)
else:
# Cross Attention
query_states = self.q(hidden_states)
key_states = self.k(encoder_states)
value_states = self.v(encoder_states)
query_states = query_states.view(batch_size, -1, self.n_heads, self.key_value_proj_dim).transpose(1, 2)
key_states = key_states.view(batch_size, -1, self.n_heads, self.key_value_proj_dim).transpose(1, 2)
value_states = value_states.view(batch_size, -1, self.n_heads, self.key_value_proj_dim).transpose(1, 2)
if self.use_sdpa_attn:
with sdpa_kernel(backends=[SDPBackend.EFFICIENT_ATTENTION]):
attn_output = scaled_dot_product_attention(
query=query_states,
key=key_states,
value=value_states,
attn_mask=mask,
dropout_p=(self.dropout if self.training else 0.0),
is_causal=False,
scale=1.0,
)
else:
scores = torch.matmul(query_states, key_states.transpose(3, 2))
scores += mask
attn_weights = nn.functional.softmax(scores, dim=-1)
attn_weights = nn.functional.dropout(attn_weights, p=self.dropout, training=self.training)
attn_output = torch.matmul(attn_weights, value_states)
attn_output = attn_output.transpose(1, 2).contiguous().view(batch_size, -1, self.inner_dim)
attn_output = self.o(attn_output)
return attn_output
class SourceCrossAttention(Attention):
def forward(self, hidden_states, source_mask, encoder_states: torch.Tensor):
hidden_states = rearrange(hidden_states, "batch 1 d -> 1 batch d")
encoder_states = rearrange(encoder_states, "batch seq d -> 1 (batch seq) d")
hidden_states = super().forward(hidden_states=hidden_states, mask=source_mask, encoder_states=encoder_states)
return rearrange(hidden_states, "1 batch d -> batch 1 d")
class Encoder(nn.Module):
def __init__(self, use_sdpa_attn: bool = False, num_layers: int = 6):
super().__init__()
self.sa_layers = nn.ModuleList([Attention(use_sdpa_attn=use_sdpa_attn) for _ in range(num_layers)])
def forward(self, hidden_states: torch.Tensor, mask: torch.Tensor):
for layer in self.sa_layers:
hidden_states = hidden_states + layer(hidden_states, mask=mask)
return hidden_states
class Decoder(nn.Module):
def __init__(self, use_sdpa_attn: bool = False, num_layers: int = 6):
super().__init__()
self.num_layers = num_layers
self.sa_layers = nn.ModuleList([Attention(use_sdpa_attn=use_sdpa_attn) for _ in range(num_layers)])
self.ca_layers = nn.ModuleList([Attention(use_sdpa_attn=use_sdpa_attn) for _ in range(num_layers)])
self.src_ca_layers = nn.ModuleList(
[SourceCrossAttention(use_sdpa_attn=use_sdpa_attn) for _ in range(num_layers)]
)
def forward(
self,
hidden_states: torch.Tensor,
mask: torch.Tensor,
encoder_states: torch.Tensor,
encoder_mask: torch.Tensor,
source_mask: torch.Tensor,
):
for index in range(self.num_layers):
# Self Attention
hidden_states = hidden_states + self.sa_layers[index](hidden_states, mask=mask)
# Cross Attention
hidden_states = hidden_states + self.ca_layers[index](
hidden_states, mask=encoder_mask, encoder_states=encoder_states
)
# Source Cross Attention
hidden_states = hidden_states + self.src_ca_layers[index](
hidden_states, source_mask=source_mask, encoder_states=encoder_states
)
return hidden_states
class EncoderDecoderModel(nn.Module):
def __init__(self, use_sdpa_attn: bool = False):
super().__init__()
self.use_sdpa_attn = use_sdpa_attn
self.encoder = Encoder(use_sdpa_attn=use_sdpa_attn)
self.decoder = Decoder(use_sdpa_attn=use_sdpa_attn)
def forward(
self,
hidden_states: torch.Tensor,
mask: torch.BoolTensor,
decoder_states: torch.Tensor,
source_ids: torch.LongTensor,
):
# hidden_states: (batch_size, seq_len, d_model)
# mask: (batch_size, seq_len)
# decoder_states: (batch_size, 1, d_model)
# source_ids: (batch_size,)
batch_size, seq_len = hidden_states.shape[:2]
encoder_sa_mask = torch.where(mask[:, None, None, :], 0.0, float("-inf"))
decoder_sa_mask = torch.zeros(decoder_states.shape[:-1], device=decoder_states.device)[:, None, None, :]
decoder_ca_mask = torch.where(mask[:, None, None, :], 0.0, float("-inf"))
# Construct source mask from ids
source_mask = source_ids[:, None] == source_ids[None, :]
source_mask = torch.einsum("qb,bt->qbt", source_mask, mask)
source_mask = torch.where(rearrange(source_mask, "q b t -> 1 1 q (b t)"), 0.0, float("-inf"))
encoder_states = self.encoder(hidden_states, mask=encoder_sa_mask)
decoder_states = self.decoder(
decoder_states,
mask=decoder_sa_mask,
encoder_states=encoder_states,
encoder_mask=decoder_ca_mask,
source_mask=source_mask,
)
return decoder_states
def random_batch(batch_size, seq_len, d_model, device):
hidden_states = torch.rand(batch_size, seq_len, d_model, device=device)
mask = torch.rand(batch_size, seq_len, device=device) > 0.5
decoder_states = torch.rand(batch_size, 1, d_model, device=device)
unique_src_ids = torch.arange(0, batch_size // 2, device=device)
mixed_src_ids = batch_size // 2 + torch.randint(0, 10, (batch_size // 2,), device=device).sort().values
source_ids = torch.cat([unique_src_ids, mixed_src_ids], dim=0)
return hidden_states, mask, decoder_states, source_ids
def test_models_equal():
batch_size = 512
seq_len = 129
d_model = 512
model = EncoderDecoderModel(use_sdpa_attn=False).to("cuda:0")
model_sdpa = EncoderDecoderModel(use_sdpa_attn=True).to("cuda:0")
model_sdpa.load_state_dict(model.state_dict())
model = torch.compile(model)
model_sdpa = torch.compile(model_sdpa)
batch = random_batch(batch_size, seq_len, d_model, "cuda:0")
out_torch = model(*batch)
out_sdpa = model_sdpa(*batch)
print(torch.allclose(out_torch, out_sdpa, atol=1e-5))
print(torch.mean(torch.abs(out_torch - out_sdpa)))
if __name__ == "__main__":
# Uncomment to verify equivalence between SDPA and manual attention
# test_models_equal()
batch_size = 512
num_iters = 100000
seq_len = 129
d_model = 512
model = EncoderDecoderModel(use_sdpa_attn=False).to("cuda:0")
model = torch.compile(model)
for _ in tqdm(range(num_iters)):
out = model(*random_batch(batch_size, seq_len, d_model, "cuda:0"))
out.mean().backward()
```
</details>
### Versions
```
Collecting environment information...
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.31.4
Libc version: glibc-2.35
Python version: 3.11.11 (main, Dec 11 2024, 16:28:39) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.8.0-1021-aws-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-40GB
GPU 1: NVIDIA A100-SXM4-40GB
GPU 2: NVIDIA A100-SXM4-40GB
GPU 3: NVIDIA A100-SXM4-40GB
GPU 4: NVIDIA A100-SXM4-40GB
GPU 5: NVIDIA A100-SXM4-40GB
GPU 6: NVIDIA A100-SXM4-40GB
GPU 7: NVIDIA A100-SXM4-40GB
Nvidia driver version: 550.144.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 96
On-line CPU(s) list: 0-95
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8275CL CPU @ 3.00GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 2
Stepping: 7
BogoMIPS: 6000.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch pti fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves ida arat pku ospke
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 1.5 MiB (48 instances)
L1i cache: 1.5 MiB (48 instances)
L2 cache: 48 MiB (48 instances)
L3 cache: 71.5 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-23,48-71
NUMA node1 CPU(s): 24-47,72-95
Vulnerability Gather data sampling: Unknown: Dependent on hypervisor status
Vulnerability Itlb multihit: KVM: Mitigation: VMX unsupported
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Vulnerable
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI Retpoline
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.1.3
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.6.0
[pip3] triton==3.2.0
[conda] numpy 2.1.3 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] torch 2.6.0 pypi_0 pypi
[conda] triton 3.2.0 pypi_0 pypi
```
cc @chauhang @penguinwu
| true
|
2,943,571,308
|
Restore Missing Windows Libtorch Workflows
|
iremyux
|
closed
|
[
"triaged",
"open source",
"ciflow/binaries",
"release notes: build",
"topic: not user facing"
] | 2
|
COLLABORATOR
|
After #149443, several Windows binary workflows were removed and replaced with new ones:
Removed Workflows:
- .github/workflows/generated-windows-arm64-binary-libtorch-release-nightly.yml
- .github/workflows/generated-windows-arm64-binary-libtorch-debug-nightly.yml
- .github/workflows/generated-windows-binary-libtorch-release-nightly.yml
- .github/workflows/generated-windows-binary-libtorch-debug-nightly.yml
- .github/workflows/generated-windows-binary-libtorch-release-main.yml
- .github/workflows/generated-windows-binary-libtorch-debug-main.yml

Added Workflows (Post-#149443):
- .github/workflows/generated-windows-arm64-binary-libtorch-nightly.yml
- .github/workflows/generated-windows-binary-libtorch-nightly.yml
- .github/workflows/generated-windows-binary-libtorch-main.yml
However, the newly introduced workflows only contained steps for the debug versions, omitting the release versions.
This PR restores the removed workflows to ensure both debug and release versions are properly included.
| true
|
2,943,481,901
|
[ROCm] Update libamd_comgr.so file in triton wheel build
|
ethanwee1
|
closed
|
[
"module: rocm",
"open source",
"Merged",
"topic: not user facing",
"ciflow/rocm"
] | 3
|
CONTRIBUTOR
|
When building Triton in the Triton-ROCm wheel build flow, ROCm 6.4 and newer releases no longer ship **libamd_comgr.so.2**; the .so file has been updated to **libamd_comgr.so.3**. We conditionalize on which ROCm version the wheel build is for, and choose the .so accordingly.
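Roughly, the selection boils down to something like the sketch below (illustrative only; the function name and the way the ROCm version string is obtained are assumptions, not the actual wheel-build code):
```python
# Illustrative sketch only: pick the comgr library name based on the targeted
# ROCm version. This helper is an assumption, not the triton wheel-build script.
def comgr_lib_for(rocm_version: str) -> str:
    major, minor = (int(x) for x in rocm_version.split(".")[:2])
    # ROCm 6.4+ ships libamd_comgr.so.3; older releases ship libamd_comgr.so.2
    return "libamd_comgr.so.3" if (major, minor) >= (6, 4) else "libamd_comgr.so.2"

print(comgr_lib_for("6.4"))  # libamd_comgr.so.3
print(comgr_lib_for("6.3"))  # libamd_comgr.so.2
```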
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,943,322,634
|
[Intel GPU][PT2E] Improve asymm qconv perf via weight prepack
|
ZhiweiYan-96
|
open
|
[
"module: cpu",
"open source",
"topic: not user facing"
] | 1
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149854
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| true
|
2,943,309,899
|
[nn] Implement PartialLinear module for structured sparsity
|
lakshminarasimmanv
|
closed
|
[
"feature",
"module: nn",
"triaged",
"open source",
"topic: not user facing"
] | 14
|
NONE
|
Implements PartialLinear, a linear layer that maintains sparse connectivity by keeping only the top-k weights by magnitude for each output neuron.
- Adds new PartialLinear class to nn/modules/linear.py
- Supports dynamic connectivity updates during training
- Provides both masked-dense and pure-sparse computation paths
- Follows PyTorch initialization and parameter registration patterns
- Includes comprehensive docstrings with examples
- Addresses long-standing TODO in the codebase
Sparse neural networks reduce memory and computation requirements while potentially improving generalization. This implementation allows for easy experimentation with structured sparsity patterns within the PyTorch ecosystem.
Resolves: #135091
Fixes #135091
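For readers skimming the PR, here is a minimal sketch of the idea described above (the class name, the `k` argument, and the mask handling are illustrative, not the PR's actual implementation):
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PartialLinearSketch(nn.Module):
    """Keep only the top-k weights by magnitude for each output neuron."""

    def __init__(self, in_features: int, out_features: int, k: int, bias: bool = True):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features, bias=bias)
        self.k = k
        self.register_buffer("mask", torch.ones_like(self.linear.weight))
        self.update_connectivity()

    @torch.no_grad()
    def update_connectivity(self):
        # For each output neuron (row of the weight matrix), keep the indices
        # of the k largest-magnitude weights and zero out the rest.
        topk = self.linear.weight.abs().topk(self.k, dim=1).indices
        self.mask.zero_().scatter_(1, topk, 1.0)

    def forward(self, x):
        # Masked-dense path: the dense weight is multiplied by the binary mask.
        return F.linear(x, self.linear.weight * self.mask, self.linear.bias)

layer = PartialLinearSketch(16, 8, k=4)
print(layer(torch.randn(2, 16)).shape)  # torch.Size([2, 8])
```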
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki @malfet
| true
|
2,943,234,790
|
[DYNAMO] [BUG FIX] correct casting to boolean for TORCH_COMPILE_DISABLE
|
golkir
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 5
|
CONTRIBUTOR
|
Fixes #149840
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,943,066,881
|
[Dynamo] Add easydict support
|
shink
|
open
|
[
"triaged",
"open source",
"topic: not user facing",
"module: dynamo"
] | 13
|
CONTRIBUTOR
|
Fixes #149583
See: https://github.com/pytorch/pytorch/pull/149851#issuecomment-2782208670
## Test
```bash
python test/dynamo/test_dicts.py -k EasyDictTests
```
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,942,933,021
|
Combine windows x64 and arm64 yaml template files
|
iremyux
|
closed
|
[
"module: windows",
"triaged",
"open source",
"Merged",
"ciflow/binaries",
"topic: not user facing",
"skip-pr-sanity-checks"
] | 10
|
COLLABORATOR
|
While introducing the Windows-Arm64 nightly workflows, we created a separate template file for win-arm64. This PR combines the x64 and arm64 templates and deletes the win-arm64 one.
Fixes #148776
cc @peterjc123 @mszhanyi @skyline75489 @nbcsm @Blackhex @albanD
| true
|
2,942,765,182
|
Fix #149550: Remove pre-cxx11 from documentation and tutorials
|
copley
|
open
|
[
"triaged",
"open source",
"topic: not user facing"
] | 12
|
NONE
|
Fix #149550
This PR removes all occurrences of 'pre-cxx11' from the documentation in
docs/cpp/source/installing.rst and docs/source/cpp_index.rst.
| true
|
2,942,618,711
|
Refactoring FSDP2 (_composable/fsdp) test cases to be device agnostic
|
AnantGulati
|
open
|
[
"oncall: distributed",
"triaged",
"open source",
"topic: not user facing"
] | 22
|
CONTRIBUTOR
|
The motivation for this PR is to refactor the existing test cases in test/distributed/_composable/fsdp/ (FSDP2, as it is referred to in torchtitan) to be device agnostic, so that any accelerator type is supported (e.g. CUDA, HPU, XPU).
The changes are in line with the previously merged changes for the FSDP test cases (in test/distributed/fsdp/): https://github.com/pytorch/pytorch/pull/139184/
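For context, the device-agnostic pattern looks roughly like the sketch below (an illustration of the general idea, not the exact helpers used in the refactored tests; it assumes a PyTorch version with the `torch.accelerator` API):
```python
import torch

# Pick whatever accelerator is present instead of hard-coding "cuda".
if torch.accelerator.is_available():
    device_type = torch.accelerator.current_accelerator().type  # e.g. "cuda", "xpu", "hpu"
else:
    device_type = "cpu"

x = torch.randn(4, 4, device=device_type)
print(x.device)
```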
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,942,517,825
|
Add SWA with a cyclical scheduler example
|
zeshengzong
|
open
|
[
"open source",
"release notes: optim"
] | 2
|
CONTRIBUTOR
|
Fixes #74022
## Changes
- Add example of SWA with a cyclical scheduler (a rough sketch of the idea is shown below)
- Fix optional tag missing in params
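A rough sketch of what such an example can look like (the model, data, and hyperparameters below are placeholders, not the exact example added in this PR):
```python
import torch
from torch import nn
from torch.optim.swa_utils import AveragedModel, update_bn

model = nn.Linear(10, 1)
loader = [(torch.randn(8, 10), torch.randn(8, 1)) for _ in range(20)]
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
# Cyclical scheduler, stepped once per batch.
scheduler = torch.optim.lr_scheduler.CyclicLR(
    optimizer, base_lr=0.001, max_lr=0.01, step_size_up=10
)
swa_model = AveragedModel(model)
swa_start = 5

for epoch in range(10):
    for x, y in loader:
        optimizer.zero_grad()
        nn.functional.mse_loss(model(x), y).backward()
        optimizer.step()
        scheduler.step()
    if epoch >= swa_start:
        swa_model.update_parameters(model)

# Recompute BatchNorm statistics for the averaged model (a no-op for this toy model).
update_bn(loader, swa_model)
```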
| true
|
2,942,328,240
|
Memory leak in torch.save
|
cdzhan
|
open
|
[
"needs reproduction",
"module: memory usage",
"module: serialization",
"triaged"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
It seems that the tensor's storage is no longer released immediately after #136034.
```python
>>> import torch
>>> import gc
>>> def test():
...     a = torch.randn(3)
...     torch.save(a, 'test.pt')
...
>>> gc.set_debug(gc.DEBUG_SAVEALL)
>>> test()
>>> gc.collect()
35
>>> a = gc.garbage
>>> a
[{'mmap': <class 'bool'>, 'endianness': typing.Optional[ForwardRef('_LoadEndianess')], 'mmap_flags': typing.Optional[int], 'calculate_storage_offsets': <class 'bool'>}, (<class 'object'>,), {'__module__': 'torch.utils.serialization.config', '__annotations__': {'mmap': <class 'bool'>, 'endianness': typing.Optional[ForwardRef('_LoadEndianess')], 'mmap_flags': typing.Optional[int], 'calculate_storage_offsets': <class 'bool'>}, 'mmap': False, 'endianness': None, 'mmap_flags': 2, 'calculate_storage_offsets': False, '__dict__': <attribute '__dict__' of 'load' objects>, '__weakref__': <attribute '__weakref__' of 'load' objects>, '__doc__': None}, <class 'torch.utils.serialization.config.load'>, (<class 'torch.utils.serialization.config.load'>, <class 'object'>), <attribute '__dict__' of 'load' objects>, <attribute '__weakref__' of 'load' objects>, (<class 'object'>,), {'__module__': 'torch.utils.serialization.config', '__annotations__': {'compute_crc32': <class 'bool'>, 'use_pinned_memory_for_d2h': <class 'bool'>, 'storage_alignment': <class 'int'>}, 'compute_crc32': True, 'use_pinned_memory_for_d2h': False, 'storage_alignment': 64, '__dict__': <attribute '__dict__' of 'save' objects>, '__weakref__': <attribute '__weakref__' of 'save' objects>, '__doc__': None}, <class 'torch.utils.serialization.config.save'>, (<class 'torch.utils.serialization.config.save'>, <class 'object'>), <attribute '__dict__' of 'save' objects>, <attribute '__weakref__' of 'save' objects>, <cell at 0x7f4b8e29e230: dict object at 0x7f4c479fb000>, <cell at 0x7f4b8e29e140: ConfigModuleInstance object at 0x7f4c479e2890>, <cell at 0x7f4b8e29e110: function object at 0x7f4b8e272dd0>, ('source', typing.Union[module, type], 'dest', typing.Union[module, torch.utils._config_module.SubConfigProxy], 'prefix', <class 'str'>, 'return', None), (<cell at 0x7f4b8e29e230: dict object at 0x7f4c479fb000>, <cell at 0x7f4b8e29e140: ConfigModuleInstance object at 0x7f4c479e2890>, <cell at 0x7f4b8e29e110: function object at 0x7f4b8e272dd0>), <function install_config_module.<locals>.visit at 0x7f4b8e272dd0>, <cell at 0x7f4c479f7b20: dict object at 0x7f4b8e115f80>, <cell at 0x7f4c479f7550: function object at 0x7f4c479ffa30>, <cell at 0x7f4c479f67d0: dict object at 0x7f4c48342140>, <cell at 0x7f4c479f7d00: dict object at 0x7f4c479fb100>, (<cell at 0x7f4c479f7b20: dict object at 0x7f4b8e115f80>, <cell at 0x7f4c479f67d0: dict object at 0x7f4c48342140>, <cell at 0x7f4c479f7d00: dict object at 0x7f4c479fb100>), <function _save.<locals>.persistent_id at 0x7f4c479ffa30>, (<class '_pickle.Pickler'>,), (<cell at 0x7f4c479f7550: function object at 0x7f4c479ffa30>,), <function _save.<locals>.PyTorchPickler.persistent_id at 0x7f4b8e273010>, {'__module__': 'torch.serialization', 'persistent_id': <function _save.<locals>.PyTorchPickler.persistent_id at 0x7f4b8e273010>, '__dict__': <attribute '__dict__' of 'PyTorchPickler' objects>, '__weakref__': <attribute '__weakref__' of 'PyTorchPickler' objects>, '__doc__': None}, <class 'torch.serialization._save.<locals>.PyTorchPickler'>, (<class 'torch.serialization._save.<locals>.PyTorchPickler'>, <class '_pickle.Pickler'>, <class 'object'>), <attribute '__dict__' of 'PyTorchPickler' objects>, <attribute '__weakref__' of 'PyTorchPickler' objects>, 17
148
142
191
140
242
157
63
157
4
175
62
[torch.storage.UntypedStorage(device=cpu) of size 12], {'0': 17
148
142
191
140
242
157
63
157
4
175
62
[torch.storage.UntypedStorage(device=cpu) of size 12]}]
```
### Versions
Collecting environment information...
PyTorch version: 2.8.0.dev20250319+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.10.12 (main, Feb 4 2025, 14:57:36) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.2.0-26-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 112
On-line CPU(s) list: 0-111
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 6330 CPU @ 2.00GHz
CPU family: 6
Model: 106
Thread(s) per core: 2
Core(s) per socket: 28
Socket(s): 2
Stepping: 6
CPU max MHz: 3100.0000
CPU min MHz: 800.0000
BogoMIPS: 4000.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid fsrm md_clear pconfig flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 2.6 MiB (56 instances)
L1i cache: 1.8 MiB (56 instances)
L2 cache: 70 MiB (56 instances)
L3 cache: 84 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-27,56-83
NUMA node1 CPU(s): 28-55,84-111
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy==1.11.2
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] onnx==1.17.0
[pip3] onnxscript==0.1.0.dev20240817
[pip3] optree==0.13.0
[pip3] torch==2.8.0.dev20250319+cpu
[pip3] torch-tb-profiler==0.4.3
[pip3] torchaudio==2.5.0.dev20241201+cpu
[pip3] torchvision==0.20.0.dev20241201+cpu
[conda] Could not collect
cc @mruberry @mikaylagawarecki
| true
|
2,942,318,348
|
CUDA error: no kernel image is available for execution on the device RTX5090D
|
yourbikun
|
open
|
[
"module: build",
"module: cuda",
"triaged"
] | 1
|
NONE
|
### 🐛 Describe the bug
I encountered problems when running PyTorch on an RTX 5090D in Ubuntu. I installed PyTorch (torch) and CUDA in a Conda environment, and the following issues occurred:
```
>>> import torch
>>> x = torch.tensor([0, 1, 1]).to(0)
>>> print(x.device)
cuda:0
>>> x != 0
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
RuntimeError: CUDA error: no kernel image is available for execution on the device
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
```
### Versions
pip3 install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu128
cc @malfet @seemethere @ptrblck @msaroufim @eqy
| true
|
2,942,310,108
|
Update slow tests
|
pytorchupdatebot
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/slow",
"ci-no-td"
] | 3
|
COLLABORATOR
|
This PR is auto-generated weekly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/weekly.yml).
Update the list of slow tests.
| true
|
2,942,168,842
|
[BE] Replace XPU support packages installation to offline mode in Linux CI/CD
|
chuanqi129
|
open
|
[
"triaged",
"open source",
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"ci-no-td"
] | 20
|
COLLABORATOR
|
To ensure the build environment is stable
Fixes #149995
| true
|
2,942,035,920
|
Implement aten.select.int sharding strategy
|
kkkkeeee
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor",
"module: dtensor"
] | 15
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149842
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @tianyu-l @XilunWu
| true
|
2,941,958,016
|
`weight` parameter on functional not in nn.Module
|
neosr-project
|
open
|
[
"module: nn",
"triaged"
] | 0
|
NONE
|
### 🐛 Describe the bug
When using `nn` losses, the parameter `weight` is not supported, although added on their functional counterpart:
```python
import torch
loss = torch.nn.L1Loss(weight=0.5)
#Traceback (most recent call last):
# File "<stdin>", line 1, in <module>
#TypeError: L1Loss.__init__() got an unexpected keyword argument 'weight'
```
Hi. It looks like the nn.Module counterparts of the functional losses [haven't been updated](https://github.com/pytorch/pytorch/blob/main/torch/nn/modules/loss.py#L124) with the `weight` parameter that was [added in a previous update](https://github.com/pytorch/pytorch/blob/main/torch/nn/functional.py#L3782).
The documentation of the functional and Module versions is also not up to date on this parameter.
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki
| true
|
2,941,903,562
|
Export TORCH_COMPILE_DISABLE=0 continues to disable torch.compile
|
jerrychenhf
|
closed
|
[
"triaged",
"actionable",
"oncall: pt2",
"module: dynamo"
] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Running the following program with `export TORCH_COMPILE_DISABLE=0` still disables torch.compile (cnt.frame_count is 0):
```
import torch
import torch._dynamo.testing

device = "cpu"
cnt = torch._dynamo.testing.CompileCounter()

def m(input):
    for i in range(8):
        input = input * 3
    return input

m = torch.compile(m, backend=cnt)
input = torch.zeros(1, 128, dtype=torch.bfloat16).to(device)
output = m(input)
print(cnt.frame_count)
```
No matter what value we export for TORCH_COMPILE_DISABLE, it disables torch.compile. Is this the intended behavior?
I found that it is caused by the code that reads the raw string value of the variable:
```
# Disable dynamo
disable = os.environ.get("TORCH_COMPILE_DISABLE", False)
```
And use it directly in if statement:
```
if (
    # TODO: the first condition is not covered by any test
    has_started_execution
    or is_skipfile
    or config.disable
    or (
        is_in_torch_dispatch_mode(include_infra_modes=False)
        and not getattr(self._torchdynamo_orig_callable, "_export", False)
    )
):
    return ConvertFrameReturn()
```
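For reference, a minimal sketch of how the variable could be interpreted as a boolean instead (illustrative only; not necessarily the actual fix that landed):
```python
import os

# Treat only common truthy strings as "disable"; "0", "", "false" keep dynamo enabled.
raw = os.environ.get("TORCH_COMPILE_DISABLE", "0")
disable = raw.strip().lower() in ("1", "true", "yes")
print(disable)
```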
### Versions
Collecting environment information...
PyTorch version: 2.8.0.dev20250323+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.10.12 (main, Feb 4 2025, 14:57:36) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-131-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 12
On-line CPU(s) list: 0-11
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 6132 CPU @ 2.60GHz
CPU family: 6
Model: 85
Thread(s) per core: 1
Core(s) per socket: 6
Socket(s): 2
Stepping: 0
BogoMIPS: 5187.81
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon nopl xtopology tsc_reliable nonstop_tsc cpuid tsc_known_freq pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 invpcid avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xsaves arat pku ospke md_clear flush_l1d arch_capabilities
Virtualization: VT-x
Hypervisor vendor: VMware
Virtualization type: full
L1d cache: 384 KiB (12 instances)
L1i cache: 384 KiB (12 instances)
L2 cache: 12 MiB (12 instances)
L3 cache: 38.5 MiB (2 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-11
Vulnerability Gather data sampling: Unknown: Dependent on hypervisor status
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Mitigation; PTE Inversion; VMX flush not necessary, SMT disabled
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS; IBPB conditional; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.1.1
[pip3] torch==2.8.0.dev20250323+cpu
[pip3] triton==3.1.0
[conda] Could not collect
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
| true
|
2,941,851,469
|
Custom Autograd Functions Don't Work in C++ if it takes Tensors[] as arguments
|
borisfom
|
closed
|
[
"module: cpp",
"module: autograd",
"triaged"
] | 3
|
CONTRIBUTOR
|
### 🐛 Describe the bug
This kind of custom operator doesn't seem to work:
```
TORCH_LIBRARY_FRAGMENT(cuequivariance_ops_torch, m)
{
    // Define an operator schema
    m.def("tensor_product_uniform_1d_jit(Tensor[] tensors, str name, ...) -> Tensor");
```
It works if implemented in Python via register_autograd().
In C++, it works for a forward-only definition, but fails as soon as AutogradCUDA is registered:
```
self = <OpOverloadPacket(op='cuequivariance_ops_torch.tensor_product_uniform_1d_jit')>
args = ([tensor([[-2.2675, -0.8837, 0.1214, 0.4634, 1.1044, -1.6187, 0.1677, 2.7007,
0.3189, -0.1861, -1.3710,..., requires_grad=True), tensor([0, 1, 0], device='cuda:0', dtype=torch.int32)], 'symmetric_kernel_fwd', 6, 7, 2, 1, ...)
kwargs = {}
def __call__(self, /, *args, **kwargs):
# overloading __call__ to ensure torch.ops.foo.bar()
# is still callable from JIT
# We save the function ptr as the `op` attribute on
# OpOverloadPacket to access it here.
# Directly calling OverloadPacket goes into C++, which will check
# the schema and cause an error for torchbind op when inputs consist of FakeScriptObject so we
# intercept it here and call TorchBindOpverload instead.
if self._has_torchbind_op_overload and _must_dispatch_in_python(args, kwargs):
return _call_overload_packet_from_python(self, args, kwargs)
> return self._op(*args, **(kwargs or {}))
E NotImplementedError: There were no tensor arguments to this function (e.g., you passed an empty list of Tensors), but no fallback function is registered for schema cuequivariance_ops_torch::tensor_product_uniform_1d_jit. This usually means that this function requires a non-empty list of Tensors, or that you (the operator writer) forgot to register a fallback function. Available functions are [CUDA, Meta, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradMPS, AutogradXPU, AutogradHPU, AutogradLazy, AutogradMTIA, AutogradMAIA, AutogradMeta, Tracer, AutocastCPU, AutocastMTIA, AutocastMAIA, AutocastXPU, AutocastMPS, AutocastCUDA, FuncTorchBatched, BatchedNestedTensor, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PreDispatch, PythonDispatcher].
```
Is that by design?
We need to be able to define a custom op in C++, as one of our major clients (MACE+LAMMPS) needs a Python-free option to run TorchScript-exported models.
@ngimel @drisspg
### Versions
Pytorch nightly
cc @jbschlosser @ezyang @albanD @gqchen @pearu @nikitaved @soulitzer @Varal7 @xmfan
| true
|
2,941,753,923
|
Improve error message for CUDAGuardImpl, MPSGuardImpl, XPUGuardImpl
|
shink
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"release notes: cpp",
"release notes: mps",
"ciflow/mps",
"ciflow/xpu"
] | 16
|
CONTRIBUTOR
|
Fixes #149822
Will get:
```
RuntimeError: t == DeviceType::CUDA INTERNAL ASSERT FAILED at "/home/jyh/workspace/pytorch/c10/cuda/impl/CUDAGuardImpl.h":28, please report a bug to PyTorch. CUDAGuardImpl initialized with non-CUDA DeviceType: cpu
```
| true
|
2,941,720,234
|
Intermittent "AssertionError: can only test a child process" Warning with PyTorch DataLoader on Colab
|
n3than
|
open
|
[
"module: multiprocessing",
"module: dataloader",
"triaged"
] | 2
|
NONE
|
### 🐛 Describe the bug
Description
I'm encountering an intermittent warning when training a model on Colab using PyTorch. The warning arises during the shutdown of DataLoader worker processes and reads:
```
Exception ignored in: <function _MultiProcessingDataLoaderIter.__del__ at ...>
...
AssertionError: can only test a child process
```
Although training completes without affecting model performance, this warning is unexpected and clutters the output.
Environment
Platform: Google Colab (Linux-based)
Python Version: 3.11
Multiprocessing Setup:
On Linux, the start method is set explicitly to fork:
```
if sys.platform == "linux":
mp.set_start_method('fork', force=True)
else:
mp.set_start_method('spawn', force=True)
```
Reproduction Steps
Run a training script which uses:
A custom IterableDataset
A DataLoader configured with:
```
train_loader = DataLoader(
    dataset,
    num_workers=2,
    prefetch_factor=100,
    pin_memory=True,
    collate_fn=custom_sparse_collate
)
```
Observe the warning messages during the training/validation process, particularly at worker shutdown.
Expected Behavior
The DataLoader should terminate worker processes without generating any warnings or errors.
Actual Behavior
Warnings similar to the following are intermittently printed:
```
Exception ignored in: <function _MultiProcessingDataLoaderIter.__del__ at ...>
...
AssertionError: can only test a child process
```
Questions
Is this a known issue with PyTorch's DataLoader on Colab when using the 'fork' start method?
Are there recommended workarounds or configuration changes to prevent these warnings while still using multiple workers?
Could switching the multiprocessing start method or adjusting DataLoader parameters help mitigate this?
Any guidance or insights would be appreciated.
### Versions
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.31.6
Libc version: glibc-2.35
Python version: 3.11.11 (main, Dec 4 2024, 08:55:07) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.1.85+-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: 12.5.82
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.2.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 2
On-line CPU(s) list: 0,1
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) CPU @ 2.20GHz
CPU family: 6
Model: 79
Thread(s) per core: 2
Core(s) per socket: 1
Socket(s): 1
Stepping: 0
BogoMIPS: 4399.99
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm rdseed adx smap xsaveopt arat md_clear arch_capabilities
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 32 KiB (1 instance)
L1i cache: 32 KiB (1 instance)
L2 cache: 256 KiB (1 instance)
L3 cache: 55 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0,1
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable; SMT Host state unknown
Vulnerability Meltdown: Vulnerable
Vulnerability Mmio stale data: Vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Vulnerable
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable; IBPB: disabled; STIBP: disabled; PBRSB-eIBRS: Not affected; BHI: Vulnerable (Syscall hardening enabled)
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Vulnerable
Versions of relevant libraries:
[pip3] numpy==2.0.2
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] nvtx==0.2.11
[pip3] optree==0.14.1
[pip3] pynvjitlink-cu12==0.5.2
[pip3] torch==2.6.0+cu124
[pip3] torchaudio==2.6.0+cu124
[pip3] torchsummary==1.5.1
[pip3] torchvision==0.21.0+cu124
[pip3] torchviz==0.0.3
[pip3] triton==3.2.0
[conda] Could not collect
cc @VitalyFedyunin @albanD @andrewkho @divyanshk @SsnL @dzhulgakov
| true
|
2,941,472,558
|
torch.hanning_window create values different from the formula
|
chinshou
|
closed
|
[] | 2
|
NONE
|
### 🐛 Describe the bug
```
window_length = 16
torch.hann_window(window_length=window_length, periodic=True)
```
will create a value list with wrong values:
0, 0.03806, ...., 0.1464, 0.03806,
`scipy.signal.windows.hann(window_length)`
will create a different value list:
0, 0.0432272711, ...., 0.1654346968, 0.0432272711,
I calculated the values from the formula used by the Hann window; it produces the same value list as scipy:
```
import math
vector = []
for i in range(window_length + 1):
    v = 0.5 * (1 - math.cos(2 * math.pi * i / (window_length - 1)))
    vector.append(v)
```
So why does torch calculate values quite different from the standard Hann window?
### Versions
2.6
| true
|
2,941,468,028
|
Clarified tensor definition in README
|
onepequity
|
closed
|
[
"open source",
"topic: not user facing"
] | 2
|
NONE
|
Updated the tensor definition in the README for better clarity by replacing 'ndarray' with 'n-dimensional array'.
| true
|
2,941,438,817
|
`@torch.compile` and a nested `@torch.compiler.disable` leaks memory
|
koute
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamo"
] | 0
|
NONE
|
### 🐛 Describe the bug
Consider the following code:
```python
import torch
import torch.nn as nn
class Inner(nn.Module):
    @torch.compiler.disable
    def forward(self, x, x0):
        return x

class Outer(nn.Module):
    def __init__(self):
        super().__init__()
        self.inner = Inner()

    @torch.compile()
    def forward(self, x, x0):
        return x + self.inner(x, x0)

class Model(nn.Module):
    def __init__(self):
        super().__init__()
        self.e = nn.Embedding(64, 512)
        self.a = Outer()
        self.b = Outer()

    def forward(self, x):
        x = self.e(x)
        return self.b(self.a(x, x), x)

with torch.inference_mode():
    m = Model().to("cuda")
    xs = torch.zeros((1, 32), device = "cuda", dtype = torch.long)
    for _ in range(20):
        m(xs)
        torch.cuda.empty_cache()
        print(torch.cuda.memory_allocated())
```
This leaks memory indefinitely:
```
131584
197120
262656
...
1245696
1311232
1376768
```
If you comment out `@torch.compile()` or `@torch.compiler.disable` the issue disappears.
### Versions
torch==2.6.0
Python 3.11.8
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
| true
|
2,941,421,792
|
Suppress more warnings
|
tugsbayasgalan
|
closed
|
[
"Merged",
"ciflow/trunk",
"fx",
"ciflow/inductor",
"release notes: export"
] | 8
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149833
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
Differential Revision: [D71702307](https://our.internmc.facebook.com/intern/diff/D71702307)
| true
|