| id (int64, 2.74B-3.05B) | title (string, 1-255 chars) | user (string, 2-26 chars) | state (string, 2 classes) | labels (list, 0-24 items) | comments (int64, 0-206) | author_association (string, 4 classes) | body (string, 7-62.5k chars, nullable) | is_title (bool, 1 class) |
|---|---|---|---|---|---|---|---|---|
3,003,852,363
|
Inconsistent document for torch.export and torch.compile
|
ChuanqiXu9
|
closed
|
[
"oncall: pt2",
"oncall: export"
] | 2
|
CONTRIBUTOR
|
### 📚 The doc issue
The documentation (https://pytorch.org/docs/stable/export.html) for torch.export says:
```
[torch.export.export()](https://pytorch.org/docs/stable/export.html#torch.export.export) takes an arbitrary Python callable (a [torch.nn.Module](https://pytorch.org/docs/stable/generated/torch.nn.Module.html#torch.nn.Module), a function or a method)
```
but in practice, `torch.export` only accepts a `torch.nn.Module`:
https://github.com/pytorch/pytorch/blob/24262587893517028b169e72e81e8e1c01585249/torch/export/__init__.py#L273-L276
Although `torch.export` is marked as `under active development`, this is still confusing.
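For illustration, a minimal sketch of the behavior described above, assuming a build where `torch.export.export` only accepts `torch.nn.Module` (as in the linked check); wrapping the callable in a module is the current workaround:
```python
import torch

def add(x, y):
    return x + y

# torch.export.export(add, (torch.randn(2), torch.randn(2)))  # rejected: not an nn.Module

class AddModule(torch.nn.Module):
    def forward(self, x, y):
        return add(x, y)

# Works: the callable is wrapped in an nn.Module before exporting.
ep = torch.export.export(AddModule(), (torch.randn(2), torch.randn(2)))
print(ep)
```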
### Suggest a potential alternative/fix
[torch.export.export()](https://pytorch.org/docs/stable/export.html#torch.export.export) now takes a [torch.nn.Module](https://pytorch.org/docs/stable/generated/torch.nn.Module.html#torch.nn.Module), it may take an arbitrary python callable (e.g., a function or a method) in the future.
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4
| true
|
3,003,696,746
|
[Optimus][Observability] Improve tlparse logging
|
mengluy0125
|
closed
|
[
"fb-exported",
"Merged",
"Reverted",
"ciflow/trunk",
"module: inductor",
"module: dynamo",
"ciflow/inductor",
"release notes: inductor",
"ci-no-td"
] | 18
|
CONTRIBUTOR
|
Summary: We improve tlparse logging for Optimus graph transformations to enable easier debugging.
Test Plan:
```
TORCH_TRACE=~/my_trace_log_dir CUDA_VISIBLE_DEVICES=5 buck2 run mode/opt //aps_models/ads/ecosystem/tooling/tools/efficient_module_suite/pyper_models:pyper_model_perf_benchmark -- --flow_id 720055919 --shrink_model --mfu_profile_module "impl.shared_arch.dense_sparse_interaction" --use_synthetic_data
```
Differential Revision: D73229681
@diff-train-skip-merge
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,003,686,796
|
[Inductor][FlexAttention] fix `vars_and_sizes` divisor error
|
BoyuanFeng
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"module: flex attention"
] | 3
|
CONTRIBUTOR
|
Triton codegen currently [sorts vars by divisor](https://github.com/pytorch/pytorch/blob/ae6f6b8efb6380a94621d61100ffb05dab05db27/torch/_inductor/codegen/simd.py#L233-L237). When there are two vars with the same divisor, the order is undecided.
```python
nodes.sort(
key=lambda x: V.graph.sizevars.size_hint(
x.divisor, fallback=config.unbacked_symint_fallback
)
)
```
The test case leads to the following nodes:
```
(Pdb) nodes[0]
IterationRangesEntry(x1, ((s37 + 127)//128), 2, (xindex//ps0), {x0: ((s37 + 127)//128), x1: 2, x2: ((s12 + 127)//128), x4: 2*(((s12 + 127)//128))*(((s37 + 127)//128)), x5: 0, x6: 2, x7: (((s12 + 127)//128))*(((s37 + 127)//128))})
(Pdb) nodes[1]
IterationRangesEntry(x0, 1, ((s37 + 127)//128), ModularIndexing(xindex, 1, ps0), {x0: ((s37 + 127)//128), x1: 2, x2: ((s12 + 127)//128), x4: 2*(((s12 + 127)//128))*(((s37 + 127)//128)), x5: 0, x6: 2, x7: (((s12 + 127)//128))*(((s37 + 127)//128))})
(Pdb) nodes[2]
IterationRangesEntry(x2, 2*(((s37 + 127)//128)), ((s12 + 127)//128), (xindex//(2*(((s37 + 127)//128)))), {x0: ((s37 + 127)//128), x1: 2, x2: ((s12 + 127)//128), x4: 2*(((s12 + 127)//128))*(((s37 + 127)//128)), x5: 0, x6: 2, x7: (((s12 + 127)//128))*(((s37 + 127)//128))})
(Pdb) V.graph.sizevars.statically_known_equals(nodes[0].length, 2)
True
(Pdb) V.graph.sizevars.statically_known_equals(nodes[1].length, 1)
True
(Pdb) V.graph.sizevars.statically_known_equals(nodes[2].length, 1)
True
(Pdb) V.graph.sizevars.statically_known_equals(nodes[0].divisor, 1)
True
(Pdb) V.graph.sizevars.statically_known_equals(nodes[1].divisor, 1)
True
(Pdb) V.graph.sizevars.statically_known_equals(nodes[2].divisor, 2)
True
```
Since x1 and x0 both have divisor 1, the relative order is random across runs.
In some runs, we have order [x1, x0, x2] with divisors as [1,1,2] and lengths as [2,1,1]. After x1, we have [divisor = divisor * node.length](https://github.com/pytorch/pytorch/blob/ae6f6b8efb6380a94621d61100ffb05dab05db27/torch/_inductor/codegen/simd.py#L246) = 1 * 2 = 2. Then, when processing x0, we have node.divisor=1, divisor=2, and [FloorDiv(node.divisor, divisor)](https://github.com/pytorch/pytorch/blob/ae6f6b8efb6380a94621d61100ffb05dab05db27/torch/_inductor/codegen/simd.py#L251) = 0, which indicates an iteration length of 0 and leads to errors later.
The fix is to sort by both divisor and whether the length is one, so that for two nodes with the same divisor, the node with length=1 is processed first.
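A sketch of that sort-key change, reusing the helpers from the snippet above (illustrative only; not necessarily the exact upstream diff):
```python
# Sort by (divisor, length != 1): among nodes with equal divisors,
# a node with length == 1 is processed before a longer one.
nodes.sort(
    key=lambda x: (
        V.graph.sizevars.size_hint(
            x.divisor, fallback=config.unbacked_symint_fallback
        ),
        not V.graph.sizevars.statically_known_equals(x.length, 1),
    )
)
```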
Fixes #149789
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov @Chillee @drisspg @yanboliang
| true
|
3,003,678,336
|
[invoke_subgraph][fake tensor] Add finalizer on subgraph instead of the functionalize ctx wrapper
|
anijain2305
|
closed
|
[
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor",
"ci-no-td"
] | 13
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152062
* #151961
* #151957
* #151477
* __->__ #151633
* #151409
| true
|
3,003,666,460
|
[executorch hash update] update the pinned executorch hash
|
pytorchupdatebot
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 3
|
COLLABORATOR
|
This PR is auto-generated nightly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/nightly.yml).
Update the pinned executorch hash.
| true
|
3,003,641,246
|
distributed: add distributed P2P TensorQueue and TensorStore
|
d4l3k
|
open
|
[
"oncall: distributed"
] | 2
|
MEMBER
|
This is an implementation of a distributed tensor queue and store system.
It builds on `TCPStore`'s queue implementation to provide a fully multi-reader, multi-writer queue. The store is used for metadata exchange, but all tensors are transferred via the global ProcessGroup (i.e. NCCL/Gloo).
As such this should be highly performant for most use cases (~30k tensors/s across the job).
This implements two structures:
* TensorQueue which allows users to send/recv on arbitrary global queue keys. This supports multi-reader and multi-writer.
* TensorStore which allows trainers to globally register a tensor that allows remote machines to read/write to that tensor emulating RDMA.
Use cases:
* distributed dataloaders
* RL generator/trainer communication
* checkpoint transfers in a P2P fashion
This is mostly a POC and won't land in its current form.
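A rough sketch of the metadata-via-store, payload-via-process-group pattern described above (the helper names here are hypothetical; the actual `TensorQueue`/`TensorStore` APIs in this PR may differ):
```python
import torch
import torch.distributed as dist

def send_tensor(store, key, tensor, dst):
    # Publish shape/dtype/source-rank metadata through the store, then move
    # the payload itself over the process group (NCCL/Gloo).
    meta = f"{list(tensor.shape)}|{tensor.dtype}|{dist.get_rank()}"
    store.set(key, meta)
    dist.send(tensor, dst=dst)

def recv_tensor(store, key, device):
    shape_str, dtype_str, src_str = store.get(key).decode().split("|")
    shape = [int(s) for s in shape_str.strip("[]").split(",") if s.strip()]
    dtype = getattr(torch, dtype_str.replace("torch.", ""))
    buf = torch.empty(shape, dtype=dtype, device=device)
    dist.recv(buf, src=int(src_str))
    return buf
```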
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab
| true
|
3,003,638,172
|
Don't eagerly create AliasInfo in parseAliasDeclaration
|
swolchok
|
closed
|
[
"oncall: jit",
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: jit"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151682
* #151697
* __->__ #151630
* #151629
* #151628
* #151627
* #151626
No need to create an AliasInfo...unless we need it.
Differential Revision: [D73129452](https://our.internmc.facebook.com/intern/diff/D73129452/)
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| true
|
3,003,638,122
|
Use fmt::format for debug strings in Library init
|
swolchok
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: bazel"
] | 7
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151682
* #151697
* #151630
* __->__ #151629
* #151628
* #151627
* #151626
Observed several ms taken during `import torch` by c10::str here.
Differential Revision: [D73129453](https://our.internmc.facebook.com/intern/diff/D73129453/)
| true
|
3,003,638,084
|
Reserve vector in StringCordView ctor
|
swolchok
|
closed
|
[
"oncall: jit",
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151682
* #151697
* #151630
* #151629
* __->__ #151628
* #151627
* #151626
Clearly missing reserve (we should expect that pieces are not empty).
Differential Revision: [D73129445](https://our.internmc.facebook.com/intern/diff/D73129445/)
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| true
|
3,003,638,036
|
Reserve vectors in FunctionSchema::cloneWithRealTypes
|
swolchok
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151682
* #151697
* #151630
* #151629
* #151628
* __->__ #151627
* #151626
1) reserving is much better than not reserving
2) std::transform for a 1-line-body loop is generally not considered to be an improvement (and doesn't seem to get boiled away by clang under -Oz)
Differential Revision: [D73013363](https://our.internmc.facebook.com/intern/diff/D73013363/)
| true
|
3,003,637,980
|
Overload Library::def rather than templating it
|
swolchok
|
closed
|
[
"module: cpp",
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 8
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151682
* #151697
* #151630
* #151629
* #151628
* #151627
* __->__ #151626
It ends up being templated over a bunch of reference-to-array-of-characters types with different lengths, such as `char const (&) [88]`, which is an annoyance when profiling and possibly a source of code bloat.
Differential Revision: [D73129450](https://our.internmc.facebook.com/intern/diff/D73129450/)
cc @jbschlosser
| true
|
3,003,637,218
|
[Profiler] Remove Decref From Python Context
|
sraikund16
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: profiler",
"topic: bug fixes"
] | 5
|
CONTRIBUTOR
|
Summary: When running the on-demand profiler with stack collection, the decref causes a segfault. I tried checking the refcount and the object itself and they both look fine, but it still segfaults every time. Let's remove it for now and revisit.
This will induce a small memory leak, but it should be small enough that it does not create any significant impact on the jobs that run.
Test Plan:
Removed decref and got clean traces
https://www.internalfb.com/intern/perfdoctor/trace_view?filepath=tree/traces/dynocli/0/1744933624/localhost/libkineto_activities_2936811.json.gz&bucket=gpu_traces
Differential Revision: D73225468
| true
|
3,003,629,504
|
[export] suggest dynamic re-export in input constraints hook
|
pianpwk
|
closed
|
[
"Merged",
"ciflow/trunk",
"ciflow/inductor",
"release notes: export"
] | 4
|
CONTRIBUTOR
|
Fixes #ISSUE_NUMBER
| true
|
3,003,624,056
|
[ONNX][Eazy] Update onnx program doc formatting and improve robustness
|
justinchuby
|
closed
|
[
"module: onnx",
"open source",
"Merged",
"ciflow/trunk",
"release notes: onnx",
"topic: improvements"
] | 7
|
COLLABORATOR
|
- Update docstring list formatting
- Use a try/finally block to keep the model unmodified if save() fails.
| true
|
3,003,542,814
|
refresh benchmark results
|
laithsakka
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 5
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151622
Updating due to <1.5% increases in https://github.com/pytorch/pytorch/pull/151469.
Not all benchmarks were updated.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,003,536,919
|
Add jk for force_disable_caches
|
oulgen
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151621
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,003,509,099
|
[fake tensor] Support list of outputs in caching
|
anijain2305
|
open
|
[
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151620
* #150704
* #151410
* #151409
* #151756
* #151633
* #151477
* #151357
* #151256
* #151330
| true
|
3,003,496,951
|
DISABLED test_builtin_score_mods_float32_score_mod0_cuda_float32 (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 2
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_builtin_score_mods_float32_score_mod0_cuda_float32&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40745552837).
Over the past 3 hours, it has been determined flaky in 8 workflow(s) with 16 failures and 8 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_builtin_score_mods_float32_score_mod0_cuda_float32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,003,496,905
|
DISABLED test_builtin_score_mods_dynamic_float16_score_mask_mod3_cuda_float16 (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 2
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_builtin_score_mods_dynamic_float16_score_mask_mod3_cuda_float16&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40745552837).
Over the past 3 hours, it has been determined flaky in 8 workflow(s) with 16 failures and 8 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_builtin_score_mods_dynamic_float16_score_mask_mod3_cuda_float16`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,003,496,849
|
DISABLED test_builtin_score_mods_different_block_size_float16_score_mod6_BLOCK_SIZE_128_cuda_float16 (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 2
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_builtin_score_mods_different_block_size_float16_score_mod6_BLOCK_SIZE_128_cuda_float16&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40743971871).
Over the past 3 hours, it has been determined flaky in 8 workflow(s) with 16 failures and 8 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_builtin_score_mods_different_block_size_float16_score_mod6_BLOCK_SIZE_128_cuda_float16`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/inductor/test_flex_attention.py", line 1201, in test_builtin_score_mods_different_block_size
self.run_test(score_mod, dtype, block_mask=block_mask, device=device)
File "/var/lib/jenkins/workspace/test/inductor/test_flex_attention.py", line 491, in run_test
golden_out.backward(backward_grad.to(torch.float64))
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_tensor.py", line 648, in backward
torch.autograd.backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/__init__.py", line 354, in backward
_engine_run_backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/graph.py", line 824, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/function.py", line 307, in apply
return user_fn(self, *args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 679, in backward
) = flex_attention_backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 132, in __call__
return super().__call__(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 490, in __call__
return wrapper()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 486, in wrapper
return self.dispatch(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 346, in dispatch
return kernel(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 320, in maybe_run_autograd
return self(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 132, in __call__
return super().__call__(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 490, in __call__
return wrapper()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 486, in wrapper
return self.dispatch(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 346, in dispatch
return kernel(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 870, in sdpa_dense_backward
grad_scores, _, _, _, _, *grad_score_mod_captured = joint_score_mod(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/apis.py", line 202, in wrapped
return vmap_impl(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 334, in vmap_impl
return _flat_vmap(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 484, in _flat_vmap
batched_outputs = func(*batched_inputs, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/apis.py", line 202, in wrapped
return vmap_impl(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 334, in vmap_impl
return _flat_vmap(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 484, in _flat_vmap
batched_outputs = func(*batched_inputs, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/apis.py", line 202, in wrapped
return vmap_impl(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 334, in vmap_impl
return _flat_vmap(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 484, in _flat_vmap
batched_outputs = func(*batched_inputs, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/apis.py", line 202, in wrapped
return vmap_impl(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 334, in vmap_impl
return _flat_vmap(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 484, in _flat_vmap
batched_outputs = func(*batched_inputs, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/graph_module.py", line 833, in call_wrapped
return self._wrapped_call(self, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/graph_module.py", line 409, in __call__
raise e
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/graph_module.py", line 396, in __call__
return super(self.cls, obj).__call__(*args, **kwargs) # type: ignore[misc]
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
File "<eval_with_key>.1074 from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/experimental/proxy_tensor.py:1265 in wrapped", line 9, in forward
where = torch.ops.aten.where.self(ge, add, scalar_tensor); add = scalar_tensor = where = None
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 795, in __call__
return self._op(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/_trace_wrapped_higher_order_op.py", line 142, in __torch_function__
return func(*args, **(kwargs or {}))
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 795, in __call__
return self._op(*args, **kwargs)
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1024.00 MiB. GPU 0 has a total capacity of 22.05 GiB of which 356.12 MiB is free. Including non-PyTorch memory, this process has 21.69 GiB memory in use. Of the allocated memory 6.75 GiB is allocated by PyTorch, and 14.67 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
To execute this test, run the following from the base repo dir:
python test/inductor/test_flex_attention.py TestFlexAttentionCUDA.test_builtin_score_mods_different_block_size_float16_score_mod6_BLOCK_SIZE_128_cuda_float16
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,003,496,846
|
DISABLED test_non_equal_head_dims_score_mod4_bfloat16_head_dims1_cuda_bfloat16 (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 2
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_non_equal_head_dims_score_mod4_bfloat16_head_dims1_cuda_bfloat16&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40745552837).
Over the past 3 hours, it has been determined flaky in 8 workflow(s) with 16 failures and 8 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_non_equal_head_dims_score_mod4_bfloat16_head_dims1_cuda_bfloat16`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/inductor/test_flex_attention.py", line 2159, in test_non_equal_head_dims
self.run_test(score_mod, dtype, B, H, S, qk_d, B, H, S, V_D=v_d, device=device)
File "/var/lib/jenkins/workspace/test/inductor/test_flex_attention.py", line 491, in run_test
golden_out.backward(backward_grad.to(torch.float64))
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_tensor.py", line 648, in backward
torch.autograd.backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/__init__.py", line 354, in backward
_engine_run_backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/graph.py", line 824, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/function.py", line 307, in apply
return user_fn(self, *args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 679, in backward
) = flex_attention_backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 132, in __call__
return super().__call__(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 490, in __call__
return wrapper()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 486, in wrapper
return self.dispatch(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 346, in dispatch
return kernel(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 320, in maybe_run_autograd
return self(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 132, in __call__
return super().__call__(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 490, in __call__
return wrapper()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 486, in wrapper
return self.dispatch(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 346, in dispatch
return kernel(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 873, in sdpa_dense_backward
grad_scores = grad_scores * scale
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1024.00 MiB. GPU 0 has a total capacity of 22.05 GiB of which 178.12 MiB is free. Including non-PyTorch memory, this process has 21.86 GiB memory in use. Of the allocated memory 6.80 GiB is allocated by PyTorch, and 14.42 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
To execute this test, run the following from the base repo dir:
python test/inductor/test_flex_attention.py TestFlexAttentionCUDA.test_non_equal_head_dims_score_mod4_bfloat16_head_dims1_cuda_bfloat16
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,003,496,845
|
DISABLED test_builtin_score_mods_different_block_size_float32_score_mod6_BLOCK_SIZE_256_cuda_float32 (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 2
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_builtin_score_mods_different_block_size_float32_score_mod6_BLOCK_SIZE_256_cuda_float32&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40743971871).
Over the past 3 hours, it has been determined flaky in 8 workflow(s) with 16 failures and 8 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_builtin_score_mods_different_block_size_float32_score_mod6_BLOCK_SIZE_256_cuda_float32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,003,496,636
|
DISABLED test_non_equal_head_dims_score_mod6_bfloat16_head_dims1_cuda_bfloat16 (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 2
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_non_equal_head_dims_score_mod6_bfloat16_head_dims1_cuda_bfloat16&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40740661553).
Over the past 3 hours, it has been determined flaky in 8 workflow(s) with 16 failures and 8 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_non_equal_head_dims_score_mod6_bfloat16_head_dims1_cuda_bfloat16`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/inductor/test_flex_attention.py", line 2159, in test_non_equal_head_dims
self.run_test(score_mod, dtype, B, H, S, qk_d, B, H, S, V_D=v_d, device=device)
File "/var/lib/jenkins/workspace/test/inductor/test_flex_attention.py", line 491, in run_test
golden_out.backward(backward_grad.to(torch.float64))
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_tensor.py", line 648, in backward
torch.autograd.backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/__init__.py", line 354, in backward
_engine_run_backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/graph.py", line 824, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/function.py", line 307, in apply
return user_fn(self, *args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 679, in backward
) = flex_attention_backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 132, in __call__
return super().__call__(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 490, in __call__
return wrapper()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 486, in wrapper
return self.dispatch(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 346, in dispatch
return kernel(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 320, in maybe_run_autograd
return self(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 132, in __call__
return super().__call__(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 490, in __call__
return wrapper()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 486, in wrapper
return self.dispatch(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 346, in dispatch
return kernel(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 870, in sdpa_dense_backward
grad_scores, _, _, _, _, *grad_score_mod_captured = joint_score_mod(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/apis.py", line 202, in wrapped
return vmap_impl(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 334, in vmap_impl
return _flat_vmap(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 484, in _flat_vmap
batched_outputs = func(*batched_inputs, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/apis.py", line 202, in wrapped
return vmap_impl(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 334, in vmap_impl
return _flat_vmap(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 484, in _flat_vmap
batched_outputs = func(*batched_inputs, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/apis.py", line 202, in wrapped
return vmap_impl(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 334, in vmap_impl
return _flat_vmap(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 484, in _flat_vmap
batched_outputs = func(*batched_inputs, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/apis.py", line 202, in wrapped
return vmap_impl(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 334, in vmap_impl
return _flat_vmap(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 484, in _flat_vmap
batched_outputs = func(*batched_inputs, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/graph_module.py", line 833, in call_wrapped
return self._wrapped_call(self, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/graph_module.py", line 409, in __call__
raise e
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/graph_module.py", line 396, in __call__
return super(self.cls, obj).__call__(*args, **kwargs) # type: ignore[misc]
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
File "<eval_with_key>.1761 from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/experimental/proxy_tensor.py:1265 in wrapped", line 9, in forward
where = torch.ops.aten.where.self(ge, add, scalar_tensor); add = scalar_tensor = where = None
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 795, in __call__
return self._op(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/_trace_wrapped_higher_order_op.py", line 142, in __torch_function__
return func(*args, **(kwargs or {}))
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 795, in __call__
return self._op(*args, **kwargs)
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1024.00 MiB. GPU 0 has a total capacity of 22.05 GiB of which 880.12 MiB is free. Including non-PyTorch memory, this process has 21.18 GiB memory in use. Of the allocated memory 6.78 GiB is allocated by PyTorch, and 14.09 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
To execute this test, run the following from the base repo dir:
python test/inductor/test_flex_attention.py TestFlexAttentionCUDA.test_non_equal_head_dims_score_mod6_bfloat16_head_dims1_cuda_bfloat16
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,003,496,579
|
DISABLED test_builtin_score_mods_float32_score_mod5_cuda_float32 (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 2
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_builtin_score_mods_float32_score_mod5_cuda_float32&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40743981093).
Over the past 3 hours, it has been determined flaky in 8 workflow(s) with 16 failures and 8 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_builtin_score_mods_float32_score_mod5_cuda_float32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/inductor/test_flex_attention.py", line 1109, in test_builtin_score_mods
self.run_test(score_mod, dtype, device=device)
File "/var/lib/jenkins/workspace/test/inductor/test_flex_attention.py", line 491, in run_test
golden_out.backward(backward_grad.to(torch.float64))
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_tensor.py", line 648, in backward
torch.autograd.backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/__init__.py", line 354, in backward
_engine_run_backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/graph.py", line 824, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/function.py", line 307, in apply
return user_fn(self, *args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 679, in backward
) = flex_attention_backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 132, in __call__
return super().__call__(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 490, in __call__
return wrapper()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 486, in wrapper
return self.dispatch(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 346, in dispatch
return kernel(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 320, in maybe_run_autograd
return self(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 132, in __call__
return super().__call__(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 490, in __call__
return wrapper()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 486, in wrapper
return self.dispatch(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 346, in dispatch
return kernel(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 881, in sdpa_dense_backward
grad_scores = torch.where(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/_trace_wrapped_higher_order_op.py", line 142, in __torch_function__
return func(*args, **(kwargs or {}))
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1024.00 MiB. GPU 0 has a total capacity of 22.05 GiB of which 264.12 MiB is free. Including non-PyTorch memory, this process has 21.78 GiB memory in use. Of the allocated memory 6.80 GiB is allocated by PyTorch, and 14.72 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
To execute this test, run the following from the base repo dir:
python test/inductor/test_flex_attention.py TestFlexAttentionCUDA.test_builtin_score_mods_float32_score_mod5_cuda_float32
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,003,490,700
|
index_select performance
|
ngimel
|
open
|
[
"module: performance",
"triaged",
"actionable"
] | 12
|
COLLABORATOR
|
According to my cursory benchmarking, `gather` is always better than `index_select` and it is strictly more expressive
```
import torch

def comp_index_select(M, N, K, dim):
    def gather(x, dim, ind):
        shape_ind = [1] * x.ndim
        shape_ind[dim] = ind.shape[0]
        shape_out = list(x.shape)
        shape_out[dim] = ind.shape[0]
        ind = ind.view(shape_ind).expand(shape_out)
        return torch.gather(x, dim=dim, index=ind)

    src = torch.randn(M, K, device="cuda", dtype=torch.bfloat16)
    max_ind = src.shape[dim]
    num_ind = N
    indices = torch.randint(max_ind, (num_ind,), device="cuda")
    out1 = torch.index_select(src, dim=dim, index=indices)
    out2 = gather(src, dim, indices)
    torch.testing.assert_close(out1, out2)
    from torch.utils.benchmark import Timer
    t = Timer(stmt="torch.index_select(src, dim=dim, index=indices)", globals={"src": src, "dim": dim, "indices": indices})
    print("index_select time", t.blocked_autorange().mean)
    t = Timer(stmt="gather(src, dim, indices)", globals={"src": src, "dim": dim, "indices": indices, "gather": gather})
    print("gather time", t.blocked_autorange().mean)

comp_index_select(4096, 4096, 5120, 0)
comp_index_select(4096, 4096, 5124, 0)
comp_index_select(4096, 4096, 5120, 1)
```
```
index_select time 0.00013564855395816267
gather time 3.6312990123406055e-05
index_select time 0.00013131444132886828
gather time 6.217161598615348e-05
index_select time 0.00012328573293052616
gather time 6.178460847586394e-05
```
If more comprehensive benchmarking confirms that to be true, we should expand `gather` coverage to have all the same dtypes as `index_select`, call into `gather` from `index_select` and remove `index_select` kernels.
cc @msaroufim @jerryzh168
| true
|
3,003,476,680
|
Throw an error in get_pip_packages if pip list returns an error code.
|
naromero77amd
|
closed
|
[
"module: collect_env.py",
"triaged",
"open source",
"topic: bug fixes",
"topic: not user facing"
] | 2
|
COLLABORATOR
|
Fixes #147669
Adds error handling to `get_pip_packages` when `pip list` returns an error code.
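A minimal sketch of the described behavior as a standalone helper (hypothetical; the actual change is in `torch/utils/collect_env.py`): surface the pip failure instead of continuing with empty output.
```python
import subprocess

def get_pip_packages(pip_cmd=("python3", "-m", "pip")):
    proc = subprocess.run(
        [*pip_cmd, "list", "--format=freeze"],
        capture_output=True,
        text=True,
    )
    if proc.returncode != 0:
        # Raise instead of silently returning partial/empty results.
        raise RuntimeError(
            f"pip list exited with code {proc.returncode}: {proc.stderr.strip()}"
        )
    return proc.stdout.splitlines()
```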
| true
|
3,003,456,074
|
Error is incorrectly thrown for histc in deterministic mode
|
ngimel
|
closed
|
[
"module: cuda",
"triaged",
"module: determinism",
"actionable"
] | 0
|
COLLABORATOR
|
`histc` for integer inputs is deterministic, yet an error is thrown when it's called in deterministic mode:
```
In [1]: import torch
In [2]: torch.set_deterministic_debug_mode("error")
In [3]: x=torch.randint(10, (100,))
In [4]: x=torch.randint(10, (100,), device="cuda")
In [5]: torch.histc(x)
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Cell In[5], line 1
----> 1 torch.histc(x)
RuntimeError: _histc_cuda does not have a deterministic implementation, but you set 'torch.use_deterministic_algorithms(True)'. You can turn off determinism just for this operation, or you can use the 'warn_only=True' option, if that's acceptable for your application. You can also file an issue at https://github.com/pytorch/pytorch/issues to help us prioritize adding deterministic support for this operation.
```
Possibly this is the case for other histogram-related functions.
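Until the check is relaxed, a possible workaround (sketch, based on the option mentioned in the error message) is to downgrade the determinism check to a warning around the call:
```python
import torch

# Downgrade the determinism check from an error to a warning, so the
# _histc_cuda check above warns instead of raising.
torch.set_deterministic_debug_mode("warn")
x = torch.randint(10, (100,), device="cuda")
torch.histc(x)  # no longer blocked by the determinism check
torch.set_deterministic_debug_mode("error")
```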
cc @ptrblck @msaroufim @eqy @jerryzh168 @mruberry @kurtamohler
| true
|
3,003,408,701
|
Unpack the output code in the standalone_compile
|
oulgen
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151603
* __->__ #151609
* #151768
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,003,383,561
|
Fix linting errors on top of PR #151250
|
Camyll
|
closed
|
[
"oncall: fx",
"module: inductor",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
fix linting errors on top of exported diff [#151250](https://github.com/pytorch/pytorch/pull/151250)
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,003,382,733
|
collect_env: gracefully handle no pip
|
adamjstewart
|
closed
|
[
"open source",
"Merged",
"release notes: python_frontend",
"topic: bug fixes"
] | 3
|
CONTRIBUTOR
|
If pip is not installed:
### Before
```console
> python3 torch/utils/collect_env.py
Collecting environment information...
Traceback (most recent call last):
File "/Users/Adam/pytorch/torch/utils/collect_env.py", line 694, in <module>
main()
~~~~^^
File "/Users/Adam/pytorch/torch/utils/collect_env.py", line 677, in main
output = get_pretty_env_info()
File "/Users/Adam/pytorch/torch/utils/collect_env.py", line 672, in get_pretty_env_info
return pretty_str(get_env_info())
~~~~~~~~~~~~^^
File "/Users/Adam/pytorch/torch/utils/collect_env.py", line 497, in get_env_info
pip_version, pip_list_output = get_pip_packages(run_lambda)
~~~~~~~~~~~~~~~~^^^^^^^^^^^^
File "/Users/Adam/pytorch/torch/utils/collect_env.py", line 450, in get_pip_packages
for line in out.splitlines()
^^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'splitlines'
```
### After
```console
> python3 torch/utils/collect_env.py
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: macOS 15.4 (arm64)
GCC version: Could not collect
Clang version: 20.1.0
CMake version: version 3.31.6
Libc version: N/A
Python version: 3.13.2 (main, Apr 8 2025, 15:27:33) [Clang 17.0.0 (clang-1700.0.13.3)] (64-bit runtime)
Python platform: macOS-15.4-arm64-arm-64bit-Mach-O
Is CUDA available: N/A
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
CPU:
Apple M2 Pro
Versions of relevant libraries:
[pip3] Could not collect
[conda] Could not collect
```
| true
|
3,003,369,011
|
[easy] Update test/dynamo/test_structured_trace.py
|
masnesral
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151606
* #151599
Summary: test/dynamo/test_structured_trace.py is out of date because of some new fields. (I guess the test is disabled?) Bring it up to date.
Test Plan: `python test/dynamo/test_structured_trace.py`
Fixes #149671
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,003,334,440
|
added six and pyyaml to requirements.txt to fix missing module error …
|
Adagedo
|
open
|
[
"triaged",
"open source",
"topic: not user facing"
] | 2
|
NONE
|
This PR adds the six and pyyaml modules to the project’s dependencies to resolve a ModuleNotFoundError during the build process. The six module is required for compatibility between Python 2 and 3, and this change ensures the build process runs smoothly by including it in the environment setup.
| true
|
3,003,333,481
|
[DRAFT] fix issues related to deferred assertion on unabcked floats
|
laithsakka
|
open
|
[
"module: cpu",
"release notes: fx",
"fx",
"ciflow/inductor"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151604
* #151494
* #151492
* #151171
* #151170
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @jerryzh168 @ezyang @SherlockNoMad @EikanWang @wenzhe-nrv
| true
|
3,003,329,802
|
Introduce unsafe way to mark functions as cacheable
|
oulgen
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"module: dynamo",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151603
* #151609
* #151768
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,003,284,533
|
Add dates to pages
|
svekars
|
closed
|
[
"module: docs",
"Merged",
"topic: docs",
"topic: not user facing"
] | 4
|
CONTRIBUTOR
|
cc @sekyondaMeta @AlannaBurke
re: #150873
| true
|
3,003,233,055
|
[dynamic shapes] user-code friendly statically_known_true, has_static_value
|
pianpwk
|
closed
|
[
"Merged",
"ciflow/trunk",
"module: dynamo",
"ciflow/inductor",
"release notes: export"
] | 4
|
CONTRIBUTOR
|
Fixes #151480
Allows `statically_known_true` in user code, and introduces `has_static_value`, which returns True if the input has a static bool/float/int value.
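A hedged usage sketch, assuming a build that includes this change and that the existing `symbolic_shapes` helper is the one being allowed in user code (the exact user-facing entry point may differ):
```python
import torch
from torch.fx.experimental.symbolic_shapes import statically_known_true

@torch.compile
def f(x):
    # statically_known_true returns True only when the condition can be
    # proven at compile time; otherwise it returns False without guarding.
    if statically_known_true(x.shape[0] == 4):
        return x + 1
    return x - 1

print(f(torch.randn(4, 8)))
```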
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,003,225,690
|
[MPS] Allow isin for mixed types
|
malfet
|
closed
|
[
"Merged",
"topic: bug fixes",
"release notes: mps",
"ciflow/mps"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151600
To follow the pattern set by the CPU and CUDA impls: define a common_dtype and optionally cast `elements` and `test_elements` to the common dtype if needed
- Add regression test, though skip it on MacOS-13, as `isin` seems to produce garbage there even for same dtypes
```
>>> import torch
>>> x=torch.arange(4.0, device='mps')
>>> y=torch.arange(1.0, 3.0, device='mps')
>>> x, y, torch.isin(x, y), torch.isin(y, x)
(tensor([0., 1., 2., 3.], device='mps:0'), tensor([1., 2.], device='mps:0'), tensor([False, True, False, False], device='mps:0'), tensor([False, False], device='mps:0'))
>>> torch.__version__
'2.6.0'
```
- Cleanup code a bit
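For reference, a Python-level sketch of the promotion pattern described above (an analogue of the behavior, not the actual MPS kernel change):
```python
import torch

def isin_promoted(elements, test_elements):
    # Cast both inputs to a common dtype before isin, mirroring the
    # common_dtype handling described above.
    common = torch.promote_types(elements.dtype, test_elements.dtype)
    return torch.isin(elements.to(common), test_elements.to(common))
```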
Fixes https://github.com/pytorch/pytorch/issues/151443
| true
|
3,003,223,919
|
[easy] Update test/dynamo/test_utils.py
|
masnesral
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 7
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151606
* __->__ #151599
Summary: test/dynamo/test_utils.py is out of date because of some new dynamo_timed fields. (I guess the test is disabled?) Bring it up to date.
Test Plan: `python test/dynamo/test_utils.py`
Fixes #148093
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,003,218,653
|
Fix uint view copy
|
eellison
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151598
* #151562
Fix for https://github.com/pytorch/pytorch/issues/151156. We have some logic to undo our upcast prior to dtype bitcast. This pr cleans up that logic using dtypes in codegen.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,003,183,109
|
[export] allow partially specifying keys for dynamic shapes dict spec
|
pianpwk
|
closed
|
[
"Merged",
"Reverted",
"ciflow/trunk",
"release notes: export",
"ci-no-td"
] | 14
|
CONTRIBUTOR
|
Fixes #148564
Should help with exporting HF-style models, so users don't have to specify 100 Nones
| true
|
3,003,181,715
|
Add OIDC perms to windows-[build|test] workflows
|
zxiiro
|
open
|
[
"triaged",
"open source",
"ciflow/trunk",
"topic: not user facing",
"ciflow/xpu"
] | 4
|
COLLABORATOR
|
Update workflow to use OIDC authentication to access AWS resources rather than assuming the runner's default role. This is part of the multicloud effort to prepare jobs to support being run in non-AWS clouds where assumption of EC2 instance role cannot be used.
The JWT ID token requires id-token: write in order to create the token for the job.
See: https://docs.github.com/en/actions/security-for-github-actions/security-hardening-your-deployments/configuring-openid-connect-in-cloud-providers#adding-permissions-settings
Ref: pytorch-fdn/multicloud-ci-infra#3
| true
|
3,003,159,448
|
Remove `reinterpret_cast`s with undefined behavior from stable/library.h
|
swolchok
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: cpp",
"ci-no-td"
] | 17
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151595
There is a list of valid uses of `reinterpret_cast` (see https://en.cppreference.com/w/cpp/language/reinterpret_cast), and the use here was not on the list, hence undefined behavior. Implement what we meant using memcpy, which is well-defined.
Differential Revision: [D73200791](https://our.internmc.facebook.com/intern/diff/D73200791/)
| true
|
3,003,156,743
|
[CI][CUDA] Move cu118 distributed pull jobs to cu126, move cu124-sm75 to cu126-sm75
|
nWEIdia
|
open
|
[
"oncall: distributed",
"triaged",
"open source",
"topic: not user facing",
"keep-going"
] | 1
|
COLLABORATOR
|
See: https://github.com/pytorch/pytorch/issues/147383
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @tinglvv @eqy @ptrblck @atalman @malfet
| true
|
3,003,152,340
|
Remove `reinterpret_cast`s with undefined behavior from stable/library.h
|
swolchok
|
closed
|
[
"fb-exported"
] | 5
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151593
There is a list of valid uses of `reinterpret_cast` (see https://en.cppreference.com/w/cpp/language/reinterpret_cast), and the use here was not on the list, hence undefined behavior. Implement what we meant using memcpy, which is well-defined.
Differential Revision: [D73200791](https://our.internmc.facebook.com/intern/diff/D73200791/)
| true
|
3,003,148,325
|
Xcode 16+: duplicate LC_RPATH '@loader_path'
|
adamjstewart
|
open
|
[
"module: build",
"triaged"
] | 12
|
CONTRIBUTOR
|
### 🐛 Describe the bug
I'm trying to build PyTorch 2.6.0 on macOS 15.4 with Xcode 16.3 and Apple Clang 17. After applying @malfet's #151344, I am able to successfully build PyTorch. However, when I try to import PyTorch, I see:
```pycon
>>> import torch
Traceback (most recent call last):
File "<python-input-0>", line 1, in <module>
import torch
File "/Users/Adam/spack/var/spack/environments/default/.spack-env/view/lib/python3.13/site-packages/torch/__init__.py", line 405, in <module>
from torch._C import * # noqa: F403
^^^^^^^^^^^^^^^^^^^^^^
ImportError: dlopen(/Users/Adam/spack/var/spack/environments/default/.spack-env/view/lib/python3.13/site-packages/torch/_C.cpython-313-darwin.so, 0x0002): Library not loaded: @rpath/libtorch_cpu.dylib
Referenced from: <CE7E9E63-33DF-3DBA-A2CB-834984ACDFD2> /Users/Adam/spack/opt/spack/darwin-m2/py-torch-2.6.0-k43tdmdmktetuermjbfrfbkt5ztuwelg/lib/python3.13/site-packages/torch/lib/libtorch_python.dylib
```
According to:
```console
> otool -l /Users/Adam/spack/var/spack/environments/default/.spack-env/view/lib/python3.13/site-packages/torch/_C.cpython-313-darwin.so
...
Load command 13
cmd LC_RPATH
cmdsize 104
path /Users/Adam/spack/opt/spack/darwin-m2/py-torch-2.6.0-k43tdmdmktetuermjbfrfbkt5ztuwelg/lib (offset 12)
Load command 14
cmd LC_RPATH
cmdsize 104
path /Users/Adam/spack/opt/spack/darwin-m2/py-torch-2.6.0-k43tdmdmktetuermjbfrfbkt5ztuwelg/lib64 (offset 12)
```
`<prefix>/lib` is being added to the rpath, but not `<prefix>/lib/pythonX.Y/site-packages/torch/lib`. This worked fine in Xcode 16.0; I'm not sure what would have changed with the linker in Xcode 16.3.
### Versions
Can't run collect_env.py since the PyTorch import fails, but here are some relevant things:
* PyTorch version: 2.6.0
* CUDA: N/A
* ROCM: N/A
* OS: macOS 15.4
* Clang version: 17.0.0
* CMake version: 3.31.6
* Python version: 3.13.2
Also:
* [build log](https://github.com/user-attachments/files/19798869/spack-build-out.txt.gz)
* [build env](https://github.com/user-attachments/files/19798871/spack-build-env.txt)
@haampie has been helping me debug this but so far we haven't figured out a fix
cc @malfet @seemethere
| true
|
3,003,137,625
|
Device Error on vmap
|
zengkaipeng
|
open
|
[
"triaged",
"actionable",
"module: vmap",
"module: functorch"
] | 0
|
NONE
|
### 🐛 Describe the bug
When using `vmap`, if you assign a tensor (or part of it) to a single value (e.g., a scalar float), a device error may occur, saying that you can't assign a CPU tensor to a tensor on `cuda:0`. I'm not sure if this is a bug.
``` Python
import torch
from torch.func import vmap
class Model(torch.nn.Module):
def __init__(self):
super(Model, self).__init__()
self.mod = torch.nn.Linear(4, 4)
def forward(self, x, mask, z):
x[mask] = -1
x = self.mod(x * z)
x = torch.randn(4).to('cuda:0')
mask = torch.randn(4).to('cuda:0') > 0
model = Model().to('cuda:0')
z = torch.randn(5)
result = vmap(lambda v: model(x, mask, v), in_dims=0)(z)
```
and I got

### Versions
Collecting environment information...
PyTorch version: 2.2.0
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.8.20 (default, Oct 3 2024, 15:24:27) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.17
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce MX250
Nvidia driver version: 572.83
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 8
On-line CPU(s) list: 0-7
Vendor ID: GenuineIntel
Model name: Intel(R) Core(TM) i7-10510U CPU @ 1.80GHz
CPU family: 6
Model: 142
Thread(s) per core: 2
Core(s) per socket: 4
Socket(s): 1
Stepping: 12
BogoMIPS: 4608.01
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology cpuid pni pclmulqdq vmx ssse3 fma cx16 pdcm pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi ept vpid ept_ad fsgsbase bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt xsaveopt xsavec xgetbv1 xsaves md_clear flush_l1d arch_capabilities
Virtualization: VT-x
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 128 KiB (4 instances)
L1i cache: 128 KiB (4 instances)
L2 cache: 1 MiB (4 instances)
L3 cache: 8 MiB (1 instance)
Vulnerability Gather data sampling: Unknown: Dependent on hypervisor status
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Unknown: Dependent on hypervisor status
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.24.3
[pip3] torch==2.2.0
[pip3] torchaudio==2.2.0
[pip3] torchvision==0.17.0
[pip3] triton==2.2.0
[conda] blas 1.0 mkl
[conda] cuda-cudart 12.1.105 0 nvidia
[conda] cuda-cupti 12.1.105 0 nvidia
[conda] cuda-libraries 12.1.0 0 nvidia
[conda] cuda-nvrtc 12.1.105 0 nvidia
[conda] cuda-nvtx 12.1.105 0 nvidia
[conda] cuda-opencl 12.8.55 0 nvidia
[conda] cuda-runtime 12.1.0 0 nvidia
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] libcublas 12.1.0.26 0 nvidia
[conda] libcufft 11.0.2.4 0 nvidia
[conda] libcurand 10.3.9.55 0 nvidia
[conda] libcusolver 11.4.4.55 0 nvidia
[conda] libcusparse 12.0.2.55 0 nvidia
[conda] libnvjitlink 12.1.105 0 nvidia
[conda] mkl 2023.1.0 h213fc3f_46344
[conda] mkl-service 2.4.0 py38h5eee18b_1
[conda] mkl_fft 1.3.8 py38h5eee18b_0
[conda] mkl_random 1.2.4 py38hdb19cb5_0
[conda] numpy 1.24.3 py38hf6e8229_1
[conda] numpy-base 1.24.3 py38h060ed82_1
[conda] pytorch 2.2.0 py3.8_cuda12.1_cudnn8.9.2_0 pytorch
[conda] pytorch-cuda 12.1 ha16c6d3_6 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchaudio 2.2.0 py38_cu121 pytorch
[conda] torchtriton 2.2.0 py38 pytorch
[conda] torchvision 0.17.0 py38_cu121 pytorch
cc @zou3519 @Chillee @samdow @kshitij12345
| true
|
3,003,108,264
|
[dynamic shapes] be less aggressive with runtime assert CSE for bounds
|
pianpwk
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: fx",
"fx",
"module: dynamo",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
Fixes #150540
Fixes #147772
Stops trying to CSE bound expressions and only does exact deduplication for runtime asserts. Adds test cases checking that AOTAutograd doesn't raise data-dependent errors when retracing because it no longer sees the asserts.
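Not from the PR, but a loose, hedged sketch of the kind of program that produces bound runtime asserts (via `torch._check` on a data-dependent value); whether this exact snippet hits the original retracing failure is an assumption:
```python
import torch

class M(torch.nn.Module):
    def forward(self, lengths):
        n = lengths.item()
        # These bounds become runtime asserts in the exported graph; with this change
        # they are only deduplicated when exactly equal rather than CSE'd.
        torch._check(n >= 2)
        torch._check(n <= 64)
        return torch.zeros(n)

ep = torch.export.export(M(), (torch.tensor(8),))
print(ep.graph)
```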
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,003,103,807
|
[inductor] alignment assertion failure on certain GGUF models in ComfyUI
|
StrongerXi
|
closed
|
[
"triaged",
"oncall: pt2",
"module: inductor"
] | 6
|
CONTRIBUTOR
|
### 🐛 Describe the bug
It's a bit hard to get a standalone repro without using ComfyUI. But here's what I have:
1. Backtrace: [P1787488454](https://www.internalfb.com/intern/paste/P1787488454)
2. [tlparse](https://manifold.edge.x2p.facebook.net/v0/read/tree/logs/.tmppw0G7V/index.html?bucketName=tlparse_reports&apiKey=tlparse_reports-key&withPayload=1&timeoutMsec=10000).
I've confirmed that under backend="aot_eager" it passes.
The error is observed by other users: https://github.com/city96/ComfyUI-GGUF/issues/257.
### Error logs
_No response_
### Versions
main daf2ccf0235, python 3.12.5
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @aakhundov
| true
|
3,003,100,785
|
Generalize installing to nonlocals
|
Lucaskabela
|
closed
|
[
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151588
* #151135
* #151134
* #151133
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,003,045,027
|
[ROCm] Use original drm repo
|
clee2000
|
closed
|
[
"module: rocm",
"topic: not user facing",
"ciflow/rocm"
] | 2
|
CONTRIBUTOR
|
Reverts https://github.com/pytorch/pytorch/pull/149380/files
It is past the date mentioned in the comment, and the rocm libtorch and manywheel image builds started failing for me today
https://github.com/pytorch/pytorch/actions/runs/14504543825/job/40738794246#step:3:18481
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
3,003,040,026
|
Deploy to some faster PyPi mirror
|
iirekm
|
closed
|
[
"needs reproduction"
] | 3
|
NONE
|
`poetry add torch` is insanely slow (despite a good internet connection), mostly because of downloading the CUDA libraries. `poetry add torch` has already been running for 2 hours on my laptop!
The issue seems to get worse from version to version (the libraries get bigger and/or PyPI gets slower).
It is only fast in the cloud (e.g. GitHub Actions), probably because cloud providers have set up their own private PyPI mirrors, which aren't available to private users at home.
Could you deploy torch and all required dependencies to faster PyPI mirrors and provide links to them?
| true
|
3,003,035,648
|
Add OIDC permissions to linux-test workflow
|
zxiiro
|
open
|
[
"triaged",
"open source",
"ciflow/trunk",
"topic: not user facing",
"ciflow/periodic",
"ciflow/inductor",
"ciflow/inductor-perf-compare",
"ciflow/slow",
"ciflow/torchbench",
"ciflow/inductor-micro-benchmark",
"ciflow/s390",
"ciflow/inductor-micro-benchmark-cpu-x86",
"ciflow/op-benchmark"
] | 1
|
COLLABORATOR
|
Update workflow to use OIDC authentication to access AWS resources rather than assuming the runner's default role. This is part of the multicloud effort to prepare jobs to support being run in non-AWS clouds where assumption of EC2 instance role cannot be used.
The JWT ID token requires `id-token: write` in order to create the token for the job.
See: https://docs.github.com/en/actions/security-for-github-actions/security-hardening-your-deployments/configuring-openid-connect-in-cloud-providers#adding-permissions-settings
Ref: pytorch-fdn/multicloud-ci-infra#3
| true
|
3,003,019,484
|
Not working with python3.13t
|
cornusandu
|
closed
|
[
"module: binaries",
"module: windows",
"module: python version"
] | 3
|
NONE
|
### 🐛 Describe the bug
# Operating System
* OS: Windows 11
* Laptop: Lenovo
* Python Build Flags: ``--disable-gil``
* Cuda: None (non-nvidia gpu)
* PyTorch Version: 2.6.0+cpu
* WSL: Ubuntu (Linux Subsystem for Windows)
# Issue
I can't seem to run any code that imports PyTorch using `python3.13t`. This really sucks, because I need to use a slow AI a lot, and I want to be able to do it free-threaded, in parallel (I'm not waiting 40 minutes after every change).
Here's a CLI script and the error I get:
```bash
PS D:\Python\StockEmulateAI> python -c "import torch; print(torch.__version__)"
2.6.0+cpu
PS D:\Python\StockEmulateAI> python3.13t .\main.py
Traceback (most recent call last):
File "D:\Python\StockEmulateAI\main.py", line 1, in <module>
import torch
File "C:\Users\Bogdan\AppData\Local\Programs\Python\Python313\Lib\site-packages\torch\__init__.py", line 990, in <module>
raise ImportError(
...<14 lines>...
) from None
ImportError: Failed to load PyTorch C extensions:
It appears that PyTorch has loaded the `torch/_C` folder
of the PyTorch repository rather than the C extensions which
are expected in the `torch._C` namespace. This can occur when
using the `install` workflow. e.g.
$ python setup.py install && python -c "import torch"
This error can generally be solved using the `develop` workflow
$ python setup.py develop && python -c "import torch" # This should succeed
or by running Python from a different directory.
```
# More Details
Yes, I know torch 2.6 is not available for `python3.13t` on Windows as of right now, but I **really** need a way of using free-threaded code with PyTorch. Any suggestions are appreciated.
> [!IMPORTANT]
> WSL was **not** used for running these CLI scripts. I used PowerShell 7.5.0
cc @seemethere @malfet @osalpekar @atalman @peterjc123 @mszhanyi @skyline75489 @nbcsm @iremyux @Blackhex
| true
|
3,003,018,945
|
[CUDA][NVTX] Move nvtx3 code from cmake/public/cuda.cmake to cmake/Dependencies.cmake
|
nWEIdia
|
closed
|
[
"open source",
"Merged",
"ciflow/binaries",
"ciflow/trunk",
"release notes: build",
"topic: bc breaking"
] | 5
|
COLLABORATOR
|
Fixes [#147220]
Context: In the CUDA NVTX world, there are NVTX v2 and NVTX v3. As announced in CUDA release notes, e.g. [CUDA 12.8 Update 1]( https://docs.nvidia.com/cuda/cuda-toolkit-release-notes/index.html#deprecated-or-dropped-operating-systems) "`NVTX v2 is deprecated. To migrate to NVTX v3. Change your code from: #include <nvtoolsext.h> to #include "nvtx3/nvtoolsext.h`". This header is included in the toolkit."
On the PyTorch side, the TORCH_CUDA_USE_NVTX3 compile-time macro is set to true when nvtx3 is found, which happens (most of the time) in one of two cases: 1) USE_SYSTEM_NVTX=0 (the default), where the torch build process automatically looks for nvtx3 in pytorch/third_party/nvtx; this is the most common case. 2) USE_SYSTEM_NVTX=1, where nvtx3 is found in the installed CUDA toolkit (e.g. CUDA 12.8 and even some earlier CUDA versions).
As described in #147220, the reason it can find pytorch/third_party/nvtx is because it used
https://github.com/pytorch/pytorch/blob/6f035d8462e43b1c678e5f334d52d9df0e00e6bf/cmake/public/cuda.cmake#L176
note the "PROJECT_SOURCE_DIR" usage in [pytorch/cmake/public/cuda.cmake](https://github.com/pytorch/pytorch/blob/6f035d8462e43b1c678e5f334d52d9df0e00e6bf/cmake/public/cuda.cmake#L176)
Before this PR:
The PyTorch build succeeds in finding nvtx3 through the process described above, so everything is fine. But downstream projects like torchvision *can* fail, and by default would fail, because all of the following happen:
1) USE_SYSTEM_NVTX=0 is used (most likely, since it is the default)
2) NVTX v2 can no longer be found (e.g. in future CUDA versions, because the deprecation would eventually become a removal)
3) TorchVision cannot find NVTX3 either, because torchvision invokes [pytorch/cmake/public/cuda.cmake] but PROJECT_SOURCE_DIR is no longer the pytorch source tree but the torchvision one!
4) One workaround is to set "USE_SYSTEM_NVTX=1", but users have to set this explicitly and do the plumbing work
After this PR:
PyTorch can still find nvtx3 because the part of the code that finds nvtx3 is just moved to a new place. The CI logs are showing it being able to find nvtx3. e.g. [this job](https://productionresultssa14.blob.core.windows.net/actions-results/47f8efaa-0afe-4e1f-bc94-0a82629941cb/workflow-job-run-dc8201b1-845b-5da1-a6ea-d3360ce1b508/logs/job/job-logs.txt?rsct=text%2Fplain&se=2025-04-18T20%3A38%3A05Z&sig=yMd6egC%2Banl3lR%2BudXFX18bfUH189z0DTGLtscHQJwY%3D&ske=2025-04-19T06%3A21%3A45Z&skoid=ca7593d4-ee42-46cd-af88-8b886a2f84eb&sks=b&skt=2025-04-18T18%3A21%3A45Z&sktid=398a6654-997b-47e9-b12b-9515b896b4de&skv=2025-01-05&sp=r&spr=https&sr=b&st=2025-04-18T20%3A28%3A00Z&sv=2025-01-05), which reads "`Found nvtx3: C:/actions-runner/_work/pytorch/pytorch/pytorch/third_party/NVTX/c/include`"
For torchvision, it still invokes [pytorch/cmake/public/cuda.cmake], but that file no longer tries to find nvtx3, and torchvision itself does not use nvtx3 (if it does in the future, it can set USE_SYSTEM_NVTX=1 by default). So it avoids the error reported in [#147220]
cc @xwang233 @tinglvv @eqy @ptrblck @atalman @malfet
| true
|
3,002,987,644
|
[export] Warn users when 0/1 specialization happens
|
justinchuby
|
closed
|
[
"oncall: pt2",
"oncall: export"
] | 6
|
COLLABORATOR
|
It can be confusing for users when they specify an axis to be dynamic only to find it specialized without knowing what went on. Would it be a good idea to emit a warning suggesting users change the example dim size to >1? This usually happens when one wants a dynamic batch size but provides an example input with batch_size=1.
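Not from the issue, but a minimal hedged sketch of the situation described above, assuming (as the issue states) that a size-1 example dim can end up silently specialized; the module is made up:
```python
import torch

class M(torch.nn.Module):
    def forward(self, x):
        return x * 2

DYN = torch.export.Dim.DYNAMIC
# Batch axis requested as dynamic, but the example input has batch_size=1, so the
# exported program can come out specialized to batch_size=1 with nothing telling
# the user why; an example input with batch_size >= 2 avoids this.
ep = torch.export.export(M(), (torch.randn(1, 4),), dynamic_shapes={"x": {0: DYN}})
print([n.meta["val"].shape for n in ep.graph.nodes if n.op == "placeholder"])
```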
Example issue raised
- https://github.com/pytorch/pytorch/issues/151430
@tugsbayasgalan @angelayi @xadupre @titaiwangms
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4
| true
|
3,002,908,098
|
Add OIDC permissions to linux-build workflow
|
zxiiro
|
open
|
[
"module: rocm",
"triaged",
"open source",
"ciflow/binaries",
"ciflow/trunk",
"topic: not user facing",
"ciflow/periodic",
"ciflow/nightly",
"ciflow/inductor",
"ciflow/inductor-perf-test-nightly",
"ciflow/inductor-perf-compare",
"ciflow/slow",
"ciflow/rocm",
"ciflow/xpu",
"ciflow/torchbench",
"ciflow/inductor-micro-benchmark",
"ciflow/inductor-rocm",
"ciflow/s390",
"ciflow/inductor-periodic",
"ciflow/inductor-micro-benchmark-cpu-x86",
"ciflow/op-benchmark",
"ciflow/rocm-mi300",
"ciflow/inductor-perf-test-nightly-rocm",
"ciflow/periodic-rocm-mi300"
] | 2
|
COLLABORATOR
|
Update workflow to use OIDC authentication to access AWS resources rather than assuming the runner's default role. This is part of the multicloud effort to prepare jobs to support being run in non-AWS clouds where assumption of EC2 instance role cannot be used.
The JWT ID token requires `id-token: write` in order to create the token for the job.
See: https://docs.github.com/en/actions/security-for-github-actions/security-hardening-your-deployments/configuring-openid-connect-in-cloud-providers#adding-permissions-settings
Ref: pytorch-fdn/multicloud-ci-infra#3
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
3,002,885,860
|
[bazel] Fix aten generator directory path
|
jhance
|
open
|
[
"triaged",
"open source",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
The path we are passing to the aten directory is only correct in certain situations. If pytorch is an external repository, the path needs to be prefixed with external/pytorch. To make handling this as simple as possible, I used a sigil file from the directory to compute the path instead of hard-coding it. For the sigil file I used a CMakeLists.txt, as I figured that's one of the least likely files to go away.
An alternative approach might have been to find the path based on sources, but that seems more complex to implement since some of the sources might be in subdirectories.
This fixes one of the errors in the comments on [1] (by smallsunsun1) and makes progress on addressing [2].
[1] https://github.com/pytorch/pytorch/issues/35316
[2] https://github.com/pytorch/pytorch/issues/112903
| true
|
3,002,876,059
|
[BE] Consolidate PR labeling logic
|
janeyx99
|
open
|
[
"module: bootcamp",
"triaged",
"better-engineering",
"actionable",
"module: devx"
] | 3
|
CONTRIBUTOR
|
https://github.com/pytorch/test-infra/blob/main/torchci/lib/bot/autoLabelBot.ts#L37 and https://github.com/pytorch/pytorch/blob/main/.github/labeler.yml share similar logic and can be combined. I'm thinking https://github.com/pytorch/pytorch/blob/main/.github/labeler.yml is more easily searchable for PyTorch contributors, so combining the logic there would be better.
cc @ZainRizvi @kit1980 @huydhn @clee2000
| true
|
3,002,876,005
|
[bazel] Fix unusual reference to cpuinfo workspace
|
jhance
|
open
|
[
"triaged",
"open source",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
It seems like this reference to cpuinfo is assuming that cpuinfo is a submodule of the project, which is not necessarily going to work when pytorch is pulled in as an external workspace.
Fix it to be a more traditional label using the @org_pytorch_cpuinfo external workspace.
Makes progress on https://github.com/pytorch/pytorch/issues/112903.
| true
|
3,002,863,377
|
DISABLED test_builtin_score_mods_different_block_size_float16_score_mod5_BLOCK_SIZE3_cuda_float16 (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 1
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_builtin_score_mods_different_block_size_float16_score_mod5_BLOCK_SIZE3_cuda_float16&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40724496542).
Over the past 3 hours, it has been determined flaky in 5 workflow(s) with 10 failures and 5 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_builtin_score_mods_different_block_size_float16_score_mod5_BLOCK_SIZE3_cuda_float16`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/inductor/test_flex_attention.py", line 1201, in test_builtin_score_mods_different_block_size
self.run_test(score_mod, dtype, block_mask=block_mask, device=device)
File "/var/lib/jenkins/workspace/test/inductor/test_flex_attention.py", line 491, in run_test
golden_out.backward(backward_grad.to(torch.float64))
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_tensor.py", line 648, in backward
torch.autograd.backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/__init__.py", line 354, in backward
_engine_run_backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/graph.py", line 824, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/function.py", line 307, in apply
return user_fn(self, *args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 679, in backward
) = flex_attention_backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 132, in __call__
return super().__call__(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 490, in __call__
return wrapper()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 486, in wrapper
return self.dispatch(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 346, in dispatch
return kernel(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 320, in maybe_run_autograd
return self(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 132, in __call__
return super().__call__(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 490, in __call__
return wrapper()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 486, in wrapper
return self.dispatch(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 346, in dispatch
return kernel(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 881, in sdpa_dense_backward
grad_scores = torch.where(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/_trace_wrapped_higher_order_op.py", line 142, in __torch_function__
return func(*args, **(kwargs or {}))
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1024.00 MiB. GPU 0 has a total capacity of 22.05 GiB of which 356.12 MiB is free. Including non-PyTorch memory, this process has 21.69 GiB memory in use. Of the allocated memory 6.75 GiB is allocated by PyTorch, and 14.68 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
To execute this test, run the following from the base repo dir:
python test/inductor/test_flex_attention.py TestFlexAttentionCUDA.test_builtin_score_mods_different_block_size_float16_score_mod5_BLOCK_SIZE3_cuda_float16
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,002,863,104
|
DISABLED test_builtin_score_mods_float32_score_mod3_cuda_float32 (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 1
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_builtin_score_mods_float32_score_mod3_cuda_float32&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40722250090).
Over the past 3 hours, it has been determined flaky in 5 workflow(s) with 10 failures and 5 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_builtin_score_mods_float32_score_mod3_cuda_float32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,002,862,989
|
DISABLED test_non_equal_head_dims_score_mod1_float32_head_dims1_cuda_float32 (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 1
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_non_equal_head_dims_score_mod1_float32_head_dims1_cuda_float32&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40716009321).
Over the past 3 hours, it has been determined flaky in 5 workflow(s) with 10 failures and 5 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_non_equal_head_dims_score_mod1_float32_head_dims1_cuda_float32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/inductor/test_flex_attention.py", line 2159, in test_non_equal_head_dims
self.run_test(score_mod, dtype, B, H, S, qk_d, B, H, S, V_D=v_d, device=device)
File "/var/lib/jenkins/workspace/test/inductor/test_flex_attention.py", line 491, in run_test
golden_out.backward(backward_grad.to(torch.float64))
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_tensor.py", line 648, in backward
torch.autograd.backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/__init__.py", line 354, in backward
_engine_run_backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/graph.py", line 824, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/function.py", line 307, in apply
return user_fn(self, *args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 679, in backward
) = flex_attention_backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 132, in __call__
return super().__call__(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 490, in __call__
return wrapper()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 486, in wrapper
return self.dispatch(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 346, in dispatch
return kernel(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 320, in maybe_run_autograd
return self(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 132, in __call__
return super().__call__(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 490, in __call__
return wrapper()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 486, in wrapper
return self.dispatch(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 346, in dispatch
return kernel(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 873, in sdpa_dense_backward
grad_scores = grad_scores * scale
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1024.00 MiB. GPU 0 has a total capacity of 22.05 GiB of which 128.12 MiB is free. Including non-PyTorch memory, this process has 21.91 GiB memory in use. Of the allocated memory 6.83 GiB is allocated by PyTorch, and 14.77 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
To execute this test, run the following from the base repo dir:
python test/inductor/test_flex_attention.py TestFlexAttentionCUDA.test_non_equal_head_dims_score_mod1_float32_head_dims1_cuda_float32
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,002,862,986
|
DISABLED test_non_pow_2_headdim_head_dim_94_float16_cuda_float16 (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 1
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_non_pow_2_headdim_head_dim_94_float16_cuda_float16&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40716009321).
Over the past 3 hours, it has been determined flaky in 5 workflow(s) with 10 failures and 5 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_non_pow_2_headdim_head_dim_94_float16_cuda_float16`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,002,862,770
|
DISABLED test_builtin_score_mods_dynamic_float16_score_mask_mod2_cuda_float16 (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 1
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_builtin_score_mods_dynamic_float16_score_mask_mod2_cuda_float16&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40724496542).
Over the past 3 hours, it has been determined flaky in 5 workflow(s) with 10 failures and 5 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_builtin_score_mods_dynamic_float16_score_mask_mod2_cuda_float16`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/inductor/test_flex_attention.py", line 1156, in test_builtin_score_mods_dynamic
self.run_dynamic_test(score_mask_mod, dtype, device=device)
File "/var/lib/jenkins/workspace/test/inductor/test_flex_attention.py", line 831, in run_dynamic_test
golden_out1.backward(backward_grad1.to(torch.float64))
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_tensor.py", line 648, in backward
torch.autograd.backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/__init__.py", line 354, in backward
_engine_run_backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/graph.py", line 824, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/function.py", line 307, in apply
return user_fn(self, *args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 679, in backward
) = flex_attention_backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 132, in __call__
return super().__call__(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 490, in __call__
return wrapper()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 486, in wrapper
return self.dispatch(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 346, in dispatch
return kernel(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 320, in maybe_run_autograd
return self(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 132, in __call__
return super().__call__(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 490, in __call__
return wrapper()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 486, in wrapper
return self.dispatch(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 346, in dispatch
return kernel(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 870, in sdpa_dense_backward
grad_scores, _, _, _, _, *grad_score_mod_captured = joint_score_mod(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/apis.py", line 202, in wrapped
return vmap_impl(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 334, in vmap_impl
return _flat_vmap(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 484, in _flat_vmap
batched_outputs = func(*batched_inputs, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/apis.py", line 202, in wrapped
return vmap_impl(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 334, in vmap_impl
return _flat_vmap(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 484, in _flat_vmap
batched_outputs = func(*batched_inputs, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/apis.py", line 202, in wrapped
return vmap_impl(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 334, in vmap_impl
return _flat_vmap(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 484, in _flat_vmap
batched_outputs = func(*batched_inputs, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/apis.py", line 202, in wrapped
return vmap_impl(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 334, in vmap_impl
return _flat_vmap(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 484, in _flat_vmap
batched_outputs = func(*batched_inputs, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/graph_module.py", line 833, in call_wrapped
return self._wrapped_call(self, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/graph_module.py", line 409, in __call__
raise e
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/graph_module.py", line 396, in __call__
return super(self.cls, obj).__call__(*args, **kwargs) # type: ignore[misc]
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
File "<eval_with_key>.1098 from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/experimental/proxy_tensor.py:1265 in wrapped", line 7, in forward
mul_2 = torch.ops.aten.mul.Tensor(arg5_1, arg0_1); arg5_1 = arg0_1 = None
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 795, in __call__
return self._op(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/_trace_wrapped_higher_order_op.py", line 142, in __torch_function__
return func(*args, **(kwargs or {}))
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 795, in __call__
return self._op(*args, **kwargs)
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1024.00 MiB. GPU 0 has a total capacity of 22.05 GiB of which 4.12 MiB is free. Including non-PyTorch memory, this process has 22.03 GiB memory in use. Of the allocated memory 6.73 GiB is allocated by PyTorch, and 15.04 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
To execute this test, run the following from the base repo dir:
python test/inductor/test_flex_attention.py TestFlexAttentionCUDA.test_builtin_score_mods_dynamic_float16_score_mask_mod2_cuda_float16
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,002,862,658
|
DISABLED test_non_equal_head_dims_score_mod4_bfloat16_head_dims0_cuda_bfloat16 (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 1
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_non_equal_head_dims_score_mod4_bfloat16_head_dims0_cuda_bfloat16&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40723194328).
Over the past 3 hours, it has been determined flaky in 5 workflow(s) with 10 failures and 5 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_non_equal_head_dims_score_mod4_bfloat16_head_dims0_cuda_bfloat16`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/inductor/test_flex_attention.py", line 2159, in test_non_equal_head_dims
self.run_test(score_mod, dtype, B, H, S, qk_d, B, H, S, V_D=v_d, device=device)
File "/var/lib/jenkins/workspace/test/inductor/test_flex_attention.py", line 491, in run_test
golden_out.backward(backward_grad.to(torch.float64))
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_tensor.py", line 648, in backward
torch.autograd.backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/__init__.py", line 354, in backward
_engine_run_backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/graph.py", line 824, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/function.py", line 307, in apply
return user_fn(self, *args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 679, in backward
) = flex_attention_backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 132, in __call__
return super().__call__(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 490, in __call__
return wrapper()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 486, in wrapper
return self.dispatch(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 346, in dispatch
return kernel(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 320, in maybe_run_autograd
return self(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 132, in __call__
return super().__call__(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 490, in __call__
return wrapper()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 486, in wrapper
return self.dispatch(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 346, in dispatch
return kernel(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 873, in sdpa_dense_backward
grad_scores = grad_scores * scale
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1024.00 MiB. GPU 0 has a total capacity of 22.05 GiB of which 776.12 MiB is free. Including non-PyTorch memory, this process has 21.28 GiB memory in use. Of the allocated memory 6.78 GiB is allocated by PyTorch, and 13.85 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
To execute this test, run the following from the base repo dir:
python test/inductor/test_flex_attention.py TestFlexAttentionCUDA.test_non_equal_head_dims_score_mod4_bfloat16_head_dims0_cuda_bfloat16
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,002,862,564
|
DISABLED test_builtin_score_mods_different_block_size_float32_score_mod5_BLOCK_SIZE3_cuda_float32 (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 2
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_builtin_score_mods_different_block_size_float32_score_mod5_BLOCK_SIZE3_cuda_float32&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40716009321).
Over the past 3 hours, it has been determined flaky in 5 workflow(s) with 10 failures and 5 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_builtin_score_mods_different_block_size_float32_score_mod5_BLOCK_SIZE3_cuda_float32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/inductor/test_flex_attention.py", line 1201, in test_builtin_score_mods_different_block_size
self.run_test(score_mod, dtype, block_mask=block_mask, device=device)
File "/var/lib/jenkins/workspace/test/inductor/test_flex_attention.py", line 491, in run_test
golden_out.backward(backward_grad.to(torch.float64))
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_tensor.py", line 648, in backward
torch.autograd.backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/__init__.py", line 354, in backward
_engine_run_backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/graph.py", line 824, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/function.py", line 307, in apply
return user_fn(self, *args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 679, in backward
) = flex_attention_backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 132, in __call__
return super().__call__(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 490, in __call__
return wrapper()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 486, in wrapper
return self.dispatch(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 346, in dispatch
return kernel(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 320, in maybe_run_autograd
return self(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 132, in __call__
return super().__call__(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 490, in __call__
return wrapper()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 486, in wrapper
return self.dispatch(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 346, in dispatch
return kernel(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 881, in sdpa_dense_backward
grad_scores = torch.where(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/_trace_wrapped_higher_order_op.py", line 142, in __torch_function__
return func(*args, **(kwargs or {}))
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1024.00 MiB. GPU 0 has a total capacity of 22.05 GiB of which 492.12 MiB is free. Including non-PyTorch memory, this process has 21.56 GiB memory in use. Of the allocated memory 6.77 GiB is allocated by PyTorch, and 14.52 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
To execute this test, run the following from the base repo dir:
python test/inductor/test_flex_attention.py TestFlexAttentionCUDA.test_builtin_score_mods_different_block_size_float32_score_mod5_BLOCK_SIZE3_cuda_float32
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,002,862,456
|
DISABLED test_builtin_score_mods_float16_score_mod6_cuda_float16 (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 1
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_builtin_score_mods_float16_score_mod6_cuda_float16&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40722250090).
Over the past 3 hours, it has been determined flaky in 5 workflow(s) with 10 failures and 5 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_builtin_score_mods_float16_score_mod6_cuda_float16`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,002,856,525
|
Add automatic categorization for release notes: inductor (aoti)
|
janeyx99
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 7
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151569
* #151453
| true
|
3,002,846,915
|
[Experiment] Templaterize _weight_int4pack_mm_cuda
|
desertfire
|
open
|
[
"release notes: cuda",
"topic: not user facing",
"ciflow/inductor"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151568
Summary: Demonstrate how to templaterize an eager op implementation to make it sharable between AtenTensor and SlimTensor
| true
|
3,002,827,924
|
Add OIDC permissions to docs workflow
|
zxiiro
|
open
|
[
"triaged",
"open source",
"ciflow/trunk",
"topic: not user facing",
"ciflow/nightly"
] | 3
|
COLLABORATOR
|
Update workflow to use OIDC authentication to access AWS resources rather than assuming the runner's default role. This is part of the multicloud effort to prepare jobs to support being run in non-AWS clouds where assumption of EC2 instance role cannot be used.
The JWT ID token requires `id-token: write` in order to create the token for the job.
See: https://docs.github.com/en/actions/security-for-github-actions/security-hardening-your-deployments/configuring-openid-connect-in-cloud-providers#adding-permissions-settings
Ref: pytorch-fdn/multicloud-ci-infra#3
| true
|
3,002,734,780
|
[BE][Easy]: Normalize Dim typing in torch distributed
|
Skylion007
|
closed
|
[
"oncall: distributed",
"open source",
"better-engineering",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 6
|
COLLABORATOR
|
Improve typing using prims_common dtypes
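Not from the PR, but a tiny hedged sketch of the kind of normalization meant here, assuming the shared alias in torch._prims_common (DimsType) is the one being reused:
```python
# Instead of re-spelling Union[int, List[int], Tuple[int, ...]] locally,
# annotate dim arguments with the shared alias from torch._prims_common.
from torch._prims_common import DimsType

def reduce_over(dims: DimsType) -> None:
    ...
```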
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
3,002,693,852
|
[BE][Easy]: Dedupe a TypeAlias in PrimsCommon
|
Skylion007
|
closed
|
[
"open source",
"better-engineering",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3
|
COLLABORATOR
|
Replaces a duplicate TypeAlias with a reference to the existing global constant.
| true
|
3,002,691,412
|
run_decompositions fails with torch.scan
|
xadupre
|
open
|
[
"oncall: pt2",
"oncall: export"
] | 0
|
COLLABORATOR
|
### 🐛 Describe the bug
``run_decompositions`` fails when a shape depends on the data.
```python
import torch


def dummy_loop(padded: torch.Tensor, pos: torch.Tensor):
    copy = torch.zeros(padded.shape)
    for i in range(pos.shape[0]):
        p = pos[i]
        copy[i, :p] = padded[i, :p]
    return copy


def dummy_loop_with_scan(padded: torch.Tensor, pos: torch.Tensor):
    def pad_row(padded, p):
        row = torch.zeros((padded.shape[0],))
        torch._check(p.item() > 0)
        torch._check(p.item() < padded.shape[0])
        # this check is not always true, we add it anyway to make this dimension >= 2
        # and avoid raising an exception about dynamic dimension in {0, 1}
        if torch.compiler.is_exporting():
            torch._check(p.item() > 1)
        row[: p.item()] = padded[: p.item()]
        return (row,)

    return torch.ops.higher_order.scan(
        pad_row,
        [],
        [padded, pos],
        [],
    )


def select_when_exporting(f, f_scan):
    return f_scan if torch.compiler.is_exporting() else f


class Model(torch.nn.Module):
    def forward(self, images, position):
        return select_when_exporting(dummy_loop, dummy_loop_with_scan)(
            images, position
        )


model = Model()
x = torch.randn((5, 6))
y = torch.arange(5, dtype=torch.int64) + 1
expected = model(x, y)

DYN = torch.export.Dim.DYNAMIC
ep = torch.export.export(
    model,
    (x, y),
    dynamic_shapes={"images": {0: DYN, 1: DYN}, "position": {0: DYN}},
    strict=False,
)
ep.run_decompositions()
```
Error:
```
File "torch/fx/experimental/symbolic_shapes.py", line 7061, in _evaluate_expr
raise self._make_data_dependent_error(
torch.fx.experimental.symbolic_shapes.GuardOnDataDependentSymNode: Could not guard on data-dependent expression u1 < 0 (unhinted: u1 < 0). (Size-like symbols: none)
```
### Versions
```
Collecting environment information...
PyTorch version: 2.8.0.dev20250416+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.31.6
Libc version: glibc-2.35
Python version: 3.12.9 (main, Feb 5 2025, 08:49:00) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.6.68
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4060 Laptop GPU
Nvidia driver version: 538.92
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.3.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 20
On-line CPU(s) list: 0-19
Vendor ID: GenuineIntel
Model name: 13th Gen Intel(R) Core(TM) i7-13800H
CPU family: 6
Model: 186
Thread(s) per core: 2
Core(s) per socket: 10
Socket(s): 1
Stepping: 2
BogoMIPS: 5836.79
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology tsc_reliable nonstop_tsc cpuid pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves avx_vnni umip waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize flush_l1d arch_capabilities
Virtualization: VT-x
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 480 KiB (10 instances)
L1i cache: 320 KiB (10 instances)
L2 cache: 12.5 MiB (10 instances)
L3 cache: 24 MiB (1 instance)
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] model-explorer-onnx==0.3.4
[pip3] mypy==1.15.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==2.1.3
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.26.2
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] onnx==1.19.0
[pip3] onnx-array-api==0.3.0
[pip3] onnx-extended==0.4.0
[pip3] onnxruntime_extensions==0.13.0
[pip3] onnxruntime-genai-cuda==0.6.0
[pip3] onnxruntime-training==1.22.0+cu126
[pip3] onnxscript==0.3.0.dev20250301
[pip3] optree==0.14.1
[pip3] pytorch-triton==3.3.0+git96316ce5
[pip3] torch==2.8.0.dev20250416+cu126
[pip3] torch_geometric==2.4.0
[pip3] torchaudio==2.6.0.dev20250416+cu126
[pip3] torchmetrics==1.6.2
[pip3] torchvision==0.22.0.dev20250416+cu126
[pip3] triton==3.2.0
[conda] Could not collect
```
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4
| true
|
3,002,688,358
|
Lift guard checking logic to AOTAutogradCache
|
jamesjwu
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"module: dynamo",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151691
* __->__ #151563
This somewhat complicated PR does a few things:
- It separates out a lot of the guard checking logic into its own class, GuardedCache[T]
- It adds a new `check_guard_hit` lambda to FXGraphCache._lookup_graph, which allows callers to define their own guard checking logic (see the sketch after this list)
- It then uses these two combined parts to lift guard checking to AOTAutogradCache. This means that AOTAutogradCache stores its own guard expressions and evaluates them.
- FXGraphCache's guard checking logic is completely unchanged, just refactored. As part of the work, I'm able to extend a bit of the logging functionality of AOTAutogradCache into FXGraphCache, so that you can know if FXGraphCache missed due to a guard failure or a full cache miss.
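To make that shape concrete, here is a minimal sketch (hypothetical names, not the actual PR code): the cache stores `(entry, guard expression)` pairs and the caller supplies the hit test.

```python
from typing import Callable, Generic, Optional, TypeVar

T = TypeVar("T")


class GuardedCache(Generic[T]):
    def __init__(self) -> None:
        # key -> list of (entry, guard expression) candidates
        self._entries: dict[str, list[tuple[T, str]]] = {}

    def put(self, key: str, entry: T, guard_expr: str) -> None:
        self._entries.setdefault(key, []).append((entry, guard_expr))

    def lookup(self, key: str, check_guard_hit: Callable[[str], bool]) -> Optional[T]:
        # return the first candidate whose stored guard expression holds for
        # the current inputs; None lets the caller distinguish a guard miss
        # from a full cache miss when logging
        for entry, guard_expr in self._entries.get(key, []):
            if check_guard_hit(guard_expr):
                return entry
        return None
```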
# Why do this?
Lifting guards to AOTAutogradCache has a few benefits:
- First, it fixes a long standing bug in guard checking logic. Backward passes can have different symint inputs than forward passes depending on forward output, if AOTAutograd chooses to store symints for the backward. These symint inputs have the same underlying symbols as the forward, but on AOTAutogradCache hit, we don't have access to the hints backing these exact symints (we only have hints for the symints on the forward function). By lifting guard checking logic to AOTAutogradCache, we no longer need to check the backward guards, as they'll be included in the AOTAutogradCache guard expression. **I've added a unit test that failed before my diff, and now passes, as an example of this**
- Secondly, this is the first step necessary to bundle CompiledFxGraph into AOTAutogradCache. Doing so will simplify our cache logic significantly, and also make precompile logic simpler, as precompiles will only need to store AOTAutogradCacheEntrys, without needing to match them up with inductor FXGraphCache entries.
- Finally, adding guard checking logic to AOTAutogradCache may allow us in the future to handle more complicated cases like a single forward with multiple backwards, as guard checks are now storable on the cache entry itself.
# Guard checking logic of AOTAutogradCache
When AOTAutogradCache evaluates guard expressions, it no longer needs to evaluate the forward/backward guards in the FXGraphCacheEntry (since the AOTAutogradCache guard expressions will encompass them). Because of this, we still need a way for AOTAutogradCache to distinguish between multiple FXGraphCache local entries. To do so, AOTAutogradCache stores the guard string from FXGraphCache, which it uses as a second "cache key". It doesn't need to **evaluate** these guards, it just needs to find the cache entry from FXGraphCache that had the same guards as when it was stored.
After this, I will work on putting the FXGraphCache entries directly into AOTAutogradCache. If I can put CompiledFxGraphs in the cache directly, I no longer need this complicated `check_guard_hit` overriding logic.
## Test Plan
Added a new unit test. There are comprehensive guard checking unit tests in `test_aot_autograd_cache` already, and those pass.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,002,686,128
|
Remove libdevice ops in inductor
|
eellison
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151598
* __->__ #151562
Now that we track dtypes during codegen, we can delete all these extra ops that worked around the problem by doing dispatch at lowering time.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,002,543,811
|
2.6: vmap doesn't vectorize all args
|
Birch-san
|
open
|
[
"module: docs",
"triaged",
"module: vmap",
"module: functorch"
] | 3
|
NONE
|
### 🐛 Describe the bug
vmap unbinds batch dimension for my first argument `x`, but not for my subsequent argument `mask`.
how do I give a corresponding mask to each ensemble member?
I mean, strictly speaking for my use-case, my mask is the same for all ensemble members. if that worked I'd be happy too. but SDPA's validation of masks' batch dimensions is broken when under vmap (https://github.com/pytorch/pytorch/issues/151558).
```python
from typing import Optional

import torch
from torch import BoolTensor, FloatTensor, Tensor, inference_mode
from torch.nn import Module
from torch.func import functional_call, stack_module_state


class Printer(Module):
    def forward(self, x: FloatTensor, mask: Optional[BoolTensor] = None) -> FloatTensor:
        print(f"x: {x.shape}")
        if mask is not None:
            print(f"mask: {mask.shape}")
        # vmap is required to return a tensor
        return x


ensemble_size = 3
stateless, *experts = [Printer() for _ in range(ensemble_size + 1)]
params, buffers = stack_module_state(experts)


def bound_vmappable_fwd(
    params: dict[str, Tensor],
    buffers: dict[str, Tensor],
    x: FloatTensor,
    mask: Optional[BoolTensor] = None,
) -> FloatTensor:
    return functional_call(
        stateless,
        (params, buffers),
        (x, mask),
        tie_weights=False,
        strict=True,
    )


def bound_vmappable_fwd_kwarg(
    params: dict[str, Tensor],
    buffers: dict[str, Tensor],
    x: FloatTensor,
    mask: Optional[BoolTensor] = None,
) -> FloatTensor:
    return functional_call(
        stateless,
        (params, buffers),
        (x,),
        {'mask': mask},
        tie_weights=False,
        strict=True,
    )


device = torch.device('cuda')
batch_size = 4
seq_len = 128
in_features = 320

with inference_mode():
    x = torch.randn((batch_size, seq_len, in_features), device=device, dtype=torch.float16)
    mask = torch.ones((batch_size, 1, seq_len, seq_len), device=device, dtype=torch.bool)

    print("==running without vmap==")
    # prints:
    # x: torch.Size([4, 128, 320])
    # mask: torch.Size([4, 1, 128, 128])
    experts[0](x, mask=mask)

    print("==running with vmap==")
    # prints:
    # x: torch.Size([128, 320])
    # mask: torch.Size([4, 1, 128, 128])
    torch.vmap(bound_vmappable_fwd)(params, buffers, x, mask=mask)

    print("==running with vmap (kwargs)==")
    # prints:
    # x: torch.Size([128, 320])
    # mask: torch.Size([4, 1, 128, 128])
    torch.vmap(bound_vmappable_fwd_kwarg)(params, buffers, x, mask=mask)

# it makes sense to me that vmap unbinds x's batch dim and gives a different x to each expert.
# I'm surprised that it didn't explode when the batch dim didn't match the size of the ensemble. is that a bug too?
# but why doesn't vmap unbind my mask's batch dimension and dispatch to each ensemble member?
```
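If I read the `torch.vmap` docs right, `in_dims` only applies to positional arguments, and keyword arguments are passed through unmapped, which would explain the behavior above. A minimal positional sketch (my own, not from the docs) where both tensors do get mapped:

```python
import torch
from torch import Tensor

def take(x: Tensor, mask: Tensor) -> Tensor:
    # inside the mapped function, both tensors arrive with dim 0 stripped
    return x * mask

x = torch.randn(3, 128, 320)
mask = torch.ones(3, 128, 320)

# in_dims=(0, 0) maps dim 0 of both *positional* args
out = torch.vmap(take, in_dims=(0, 0))(x, mask)
print(out.shape)  # torch.Size([3, 128, 320])
```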
### Versions
```
Collecting environment information...
PyTorch version: 2.6.0
Is debug build: False
CUDA used to build PyTorch: 12.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.31.6
Libc version: glibc-2.35
Python version: 3.10.12 (main, Feb 4 2025, 14:57:36) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.5.13-650-3434-22042-coreweave-1-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.8.93
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100 80GB HBM3
GPU 1: NVIDIA H100 80GB HBM3
Nvidia driver version: 535.216.01
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.7.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8462Y+
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
Stepping: 8
CPU max MHz: 4100.0000
CPU min MHz: 800.0000
BogoMIPS: 5600.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hfi vnmi avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr ibt amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 3 MiB (64 instances)
L1i cache: 2 MiB (64 instances)
L2 cache: 128 MiB (64 instances)
L3 cache: 120 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64,66,68,70,72,74,76,78,80,82,84,86,88,90,92,94,96,98,100,102,104,106,108,110,112,114,116,118,120,122,124,126
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63,65,67,69,71,73,75,77,79,81,83,85,87,89,91,93,95,97,99,101,103,105,107,109,111,113,115,117,119,121,123,125,127
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] clip-anytorch==2.6.0
[pip3] dctorch==0.1.2
[pip3] DISTS-pytorch==0.1
[pip3] lovely-numpy==0.2.13
[pip3] mypy-extensions==1.0.0
[pip3] natten==0.17.4+torch260cu128
[pip3] numpy==1.26.4
[pip3] open_clip_torch==2.31.0
[pip3] optree==0.14.1
[pip3] torch==2.6.0
[pip3] torchaudio==2.6.0
[pip3] torchdiffeq==0.2.5
[pip3] torchsde==0.2.6
[pip3] torchvision==0.21.0
[pip3] triton==3.2.0+git35c6c7c6
[pip3] welford-torch==0.2.5
[conda] Could not collect
```
cc @svekars @sekyondaMeta @AlannaBurke @zou3519 @Chillee @samdow @kshitij12345
| true
|
3,002,506,008
|
Add device agnostic support for distributed tests
|
Alokksinha00
|
open
|
[
"oncall: distributed",
"triaged",
"open source",
"topic: not user facing",
"release notes: distributed (checkpoint)"
] | 2
|
NONE
|
`/_composable/test_replicate.py` and `/algorithm/ddp_comm_hooks` are modified for generic device support:
1. The `create_pg()` method from `DistributedTestBase` is used for process group creation wherever accelerator/CUDA is used.
2. `instantiate_device_type_tests` is imported and the device is parameterized in the respective tests.
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
3,002,494,774
|
SDPA doesn't document requirement for head dimension in masks
|
Birch-san
|
open
|
[
"triaged",
"module: sdpa"
] | 0
|
NONE
|
### 🐛 Describe the bug
I think there's a docs bug with SDPA.
it says that `attn_mask` doesn't require a head dimension, but my scripts only succeed if I provide a head dimension.
[SDPA docs](https://pytorch.org/docs/stable/generated/torch.nn.functional.scaled_dot_product_attention.html)
failing without head dimension:
```python
import torch
from torch import inference_mode
from torch.nn.functional import scaled_dot_product_attention

device = torch.device('cuda')

with inference_mode():
    q = k = v = torch.randn((4, 8, 128, 64), device=device, dtype=torch.float16)
    mask = torch.ones((4, 128, 128), device=device, dtype=torch.bool)
    # repro:
    # RuntimeError: The expanded size of the tensor (8) must match the existing size (4) at non-singleton dimension 1. Target sizes: [4, 8, 128, 128]. Tensor sizes: [4, 128, 128]
    a = scaled_dot_product_attention(
        q,
        k,
        v,
        attn_mask=mask,
    )
```
succeeding with head dimension:
```python
import torch
from torch import inference_mode
from torch.nn.functional import scaled_dot_product_attention

device = torch.device('cuda')

with inference_mode():
    q = k = v = torch.randn((4, 8, 128, 64), device=device, dtype=torch.float16)
    # broadcast over heads
    mask = torch.ones((4, 1, 128, 128), device=device, dtype=torch.bool)
    # works fine
    a = scaled_dot_product_attention(
        q,
        k,
        v,
        attn_mask=mask,
    )
```
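In the meantime, a mask built without the head dimension can be made broadcastable by inserting one manually:

```python
import torch

mask_3d = torch.ones((4, 128, 128), dtype=torch.bool)  # (batch, seq, seq), as the docs describe
mask_4d = mask_3d.unsqueeze(1)                         # (batch, 1, seq, seq), broadcasts over heads
```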
### Versions
```
Collecting environment information...
PyTorch version: 2.6.0
Is debug build: False
CUDA used to build PyTorch: 12.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.31.6
Libc version: glibc-2.35
Python version: 3.10.12 (main, Feb 4 2025, 14:57:36) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.5.13-650-3434-22042-coreweave-1-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.8.93
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100 80GB HBM3
GPU 1: NVIDIA H100 80GB HBM3
Nvidia driver version: 535.216.01
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.7.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8462Y+
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
Stepping: 8
CPU max MHz: 4100.0000
CPU min MHz: 800.0000
BogoMIPS: 5600.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hfi vnmi avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr ibt amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 3 MiB (64 instances)
L1i cache: 2 MiB (64 instances)
L2 cache: 128 MiB (64 instances)
L3 cache: 120 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64,66,68,70,72,74,76,78,80,82,84,86,88,90,92,94,96,98,100,102,104,106,108,110,112,114,116,118,120,122,124,126
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63,65,67,69,71,73,75,77,79,81,83,85,87,89,91,93,95,97,99,101,103,105,107,109,111,113,115,117,119,121,123,125,127
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] clip-anytorch==2.6.0
[pip3] dctorch==0.1.2
[pip3] DISTS-pytorch==0.1
[pip3] lovely-numpy==0.2.13
[pip3] mypy-extensions==1.0.0
[pip3] natten==0.17.4+torch260cu128
[pip3] numpy==1.26.4
[pip3] open_clip_torch==2.31.0
[pip3] optree==0.14.1
[pip3] torch==2.6.0
[pip3] torchaudio==2.6.0
[pip3] torchdiffeq==0.2.5
[pip3] torchsde==0.2.6
[pip3] torchvision==0.21.0
[pip3] triton==3.2.0+git35c6c7c6
[pip3] welford-torch==0.2.5
[conda] Could not collect
```
| true
|
3,002,482,792
|
2.6: SDPA disallows masked attention under vmap
|
Birch-san
|
open
|
[
"triaged",
"module: vmap",
"module: functorch",
"module: sdpa"
] | 1
|
NONE
|
### 🐛 Describe the bug
_note: I also ran into an SDPA docs bug https://github.com/pytorch/pytorch/issues/151559 whilst producing this repro._
_note: I also ran into a vmap dispatch bug https://github.com/pytorch/pytorch/issues/151561 whilst producing this repro._
SDPA gives us `RuntimeError: attn_bias: wrong shape (batch dimension)` when we vmap a mask over it.
the shapes of q, k and mask that I can print are all the same as in non-vmapped attention, which succeeds just fine. but under a vmap, SDPA's validation rejects the mask on account of its batch dimension.
```python
from typing import Optional

import torch
from torch import FloatTensor, BoolTensor, Tensor, inference_mode
from torch.func import functional_call, stack_module_state
from torch.nn import Module, Linear
from torch.nn.functional import scaled_dot_product_attention
from einops import rearrange


class Attention(Module):
    def __init__(
        self,
        in_features: int,
        head_dim: int,
        n_heads: int,
        device: Optional[torch.device | str | int] = None,
        dtype: Optional[torch.dtype] = None,
    ):
        factory_kwargs = {"device": device, "dtype": dtype}
        super().__init__()
        self.n_heads = n_heads
        self.head_dim = head_dim
        self.qkv = Linear(in_features=in_features, out_features=3 * head_dim * n_heads, bias=False, **factory_kwargs)

    def init_weights(self, generator: Optional[torch.Generator] = None) -> None:
        self.qkv.weight.normal_(generator=generator)

    def forward(self, x: FloatTensor, mask: Optional[BoolTensor] = None) -> FloatTensor:
        """
        Args:
            x: tensor (batch seq chan)
            mask: tensor (batch seq seq)
        """
        qkv: FloatTensor = self.qkv(x)
        q, k, v = rearrange(qkv, "... seq (proj n_head head_dim) -> proj ... n_head seq head_dim", proj=3, n_head=self.n_heads).unbind()
        # if we follow the torch docs and don't specify a head dim in our mask, we get this error for both vmapped and non-vmapped invocation:
        # q.shape
        # torch.Size([4, 8, 128, 64])
        # k.shape
        # torch.Size([4, 8, 128, 64])
        # mask.shape
        # torch.Size([4, 128, 128])
        # RuntimeError: The expanded size of the tensor (8) must match the existing size (4) at non-singleton dimension 1. Target sizes: [4, 8, 128, 128]. Tensor sizes: [4, 128, 128]

        # if we *don't* follow the torch docs and *do* specify a head dim in our mask (which is what I'm more familiar with),
        # non-vmapped works fine,
        # but vmapped gives us:
        # q.shape
        # torch.Size([4, 8, 128, 64])
        # k.shape
        # torch.Size([4, 8, 128, 64])
        # mask.shape
        # torch.Size([4, 1, 128, 128])
        # RuntimeError: attn_bias: wrong shape (batch dimension)
        # https://pytorch.org/docs/stable/generated/torch.nn.functional.scaled_dot_product_attention.html
        a = scaled_dot_product_attention(
            q,
            k,
            v,
            attn_mask=mask,
        )
        return a


in_features = 320
head_dim = 64
n_heads = 8
dtype = torch.float16
device = torch.device('cuda')
ensemble_size = 3

with torch.device('meta'):
    attns: list[Attention] = [Attention(
        in_features=in_features,
        head_dim=head_dim,
        n_heads=n_heads,
        dtype=dtype
    ) for _ in range(ensemble_size + 1)]

for attn in attns:
    attn.to_empty(device=device)
    attn.eval()
    attn.requires_grad_(False)

stateless_attn, *experts = attns

gen = torch.Generator(device=device)
for ix, expert in enumerate(experts):
    expert.init_weights(generator=gen.manual_seed(ix))

params, buffers = stack_module_state(experts)


def bound_vmappable_fwd(
    params: dict[str, Tensor],
    buffers: dict[str, Tensor],
    x: FloatTensor,
    mask: Optional[BoolTensor] = None,
) -> FloatTensor:
    return functional_call(
        stateless_attn,
        (params, buffers),
        (x,),
        {'mask': mask},
        tie_weights=False,
        strict=True,
    )


def dispatch_attn(x: FloatTensor, mask: Optional[BoolTensor] = None) -> FloatTensor:
    # broadcast x over the ensemble
    x_broadcast = x.unsqueeze(0).expand(ensemble_size, *(-1,)*x.ndim)
    # ... I *should* broadcast mask over the ensemble too, right?
    # but for some reason, it's only for x that torch.vmap unbinds-and-dispatches dim 0.
    # for the mask kwarg, torch.vmap just passes the same mask to all ensemble members.
    # why are they treated differently?
    mask_broadcast = mask
    # if mask is None:
    #     mask_broadcast: Optional[BoolTensor] = None
    # else:
    #     mask_broadcast = mask.unsqueeze(0).expand(ensemble_size, *(-1,)*mask.ndim)
    return torch.vmap(bound_vmappable_fwd)(params, buffers, x_broadcast, mask=mask_broadcast)


batch_size = 4
seq_len = 128

with inference_mode():
    x = torch.randn((batch_size, seq_len, in_features), device=device, dtype=dtype)
    # the 1 is a singleton dim for broadcasting over heads
    mask = torch.ones((batch_size, 1, seq_len, seq_len), device=device, dtype=torch.bool)
    # non-vmapped works fine, so long as I specify a head dim in the mask
    out_novmap: FloatTensor = experts[0].forward(x, mask=mask)
    out: FloatTensor = dispatch_attn(x, mask=mask)
```
### Versions
```
Collecting environment information...
PyTorch version: 2.6.0
Is debug build: False
CUDA used to build PyTorch: 12.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.31.6
Libc version: glibc-2.35
Python version: 3.10.12 (main, Feb 4 2025, 14:57:36) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.5.13-650-3434-22042-coreweave-1-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.8.93
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100 80GB HBM3
GPU 1: NVIDIA H100 80GB HBM3
Nvidia driver version: 535.216.01
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.7.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8462Y+
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
Stepping: 8
CPU max MHz: 4100.0000
CPU min MHz: 800.0000
BogoMIPS: 5600.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hfi vnmi avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr ibt amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 3 MiB (64 instances)
L1i cache: 2 MiB (64 instances)
L2 cache: 128 MiB (64 instances)
L3 cache: 120 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64,66,68,70,72,74,76,78,80,82,84,86,88,90,92,94,96,98,100,102,104,106,108,110,112,114,116,118,120,122,124,126
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63,65,67,69,71,73,75,77,79,81,83,85,87,89,91,93,95,97,99,101,103,105,107,109,111,113,115,117,119,121,123,125,127
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] clip-anytorch==2.6.0
[pip3] dctorch==0.1.2
[pip3] DISTS-pytorch==0.1
[pip3] lovely-numpy==0.2.13
[pip3] mypy-extensions==1.0.0
[pip3] natten==0.17.4+torch260cu128
[pip3] numpy==1.26.4
[pip3] open_clip_torch==2.31.0
[pip3] optree==0.14.1
[pip3] torch==2.6.0
[pip3] torchaudio==2.6.0
[pip3] torchdiffeq==0.2.5
[pip3] torchsde==0.2.6
[pip3] torchvision==0.21.0
[pip3] triton==3.2.0+git35c6c7c6
[pip3] welford-torch==0.2.5
[conda] Could not collect
```
cc @zou3519 @Chillee @samdow @kshitij12345
| true
|
3,002,469,372
|
[autodeps2] Replace third-party/pyqt5 with third-party/pypi/pyqt5
|
kkolur76
|
open
|
[
"fb-exported",
"ciflow/trunk",
"topic: not user facing"
] | 9
|
CONTRIBUTOR
|
Summary: These are duplicate libraries and this is blocking Autodeps2 rollout. Title.
Test Plan: CI
Differential Revision: D73184155
| true
|
3,002,441,409
|
Compatibility Issues Between DCP, LoRA, and Hugging Face Models
|
LizzzMax
|
open
|
[
"oncall: distributed",
"triaged"
] | 2
|
NONE
|
### 🐛 Describe the bug
I've encountered two significant issues when using PyTorch's Distributed Checkpoint (DCP) with LoRA and Hugging Face models:
Here is my code, which is modified from [here](https://pytorch.org/tutorials/recipes/distributed_checkpoint_recipe.html)
```
import os

import torch
import torch.distributed as dist
import torch.distributed.checkpoint as dcp
import torch.multiprocessing as mp
import torch.nn as nn
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP
from torch.distributed.checkpoint.state_dict import get_state_dict, set_state_dict
from torch.distributed.checkpoint.stateful import Stateful
from torch.distributed.device_mesh import init_device_mesh
from torch.distributed.fsdp import (
    fully_shard,
    MixedPrecisionPolicy
)
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer

CHECKPOINT_DIR = "checkpoint"


class AppState(Stateful):
    def __init__(self, model, optimizer=None):
        self.model = model
        self.optimizer = optimizer

    def state_dict(self):
        model_state_dict, optimizer_state_dict = get_state_dict(self.model, self.optimizer)
        return {
            "model": model_state_dict,
            "optim": optimizer_state_dict
        }

    def load_state_dict(self, state_dict):
        set_state_dict(
            self.model,
            self.optimizer,
            model_state_dict=state_dict["model"],
            optim_state_dict=state_dict["optim"]
        )


def setup(rank, world_size):
    os.environ["MASTER_ADDR"] = "localhost"
    os.environ["MASTER_PORT"] = "12355"
    dist.init_process_group("nccl", rank=rank, world_size=world_size)
    torch.cuda.set_device(rank)


def cleanup():
    dist.destroy_process_group()


def run_fsdp_checkpoint_save_example(rank, world_size):
    print(f"Running FSDP + LoRA example on rank {rank}.")
    setup(rank, world_size)

    model_name = "facebook/opt-125m"
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name).to(rank)

    lora_config = LoraConfig(
        r=8,
        lora_alpha=32,
        target_modules=["q_proj", "v_proj"],
        lora_dropout=0.1,
        bias="none",
        task_type="CAUSAL_LM",
    )
    model = get_peft_model(model, lora_config)
    if rank == 0:
        model.print_trainable_parameters()

    mesh = init_device_mesh("cuda", (world_size,))
    mp_policy = MixedPrecisionPolicy(
        param_dtype=torch.bfloat16,
        reduce_dtype=torch.bfloat16,
        output_dtype=torch.bfloat16,
    )
    fully_shard(model, mesh=mesh, mp_policy=mp_policy)
    # model = FSDP(model,
    #     use_orig_params=True
    # )

    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    inputs = tokenizer("Hello, my dog is cute", return_tensors="pt").to(rank)
    outputs = model(**inputs, labels=inputs["input_ids"])
    loss = outputs.loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

    if rank == 0:
        print(f"Saving checkpoint at rank {rank}")
    state_dict = {"app": AppState(model, optimizer)}
    dcp.save(state_dict, checkpoint_id=CHECKPOINT_DIR)

    cleanup()


def run_fsdp_checkpoint_load_example(rank, world_size):
    print(f"Running basic FSDP checkpoint loading example on rank {rank}.")
    setup(rank, world_size)

    # create a model and move it to GPU with id rank
    model_name = "facebook/opt-125m"
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name).to(rank)

    mesh = init_device_mesh("cuda", (world_size,))
    mp_policy = MixedPrecisionPolicy(
        param_dtype=torch.bfloat16,
        reduce_dtype=torch.bfloat16,
        output_dtype=torch.bfloat16,
    )
    fully_shard(model, mesh=mesh, mp_policy=mp_policy)

    optimizer = torch.optim.Adam(model.parameters(), lr=0.1)
    state_dict = {"app": AppState(model, optimizer)}
    dcp.load(
        state_dict=state_dict,
        checkpoint_id=CHECKPOINT_DIR,
    )

    cleanup()


def run_checkpoint_load_example():
    # create the non FSDP-wrapped toy model
    model_name = "facebook/opt-125m"
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)

    state_dict = {
        "model": model.state_dict(),
    }
    # since no process group is initialized, DCP will disable any collectives.
    dcp.load(
        state_dict=state_dict,
        checkpoint_id=CHECKPOINT_DIR,
    )
    model.load_state_dict(state_dict["model"])


if __name__ == "__main__":
    world_size = torch.cuda.device_count()
    print(f"Running FSDP + LoRA example on {world_size} devices.")
    mp.spawn(
        run_fsdp_checkpoint_save_example,
        # run_fsdp_checkpoint_load_example,
        args=(world_size,),
        nprocs=world_size,
        join=True,
    )
    # run_checkpoint_load_example()
```
**1. DCP and LoRA incompatibility**
When saving/loading checkpoints of a Hugging Face model with LoRA adapters using DCP, the saved checkpoint cannot be properly loaded back. You can reproduce this bug by running `run_fsdp_checkpoint_save_example` and then `run_fsdp_checkpoint_load_example` with `model = get_peft_model(model, lora_config)` uncommented,
and the error shows:

`Traceback (most recent call last): (RANK 7)
File "/home/tiger/.local/lib/python3.11/site-packages/torch/distributed/checkpoint/utils.py", line 164, in reduce_scatter
local_data = map_fun()
^^^^^^^^^
File "/home/tiger/.local/lib/python3.11/site-packages/torch/distributed/checkpoint/logger.py", line 83, in wrapper
result = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/tiger/.local/lib/python3.11/site-packages/torch/distributed/checkpoint/state_dict_loader.py", line 218, in local_step
local_plan = planner.create_local_plan()
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/tiger/.local/lib/python3.11/site-packages/torch/distributed/checkpoint/default_planner.py", line 233, in create_local_plan
return create_default_local_load_plan(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/tiger/.local/lib/python3.11/site-packages/torch/distributed/checkpoint/default_planner.py", line 354, in create_default_local_load_plan
raise RuntimeError(f"Missing key in checkpoint state_dict: {fqn}.")
RuntimeError: Missing key in checkpoint state_dict: app.model.model.decoder.embed_tokens.weight.`
**2. DCP load in non-distributed environments and HF model incompatibility**
A similar error occurs; you can reproduce this bug by running `run_fsdp_checkpoint_save_example` and then `run_checkpoint_load_example` with `model = get_peft_model(model, lora_config)` commented out.
### **How to save a model wrapped by FSDP2 and LoRA in a distributed multi-host GPU setup?**
This is my core problem, could anyone help me?
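For reference, here is a hedged sketch of how I'd expect "gather full weights on rank 0" to look (`model` and the process group as set up in `run_fsdp_checkpoint_save_example` above; untested):

```python
import torch
import torch.distributed as dist
from torch.distributed.checkpoint.state_dict import StateDictOptions, get_model_state_dict

options = StateDictOptions(full_state_dict=True, cpu_offload=True)
full_sd = get_model_state_dict(model, options=options)  # gathers shards, offloads to CPU
if dist.get_rank() == 0:
    torch.save(full_sd, "full_model_state_dict.pt")
```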
### Versions
torch = 2.6
peft = 0.15.2
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
3,002,429,444
|
[Inductor] Convolution auto-generation through Triton backend
|
CallMeRush
|
open
|
[
"feature",
"triaged",
"oncall: pt2",
"module: inductor"
] | 0
|
NONE
|
### 🚀 The feature, motivation and pitch
Currently, it seems that Triton isn't available as the convolution backend.
I have been trying to AOT compile ResNet18 through torch._inductor, by setting both max_autotune_gemm_backends and max_autotune_conv_backends to "TRITON". But I get a NoValidChoicesError.
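For reference, this is roughly what I'm running (a sketch; the `torchvision` model and variable names are illustrative):

```python
import torch
import torch._inductor.config as inductor_config
import torchvision

inductor_config.max_autotune = True
inductor_config.max_autotune_gemm_backends = "TRITON"
inductor_config.max_autotune_conv_backends = "TRITON"

model = torchvision.models.resnet18().cuda().eval()
x = torch.randn(1, 3, 224, 224, device="cuda")
ep = torch.export.export(model, (x,))
# the convolutions fail to autotune here once ATen is excluded
so_path = torch._inductor.aot_compile(ep.module(), (x,))
```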
I think it would be valuable to have full support of convolutions by the Triton backend alone, without needing to use ATen.
### Alternatives
_No response_
### Additional context
My end goal is to easily locate all the kernel calls executed during inference time, such that I can gather them all and then run inference in a separate runtime by calling these kernels.
Would there be a workaround to easily locate the other kernels called under the hood by the PyTorch AOT inference runtime?
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @aakhundov
| true
|
3,002,398,783
|
Documentation Clarification Needed for Clamping of Scale Coefficient in clip_grads_with_norm_
|
qiaoqiaoLF
|
open
|
[
"module: docs",
"module: nn",
"triaged",
"module: norms and normalization"
] | 0
|
NONE
|
### 📚 The doc issue
In the current documentation for `torch.nn.utils.clip_grads_with_norm_`, the formula for the scale coefficient is as follows:
$$
\text{grad} = \text{grad} \times \frac{\text{maxnorm}}{\text{totalnorm} + 1e-6}
$$
However, in practical usage the scale coefficient is clamped to a maximum value of 1, which prevents the function from ever scaling gradients up. This behavior, while important for the correct functionality of gradient clipping, is not currently mentioned in the documentation.
**Suggested Change:**
It would be helpful to explicitly mention in the documentation that the scale coefficient is clamped to 1. This will provide more clarity to users about how the function operates in practice and ensure there are no misunderstandings regarding its behavior.
### Suggest a potential alternative/fix
The formula should be updated as follows:
$$
\text{grad} = \text{grad} \times \min\left(\frac{\text{maxnorm}}{\text{totalnorm} + 1e-6},\ 1\right)
$$
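A quick check of the clamping behavior (assuming torch >= 2.6, where `clip_grads_with_norm_` and `get_total_norm` exist):

```python
import torch

p = torch.zeros(3, requires_grad=True)
p.grad = torch.ones(3)

total_norm = torch.nn.utils.get_total_norm([p.grad])  # ~1.732
# the unclamped scale would be 10 / 1.732 ≈ 5.77, i.e. a gradient *increase*
torch.nn.utils.clip_grads_with_norm_([p], max_norm=10.0, total_norm=total_norm)
print(p.grad)  # still all ones: the scale coefficient was clamped to 1
```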
cc @svekars @sekyondaMeta @AlannaBurke @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki
| true
|
3,002,356,399
|
Fix normalize mypy warning with tuple dim
|
zeshengzong
|
open
|
[
"triaged",
"open source",
"topic: not user facing",
"suppress-bc-linter"
] | 6
|
CONTRIBUTOR
|
Fixes #70100
## Test Result
```python
# cat ../normal.py
import torch
import torch.nn.functional as F
F.normalize(torch.rand(2,3,4), dim=(0,1))
```
### Before
```bash
mypy ../normal.py
../normal.py:4:36: error: Argument "dim" to "normalize" has incompatible type "tuple[int, int]"; expected "int" [arg-type]
Found 1 error in 1 file (checked 1 source file)
```
### After
```bash
mypy ../normal.py
Success: no issues found in 1 source file
```
| true
|
3,002,349,451
|
fp8 dtype support for SparseSemiStructuredTensor
|
tlogn
|
open
|
[
"module: sparse",
"triaged",
"enhancement",
"module: float8"
] | 1
|
NONE
|
### 🚀 The feature, motivation and pitch
Hi, PyTorch team! Do you have any plan to support the fp8 dtype for SparseSemiStructuredTensor? Currently the available dtypes are [int8, fp16, bf16, fp32]. CUTLASS now provides examples of sparse fp8 GEMM and the vLLM team has made use of them. I also want to experiment with sparse fp8 GEMM but find it not supported yet.
```Python
_DTYPE_SHAPE_CONSTRAINTS = {
    torch.int8: _SEMI_STRUCTURED_SPARSE_CONFIG(16, 128, 16, 16),
    torch.float16: _SEMI_STRUCTURED_SPARSE_CONFIG(32, 64, 8, 8),
    torch.bfloat16: _SEMI_STRUCTURED_SPARSE_CONFIG(32, 64, 8, 8),
    torch.float32: _SEMI_STRUCTURED_SPARSE_CONFIG(32, 32, 4, 4),
}
```
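For illustration, a sketch of what works today and the fp8 call I'd like to see supported (assumes an sm80+ GPU; the commented line is the part that fails):

```python
import torch
from torch.sparse import to_sparse_semi_structured

mask = torch.tensor([1, 1, 0, 0], dtype=torch.bool).tile(128, 32).cuda()  # valid 2:4 pattern
w = torch.randn(128, 128, device="cuda", dtype=torch.float16) * mask
ws = to_sparse_semi_structured(w)  # OK: fp16 is in _DTYPE_SHAPE_CONSTRAINTS

w8 = (torch.randn(128, 128, device="cuda") * mask).to(torch.float8_e4m3fn)
# to_sparse_semi_structured(w8)    # fails today: fp8 has no entry in the table
```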
### Alternatives
_No response_
### Additional context
_No response_
cc @alexsamardzic @nikitaved @pearu @cpuhrsch @amjames @bhosmer @jcaip @yanbing-j @vkuzo @albanD @kadeng @penguinwu @jerryzh168
| true
|
3,002,282,500
|
[standalone_compile] support multiple returns
|
zou3519
|
closed
|
[
"Merged",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151502
* __->__ #151551
* #151501
We were only returning the first one. There's an edge case on what to do
if the original function returns a single Tensor. capture(f) returns a
function that returns a tuple of one Tensor in this case and we were
originally converting this back to one single Tensor. I think it's fine
to return a tuple of one Tensor (that is what the graph passed to
standalone_compile asked for!) but we can revisit.
Test Plan:
- modified one test to used multiple outputs
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,002,282,214
|
How does aarch64 cpu wheel keep rpath instead of runpath?
|
dilililiwhy
|
closed
|
[
"module: build",
"triaged"
] | 3
|
CONTRIBUTOR
|
### 🐛 Describe the bug
As neither a `--disable-new-dtags` linker flag nor a `patchelf` command is found in the repo, how does the aarch64 CPU wheel keep RPATH instead of RUNPATH (the default)? Will the RPATH cause problems?
in setup.py
```
def make_relative_rpath_args(path):
    if IS_DARWIN:
        return ["-Wl,-rpath,@loader_path/" + path]
    elif IS_WINDOWS:
        return []
    else:
        return ["-Wl,-rpath,$ORIGIN/" + path]
```
in nightly whl (20250417)
**0x000000000000000f (RPATH) Library rpath: [$ORIGIN/lib:$ORIGIN/../torch.libs]**
> $ORIGIN/../torch.libs is added by auditwheel
```
[root@4c6dc99e56ae torch]# readelf -d _C.cpython-39-aarch64-linux-gnu.so
Dynamic section at offset 0x20300 contains 27 entries:
Tag Type Name/Value
0x000000000000000f (RPATH) Library rpath: [$ORIGIN/lib:$ORIGIN/../torch.libs]
0x0000000000000001 (NEEDED) Shared library: [libtorch_python.so]
0x0000000000000001 (NEEDED) Shared library: [libpthread.so.0]
0x0000000000000001 (NEEDED) Shared library: [libc.so.6]
0x000000000000000c (INIT) 0x4005d0
0x000000000000000d (FINI) 0x400738
0x0000000000000019 (INIT_ARRAY) 0x41fdb8
0x000000000000001b (INIT_ARRAYSZ) 8 (bytes)
0x000000000000001a (FINI_ARRAY) 0x41fdc0
0x000000000000001c (FINI_ARRAYSZ) 8 (bytes)
0x000000006ffffef5 (GNU_HASH) 0x430028
0x0000000000000005 (STRTAB) 0x4301f0
0x0000000000000006 (SYMTAB) 0x430070
0x000000000000000a (STRSZ) 266 (bytes)
0x000000000000000b (SYMENT) 24 (bytes)
0x0000000000000003 (PLTGOT) 0x41ffe8
0x0000000000000002 (PLTRELSZ) 72 (bytes)
0x0000000000000014 (PLTREL) RELA
0x0000000000000017 (JMPREL) 0x400588
0x0000000000000007 (RELA) 0x4004e0
0x0000000000000008 (RELASZ) 168 (bytes)
0x0000000000000009 (RELAENT) 24 (bytes)
0x000000006ffffffe (VERNEED) 0x4004c0
0x000000006fffffff (VERNEEDNUM) 1
0x000000006ffffff0 (VERSYM) 0x4004a0
0x000000006ffffff9 (RELACOUNT) 3
0x0000000000000000 (NULL) 0x0
```
build by source code
**0x000000000000001d (RUNPATH) Library runpath: [$ORIGIN/lib]**
```
[root@578ad62a0ee1 torch]# readelf -d _C.cpython-39-aarch64-linux-gnu.so
Dynamic section at offset 0xfdd0 contains 27 entries:
Tag Type Name/Value
0x0000000000000001 (NEEDED) Shared library: [libtorch_python.so]
0x0000000000000001 (NEEDED) Shared library: [libpthread.so.0]
0x0000000000000001 (NEEDED) Shared library: [libc.so.6]
0x000000000000001d (RUNPATH) Library runpath: [$ORIGIN/lib]
0x000000000000000c (INIT) 0x4c0
0x000000000000000d (FINI) 0x624
0x0000000000000019 (INIT_ARRAY) 0x1fdb8
0x000000000000001b (INIT_ARRAYSZ) 8 (bytes)
0x000000000000001a (FINI_ARRAY) 0x1fdc0
0x000000000000001c (FINI_ARRAYSZ) 8 (bytes)
0x000000006ffffef5 (GNU_HASH) 0x1f0
0x0000000000000005 (STRTAB) 0x2f0
0x0000000000000006 (SYMTAB) 0x218
0x000000000000000a (STRSZ) 174 (bytes)
0x000000000000000b (SYMENT) 24 (bytes)
0x0000000000000003 (PLTGOT) 0x1ffe8
0x0000000000000002 (PLTRELSZ) 72 (bytes)
0x0000000000000014 (PLTREL) RELA
0x0000000000000017 (JMPREL) 0x478
0x0000000000000007 (RELA) 0x3d0
0x0000000000000008 (RELASZ) 168 (bytes)
0x0000000000000009 (RELAENT) 24 (bytes)
0x000000006ffffffe (VERNEED) 0x3b0
0x000000006fffffff (VERNEEDNUM) 1
0x000000006ffffff0 (VERSYM) 0x39e
0x000000006ffffff9 (RELACOUNT) 3
0x0000000000000000 (NULL) 0x0
```
### Versions
PyTorch version: 2.8.0.dev20250417+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: AlmaLinux 8.10 (Cerulean Leopard) (aarch64)
GCC version: (GCC) 11.2.1 20220127 (Red Hat 11.2.1-9)
Clang version: Could not collect
CMake version: version 4.0.0
Libc version: glibc-2.28
Python version: 3.9.22 (main, Apr 12 2025, 03:35:14) [GCC 14.2.1 20250110 (Red Hat 14.2.1-7)] (64-bit runtime)
Python platform: Linux-5.10.0-182.0.0.95.r2220_156.hce2.aarch64-aarch64-with-glibc2.28
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: aarch64
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Thread(s) per core: 1
Core(s) per cluster: 8
Socket(s): -
Cluster(s): 2
NUMA node(s): 2
Model: 0
Stepping: 0x1
CPU MHz: 2600.000
CPU max MHz: 2600.0000
CPU min MHz: 2600.0000
BogoMIPS: 200.00
L1d cache: 64K
L1i cache: 64K
L2 cache: 512K
L3 cache: 32768K
NUMA node0 CPU(s): 0-7
NUMA node1 CPU(s): 8-15
Flags: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma dcpop asimddp asimdfhm
Versions of relevant libraries:
[pip3] torch==2.8.0.dev20250417+cpu
cc @malfet @seemethere
| true
|
3,002,193,228
|
multiprocessing.pool.MaybeEncodingError Reason: 'RuntimeError('_share_fd_: only available on CPU')'
|
jingxu10
|
open
|
[
"module: multiprocessing",
"triaged",
"module: xpu"
] | 2
|
COLLABORATOR
|
### 🐛 Describe the bug
The following code crashes on XPU device, but works well on CPU and CUDA.
```python
import torch


def func2(args):
    input = torch.rand(1, 1, 2048)
    input = input.to('xpu')
    return input + input


def run__pool():
    from multiprocessing import Pool

    cpu_worker_num = 3
    process_args = [(1, 1), (9, 9), (4, 4), (3, 3)]
    with Pool(cpu_worker_num) as p:
        outputs = p.map(func2, process_args)


if __name__ == '__main__':
    run__pool()
```
Traceback:
```
Traceback (most recent call last):
File "/home/jingxu10/test.py", line 16, in <module>
run__pool()
File "/home/jingxu10/test.py", line 13, in run__pool
outputs = p.map(func2, process_args)
File "/home/jingxu10/miniforge3/envs/py310/lib/python3.10/multiprocessing/pool.py", line 367, in map
return self._map_async(func, iterable, mapstar, chunksize).get()
File "/home/jingxu10/miniforge3/envs/py310/lib/python3.10/multiprocessing/pool.py", line 774, in get
raise self._value
multiprocessing.pool.MaybeEncodingError: Error sending result: '[tensor([[[1.9042, 1.2242, 1.3467, ..., 0.1685, 1.4890, 0.8446]]],
device='xpu:0')]'. Reason: 'RuntimeError('_share_fd_: only available on CPU')'
```
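A hedged workaround sketch (untested): keep the returned tensors on CPU, since the file-descriptor sharing strategy that multiprocessing uses for tensor results only supports CPU storage:

```python
import torch

def func2(args):
    x = torch.rand(1, 1, 2048).to('xpu')
    return (x + x).cpu()  # move the result off the XPU before the pool pickles it
```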
### Versions
```
Collecting environment information... PyTorch version: 2.7.0+xpu Is debug build: False CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.26.0
Libc version: glibc-2.35 Python version: 3.10.17 | packaged by conda-forge | (main, Apr 10 2025, 22:19:12) [GCC 13.3.0] (64-bit runtime) Python platform: Linux-6.8.0-52-generic-x86_64-with-glibc2.35 Is CUDA available: False CUDA runtime version: No CUDA CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
Is XPU available: True
XPU used to build PyTorch: 20250004
Intel GPU driver version:
* intel_opencl: 25.05.32567.18-1099~22.04
* level_zero: 1.20.2.0-1098~22.04
Intel GPU models onboard:
N/A
Intel GPU models detected:
* [0] _XpuDeviceProperties(name='Intel(R) Arc(TM) A770 Graphics', platform_name='Intel(R) oneAPI Unified Runtime over Level-Zero', type='gpu', driver_version='1.6.32567+18', total_memory=15473MB, max_compute_units=512, gpu_eu_count=512, gpu_subslice_count=32, max_work_group_size=1024, max_num_sub_groups=128, sub_group_sizes=[8 16 32], has_fp16=1, has_fp64=0, has_atomic64=1)
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Address sizes: 46 bits physical, 48 bits virtual Byte Order: Little Endian CPU(s): 24 On-line CPU(s) list: 0-23 Vendor ID: GenuineIntel Model name: 13th Gen Intel(R) Core(TM) i7-13700K CPU family: 6 Model: 183 Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 1
Stepping: 1
CPU max MHz: 5400.0000
CPU min MHz: 800.0000
BogoMIPS: 6835.20
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq tme rdpid movdiri movdir64b fsrm md_clear serialize pconfig arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 640 KiB (16 instances)
L1i cache: 768 KiB (16 instances)
L2 cache: 24 MiB (10 instances)
L3 cache: 30 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-23
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.2.4
[pip3] pytorch-triton-xpu==3.3.0
[pip3] torch==2.7.0+xpu
[conda] numpy 2.2.4 pypi_0 pypi
[conda] pytorch-triton-xpu 3.3.0 pypi_0 pypi
[conda] torch 2.7.0+xpu pypi_0 pypi
```
cc @VitalyFedyunin @albanD @gujinghui @EikanWang @fengyuan14 @guangyey
| true
|
3,002,183,373
|
Fix inductor test_linear_with_in_out_buffer
|
Flamefire
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 21
|
COLLABORATOR
|
Without MKL there is only 1 epilogue, not 2, because `addmm` is used instead of `packed_linear`/`_mkl_linear`.
This fails first at `TestSelectAlgorithmCPU.test_linear_with_in_out_buffer_batch_size_8_in_features_3_in_features2_192_image_size_224_out_features_64_bias_True_cpu_float32`
Instead of skipping the whole test just adjust the count for the single check.
Final numbers of `test/inductor/test_cpu_select_algorithm.py` without MKL:
```
Ran 1337 tests
OK (skipped=1211)
```
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,001,992,210
|
Update OpenBLAS commit
|
aditew01
|
open
|
[
"triaged",
"open source",
"module: arm",
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"ci-no-td"
] | 10
|
COLLABORATOR
|
Motivation: Update OpenBLAS and change the build script to enable SBGEMM kernels. Update the PyTorch `jammy` builds for aarch64 to use `install_openblas.sh` instead of `conda_install`.
Link to full [TorchInductor Performance Dashboard AArch64](https://hud.pytorch.org/benchmark/compilers?dashboard=torchinductor&startTime=Wed%2C%2016%20Apr%202025%2009%3A35%3A26%20GMT&stopTime=Thu%2C%2017%20Apr%202025%2009%3A35%3A26%20GMT&granularity=hour&mode=inference&dtype=bfloat16&deviceName=cpu%20(aarch64)&lBranch=adi/update_openblas&lCommit=90701ab81bf61fd864d31e0aa7e88d97a1a8676c&rBranch=main&rCommit=40ce4fb24a536d175348df876f61956d4945778e)
1. This shows a promising speedup across most of the HF models in the benchmark, specifically giving a significant boost to SDPA layers.
2. Overall, the torch-bench pass rate increased `[87%, 65/75 → 96%, 72/75]`.
<img width="676" alt="Screenshot 2025-04-17 at 10 32 10" src="https://github.com/user-attachments/assets/a92dce0c-ecee-4466-8175-065df664dd71" />
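As a side note (not part of this PR), a quick way to confirm which BLAS a given wheel was built against is to inspect the compile-time configuration from Python:
```
import torch

# Prints the build configuration, including the BLAS backend
# (e.g. BLAS_INFO=open for OpenBLAS builds), which helps verify that a wheel
# actually picked up the updated OpenBLAS before benchmarking.
print(torch.__config__.show())
```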
cc @malfet @snadampal @milpuz01 @nikhil-arm @fadara01
| true
|
3,001,983,671
|
DISABLED test_builtin_score_mods_different_block_size_float32_score_mod4_BLOCK_SIZE_128_cuda_float32 (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 2
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_builtin_score_mods_different_block_size_float32_score_mod4_BLOCK_SIZE_128_cuda_float32&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40701904903).
Over the past 3 hours, it has been determined flaky in 4 workflow(s) with 8 failures and 4 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_builtin_score_mods_different_block_size_float32_score_mod4_BLOCK_SIZE_128_cuda_float32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,001,983,471
|
DISABLED test_non_pow_2_headdim_head_dim_121_float16_cuda_float16 (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 3
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_non_pow_2_headdim_head_dim_121_float16_cuda_float16&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40702973314).
Over the past 3 hours, it has been determined flaky in 4 workflow(s) with 8 failures and 4 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_non_pow_2_headdim_head_dim_121_float16_cuda_float16`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,001,983,371
|
DISABLED test_non_equal_head_dims_score_mod2_float16_head_dims0_cuda_float16 (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 3
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_non_equal_head_dims_score_mod2_float16_head_dims0_cuda_float16&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40699885062).
Over the past 3 hours, it has been determined flaky in 4 workflow(s) with 8 failures and 4 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_non_equal_head_dims_score_mod2_float16_head_dims0_cuda_float16`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,001,983,288
|
DISABLED test_njt_causal_float32_cuda_float32 (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 2
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_njt_causal_float32_cuda_float32&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40701904903).
Over the past 3 hours, it has been determined flaky in 4 workflow(s) with 8 failures and 4 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_njt_causal_float32_cuda_float32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,001,983,166
|
DISABLED test_builtin_score_mods_float16_score_mod5_cuda_float16 (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 2
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_builtin_score_mods_float16_score_mod5_cuda_float16&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40701904903).
Over the past 3 hours, it has been determined flaky in 4 workflow(s) with 8 failures and 4 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_builtin_score_mods_float16_score_mod5_cuda_float16`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,001,983,022
|
DISABLED test_remove_noop_view_dtype_cuda (__main__.GPUTests)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"skipped",
"module: fx"
] | 7
|
NONE
|
Platforms: linux, rocm, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_remove_noop_view_dtype_cuda&suite=GPUTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40701077936).
Over the past 3 hours, it has been determined flaky in 8 workflow(s) with 16 failures and 8 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_remove_noop_view_dtype_cuda`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/inductor/test_torchinductor.py", line 13355, in new_test
return value(self)
File "/var/lib/jenkins/pytorch/test/inductor/test_torchinductor.py", line 13225, in test_remove_noop_view_dtype
self.assertExpectedInline(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3097, in assertExpectedInline
return super().assertExpectedInline(actual if isinstance(actual, str) else str(actual), expect, skip + 1)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/expecttest/__init__.py", line 413, in assertExpectedInline
assert_expected_inline(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/expecttest/__init__.py", line 378, in assert_expected_inline
assert_eq(expect, actual, msg=help_text)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/expecttest/__init__.py", line 450, in assertMultiLineEqualMaybeCppStack
self.assertMultiLineEqual(expect, actual, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 1226, in assertMultiLineEqual
self.fail(self._formatMessage(msg, standardMsg))
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 675, in fail
raise self.failureException(msg)
AssertionError: 'def forward(self, arg0_1: "Sym(s77)", arg[241 chars]te,)' != ''
- def forward(self, arg0_1: "Sym(s77)", arg1_1: "Sym(s27)", arg2_1: "Sym(s53)", arg3_1: "u8[s77, s27, s53][s27*s53, s53, 1]cuda:0"):
- permute: "u8[s77, s53, s27][s27*s53, 1, s53]cuda:0" = torch.ops.aten.permute.default(arg3_1, [0, 2, 1]); arg3_1 = None
- return (permute,) : To accept the new output, re-run test with envvar EXPECTTEST_ACCEPT=1 (we recommend staging/committing your changes before doing this)
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_compile_subprocess.py GPUTests.test_remove_noop_view_dtype_cuda
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_compile_subprocess.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
3,001,982,863
|
DISABLED test_remove_noop_view_dtype_cpu (__main__.CpuTests)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"skipped",
"module: fx"
] | 8
|
NONE
|
Platforms: asan, linux, mac, macos, rocm, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_remove_noop_view_dtype_cpu&suite=CpuTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40698419508).
Over the past 3 hours, it has been determined flaky in 23 workflow(s) with 46 failures and 23 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_remove_noop_view_dtype_cpu`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `inductor/test_compile_subprocess.py`
cc @clee2000 @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
3,001,982,682
|
DISABLED test_builtin_score_mods_different_block_size_float16_score_mod3_BLOCK_SIZE3_cuda_float16 (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 2
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_builtin_score_mods_different_block_size_float16_score_mod3_BLOCK_SIZE3_cuda_float16&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40705070214).
Over the past 3 hours, it has been determined flaky in 4 workflow(s) with 8 failures and 4 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_builtin_score_mods_different_block_size_float16_score_mod3_BLOCK_SIZE3_cuda_float16`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/inductor/test_flex_attention.py", line 1201, in test_builtin_score_mods_different_block_size
self.run_test(score_mod, dtype, block_mask=block_mask, device=device)
File "/var/lib/jenkins/workspace/test/inductor/test_flex_attention.py", line 491, in run_test
golden_out.backward(backward_grad.to(torch.float64))
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_tensor.py", line 648, in backward
torch.autograd.backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/__init__.py", line 354, in backward
_engine_run_backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/graph.py", line 824, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/function.py", line 307, in apply
return user_fn(self, *args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 679, in backward
) = flex_attention_backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 132, in __call__
return super().__call__(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 490, in __call__
return wrapper()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 486, in wrapper
return self.dispatch(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 346, in dispatch
return kernel(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 320, in maybe_run_autograd
return self(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 132, in __call__
return super().__call__(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 490, in __call__
return wrapper()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 486, in wrapper
return self.dispatch(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 346, in dispatch
return kernel(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 873, in sdpa_dense_backward
grad_scores = grad_scores * scale
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1024.00 MiB. GPU 0 has a total capacity of 22.05 GiB of which 356.12 MiB is free. Including non-PyTorch memory, this process has 21.69 GiB memory in use. Of the allocated memory 6.75 GiB is allocated by PyTorch, and 14.68 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
To execute this test, run the following from the base repo dir:
python test/inductor/test_flex_attention.py TestFlexAttentionCUDA.test_builtin_score_mods_different_block_size_float16_score_mod3_BLOCK_SIZE3_cuda_float16
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
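The allocator hint in the error above can be tried when re-running the repro locally; a minimal sketch, assuming a local run rather than the CI job configuration:
```
import os

# Must be set before torch initializes CUDA (assumption: local repro only,
# not something the CI workflow sets).
os.environ.setdefault("PYTORCH_CUDA_ALLOC_CONF", "expandable_segments:True")

import torch  # noqa: E402

print(torch.cuda.is_available())
```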
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,001,982,586
|
DISABLED test_builtin_score_mods_float16_score_mod1_cuda_float16 (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 2
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_builtin_score_mods_float16_score_mod1_cuda_float16&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40701904903).
Over the past 3 hours, it has been determined flaky in 4 workflow(s) with 8 failures and 4 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_builtin_score_mods_float16_score_mod1_cuda_float16`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,001,982,414
|
DISABLED test_builtin_score_mods_different_block_size_float32_score_mod6_BLOCK_SIZE2_cuda_float32 (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 3
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_builtin_score_mods_different_block_size_float32_score_mod6_BLOCK_SIZE2_cuda_float32&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40705070214).
Over the past 3 hours, it has been determined flaky in 4 workflow(s) with 8 failures and 4 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_builtin_score_mods_different_block_size_float32_score_mod6_BLOCK_SIZE2_cuda_float32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/inductor/test_flex_attention.py", line 1201, in test_builtin_score_mods_different_block_size
self.run_test(score_mod, dtype, block_mask=block_mask, device=device)
File "/var/lib/jenkins/workspace/test/inductor/test_flex_attention.py", line 491, in run_test
golden_out.backward(backward_grad.to(torch.float64))
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_tensor.py", line 648, in backward
torch.autograd.backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/__init__.py", line 354, in backward
_engine_run_backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/graph.py", line 824, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/function.py", line 307, in apply
return user_fn(self, *args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 679, in backward
) = flex_attention_backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 132, in __call__
return super().__call__(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 490, in __call__
return wrapper()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 486, in wrapper
return self.dispatch(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 346, in dispatch
return kernel(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 320, in maybe_run_autograd
return self(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 132, in __call__
return super().__call__(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 490, in __call__
return wrapper()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 486, in wrapper
return self.dispatch(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 346, in dispatch
return kernel(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 870, in sdpa_dense_backward
grad_scores, _, _, _, _, *grad_score_mod_captured = joint_score_mod(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/apis.py", line 202, in wrapped
return vmap_impl(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 334, in vmap_impl
return _flat_vmap(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 484, in _flat_vmap
batched_outputs = func(*batched_inputs, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/apis.py", line 202, in wrapped
return vmap_impl(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 334, in vmap_impl
return _flat_vmap(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 484, in _flat_vmap
batched_outputs = func(*batched_inputs, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/apis.py", line 202, in wrapped
return vmap_impl(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 334, in vmap_impl
return _flat_vmap(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 484, in _flat_vmap
batched_outputs = func(*batched_inputs, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/apis.py", line 202, in wrapped
return vmap_impl(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 334, in vmap_impl
return _flat_vmap(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 484, in _flat_vmap
batched_outputs = func(*batched_inputs, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/graph_module.py", line 833, in call_wrapped
return self._wrapped_call(self, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/graph_module.py", line 409, in __call__
raise e
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/graph_module.py", line 396, in __call__
return super(self.cls, obj).__call__(*args, **kwargs) # type: ignore[misc]
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
File "<eval_with_key>.925 from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/experimental/proxy_tensor.py:1265 in wrapped", line 9, in forward
where = torch.ops.aten.where.self(ge, add, scalar_tensor); add = scalar_tensor = where = None
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 795, in __call__
return self._op(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/_trace_wrapped_higher_order_op.py", line 142, in __torch_function__
return func(*args, **(kwargs or {}))
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 795, in __call__
return self._op(*args, **kwargs)
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1024.00 MiB. GPU 0 has a total capacity of 22.05 GiB of which 1.00 GiB is free. Including non-PyTorch memory, this process has 21.04 GiB memory in use. Of the allocated memory 6.76 GiB is allocated by PyTorch, and 14.01 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
To execute this test, run the following from the base repo dir:
python test/inductor/test_flex_attention.py TestFlexAttentionCUDA.test_builtin_score_mods_different_block_size_float32_score_mod6_BLOCK_SIZE2_cuda_float32
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|