| id (int64) | title (string) | user (string) | state (2 classes) | labels (list) | comments (int64) | author_association (4 classes) | body (string, nullable) | is_title (bool) |
|---|---|---|---|---|---|---|---|---|
2,803,050,923
|
[inductor][triton] refactor ASTSource.make_ir integration
|
davidberard98
|
open
|
[
"triaged",
"oncall: pt2",
"module: inductor",
"upstream triton",
"module: user triton"
] | 2
|
CONTRIBUTOR
|
### 🚀 The feature, motivation and pitch
User-defined Triton kernel support in Inductor relies on being able to get the TTIR for a given kernel, so that Inductor can do mutability analysis on the TTIR (in triton_kernel_wrap.py).
As the Triton implementation changes, our integration is getting more and more unwieldy, because we keep copying more and more of Triton's JITFunction handling into Inductor.
We should see whether Triton can expose a convenient API for this, and then use that API from within Inductor to simplify the handling.
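For context, here is a minimal sketch (my own illustration, assuming a CUDA device and Triton installed; not taken from the issue) of the scenario that exercises this path: a user-defined Triton kernel mutates its input inside a `torch.compile`'d function, so Inductor must inspect the kernel's TTIR to detect the mutation.
```python
import torch
import triton
import triton.language as tl

@triton.jit
def add_one_kernel(x_ptr, n_elements, BLOCK: tl.constexpr):
    pid = tl.program_id(0)
    offs = pid * BLOCK + tl.arange(0, BLOCK)
    mask = offs < n_elements
    x = tl.load(x_ptr + offs, mask=mask)
    tl.store(x_ptr + offs, x + 1, mask=mask)  # in-place store Inductor must detect

@torch.compile
def f(x):
    n = x.numel()
    add_one_kernel[(triton.cdiv(n, 1024),)](x, n, BLOCK=1024)
    return x

print(f(torch.zeros(4096, device="cuda"))[:4])  # tensor([1., 1., 1., 1.], device='cuda:0')
```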
### Alternatives
_No response_
### Additional context
_No response_
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov @bertmaher @int3 @nmacchioni @embg @peterbell10 @oulgen
| true
|
2,803,047,072
|
[inductor] let inplace-padding support cpp-wrapper
|
shunting314
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #145410
* __->__ #145325
* #140249
Some context: Inplace padding is an optimization to do padding in place. E.g., suppose a tensor has size [2048, 2047] and stride [2048, 1]. When we need to pad one extra element onto the end of each row (e.g. during mm padding), we can just reuse the original tensor and do the padding in place. This saves memory and bandwidth. One caveat for this optimization is that PyTorch does not allocate 2048 elements for the last row of the original tensor; it only allocates 2047 elements. So assuming the last row has enough space for 2048 elements may be wrong and cause OOB memory access (although I have never seen this happen, maybe due to overallocation in the CUDACachingAllocator, it is better to fix it).
The fix is when we allocate the tensor, instead of doing something like:
```
buf0 = randn_strided([2048, 2047], [2048, 1])
```
we do some small overallocation
```
buf0 = randn_strided([2048, 2048], [2048, 1]).as_strided([2048, 2047], [2048, 1])
```
cpp_wrapper needs special handling since memory allocation goes through a different code path than the Python wrapper.
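A small standalone sketch (my own illustration in eager PyTorch, not the generated wrapper code; `randn_strided` above is Inductor's codegen helper) of why the last row is short and how overallocating plus `as_strided` gives it room:
```python
import torch

# A [2048, 2047] view with stride [2048, 1] only needs
# 1 + 2047*2048 + 2046*1 = 4,194,303 storage elements, so the last row
# has no room for a 2048th element.
short = torch.empty_strided((2048, 2047), (2048, 1))
print(short.untyped_storage().nbytes() // short.element_size())  # 4194303

# Overallocate a full [2048, 2048] buffer, then view it back to the logical
# size; padding the last row in place now stays within the allocation.
buf0 = torch.randn(2048, 2048).as_strided((2048, 2047), (2048, 1))
print(buf0.untyped_storage().nbytes() // buf0.element_size())  # 4194304
```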
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,803,037,398
|
Add stft option to align window for center = false
|
jackzhxng
|
closed
|
[
"release notes: onnx"
] | 7
|
CONTRIBUTOR
|
Adds a flag for aligning the start of the window to the start of the signal when center = false (aka window-based padding). The same flag was proposed [a while ago](https://github.com/librosa/librosa/issues/596) for Librosa as well.
For internal reasons, we need to add this behavior to the op, and this flag allows us to do so while preserving backward compatibility. (A small sketch of the existing `center` behavior this builds on follows the PR chain below.)
PR chain:
- [Advance past fc window for stft center #145437](https://github.com/pytorch/pytorch/pull/145437)
- -> [Add stft option to align window for center = false #145324](https://github.com/pytorch/pytorch/pull/145324)
- [Add istft option to align window for center = false](https://github.com/pytorch/pytorch/pull/145510)
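As referenced above, a minimal sketch (my own illustration of current behavior, not the new flag) of how `center=True` vs. `center=False` changes framing in `torch.stft`; the proposed flag adjusts where the window is aligned in the `center=False` case:
```python
import torch

signal = torch.arange(16, dtype=torch.float32)
window = torch.hann_window(8)

# center=True pads the signal by n_fft // 2 on both sides so frames are
# centered on their hop positions; center=False starts the first frame at
# sample 0 with no padding.
centered = torch.stft(signal, n_fft=8, hop_length=4, window=window,
                      center=True, return_complex=True)
uncentered = torch.stft(signal, n_fft=8, hop_length=4, window=window,
                        center=False, return_complex=True)
print(centered.shape)    # torch.Size([5, 5]) -> 5 frames
print(uncentered.shape)  # torch.Size([5, 3]) -> 3 frames
```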
| true
|
2,803,036,276
|
fix a small typo in comments
|
haifeng-jin
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 10
|
COLLABORATOR
|
A minor typo fix.
The typo made the description confusing.
| true
|
2,803,025,609
|
[S481486] Move MTIA dynamic library loading from __init__.py to a separate module
|
chaos5958
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 7
|
CONTRIBUTOR
|
Summary: As titled
Test Plan:
- Passed CI tests
```
buck2 test 'fbcode//mode/opt' fbcode//ai_infra/distributed_ai/pyper_local_run/tests/integration_tests:test_icvr_e2e_gpu -- --exact 'ai_infra/distributed_ai/pyper_local_run/tests/integration_tests:test_icvr_e2e_gpu - test_icvr_e2e_gpu (ai_infra.distributed_ai.pyper_local_run.tests.integration_tests.test_icvr_e2e_gpu.TestIcvrE2EGpu)' --run-disabled
```
https://www.internalfb.com/intern/testinfra/testconsole/testrun/9007199320480497/
Differential Revision: D68463242
| true
|
2,803,020,858
|
inductor: Explicitly test that torch.compile(option=...) does something
|
c00w
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145321
This would have prevented https://github.com/pytorch/pytorch/pull/139833 from dropping the triggers.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,803,018,380
|
[BE] Bump TIMM pin
|
desertfire
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,803,007,264
|
Windows builds with VS2022
|
Camyll
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/binaries_wheel",
"test-config/default"
] | 7
|
CONTRIBUTOR
|
Fixes https://github.com/pytorch/pytorch/issues/128835
| true
|
2,803,003,734
|
[CI][CUDA][MultiGPU][Regression] Skip a failure due to https://github.com/pytorch/pytorch/issues/139520
|
nWEIdia
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 8
|
COLLABORATOR
|
Related: https://github.com/pytorch/pytorch/issues/139520
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov @atalman @malfet @ptrblck @eqy @tinglvv
| true
|
2,803,003,385
|
[testing] Consult test/dynamo_expected_failures and test/dynamo_skips when PYTORCH_TEST_WITH_INDUCTOR=1
|
masnesral
|
closed
|
[
"topic: not user facing",
"ciflow/inductor"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145317
| true
|
2,803,002,365
|
[do not land] test base commit
|
FindHao
|
closed
|
[
"ciflow/trunk",
"topic: not user facing",
"not4land",
"ciflow/inductor",
"keep-going"
] | 1
|
MEMBER
|
check ci errors for base commit
| true
|
2,802,986,249
|
[BE][export] Change custom_op registeration style
|
yiming0416
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 6
|
CONTRIBUTOR
|
Summary:
`test_unbacked_bindings_for_divisible_u_symint` has been flaky for a while due to
```
Tried to register an operator (mylib::foo(Tensor a, Tensor b) -> Tensor) with the same name and overload name multiple times.
```
This is likely because all variants of this test (non-strict, retrace, serdes) run in the same process, so by the time the later variants run, the operator has already been registered.
In this diff, we change the registration style.
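A minimal sketch (my assumption about the general approach, not necessarily the actual diff; the `mylib2::foo` namespace and lambda impl are made up for illustration) of why repeated registration fails and how the private `torch.library._scoped_library` test helper avoids it:
```python
import torch

# Registering the same schema twice in one process raises
# "Tried to register an operator ... multiple times", which is what happens
# when several test variants share module-level registration.
lib = torch.library.Library("mylib", "DEF")
lib.define("foo(Tensor a, Tensor b) -> Tensor")
# lib.define("foo(Tensor a, Tensor b) -> Tensor")  # would raise RuntimeError

# A scoped library is torn down when the block exits, so each test variant
# can register its ops fresh without colliding with earlier variants.
def run_variant():
    with torch.library._scoped_library("mylib2", "FRAGMENT") as tmp:
        tmp.define("foo(Tensor a, Tensor b) -> Tensor")
        tmp.impl("foo", lambda a, b: a + b, "CompositeExplicitAutograd")
        return torch.ops.mylib2.foo(torch.ones(2), torch.ones(2))

print(run_variant())  # tensor([2., 2.])
print(run_variant())  # runs again without a duplicate-registration error
```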
Test Plan:
```
buck2 test mode/dev-nosan caffe2/test:test_export -- -r test_unbacked_bindings_for_divisible_u_symint
```
Differential Revision: D68465258
| true
|
2,802,977,897
|
Fix ExecuTorch, XLA, Triton hash updates
|
huydhn
|
closed
|
[
"Merged",
"topic: not user facing",
"test-config/default"
] | 3
|
CONTRIBUTOR
|
Fix some stale hash updates (https://github.com/pytorch/pytorch/pulls/pytorchupdatebot) reported by @izaitsevfb
* XLA and ExecuTorch now wait for all jobs in `pull` instead of hardcoding job names that are no longer correct, which made the bot wait forever.
* The Triton commit hash hasn't been updated automatically since 2023, and people have been updating the pin manually with their own testing from time to time, so I doubt it is a useful thing to keep.
The vision update failures look more complex, though, and I would need to take a closer look, so I will handle them in another PR.
| true
|
2,802,976,353
|
[test] editable install pytorch sphinx
|
clee2000
|
closed
|
[
"topic: not user facing"
] | 1
|
CONTRIBUTOR
|
Fixes #ISSUE_NUMBER
| true
|
2,802,917,625
|
[MPS][BE] Move vectypes from Quantized to utils
|
malfet
|
closed
|
[
"better-engineering",
"Merged",
"topic: not user facing",
"release notes: mps",
"ciflow/mps"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145312
That allows one to get appropriate vectorized types for templates using `c10::metal::vec2type_t<>` or `c10::metal::vec4type_t<>`
| true
|
2,802,883,829
|
[BE] Type annotation for `_inductor/dependencies.py`
|
BoyuanFeng
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,802,875,025
|
[Utilization] post-test-process workflow
|
yangw-dev
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 18
|
CONTRIBUTOR
|
# Overview
Add a reusable workflow to trigger the post-test processing right after each test job completes.
Companion PRs to set up the runner permissions:
Add m fleet instances: https://github.com/pytorch-labs/pytorch-gha-infra/pull/595/files
Add to lix fleet: https://github.com/pytorch/ci-infra/pull/322/files
Currently the debug flag is turned on for testing.
| true
|
2,802,859,208
|
[MPS][BE] Prepare Gamma funcs to be moved to headers
|
malfet
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"release notes: mps",
"ciflow/mps"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #145341
* __->__ #145309
----
- Use `float y = 1.0 + metal::frac(x)` instead of the more complex (see the quick check after this list)
```metal
float y = x;
int n = 0;
bool less_than_one = (y < 1.0);
// Add or subtract integers as necessary to bring y into (1,2)
if (less_than_one) {
y += 1.0;
} else {
n = static_cast<int>(floor(y)) - 1;
y -= n;
}
```
- Declare them all as templates, to avoid instantiation
- Move global arrays to be local to the specific functions
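As referenced in the first bullet, a quick check (my own Python stand-in for `metal::frac`, restricted to non-negative inputs, where the two forms agree) that `1.0 + frac(x)` reproduces the branchy reduction into [1, 2):
```python
import math

def frac(x):
    """Python stand-in for metal::frac: fractional part, x - floor(x)."""
    return x - math.floor(x)

for x in [0.0, 0.3, 1.0, 2.7, 5.5, 10.0]:
    y_new = 1.0 + frac(x)

    # The branchy version being replaced: bring y into [1, 2) by adding or
    # subtracting integers.
    y_old = x
    if y_old < 1.0:
        y_old += 1.0
    else:
        y_old -= math.floor(y_old) - 1

    assert 1.0 <= y_new < 2.0 and math.isclose(y_new, y_old), (x, y_new, y_old)
print("1.0 + frac(x) matches the branchy reduction for non-negative x")
```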
| true
|
2,802,855,518
|
[BE] Remove test_ops_gradients from FIXME_inductor_dont_reset_dynamo
|
masnesral
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor",
"keep-going"
] | 7
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145308
| true
|
2,802,853,026
|
[BE] Remove test_ops from FIXME_inductor_dont_reset_dynamo
|
masnesral
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor",
"keep-going"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145307
| true
|
2,802,849,762
|
[BE] Remove test_modules from FIXME_inductor_dont_reset_dynamo
|
masnesral
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor",
"keep-going"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #145308
* __->__ #145306
| true
|
2,802,849,407
|
[dynamo] Use ConstDictVariable tracker for obj.__dict__
|
anijain2305
|
closed
|
[
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145305
* #145246
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,802,842,580
|
Fix for failure in D68425364
|
aorenste
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: fx",
"fx",
"keep-going"
] | 10
|
CONTRIBUTOR
|
Summary: Back out the change from #145166, which causes an internal model to fail.
Differential Revision: D68459095
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
2,802,831,897
|
Add unique identifier to bmm thread_mm functions
|
dmpots
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 13
|
CONTRIBUTOR
|
Summary:
The bmm template generates code like this
```
template<bool accum>
void cpp_fused_bmm_66_micro_gemm(...) {
...
}
void single_thread_mm() {
...
cpp_fused_bmm_66_micro_gemm(...)
...
}
void threaded_mm() {
...
cpp_fused_bmm_66_micro_gemm(...)
...
}
void cpp_fused_bmm_66(...)
{
...
single_thread_mm(...);
...
threaded_mm(...);
...
}
```
The generated `fused_bmm` and `fused_bmm_micro_gemm` functions both have unique identifiers added to their names, but `single_thread_mm` and `threaded_mm` do not.
This diff adds unique identifiers to those generated functions as well. The identifier is based on the kernel name, so for the example above we would generate a bmm template name like `cpp_fused_bmm_66_single_thread_mm()`.
Differential Revision: D68364772
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,802,827,591
|
[dynamo] Re-enable `test_fs` family for dynamo
|
StrongerXi
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 8
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145302
Fixes #91467.
| true
|
2,802,814,342
|
Add fused rms_norm implementation for MPS backend
|
manuelcandales
|
closed
|
[
"Merged",
"topic: improvements",
"release notes: mps",
"ciflow/mps"
] | 5
|
CONTRIBUTOR
|
Adding a fused rms_norm implementation for the MPS backend. This eliminates most of the current CPU overhead, making the computation GPU bound and improving the latency of rms_norm by **15x** on the MPS backend.
The metal shader was adapted from MLX: https://github.com/ml-explore/mlx/blob/e6a7ab967530866eb89c013f833f7c525bec10ca/mlx/backend/metal/kernels/rms_norm.metal
The numbers below are averages over 1000 runs of RMSNorm, obtained on an M1 Pro.
Benchmarking Results (Before):
```
Device : MPS | CPU
Dtype : FP32 | FP16 | BF16 | FP32 | FP16 | BF16
Outputs Match : True | True | True | True | True | True
Average Time (us) : 140.5 | 171.0 | 170.4 | 10.9 | 13.3 | 13.5
```
Benchmarking Results (After):
```
Device : MPS | CPU
Dtype : FP32 | FP16 | BF16 | FP32 | FP16 | BF16
Outputs Match : True | True | True | True | True | True
Average Time (us) : 10.1 | 11.7 | 12.6 | 10.0 | 12.4 | 13.0
```
Profiling Results (Before):
```
----------------------------- ------------ ------------ ------------ ------------ ------------ ------------
Name Self CPU % Self CPU CPU total % CPU total CPU time avg # of Calls
----------------------------- ------------ ------------ ------------ ------------ ------------ ------------
aten::rms_norm 2.35% 3.284ms 100.00% 140.038ms 140.038us 1000
aten::mul 33.61% 47.068ms 33.61% 47.068ms 23.534us 2000
aten::pow 17.04% 23.868ms 17.43% 24.402ms 24.402us 1000
aten::add_ 16.52% 23.130ms 16.78% 23.497ms 23.497us 1000
aten::mean 15.82% 22.151ms 15.82% 22.151ms 22.151us 1000
aten::rsqrt 13.63% 19.085ms 13.71% 19.198ms 19.198us 1000
aten::item 0.46% 639.370us 0.56% 788.376us 0.394us 2000
aten::type_as 0.21% 295.507us 0.27% 371.291us 0.371us 1000
aten::to 0.13% 177.742us 0.13% 177.742us 0.059us 3000
aten::_local_scalar_dense 0.11% 149.006us 0.11% 149.006us 0.075us 2000
----------------------------- ------------ ------------ ------------ ------------ ------------ ------------
Self CPU time total: 140.038ms
```
Profiling Results (After):
```
----------------------- ------------ ------------ ------------ ------------ ------------ ------------
Name Self CPU % Self CPU CPU total % CPU total CPU time avg # of Calls
----------------------- ------------ ------------ ------------ ------------ ------------ ------------
aten::rms_norm 63.21% 832.875us 100.00% 1.318ms 1.318us 1000
aten::empty_like 16.06% 211.631us 36.79% 484.681us 0.485us 1000
aten::empty_strided 20.72% 273.050us 20.72% 273.050us 0.273us 1000
----------------------- ------------ ------------ ------------ ------------ ------------ ------------
Self CPU time total: 1.318ms
```
Benchmarking and profiling script:
```python
import torch
import torch.nn as nn
from torch.profiler import profile
import time

def benchmark(device, dtype):
    model = nn.RMSNorm(2048, device=device)
    # Create example inputs
    x = torch.randn(1, 1, 2048, requires_grad=False, device=device, dtype=dtype)
    w = torch.randn(2048, requires_grad=False, device=device, dtype=dtype)
    eps = 1e-5
    # Check output
    y = torch.ops.aten.rms_norm(x, [2048], w, eps)
    z = torch.ops.aten.rms_norm(x.cpu(), [2048], w.cpu(), eps)
    outputs_match = torch.allclose(y.cpu(), z)
    # Measure time manually
    torch.mps.synchronize()
    start_time = time.time() * 1000
    for _ in range(1000):
        with torch.no_grad():
            y = model(x)
    torch.mps.synchronize()
    end_time = time.time() * 1000
    manual_delta = (end_time - start_time)
    average_time = f"{manual_delta:6.1f}"
    return outputs_match, average_time

outputs_match_list = []
average_time_list = []
for device in ["mps", "cpu"]:
    for dtype in [torch.float32, torch.float16, torch.bfloat16]:
        outputs_match, average_time = benchmark(device, dtype)
        outputs_match_list.append(str(outputs_match))
        average_time_list.append(average_time)

print("\nBenchmarking Results:")
print("---------------------")
print("Device            :  MPS | CPU")
print("Dtype             : FP32 | FP16 | BF16 | FP32 | FP16 | BF16")
print(f"Outputs Match     : ", " | ".join(outputs_match_list))
print(f"Average Time (us) :", " |".join(average_time_list))

device = "mps"
dtype = torch.float32
model = nn.RMSNorm(2048, device=device)
x = torch.randn(1, 1, 2048, requires_grad=False, device=device, dtype=dtype)

# Run and profile the model
with profile() as prof:
    with torch.no_grad():
        for _ in range(1000):
            y = model(x)
        torch.mps.synchronize()

# Print profiling results
print("\n\nProfiling Results (MPS/FP32):")
print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=10))
```
| true
|
2,802,805,896
|
[ONNX] derivative not implemented error when using torch.cond
|
ionymikler
|
closed
|
[
"module: onnx",
"oncall: pt2",
"export-triaged",
"oncall: export"
] | 15
|
NONE
|
### 🐛 Describe the bug
I am trying to export to ONNX format a model whose traversed computation graph depends on certain calculations done on the data.
The model is a variation of the Vision Transformer (ViT); in essence, in addition to the traditional attention blocks (teal blue in the diagram below), some intermediate blocks (`IC`s, the purple blocks) are added, which determine whether or not the following attention blocks should be traversed or skipped. A 'skip-token' that informs the skip decision is included in the data that flows through the network:

As can be seen in the diagram, two conditionals are needed: one that decides whether to change the content of the skip-token (the conditional in the purple flowchart), and another that reads that skip-token and determines whether to skip the next block or not.
Because I need to capture this dynamic behavior in the exported model, I have come to learn that `torch.cond` can help me achieve this. **Caveat:** I have also [come to learn](https://github.com/pytorch/pytorch/issues/143192) that torch.cond only works in export with the not-yet-released version 2.6.0.
The code for the entire model is a bit extensive, but these are the parts that implement the conditional logic. The "overall" conditional logic is this:
```python
class TransformerEnconder(nn.Module):
    # self.__init__ and other methods skipped...
    def forward(self, x):
        x_with_fastpass = add_fast_pass(x)
        for layer_idx in range(len(self.layers)):
            self.layer_idx = layer_idx
            fast_pass_layer = get_fast_pass(x_with_fastpass)
            x_with_fastpass = torch.cond(
                # torch.SymBool(fast_pass_layer.any()),
                fast_pass_layer.any(),
                self.fast_pass,
                self.layer_forward,
                (x_with_fastpass,),
            )
        return self.norm_post_layers(
            remove_fast_pass(x_with_fastpass)
        )  # Remove the fast-pass token before normalization

    def fast_pass(self, x_with_fastpass):
        return x_with_fastpass

    def layer_forward(self, x_with_fastpass):
        module_i = self.layers[self.layer_idx]  # (attn or IC)
        x_with_fastpass = module_i(x_with_fastpass)
        return x_with_fastpass
```
As for the conditional flow inside of the `IC` blocks, which determines when to modify the skip-token, is implemented as such:
```python
class ExitEvaluator:
    def __init__(self, config: EarlyExitsConfig, kwargs: dict):
        self.confidence_theshold = config.confidence_threshold

    def should_exit(self, logits):
        return confidence(logits) > self.confidence_theshold


class Highway(nn.Module):
    # self.__init__ and other methods skipped...
    def flip_token(self, x_with_fastpass):
        """Named function for true branch of cond"""
        return flip_fast_pass_token(x_with_fastpass)

    def keep_token(self, x_with_fastpass):
        """Named function for false branch of cond"""
        return x_with_fastpass

    def forward(self, x_with_fastpass):
        hidden_states = remove_fast_pass(x_with_fastpass)
        cls_embeddings = hidden_states[:, 0, :]
        patch_embeddings = hidden_states[:, 1:, :]
        # Process patch embeddings through highway network
        if self.highway_type == "self_attention":
            processed_embeddings = self.highway_head(patch_embeddings)[0]
        else:
            h = w = int(math.sqrt(patch_embeddings.size()[1]))
            processed_embeddings = self.highway_head(patch_embeddings, h, w)
        # Get logits through classifier
        logits = self.classifier(processed_embeddings, cls_embeddings)
        # Check if we should exit early
        x_with_fastpass = torch.cond(
            # torch.SymBool(self.exit_evaluator.should_exit(logits)),
            self.exit_evaluator.should_exit(logits),
            self.flip_token,
            self.keep_token,
            (x_with_fastpass,),
        )
        return x_with_fastpass
```
In case it is relevant, the methods that operate on the tokens are the following:
```python
def add_fast_pass(x):
    return torch.cat([x, torch.zeros(x.shape[0], 1, x.shape[-1])], dim=1)

def remove_fast_pass(x_with_fastpass: torch.Tensor):
    return x_with_fastpass.clone()[:, :-1, :]

def get_fast_pass(x_with_fastpass: torch.Tensor):
    return x_with_fastpass.clone()[:, -1, :]

def flip_fast_pass_token(x_with_fastpass: torch.Tensor):
    output = x_with_fastpass.clone()
    output[:, -1, :] = 1.0
    return output

def confidence(x):
    softmax = torch.softmax(x, dim=-1)
    # return 0.6
    return torch.max(softmax)
```
The call to the `onnx` library for exporting the model is done as follows:
```python
def export_model(model: nn.Module, _x, onnx_filepath: str):
    announce(f"Exporting model '{model.name}' to ONNX format")
    onnx_program = torch.onnx.export(
        model=model,
        args=(_x),
        dynamo=True,
        report=True,
        verbose=True,
    )
    onnx_program.save(onnx_filepath)
    logger.info(f"✅ Model exported to '{onnx_filepath}'")
```
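For comparison, a minimal self-contained `torch.cond` + `torch.export.export` sketch (my own reduced example following the documented pattern, not the model above) that can serve as a baseline when narrowing this down:
```python
import torch
import torch.nn as nn

class Toy(nn.Module):
    def forward(self, x):
        def true_fn(x):
            return x.sin()

        def false_fn(x):
            return x.cos()

        # Branches are plain named functions taking exactly the operands.
        return torch.cond(x.sum() > 0, true_fn, false_fn, (x,))

ep = torch.export.export(Toy(), (torch.randn(3),))
print(ep.graph_module.code)
```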
When run, I first check that the model can indeed execute, and then I call the `export_model` function. But the export fails. A report markdown is produced that displays the error encountered for each of the 'export strategies', but I think only the first one (`torch.export.export(..., strict=False)`) is relevant. I am posting all of them just in case.
The error mentioned for the `TorchExportNonStrictStrategy` suggests something about the functions passed to `torch.cond`, but I really don't know how to read this.
The error logs are below:
# PyTorch ONNX Conversion Error Report
```
❌ Obtain model graph with `torch.export.export(..., strict=False)`
❌ Obtain model graph with `torch.export.export(..., strict=True)`
❌ Obtain model graph with `torch.jit.trace`
⚪ Decompose operators for ONNX compatibility
⚪ Translate the graph into ONNX
⚪ Run `onnx.checker` on the ONNX model
⚪ Execute the model with ONNX Runtime
⚪ Validate model output accuracy
```
Error message:
```pytb
# ⚠️ Errors from strategy 'TorchExportNonStrictStrategy': -----------------------
Traceback (most recent call last):
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/onnx/_internal/exporter/_capture_strategies.py", line 110, in __call__
exported_program = self._capture(model, args, kwargs, dynamic_shapes)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/onnx/_internal/exporter/_capture_strategies.py", line 190, in _capture
return torch.export.export(
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/export/__init__.py", line 368, in export
return _export(
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/export/_trace.py", line 1031, in wrapper
raise e
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/export/_trace.py", line 1004, in wrapper
ep = fn(*args, **kwargs)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/export/exported_program.py", line 128, in wrapper
return fn(*args, **kwargs)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/export/_trace.py", line 1961, in _export
return _export_for_training(
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/export/_trace.py", line 1031, in wrapper
raise e
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/export/_trace.py", line 1004, in wrapper
ep = fn(*args, **kwargs)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/export/exported_program.py", line 128, in wrapper
return fn(*args, **kwargs)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/export/_trace.py", line 1825, in _export_for_training
export_artifact = export_func( # type: ignore[operator]
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/export/_trace.py", line 1762, in _non_strict_export
aten_export_artifact = _to_aten_func( # type: ignore[operator]
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/export/_trace.py", line 1551, in _export_to_aten_ir_make_fx
gm, graph_signature = transform(_make_fx_helper)(
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/export/_trace.py", line 1692, in _aot_export_non_strict
gm, sig = aot_export(wrapped_mod, args, kwargs=kwargs, **flags)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/export/_trace.py", line 1476, in _make_fx_helper
gm = make_fx(
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/fx/experimental/proxy_tensor.py", line 2196, in wrapped
return make_fx_tracer.trace(f, *args)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/fx/experimental/proxy_tensor.py", line 2134, in trace
return self._trace_inner(f, *args)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/fx/experimental/proxy_tensor.py", line 2105, in _trace_inner
t = dispatch_trace(
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_compile.py", line 32, in inner
return disable_fn(*args, **kwargs)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_dynamo/eval_frame.py", line 755, in _fn
return fn(*args, **kwargs)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/fx/experimental/proxy_tensor.py", line 1138, in dispatch_trace
graph = tracer.trace(root, concrete_args) # type: ignore[arg-type]
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/fx/experimental/proxy_tensor.py", line 1694, in trace
res = super().trace(root, concrete_args)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_dynamo/eval_frame.py", line 755, in _fn
return fn(*args, **kwargs)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/fx/_symbolic_trace.py", line 843, in trace
(self.create_arg(fn(*args)),),
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/fx/experimental/proxy_tensor.py", line 1193, in wrapped
out = f(*tensors) # type:ignore[call-arg]
File "<string>", line 1, in <lambda>
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/export/_trace.py", line 1460, in wrapped_fn
return tuple(flat_fn(*args))
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_functorch/_aot_autograd/utils.py", line 184, in flat_fn
tree_out = fn(*args, **kwargs)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_functorch/_aot_autograd/traced_function_transforms.py", line 879, in functional_call
out = mod(*args[params_len:], **kwargs)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/fx/_symbolic_trace.py", line 821, in module_call_wrapper
return self.call_module(mod, forward, args, kwargs)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/fx/experimental/proxy_tensor.py", line 1764, in call_module
return Tracer.call_module(self, m, forward, args, kwargs)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/fx/_symbolic_trace.py", line 539, in call_module
ret_val = forward(*args, **kwargs)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/fx/_symbolic_trace.py", line 814, in forward
return _orig_module_call(mod, *args, **kwargs)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/export/_trace.py", line 1676, in forward
tree_out = mod(*args, **kwargs)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/fx/_symbolic_trace.py", line 821, in module_call_wrapper
return self.call_module(mod, forward, args, kwargs)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/fx/experimental/proxy_tensor.py", line 1764, in call_module
return Tracer.call_module(self, m, forward, args, kwargs)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/fx/_symbolic_trace.py", line 539, in call_module
ret_val = forward(*args, **kwargs)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/fx/_symbolic_trace.py", line 814, in forward
return _orig_module_call(mod, *args, **kwargs)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
File "/home/iony/DTU/f24/thesis/code/early_exit_vit/vit.py", line 299, in forward
x = self.transformer(x)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/fx/_symbolic_trace.py", line 821, in module_call_wrapper
return self.call_module(mod, forward, args, kwargs)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/fx/experimental/proxy_tensor.py", line 1764, in call_module
return Tracer.call_module(self, m, forward, args, kwargs)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/fx/_symbolic_trace.py", line 539, in call_module
ret_val = forward(*args, **kwargs)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/fx/_symbolic_trace.py", line 814, in forward
return _orig_module_call(mod, *args, **kwargs)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
File "/home/iony/DTU/f24/thesis/code/early_exit_vit/vit.py", line 232, in forward
x_with_fastpass = torch.cond(
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_higher_order_ops/cond.py", line 201, in cond
return torch.compile(_cond_op_wrapper, backend=backend, fullgraph=True)(
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_dynamo/eval_frame.py", line 576, in _fn
return fn(*args, **kwargs)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_higher_order_ops/cond.py", line 192, in _cond_op_wrapper
def _cond_op_wrapper(*args, **kwargs):
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_dynamo/eval_frame.py", line 755, in _fn
return fn(*args, **kwargs)
File "<eval_with_key>.259", line 32, in forward
cond = torch.ops.higher_order.cond(l_args_0_, cond_true_1, cond_false_1, [l_args_3_0_, l_args_1_self_modules_layers_modules_5_modules_classifier_modules_classifier_parameters_bias_, l_args_1_self_modules_layers_modules_5_modules_classifier_modules_classifier_parameters_weight_, l_args_1_self_modules_layers_modules_5_modules_highway_head_modules_conv1_modules_0_parameters_bias_, l_args_1_self_modules_layers_modules_5_modules_highway_head_modules_conv1_modules_0_parameters_weight_, l_args_1_self_modules_layers_modules_5_modules_highway_head_modules_conv1_modules_2_buffers_num_batches_tracked_, l_args_1_self_modules_layers_modules_5_modules_highway_head_modules_conv1_modules_2_buffers_running_mean_, l_args_1_self_modules_layers_modules_5_modules_highway_head_modules_conv1_modules_2_buffers_running_var_, l_args_1_self_modules_layers_modules_5_modules_highway_head_modules_conv1_modules_2_parameters_bias_, l_args_1_self_modules_layers_modules_5_modules_highway_head_modules_conv1_modules_2_parameters_weight_, l_args_1_self_modules_layers_modules_5_modules_highway_head_modules_conv2_modules_0_parameters_bias_, l_args_1_self_modules_layers_modules_5_modules_highway_head_modules_conv2_modules_0_parameters_weight_, l_args_1_self_modules_layers_modules_5_modules_highway_head_modules_conv2_modules_1_buffers_num_batches_tracked_, l_args_1_self_modules_layers_modules_5_modules_highway_head_modules_conv2_modules_1_buffers_running_mean_, l_args_1_self_modules_layers_modules_5_modules_highway_head_modules_conv2_modules_1_buffers_running_var_, l_args_1_self_modules_layers_modules_5_modules_highway_head_modules_conv2_modules_1_parameters_bias_, l_args_1_self_modules_layers_modules_5_modules_highway_head_modules_conv2_modules_1_parameters_weight_, l_args_1_self_modules_layers_modules_5_modules_highway_head_modules_proj_bn_buffers_num_batches_tracked_, l_args_1_self_modules_layers_modules_5_modules_highway_head_modules_proj_bn_buffers_running_mean_, l_args_1_self_modules_layers_modules_5_modules_highway_head_modules_proj_bn_buffers_running_var_, l_args_1_self_modules_layers_modules_5_modules_highway_head_modules_proj_bn_parameters_bias_, l_args_1_self_modules_layers_modules_5_modules_highway_head_modules_proj_bn_parameters_weight_, l_args_1_self_modules_layers_modules_5_modules_highway_head_modules_proj_parameters_bias_, l_args_1_self_modules_layers_modules_5_modules_highway_head_modules_proj_parameters_weight_]); l_args_0_ = cond_true_1 = cond_false_1 = l_args_3_0_ = l_args_1_self_modules_layers_modules_5_modules_classifier_modules_classifier_parameters_bias_ = l_args_1_self_modules_layers_modules_5_modules_classifier_modules_classifier_parameters_weight_ = l_args_1_self_modules_layers_modules_5_modules_highway_head_modules_conv1_modules_0_parameters_bias_ = l_args_1_self_modules_layers_modules_5_modules_highway_head_modules_conv1_modules_0_parameters_weight_ = l_args_1_self_modules_layers_modules_5_modules_highway_head_modules_conv1_modules_2_buffers_num_batches_tracked_ = l_args_1_self_modules_layers_modules_5_modules_highway_head_modules_conv1_modules_2_buffers_running_mean_ = l_args_1_self_modules_layers_modules_5_modules_highway_head_modules_conv1_modules_2_buffers_running_var_ = l_args_1_self_modules_layers_modules_5_modules_highway_head_modules_conv1_modules_2_parameters_bias_ = l_args_1_self_modules_layers_modules_5_modules_highway_head_modules_conv1_modules_2_parameters_weight_ = l_args_1_self_modules_layers_modules_5_modules_highway_head_modules_conv2_modules_0_parameters_bias_ = 
l_args_1_self_modules_layers_modules_5_modules_highway_head_modules_conv2_modules_0_parameters_weight_ = l_args_1_self_modules_layers_modules_5_modules_highway_head_modules_conv2_modules_1_buffers_num_batches_tracked_ = l_args_1_self_modules_layers_modules_5_modules_highway_head_modules_conv2_modules_1_buffers_running_mean_ = l_args_1_self_modules_layers_modules_5_modules_highway_head_modules_conv2_modules_1_buffers_running_var_ = l_args_1_self_modules_layers_modules_5_modules_highway_head_modules_conv2_modules_1_parameters_bias_ = l_args_1_self_modules_layers_modules_5_modules_highway_head_modules_conv2_modules_1_parameters_weight_ = l_args_1_self_modules_layers_modules_5_modules_highway_head_modules_proj_bn_buffers_num_batches_tracked_ = l_args_1_self_modules_layers_modules_5_modules_highway_head_modules_proj_bn_buffers_running_mean_ = l_args_1_self_modules_layers_modules_5_modules_highway_head_modules_proj_bn_buffers_running_var_ = l_args_1_self_modules_layers_modules_5_modules_highway_head_modules_proj_bn_parameters_bias_ = l_args_1_self_modules_layers_modules_5_modules_highway_head_modules_proj_bn_parameters_weight_ = l_args_1_self_modules_layers_modules_5_modules_highway_head_modules_proj_parameters_bias_ = l_args_1_self_modules_layers_modules_5_modules_highway_head_modules_proj_parameters_weight_ = None
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_higher_order_ops/cond.py", line 63, in __call__
return super().__call__(pred, true_fn, false_fn, operands)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_ops.py", line 439, in __call__
return wrapper()
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_dynamo/eval_frame.py", line 755, in _fn
return fn(*args, **kwargs)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_ops.py", line 430, in wrapper
return torch.overrides.handle_torch_function(
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/overrides.py", line 1720, in handle_torch_function
result = mode.__torch_function__(public_api, types, args, kwargs)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_export/non_strict_utils.py", line 581, in __torch_function__
return func(*args, **kwargs)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_higher_order_ops/cond.py", line 63, in __call__
return super().__call__(pred, true_fn, false_fn, operands)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_ops.py", line 439, in __call__
return wrapper()
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_dynamo/eval_frame.py", line 755, in _fn
return fn(*args, **kwargs)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_ops.py", line 435, in wrapper
return self.dispatch(
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_ops.py", line 396, in dispatch
return handler(mode, *args, **kwargs)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_higher_order_ops/cond.py", line 442, in inner
return trace_cond(mode, cond_op, pred, true_fn, false_fn, operands)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_higher_order_ops/cond.py", line 361, in trace_cond
out = false_fn(*operands)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/fx/graph_module.py", line 822, in call_wrapped
return self._wrapped_call(self, *args, **kwargs)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/fx/graph_module.py", line 400, in __call__
raise e
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/fx/graph_module.py", line 387, in __call__
return super(self.cls, obj).__call__(*args, **kwargs) # type: ignore[misc]
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/fx/_symbolic_trace.py", line 821, in module_call_wrapper
return self.call_module(mod, forward, args, kwargs)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/fx/experimental/proxy_tensor.py", line 1761, in call_module
return forward(*args, **kwargs)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/fx/_symbolic_trace.py", line 814, in forward
return _orig_module_call(mod, *args, **kwargs)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
File "<eval_with_key>.257", line 39, in forward
cond = torch.ops.higher_order.cond(gt, cond_true_0, cond_false_0, [l_args_3_0__1]); gt = cond_true_0 = cond_false_0 = l_args_3_0__1 = None
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_higher_order_ops/cond.py", line 63, in __call__
return super().__call__(pred, true_fn, false_fn, operands)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_ops.py", line 439, in __call__
return wrapper()
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_dynamo/eval_frame.py", line 755, in _fn
return fn(*args, **kwargs)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_ops.py", line 435, in wrapper
return self.dispatch(
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_ops.py", line 302, in dispatch
return kernel(*args, **kwargs)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_higher_order_ops/cond.py", line 429, in cond_autograd
flat_out = CondAutogradOp.apply(
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/autograd/function.py", line 575, in apply
return super().apply(*args, **kwargs) # type: ignore[misc]
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_higher_order_ops/cond.py", line 396, in forward
return cond_op(pred, fw_true_graph, fw_false_graph, operands)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_higher_order_ops/cond.py", line 63, in __call__
return super().__call__(pred, true_fn, false_fn, operands)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_ops.py", line 439, in __call__
return wrapper()
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_dynamo/eval_frame.py", line 755, in _fn
return fn(*args, **kwargs)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_ops.py", line 435, in wrapper
return self.dispatch(
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_ops.py", line 338, in dispatch
result = handler(mode, *args, **kwargs)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_higher_order_ops/cond.py", line 476, in cond_fake_tensor_mode
f"\n {true_fn.__name__} returns {true_meta}"
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/fx/_symbolic_trace.py", line 808, in module_getattr_wrapper
attr_val = _orig_module_getattr(mod, attr)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1928, in __getattr__
raise AttributeError(
AttributeError: '<lambda>' object has no attribute '__name__'
# ⚠️ Errors from strategy 'TorchExportStrategy': -----------------------
Traceback (most recent call last):
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/onnx/_internal/exporter/_capture_strategies.py", line 110, in __call__
exported_program = self._capture(model, args, kwargs, dynamic_shapes)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/onnx/_internal/exporter/_capture_strategies.py", line 145, in _capture
return torch.export.export(
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/export/__init__.py", line 368, in export
return _export(
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/export/_trace.py", line 1031, in wrapper
raise e
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/export/_trace.py", line 1004, in wrapper
ep = fn(*args, **kwargs)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/export/exported_program.py", line 128, in wrapper
return fn(*args, **kwargs)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/export/_trace.py", line 1961, in _export
return _export_for_training(
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/export/_trace.py", line 1031, in wrapper
raise e
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/export/_trace.py", line 1004, in wrapper
ep = fn(*args, **kwargs)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/export/exported_program.py", line 128, in wrapper
return fn(*args, **kwargs)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/export/_trace.py", line 1825, in _export_for_training
export_artifact = export_func( # type: ignore[operator]
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/export/_trace.py", line 1283, in _strict_export_lower_to_aten_ir
gm_torch_level = _export_to_torch_ir(
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/export/_trace.py", line 667, in _export_to_torch_ir
gm_torch_level, _ = torch._dynamo.export(
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_dynamo/eval_frame.py", line 1583, in inner
result_traced = opt_f(*args, **kwargs)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_dynamo/eval_frame.py", line 576, in _fn
return fn(*args, **kwargs)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
File "/home/iony/DTU/f24/thesis/code/early_exit_vit/vit.py", line 296, in forward
def forward(self, x):
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_dynamo/eval_frame.py", line 755, in _fn
return fn(*args, **kwargs)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_dynamo/eval_frame.py", line 1546, in result_capturing_wrapper
graph_captured_result = torch.func.functional_call(
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_functorch/functional_call.py", line 148, in functional_call
return nn.utils.stateless._functional_call(
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/nn/utils/stateless.py", line 282, in _functional_call
return module(*args, **kwargs)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/fx/_lazy_graph_module.py", line 126, in _lazy_forward
return self(*args, **kwargs)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/fx/graph_module.py", line 822, in call_wrapped
return self._wrapped_call(self, *args, **kwargs)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/fx/graph_module.py", line 400, in __call__
raise e
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/fx/graph_module.py", line 387, in __call__
return super(self.cls, obj).__call__(*args, **kwargs) # type: ignore[misc]
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
File "<eval_with_key>.309", line 34, in forward
cond = torch.ops.higher_order.cond(any_1, cond_true_0, cond_false_0, [x_with_fastpass, getattr_l__self___transformer__modules__layers___0_attention_output___0___bias, getattr_l__self___transformer__modules__layers___0_attention_output___0___weight, getattr_l__self___transformer__modules__layers___0_norm_mlp_mlp___0___bias, getattr_l__self___transformer__modules__layers___0_norm_mlp_mlp___0___weight, getattr_l__self___transformer__modules__layers___0_norm_mlp_mlp___3___bias, getattr_l__self___transformer__modules__layers___0_norm_mlp_mlp___3___weight, l__self___transformer__modules__layers___0_norm_bias, l__self___transformer__modules__layers___0_norm_mlp_norm_bias, l__self___transformer__modules__layers___0_norm_mlp_norm_weight, l__self___transformer__modules__layers___0_norm_weight, l__self___transformer__modules__layers___0_w_qkv_bias, l__self___transformer__modules__layers___0_w_qkv_weight]); any_1 = cond_true_0 = cond_false_0 = x_with_fastpass = getattr_l__self___transformer__modules__layers___0_attention_output___0___bias = getattr_l__self___transformer__modules__layers___0_attention_output___0___weight = getattr_l__self___transformer__modules__layers___0_norm_mlp_mlp___0___bias = getattr_l__self___transformer__modules__layers___0_norm_mlp_mlp___0___weight = getattr_l__self___transformer__modules__layers___0_norm_mlp_mlp___3___bias = getattr_l__self___transformer__modules__layers___0_norm_mlp_mlp___3___weight = l__self___transformer__modules__layers___0_norm_bias = l__self___transformer__modules__layers___0_norm_mlp_norm_bias = l__self___transformer__modules__layers___0_norm_mlp_norm_weight = l__self___transformer__modules__layers___0_norm_weight = l__self___transformer__modules__layers___0_w_qkv_bias = l__self___transformer__modules__layers___0_w_qkv_weight = None
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_higher_order_ops/cond.py", line 63, in __call__
return super().__call__(pred, true_fn, false_fn, operands)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_ops.py", line 439, in __call__
return wrapper()
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_dynamo/eval_frame.py", line 755, in _fn
return fn(*args, **kwargs)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_ops.py", line 435, in wrapper
return self.dispatch(
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_ops.py", line 302, in dispatch
return kernel(*args, **kwargs)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_higher_order_ops/cond.py", line 429, in cond_autograd
flat_out = CondAutogradOp.apply(
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/autograd/function.py", line 575, in apply
return super().apply(*args, **kwargs) # type: ignore[misc]
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_higher_order_ops/cond.py", line 396, in forward
return cond_op(pred, fw_true_graph, fw_false_graph, operands)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_higher_order_ops/cond.py", line 63, in __call__
return super().__call__(pred, true_fn, false_fn, operands)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_ops.py", line 439, in __call__
return wrapper()
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_dynamo/eval_frame.py", line 755, in _fn
return fn(*args, **kwargs)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_ops.py", line 435, in wrapper
return self.dispatch(
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_ops.py", line 338, in dispatch
result = handler(mode, *args, **kwargs)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_higher_order_ops/cond.py", line 476, in cond_fake_tensor_mode
f"\n {true_fn.__name__} returns {true_meta}"
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1928, in __getattr__
raise AttributeError(
AttributeError: '<lambda>' object has no attribute '__name__'
# ⚠️ Errors from strategy 'JitTraceConvertStrategy': -----------------------
Traceback (most recent call last):
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/onnx/_internal/exporter/_capture_strategies.py", line 110, in __call__
exported_program = self._capture(model, args, kwargs, dynamic_shapes)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/onnx/_internal/exporter/_capture_strategies.py", line 278, in _capture
jit_model = torch.jit.trace(
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_dynamo/eval_frame.py", line 755, in _fn
return fn(*args, **kwargs)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/jit/_trace.py", line 1000, in trace
traced_func = _trace_impl(
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/jit/_trace.py", line 696, in _trace_impl
return trace_module(
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/jit/_trace.py", line 1276, in trace_module
module._c._create_method_from_trace(
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1729, in _slow_forward
result = self.forward(*input, **kwargs)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/onnx/_internal/exporter/_capture_strategies.py", line 270, in forward
results = self.model(*unflattened_args, **unflattened_kwargs)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1729, in _slow_forward
result = self.forward(*input, **kwargs)
File "/home/iony/DTU/f24/thesis/code/early_exit_vit/vit.py", line 299, in forward
x = self.transformer(x)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1729, in _slow_forward
result = self.forward(*input, **kwargs)
File "/home/iony/DTU/f24/thesis/code/early_exit_vit/vit.py", line 232, in forward
x_with_fastpass = torch.cond(
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_higher_order_ops/cond.py", line 201, in cond
return torch.compile(_cond_op_wrapper, backend=backend, fullgraph=True)(
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_dynamo/eval_frame.py", line 554, in _fn
raise RuntimeError(
RuntimeError: Detected that you are using FX to torch.jit.trace a dynamo-optimized function. This is not supported at the moment.
# ⚠️ Errors from strategy 'LegacyDynamoStrategy': -----------------------
Traceback (most recent call last):
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/onnx/_internal/exporter/_capture_strategies.py", line 110, in __call__
exported_program = self._capture(model, args, kwargs, dynamic_shapes)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/onnx/_internal/exporter/_capture_strategies.py", line 329, in _capture
graph_module, _ = torch._dynamo.export(
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_dynamo/eval_frame.py", line 1583, in inner
result_traced = opt_f(*args, **kwargs)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_dynamo/eval_frame.py", line 576, in _fn
return fn(*args, **kwargs)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
File "/home/iony/DTU/f24/thesis/code/early_exit_vit/vit.py", line 296, in forward
def forward(self, x):
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_dynamo/eval_frame.py", line 755, in _fn
return fn(*args, **kwargs)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_dynamo/eval_frame.py", line 1546, in result_capturing_wrapper
graph_captured_result = torch.func.functional_call(
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_functorch/functional_call.py", line 148, in functional_call
return nn.utils.stateless._functional_call(
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/nn/utils/stateless.py", line 282, in _functional_call
return module(*args, **kwargs)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/fx/_lazy_graph_module.py", line 126, in _lazy_forward
return self(*args, **kwargs)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/fx/graph_module.py", line 822, in call_wrapped
return self._wrapped_call(self, *args, **kwargs)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/fx/graph_module.py", line 400, in __call__
raise e
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/fx/graph_module.py", line 387, in __call__
return super(self.cls, obj).__call__(*args, **kwargs) # type: ignore[misc]
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
File "<eval_with_key>.355", line 35, in forward
cond = torch.ops.higher_order.cond(any_1, cond_true_0, cond_false_0, [x_with_fastpass, getattr_l__self___transformer__modules__layers___0_attention_output___0___bias, getattr_l__self___transformer__modules__layers___0_attention_output___0___weight, getattr_l__self___transformer__modules__layers___0_norm_mlp_mlp___0___bias, getattr_l__self___transformer__modules__layers___0_norm_mlp_mlp___0___weight, getattr_l__self___transformer__modules__layers___0_norm_mlp_mlp___3___bias, getattr_l__self___transformer__modules__layers___0_norm_mlp_mlp___3___weight, l__self___transformer__modules__layers___0_norm_bias, l__self___transformer__modules__layers___0_norm_mlp_norm_bias, l__self___transformer__modules__layers___0_norm_mlp_norm_weight, l__self___transformer__modules__layers___0_norm_weight, l__self___transformer__modules__layers___0_w_qkv_bias, l__self___transformer__modules__layers___0_w_qkv_weight]); any_1 = cond_true_0 = cond_false_0 = x_with_fastpass = getattr_l__self___transformer__modules__layers___0_attention_output___0___bias = getattr_l__self___transformer__modules__layers___0_attention_output___0___weight = getattr_l__self___transformer__modules__layers___0_norm_mlp_mlp___0___bias = getattr_l__self___transformer__modules__layers___0_norm_mlp_mlp___0___weight = getattr_l__self___transformer__modules__layers___0_norm_mlp_mlp___3___bias = getattr_l__self___transformer__modules__layers___0_norm_mlp_mlp___3___weight = l__self___transformer__modules__layers___0_norm_bias = l__self___transformer__modules__layers___0_norm_mlp_norm_bias = l__self___transformer__modules__layers___0_norm_mlp_norm_weight = l__self___transformer__modules__layers___0_norm_weight = l__self___transformer__modules__layers___0_w_qkv_bias = l__self___transformer__modules__layers___0_w_qkv_weight = None
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_higher_order_ops/cond.py", line 63, in __call__
return super().__call__(pred, true_fn, false_fn, operands)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_ops.py", line 439, in __call__
return wrapper()
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_dynamo/eval_frame.py", line 755, in _fn
return fn(*args, **kwargs)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_ops.py", line 435, in wrapper
return self.dispatch(
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_ops.py", line 302, in dispatch
return kernel(*args, **kwargs)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_higher_order_ops/cond.py", line 429, in cond_autograd
flat_out = CondAutogradOp.apply(
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/autograd/function.py", line 575, in apply
return super().apply(*args, **kwargs) # type: ignore[misc]
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_higher_order_ops/cond.py", line 396, in forward
return cond_op(pred, fw_true_graph, fw_false_graph, operands)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_higher_order_ops/cond.py", line 63, in __call__
return super().__call__(pred, true_fn, false_fn, operands)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_ops.py", line 439, in __call__
return wrapper()
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_dynamo/eval_frame.py", line 755, in _fn
return fn(*args, **kwargs)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_ops.py", line 435, in wrapper
return self.dispatch(
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_ops.py", line 338, in dispatch
result = handler(mode, *args, **kwargs)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_higher_order_ops/cond.py", line 476, in cond_fake_tensor_mode
f"\n {true_fn.__name__} returns {true_meta}"
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1928, in __getattr__
raise AttributeError(
AttributeError: '<lambda>' object has no attribute '__name__'
```
### Versions
Collecting environment information...
PyTorch version: 2.6.0.dev20241226+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: Could not collect
CMake version: version 3.16.3
Libc version: glibc-2.31
Python version: 3.9.21 (main, Dec 11 2024, 16:24:11) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-130-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce GTX 1650
Nvidia driver version: 550.127.08
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 39 bits physical, 48 bits virtual
CPU(s): 12
On-line CPU(s) list: 0-11
Thread(s) per core: 2
Core(s) per socket: 6
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 158
Model name: Intel(R) Core(TM) i7-9750H CPU @ 2.60GHz
Stepping: 10
CPU MHz: 2600.000
CPU max MHz: 4500.0000
CPU min MHz: 800.0000
BogoMIPS: 5199.98
Virtualization: VT-x
L1d cache: 192 KiB
L1i cache: 192 KiB
L2 cache: 1.5 MiB
L3 cache: 12 MiB
NUMA node0 CPU(s): 0-11
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS; IBPB conditional; STIBP conditional; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Mitigation; Microcode
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp md_clear flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] onnx==1.17.0
[pip3] onnxruntime==1.19.2
[pip3] onnxscript==0.1.0.dev20241226
[pip3] pytorch-triton==3.2.0+git0d4682f0
[pip3] torch==2.6.0.dev20241226+cu124
[pip3] torchaudio==2.6.0.dev20241226+cu124
[pip3] torchvision==0.22.0.dev20241226+cu124
[conda] numpy 1.26.4 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] pytorch-triton 3.2.0+git0d4682f0 pypi_0 pypi
[conda] torch 2.6.0.dev20241226+cu124 pypi_0 pypi
[conda] torchaudio 2.6.0.dev20241226+cu124 pypi_0 pypi
[conda] torchvision 0.22.0.dev20241226+cu124 pypi_0 pypi
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4
| true
|
2,802,769,451
|
[dynamo] Re-enable a AOT-Dispatch test with Dynamo
|
StrongerXi
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145299
Fixes #124590.
| true
|
2,802,754,650
|
[hop][be] add utils for more comprehensive input alias and mutation
|
ydwu4
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145298
This PR implements the idea from @zou3519 of checking input mutations through the tensor version counter and checking aliasing via storage. Previously, we relied on whether there's an in-place op that takes a placeholder input, which doesn't take views into account.
When writing the PR, I also noticed a bug in the previous input-mutation checking logic: we were checking for mutating operators in functionalized_f, where all the mutating ops have already been replaced, so we wouldn't be able to detect anything.
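A rough sketch of the underlying idea (illustrative only, not the code added in this PR): a bumped version counter flags in-place mutation even when it happens through a view, and comparing storages catches output/input aliasing.
```python
import torch

def sketch_mutation_and_alias_check(fn, *inputs):
    # Record version counters and storages before running the function.
    versions = [t._version for t in inputs]
    storages = {t.untyped_storage().data_ptr() for t in inputs}
    outputs = fn(*inputs)
    # A bumped version counter means the input (or a view of it) was mutated.
    mutated = [i for i, t in enumerate(inputs) if t._version != versions[i]]
    # An output sharing storage with an input aliases that input.
    aliased = [
        i for i, o in enumerate(outputs)
        if isinstance(o, torch.Tensor)
        and o.untyped_storage().data_ptr() in storages
    ]
    return mutated, aliased

x = torch.ones(3)
print(sketch_mutation_and_alias_check(lambda t: (t.view(-1).add_(1),), x))
# ([0], [0]) -- the view-based in-place add is still detected
```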
| true
|
2,802,741,277
|
Added new lines of code to force cuda aware MPI usage and also to support fp16 and reduce_scatter communication operation
|
Shoaib-git20
|
closed
|
[
"oncall: distributed",
"open source",
"release notes: distributed (c10d)"
] | 2
|
NONE
|
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,802,715,566
|
[torchbench] Fix mobilenetv2 inductor freezing fail_accuracy
|
IvanKobzarev
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145296
Issue: https://github.com/pytorch/pytorch/issues/144891
Inductor freezing effectively enables Inductor's conv-batchnorm fusion. This fusion increases the accuracy error.
More context about this: https://github.com/pytorch/pytorch/issues/120545
For Timm models run through benchmarks/dynamo/timm_models.py with TimmRunner, the tolerance was increased here:
https://github.com/pytorch/pytorch/blob/main/benchmarks/dynamo/timm_models.py#L367
If the conv-batchnorm fusion is commented out, as Elias suggested in the context issue, the accuracy is back.
=>
Increasing the tolerance for mobilenetv2 to the same value by introducing a special tolerance configuration used only for freezing.
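For illustration only (the model constructor and tolerance values here are assumptions, not the benchmark harness), the effect can be eyeballed like this:
```python
import torch
import torch._inductor.config as inductor_config
import torchvision

model = torchvision.models.mobilenet_v2().eval()
x = torch.randn(2, 3, 224, 224)

with torch.no_grad():
    eager_out = model(x)
    inductor_config.freezing = True  # enables conv-batchnorm folding
    frozen_out = torch.compile(model)(x)

# Folding conv+bn changes the floating-point evaluation order, so a tight
# tolerance can fail even though a looser one passes.
print(torch.allclose(eager_out, frozen_out, atol=1e-4, rtol=1e-4))
print(torch.allclose(eager_out, frozen_out, atol=1e-2, rtol=1e-2))
```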
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,802,695,587
|
[BE][MPS] Mark gamma inputs as const
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing",
"release notes: mps",
"ciflow/mps"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #145312
* #145309
* __->__ #145295
* #145289
Doubt it will change the perf, but it's good to correctly mark const inputs as const
| true
|
2,802,680,792
|
[ROCm] miopen benchmark behavior now better aligns with cudnn
|
jeffdaily
|
closed
|
[
"module: rocm",
"triaged",
"open source",
"Merged",
"release notes: rocm",
"release notes: nn",
"ciflow/rocm"
] | 6
|
COLLABORATOR
|
The default benchmark setting is now false. With the new MIOpen behavior, when benchmarking is disabled, any shape that doesn't have a find hit triggers a quick search (the same behavior as the prior default), and that result is used. When benchmark is enabled, MIOpen performs an exhaustive search and updates any DBs. MIOpen immediate mode is still available and is used when deterministic is true and benchmark is false.
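For context, on ROCm these behaviors are driven by the same flags used for cuDNN; a minimal sketch (assuming a ROCm device is available):
```python
import torch
import torch.nn as nn

# benchmark=False (the new default): MIOpen does a quick search on a find miss.
# benchmark=True: MIOpen runs an exhaustive search and updates its DBs.
torch.backends.cudnn.benchmark = True
# deterministic=True together with benchmark=False selects MIOpen immediate mode.
torch.backends.cudnn.deterministic = False

conv = nn.Conv2d(3, 8, kernel_size=3).cuda()
out = conv(torch.randn(1, 3, 32, 32, device="cuda"))
```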
cc @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,802,656,542
|
[dynamo] Fix numpy test accuracy error induced by randomness divergence
|
StrongerXi
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 8
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145293
Previously `TestGradient.test_second_order_accurate` was failing because
of a small tolerance error (0.03... which is above the 0.03 tolerance).
Upon investigating, `np.random.random` caused some divergence between
eager and compiled randomness, because in compiled code we are not using
`np.random`'s random seed; rather we end up using `torch`'s. This in
turn caused numerical divergence and the aforementioned accuracy issue.
This patch fixes the failure by patching the test case with
`use_numpy_random_stream=True`, which forces a graph break on
`np.random.random()` and thereby falls back to eager to ensure
consistency of the numpy randomness.
Fixes #116746.
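A hedged sketch of the divergence (the config knob is described above; the snippet below only illustrates the eager/compiled RNG mismatch and makes no claim about the test itself):
```python
import numpy as np
import torch

def draw():
    # np.random inside a compiled region may draw from torch's RNG instead of
    # numpy's seeded stream, so eager and compiled can diverge numerically.
    return np.random.random()

np.random.seed(0)
eager = draw()

np.random.seed(0)
compiled = torch.compile(draw, backend="eager")()

print(eager, compiled)  # may differ unless np.random.random() graph-breaks
```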
| true
|
2,802,619,995
|
[BE][export] Remove disabled floordiv test in export
|
yiming0416
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 5
|
CONTRIBUTOR
|
Summary:
Removing `test_slice_with_floordiv`, as it doesn't raise the RuntimeError as expected and has been disabled since it was added: https://github.com/pytorch/pytorch/issues/131101
For the case we expect to fail, it actually returns an empty tensor. This is consistent with the following snippet, which prints an empty tensor:
```
a = torch.ones(4)
print(a[5:])
```
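For reference, slicing past the end does not raise; it just yields an empty tensor:
```python
import torch

a = torch.ones(4)
print(a[5:])          # tensor([])
print(a[5:].numel())  # 0 -- no RuntimeError is raised
```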
Test Plan: CI
Differential Revision: D68450650
| true
|
2,802,613,413
|
[BE] Add type annotations to cudagraph_utils.py and test_cases.py
|
BoyuanFeng
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,802,591,247
|
XPU builds validations
|
atalman
|
closed
|
[
"triaged",
"module: xpu"
] | 4
|
CONTRIBUTOR
|
### 🐛 Describe the bug
We would like to validate the XPU binary builds on Linux and Windows platforms, running the following scenarios:
1. Install the XPU binary on a machine without an Intel GPU and without the Intel driver installed. Run the smoke test: ```/pytorch/.ci/pytorch/smoke_test# python smoke_test.py --package torchonly```. Post full logs.
2. Install the XPU binary on a machine with an Intel GPU and the Intel driver installed, but without Intel® Deep Learning Essentials (https://www.intel.com/content/www/us/en/developer/articles/tool/pytorch-prerequisites-for-intel-gpu/2-6.html#driver-installation). Run the smoke test: ```/pytorch/.ci/pytorch/smoke_test# python smoke_test.py --package torchonly```. Post full logs.
3. Test torchvision and torchaudio.
### Versions
2.6.0
cc @gujinghui @EikanWang @fengyuan14 @guangyey
| true
|
2,802,586,847
|
[BE][MPS] Move Gamma kernels to its own file
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing",
"release notes: mps",
"ciflow/mps"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #145312
* #145309
* #145295
* __->__ #145289
| true
|
2,802,558,739
|
Add multi env variable support to configs
|
oulgen
|
closed
|
[
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"ci-no-td"
] | 11
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145288
| true
|
2,802,552,939
|
[export][be] Clean up local imports from export [1/n]
|
zhxchen17
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: export"
] | 5
|
CONTRIBUTOR
|
Summary: as title
Test Plan: CI
Differential Revision: D68449844
| true
|
2,802,541,880
|
update get start xpu
|
pytorchbot
|
closed
|
[
"open source"
] | 3
|
COLLABORATOR
|
- Support new Intel client GPU on Windows [Intel® Arc™ B-Series graphics](https://www.intel.com/content/www/us/en/products/docs/discrete-gpus/arc/desktop/b-series/overview.html) and [Intel® Core™ Ultra Series 2 with Intel® Arc™ Graphics](https://www.intel.com/content/www/us/en/products/details/processors/core-ultra.html)
- Support vision/audio prebuilt wheels on Windows
| true
|
2,802,535,990
|
Updates NCCL user buffer registration test for NCCL 2.24.3
|
syed-ahmed
|
closed
|
[
"oncall: distributed",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 7
|
COLLABORATOR
|
NCCL 2.24.3 changed the content of the debug output for NVLS registration. We use this debug output in our test suite to check whether NVLS was successfully registered. Hence we need to specialize the test for the NCCL version.
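A minimal sketch of the version gating (the expected strings below are placeholders, not the suite's actual patterns, and a CUDA build of torch is assumed):
```python
import torch

nccl_version = torch.cuda.nccl.version()  # e.g. (2, 24, 3)
if nccl_version >= (2, 24, 3):
    expected_marker = "new NVLS registration wording"  # placeholder
else:
    expected_marker = "old NVLS registration wording"  # placeholder
```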
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145285
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,802,518,264
|
[dynamo] `torch.compile` ICE on using a sourceless unspecialized NN module as branching condition
|
StrongerXi
|
closed
|
[
"triaged",
"oncall: pt2",
"module: dynamo",
"dynamo-triage-jan2025"
] | 3
|
CONTRIBUTOR
|
### 🐛 Describe the bug
This was exposed by #144906; minimal repro:
```python
import torch
class Cache(torch.nn.Module):
def __init__(self):
super().__init__()
self.key_cache = []
def __len__(self):
return len(self.key_cache)
@torch.compile(backend="eager")
def f(x):
cache = Cache()
if cache:
return x + 1
return x + 2
f(torch.ones(1))
```
### Error logs
```
Traceback (most recent call last):
File "/Users/ryanguo99/Documents/work/scratch/test-cond.py", line 18, in <module>
f(torch.ones(1))
File "/Users/ryanguo99/Documents/work/pytorch/torch/_dynamo/eval_frame.py", line 576, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/Users/ryanguo99/Documents/work/pytorch/torch/_dynamo/convert_frame.py", line 1400, in __call__
return self._torchdynamo_orig_callable(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/ryanguo99/Documents/work/pytorch/torch/_dynamo/convert_frame.py", line 1184, in __call__
result = self._inner_convert(
^^^^^^^^^^^^^^^^^^^^
File "/Users/ryanguo99/Documents/work/pytorch/torch/_dynamo/convert_frame.py", line 565, in __call__
return _compile(
^^^^^^^^^
File "/Users/ryanguo99/Documents/work/pytorch/torch/_dynamo/convert_frame.py", line 1048, in _compile
raise InternalTorchDynamoError(
File "/Users/ryanguo99/Documents/work/pytorch/torch/_dynamo/convert_frame.py", line 997, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/ryanguo99/Documents/work/pytorch/torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/ryanguo99/Documents/work/pytorch/torch/_dynamo/convert_frame.py", line 726, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/ryanguo99/Documents/work/pytorch/torch/_dynamo/convert_frame.py", line 760, in _compile_inner
out_code = transform_code_object(code, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/ryanguo99/Documents/work/pytorch/torch/_dynamo/bytecode_transformation.py", line 1403, in transform_code_object
transformations(instructions, code_options)
File "/Users/ryanguo99/Documents/work/pytorch/torch/_dynamo/convert_frame.py", line 236, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/Users/ryanguo99/Documents/work/pytorch/torch/_dynamo/convert_frame.py", line 680, in transform
tracer.run()
File "/Users/ryanguo99/Documents/work/pytorch/torch/_dynamo/symbolic_convert.py", line 2913, in run
super().run()
File "/Users/ryanguo99/Documents/work/pytorch/torch/_dynamo/symbolic_convert.py", line 1083, in run
while self.step():
^^^^^^^^^^^
File "/Users/ryanguo99/Documents/work/pytorch/torch/_dynamo/symbolic_convert.py", line 993, in step
self.dispatch_table[inst.opcode](self, inst)
File "/Users/ryanguo99/Documents/work/pytorch/torch/_dynamo/symbolic_convert.py", line 598, in inner
if truth_fn(mod):
^^^^^^^^^^^^^
File "/Users/ryanguo99/Documents/work/scratch/test-cond.py", line 9, in __len__
return len(self.key_cache)
^^^^^^^^^^^^^^
File "/Users/ryanguo99/Documents/work/pytorch/torch/nn/modules/module.py", line 1938, in __getattr__
raise AttributeError(
torch._dynamo.exc.InternalTorchDynamoError: AttributeError: 'Cache' object has no attribute 'key_cache'
from user code:
File "/Users/ryanguo99/Documents/work/scratch/test-cond.py", line 14, in f
if cache:
```
### Versions
MacOS, Python 3.12.5, main 18638b91fe3.
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
| true
|
2,802,505,066
|
[BE][export] add "+export" logging to de/serialization
|
pianpwk
|
closed
|
[
"Merged",
"ciflow/trunk",
"ciflow/inductor",
"release notes: export"
] | 7
|
CONTRIBUTOR
|
adds de/serialization debug logging to `TORCH_LOGS="+dynamic"`
| true
|
2,802,478,724
|
Dtype available for `torch.optim.Adam` and `torch.optim.AdamW` when `fused=True` is different from described
|
ILCSFNO
|
closed
|
[
"module: optimizer",
"triaged"
] | 2
|
CONTRIBUTOR
|
### 📚 The doc issue
The docs of [`torch.optim.AdamW()`](https://pytorch.org/docs/stable/generated/torch.optim.AdamW.html#torch.optim.AdamW) and [`torch.optim.Adam()`](https://pytorch.org/docs/stable/generated/torch.optim.Adam.html#torch.optim.Adam) shows their shared parameters as shown below:
> fused (bool, optional) – whether the fused implementation is used. Currently, torch.float64, torch.float32, torch.float16, and torch.bfloat16 are supported. (default: None)
But other dtypes, e.g. torch.double, torch.half, and torch.float, also work with `torch.optim.AdamW()` and `torch.optim.Adam()` when `fused=True`, as shown below:
### Repro
```python
import torch
import torch.nn as nn
dtype = torch.double # choice: torch.double, torch.half, torch.float
opt = torch.optim.AdamW # choice: torch.optim.AdamW, torch.optim.Adam
model = nn.Sequential(nn.Linear(5, 10), nn.ReLU(), nn.Linear(10, 5), nn.ReLU())
for param in model.parameters():
param.data = param.data.to(dtype)
optimizer = opt(model.parameters(), fused=True)
criterion = nn.CrossEntropyLoss()
x = torch.randn(100, 5).to(dtype)
y = torch.randn(100, 5).to(dtype)
z = model(x)
optimizer.zero_grad()
loss = criterion(z, y)
loss.backward()
optimizer.step()
```
The first use of `fused` in torch.optim.AdamW is [here](https://github.com/pytorch/pytorch/blob/b3e90c8c33276208ad71ad49ef297566cdbe5d69/torch/optim/adamw.py#L48).
The first use of `fused` in torch.optim.Adam is [here](https://github.com/pytorch/pytorch/blob/b3e90c8c33276208ad71ad49ef297566cdbe5d69/torch/optim/adam.py#L102).
### Suggest a potential alternative/fix
So, `fused` parameter may be described in the docs of [`torch.optim.AdamW()`](https://pytorch.org/docs/stable/generated/torch.optim.AdamW.html#torch.optim.AdamW) and [`torch.optim.Adam()`](https://pytorch.org/docs/stable/generated/torch.optim.Adam.html#torch.optim.Adam) as shown below:
> fused (bool, optional) – whether the fused implementation is used. Currently, torch.float64, torch.float32, torch.float16, torch.bfloat16, torch.double, torch.half and torch.float are supported. (default: None)
cc @vincentqb @jbschlosser @albanD @janeyx99 @crcrpar @svekars @brycebortree @sekyondaMeta @mruberry @walterddr @mikaylagawarecki
| true
|
2,802,402,833
|
When calling a custom function of a LlamaForCausalLM using FSDP causes RuntimeError
|
fingertap
|
closed
|
[
"oncall: distributed"
] | 4
|
NONE
|
### 🐛 Describe the bug
Error message:
```
[rank0]: Traceback (most recent call last):
...
[rank0]: logits = self.forward(input_ids, attention_mask=attention_mask).logits
[rank0]: File "/data/miniconda3/envs/verl/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py", line 1163, in forward
[rank0]: outputs = self.model(
[rank0]: File "/data/miniconda3/envs/verl/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
[rank0]: return self._call_impl(*args, **kwargs)
[rank0]: File "/data/miniconda3/envs/verl/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
[rank0]: return forward_call(*args, **kwargs)
[rank0]: File "/data/miniconda3/envs/verl/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py", line 859, in forward
[rank0]: inputs_embeds = self.embed_tokens(input_ids)
[rank0]: File "/data/miniconda3/envs/verl/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
[rank0]: return self._call_impl(*args, **kwargs)
[rank0]: File "/data/miniconda3/envs/verl/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
[rank0]: return forward_call(*args, **kwargs)
[rank0]: File "/data/miniconda3/envs/verl/lib/python3.10/site-packages/torch/nn/modules/sparse.py", line 164, in forward
[rank0]: return F.embedding(
[rank0]: File "/data/miniconda3/envs/verl/lib/python3.10/site-packages/torch/nn/functional.py", line 2267, in embedding
[rank0]: return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
[rank0]: RuntimeError: Output 0 of ViewBackward0 is a view and its base or another view of its base has been modified inplace. This view is the output of a function that returns multiple views. Such functions do not allow the output views to be modified inplace. You should replace the inplace operation by an out-of-place one.
W0122 01:09:38.969000 139967675598656 torch/distributed/elastic/multiprocessing/api.py:858] Sending process 97070 closing signal SIGTERM
E0122 01:09:39.284000 139967675598656 torch/distributed/elastic/multiprocessing/api.py:833] failed (exitcode: 1) local_rank: 1 (pid: 97071) of binary: /data/miniconda3/envs/verl/bin/python
Traceback (most recent call last):
File "/data/miniconda3/envs/verl/bin/torchrun", line 8, in <module>
sys.exit(main())
File "/data/miniconda3/envs/verl/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 348, in wrapper
return f(*args, **kwargs)
File "/data/miniconda3/envs/verl/lib/python3.10/site-packages/torch/distributed/run.py", line 901, in main
run(args)
File "/data/miniconda3/envs/verl/lib/python3.10/site-packages/torch/distributed/run.py", line 892, in run
elastic_launch(
File "/data/miniconda3/envs/verl/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 133, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/data/miniconda3/envs/verl/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 264, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
```
The minimal code to reproduce:
```python
import torch
import torch.distributed as dist
import torch.nn.functional as F
from functools import partial
from transformers import LlamaForCausalLM, AutoTokenizer
from transformers.models.llama.modeling_llama import LlamaDecoderLayer
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP
from torch.distributed.fsdp.wrap import transformer_auto_wrap_policy
class LlamaModule(LlamaForCausalLM):
def train_step(self, input_ids, attention_mask):
labels = input_ids.clone()
logits = self.forward(input_ids, attention_mask=attention_mask).logits
loss = F.cross_entropy(logits.view(-1, logits.size(-1)), labels.view(-1))
return loss
dist.init_process_group(backend="nccl")
torch.cuda.set_device(dist.get_rank())
model_path = "/checkpoints/Meta-Llama-3.1-8B-Instruct/"
model = LlamaModule.from_pretrained(model_path, device_map="cuda")
model = FSDP(
model,
auto_wrap_policy=partial(
transformer_auto_wrap_policy,
transformer_layer_cls={LlamaDecoderLayer}
)
)
tokenizer = AutoTokenizer.from_pretrained(model_path)
inputs = tokenizer("Hello, world!", return_tensors="pt").to("cuda")
print(model.train_step(**inputs))
dist.destroy_process_group()
```
### Versions
PyTorch version: 2.4.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04 LTS (x86_64)
GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.10.16 (main, Dec 11 2024, 16:24:50) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.4.0-153-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.1.66
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A800-SXM4-80GB
GPU 1: NVIDIA A800-SXM4-80GB
GPU 2: NVIDIA A800-SXM4-80GB
GPU 3: NVIDIA A800-SXM4-80GB
GPU 4: NVIDIA A800-SXM4-80GB
GPU 5: NVIDIA A800-SXM4-80GB
GPU 6: NVIDIA A800-SXM4-80GB
GPU 7: NVIDIA A800-SXM4-80GB
Nvidia driver version: 525.147.05
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-95
Off-line CPU(s) list: 96-127
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8358 CPU @ 2.60GHz
CPU family: 6
Model: 106
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
Stepping: 6
Frequency boost: enabled
CPU max MHz: 3400.0000
CPU min MHz: 800.0000
BogoMIPS: 5200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 invpcid_single ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq rdpid md_clear pconfig flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 3 MiB (64 instances)
L1i cache: 2 MiB (64 instances)
L2 cache: 80 MiB (64 instances)
L3 cache: 96 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-31,64-95
NUMA node1 CPU(s): 32-63,96-127
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT vulnerable
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.3
[pip3] nvidia-cublas-cu12==12.1.3.1
[pip3] nvidia-cuda-cupti-cu12==12.1.105
[pip3] nvidia-cuda-nvrtc-cu12==12.1.105
[pip3] nvidia-cuda-runtime-cu12==12.1.105
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.0.2.54
[pip3] nvidia-curand-cu12==10.3.2.106
[pip3] nvidia-cusolver-cu12==11.4.5.107
[pip3] nvidia-cusparse-cu12==12.1.0.106
[pip3] nvidia-nccl-cu12==2.20.5
[pip3] nvidia-nvjitlink-cu12==12.1.105
[pip3] nvidia-nvtx-cu12==12.1.105
[pip3] torch==2.4.0+cu121
[pip3] torchaudio==2.4.0+cu121
[pip3] torchvision==0.19.0+cu121
[pip3] triton==3.0.0
[conda] numpy 1.26.3 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.1.3.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.0.2.54 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.2.106 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.4.5.107 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.1.0.106 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.20.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.1.105 pypi_0 pypi
[conda] torch 2.4.0+cu121 pypi_0 pypi
[conda] torchaudio 2.4.0+cu121 pypi_0 pypi
[conda] torchvision 0.19.0+cu121 pypi_0 pypi
[conda] triton 3.0.0 pypi_0 pypi
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,802,335,734
|
Use TORCH_CHECK instead of std::runtime_error in stack.h and ivalue.h
|
janeyx99
|
closed
|
[
"better-engineering",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
TORCH_CHECK will preserve the stacktrace for when TORCH_CPP_SHOW_STACKTRACES=1, whereas std::runtime_error will not.
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145280
| true
|
2,802,241,293
|
[SkipFiles] New modules added to torch.* are inlined by default
|
zou3519
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145279
This PR:
- makes it so that new modules added to torch are inlined by default
- adds a list of the previously "skipped by default" modules to avoid
regressing anything. This is a new MOD_SKIPLIST list that is consulted
in trace_rules.check_file.
- Follow-up work will go through this list, one-by-one, and try to delete
modules. I think we should be able to delete almost everything,
except for torch._dynamo.
Test Plan
- existing tests
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,802,218,993
|
[DO NOT MERGE] Testing mi300 on periodic
|
ethanwee1
|
closed
|
[
"open source",
"ciflow/trunk",
"release notes: rocm",
"topic: not user facing"
] | 5
|
CONTRIBUTOR
|
Testing the mi300 runner on the periodic workflow, since we only see 261 distributed tests run for mi300s on the trunk workflow while the periodic workflow runs over 3000.
| true
|
2,802,206,602
|
Torch Compile edge case with != versus is not
|
CoffeeVampir3
|
closed
|
[
"high priority",
"triaged",
"bug",
"oncall: pt2",
"module: dynamo"
] | 2
|
NONE
|
### 🐛 Describe the bug
Here's some reproduction code on the latest torch. Compilation fails when the check is written as `!= None`, but it works fine with `is not None`, given the exact torch.compile settings shown below. It's a strange edge case that only happens with some compile settings. Thanks for your time.
Here's repro code:
```python
import torch
from torch import nn
class LightningAttention(nn.Module):
def forward(self, query, key=None, value=None, mask=None):
batch_size, seq_len, dim = query.shape
scores = torch.matmul(query, key.transpose(-2, -1)) / (dim ** 0.5)
if mask != None: # switch to is not and suddenly it works fine
scores = scores + mask
attn = torch.softmax(scores, dim=-1)
return torch.matmul(attn, value)
batch_size, seq_len, dim = 2, 4, 8
x = torch.randn(batch_size, seq_len, dim)
model = LightningAttention()
try:
model = torch.compile(
model,
backend='inductor',
dynamic=False,
fullgraph=True,
options={
"epilogue_fusion": True,
"max_autotune": True,
}
)
mask = torch.zeros(batch_size, seq_len, seq_len)
output = model(x, x, x, mask)
except Exception as e:
print(f"Error: {e}")
```
### Versions
```
➜ minimodels python collect_env.py
Collecting environment information...
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: EndeavourOS Linux (x86_64)
GCC version: (GCC) 14.2.1 20240910
Clang version: 19.1.6
CMake version: version 3.31.4
Libc version: glibc-2.40
Python version: 3.12.7 (main, Jan 21 2025, 05:01:27) [GCC 14.2.1 20240910] (64-bit runtime)
Python platform: Linux-6.12.9-arch1-1-x86_64-with-glibc2.40
Is CUDA available: True
CUDA runtime version: 12.6.85
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090 Ti
GPU 1: NVIDIA RTX A4000
Nvidia driver version: 565.77
cuDNN version: Probably one of the following:
/usr/lib/libcudnn.so.9.5.1
/usr/lib/libcudnn_adv.so.9.5.1
/usr/lib/libcudnn_cnn.so.9.5.1
/usr/lib/libcudnn_engines_precompiled.so.9.5.1
/usr/lib/libcudnn_engines_runtime_compiled.so.9.5.1
/usr/lib/libcudnn_graph.so.9.5.1
/usr/lib/libcudnn_heuristic.so.9.5.1
/usr/lib/libcudnn_ops.so.9.5.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 20
On-line CPU(s) list: 0-19
Vendor ID: GenuineIntel
Model name: 12th Gen Intel(R) Core(TM) i7-12700KF
CPU family: 6
Model: 151
Thread(s) per core: 2
Core(s) per socket: 12
Socket(s): 1
Stepping: 2
CPU(s) scaling MHz: 39%
CPU max MHz: 5000.0000
CPU min MHz: 800.0000
BogoMIPS: 7222.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 512 KiB (12 instances)
L1i cache: 512 KiB (12 instances)
L2 cache: 12 MiB (9 instances)
L3 cache: 25 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-19
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] lovely-numpy==0.2.13
[pip3] numpy==2.2.2
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.5.1
[pip3] torch-shampoo==1.0.0
[pip3] triton==3.1.0
[conda] Could not collect
```
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @amjames
| true
|
2,802,119,061
|
F.scaled_dot_product_attention get query @ key
|
JustinKai0527
|
closed
|
[
"triaged",
"module: sdpa"
] | 1
|
NONE
|
Hello everyone, I want to know how to get the query @ key product from F.scaled_dot_product_attention. I use the code below but still get OOM, while F.scaled_dot_product_attention itself runs without OOM. Please help.
```
def chunk_dot_product(query, key, num_chunks=2000):
# query, key shape: [batch_size, num_heads, seq_len, head_dim]
batch_size, num_heads, seq_len, head_dim = query.shape
chunk_size = seq_len // num_chunks
# Initialize the list of output tensors
attn_chunks = []
for i in range(num_chunks):
chunk_weights = []
# 取出當前 query chunk: [batch_size, num_heads, chunk_size, head_dim]
q_chunk = query[:, :, i*chunk_size:(i+1)*chunk_size]
# Also process key in chunks
for j in range(num_chunks):
k_chunk = key[:, :, j*chunk_size:(j+1)*chunk_size]
# Compute partial attention weights
# [batch_size, num_heads, chunk_size, chunk_size]
chunk_attn = torch.matmul(q_chunk, k_chunk.transpose(-1, -2))
chunk_weights.append(chunk_attn)
# Free memory when appropriate
if j < num_chunks - 1:  # the last chunk doesn't need cleanup
del k_chunk
torch.cuda.empty_cache()
# Concatenate along the sequence-length dimension: [batch_size, num_heads, chunk_size, seq_len]
row_weights = torch.cat(chunk_weights, dim=-1)
attn_chunks.append(row_weights)
# Clear intermediate results
del chunk_weights
del q_chunk
torch.cuda.empty_cache()
# Finally combine all chunks: [batch_size, num_heads, seq_len, seq_len]
attn_weight = torch.cat(attn_chunks, dim=2)
return attn_weight
# Efficient implementation equivalent to the following:
def scaled_dot_product_attention(query, key, value, attn_mask=None, dropout_p=0.0, is_causal=False, scale=None) -> torch.Tensor:
L, S = query.size(-2), key.size(-2)
scale_factor = 1 / math.sqrt(query.size(-1)) if scale is None else scale
attn_bias = torch.zeros(L, S, dtype=query.dtype).to(query.device)
if is_causal:
assert attn_mask is None
temp_mask = torch.ones(L, S, dtype=torch.bool).tril(diagonal=0)
attn_bias.masked_fill_(temp_mask.logical_not(), float("-inf"))
attn_bias.to(query.dtype)
if attn_mask is not None:
if attn_mask.dtype == torch.bool:
attn_bias.masked_fill_(attn_mask.logical_not(), float("-inf"))
else:
attn_bias += attn_mask
attn_weight = chunk_dot_product(query, key) * scale_factor
attn_weight += attn_bias
# after the lora masking do the softmax
# attn_weight = torch.softmax(attn_weight, dim=-1)
# attn_weight = torch.dropout(attn_weight, dropout_p, train=True)
return attn_weight
```
| true
|
2,802,087,224
|
[HOTFIX] Remove third_party/kleidai
|
ezyang
|
closed
|
[
"topic: not user facing"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145275
Signed-off-by: Edward Z. Yang <ezyang@meta.com>
| true
|
2,801,980,430
|
AttributeError: '_OpNamespace' 'aten' object has no attribute 'momentum'
|
VascoSch92
|
open
|
[
"needs reproduction",
"triaged",
"module: custom-operators",
"module: library",
"oncall: pt2",
"module: pt2-dispatcher",
"module: core aten"
] | 4
|
NONE
|
### 🐛 Describe the bug
I have a problem with the following piece of code
```python
import torch
from torch import nn
class GatedLinearUnit(nn.Module):
def __init__(self, in_features: int) -> None:
super().__init__()
self.linear_1 = nn.Linear(in_features=in_features, out_features=in_features)
self.linear_2 = nn.Linear(in_features=in_features, out_features=in_features)
def forward(self, x: torch.Tensor) -> torch.Tensor:
return self.linear_1(x) * self.linear_2(x).sigmoid()
def test_gated_linear_unit_shape(in_features):
"""Tests if the output shape of the GatedLinearUnit is correct."""
gated_linear_unit = GatedLinearUnit(in_features=in_features)
input_tensor = torch.randn(8, 16)
output_tensor = gated_linear_unit(input_tensor)
assert output_tensor.shape == (8, 16)
if __name__ == "__main__":
test_gated_linear_unit_shape(in_features=16)
```
If you run it with pytest or python, the output is the same, i.e.,
```text
uv run python tests/test_gated_linear_unit.py
Traceback (most recent call last):
File "/Users/argo/git/models/tests/test_gated_linear_unit.py", line 1, in <module>
import torch
File "/Users/argo/git/models/.venv/lib/python3.12/site-packages/torch/__init__.py", line 2486, in <module>
from torch import _meta_registrations
File "/Users/argo/git/models/.venv/lib/python3.12/site-packages/torch/_meta_registrations.py", line 10, in <module>
from torch._decomp import (
File "/Users/argo/git/models/.venv/lib/python3.12/site-packages/torch/_decomp/__init__.py", line 249, in <module>
import torch._decomp.decompositions
File "/Users/argo/git/models/.venv/lib/python3.12/site-packages/torch/_decomp/decompositions.py", line 20, in <module>
from torch._higher_order_ops.out_dtype import out_dtype
File "/Users/argo/git/models/.venv/lib/python3.12/site-packages/torch/_higher_order_ops/out_dtype.py", line 22, in <module>
torch.ops.aten.momentum.default,
^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/argo/git/models/.venv/lib/python3.12/site-packages/torch/_ops.py", line 1225, in __getattr__
raise AttributeError(
AttributeError: '_OpNamespace' 'aten' object has no attribute 'momentum'
```
The environment I'm using is:
```text
uv pip list
Package Version
----------------- ---------
filelock 3.16.1
fsspec 2024.12.0
iniconfig 2.0.0
jinja2 3.1.5
markupsafe 3.0.2
mpmath 1.3.0
mypy 1.14.1
mypy-extensions 1.0.0
networkx 3.4.2
numpy 2.2.2
packaging 24.2
pluggy 1.5.0
pytest 8.3.4
ruff 0.9.2
setuptools 75.8.0
sympy 1.13.1
torch 2.5.1
typing-extensions 4.12.2
```
Does anyone have an idea of what might be happening? I searched and found similar problems, but no definitive answers.
I’m running on a MacBook Pro M2, with the Apple M2 chip, and macOS Sonoma version 14.6.1.
Thanks for your help!
### Versions
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: macOS 14.6.1 (x86_64)
GCC version: Could not collect
Clang version: 15.0.0 (clang-1500.3.9.4)
CMake version: Could not collect
Libc version: N/A
Python version: 3.9.13 (main, Aug 25 2022, 18:29:29) [Clang 12.0.0 ] (64-bit runtime)
Python platform: macOS-10.16-x86_64-i386-64bit
Is CUDA available: N/A
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
CPU:
Apple M2
Versions of relevant libraries:
[pip3] flake8==4.0.1
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.23.5
[pip3] numpydoc==1.4.0
[pip3] pytorch-lightning==1.7.7
[pip3] torch==1.10.2
[pip3] torchmetrics==0.10.0
[conda] blas 1.0 mkl
[conda] mkl 2021.4.0 hecd8cb5_637
[conda] mkl-service 2.4.0 py39h9ed2024_0
[conda] mkl_fft 1.3.1 py39h4ab4a9b_0
[conda] mkl_random 1.2.2 py39hb2f4e1b_0
[conda] numpy 1.23.5 pypi_0 pypi
[conda] numpydoc 1.4.0 py39hecd8cb5_0
[conda] pytorch 1.10.2 cpu_py39h903acac_0
[conda] pytorch-lightning 1.7.7 pyhd8ed1ab_0 conda-forge
[conda] torchmetrics 0.10.0 pyhd8ed1ab_0 conda-forge
cc @anjali411 @chauhang @penguinwu @zou3519 @bdhirsh @yf225 @manuelcandales @SherlockNoMad @angelayi
| true
|
2,801,883,856
|
cloning third_party/kleidiai fails
|
AmdSampsa
|
closed
|
[
"triage review",
"module: build",
"module: ci"
] | 22
|
COLLABORATOR
|
### 🐛 Describe the bug
When trying to compile pytorch from scratch, starting with:
```bash
git submodule update --init --recursive
```
I hit
```
Cloning into '/root/pytorch-main/third_party/kleidiai'...
remote: GitLab is not responding
fatal: unable to access 'https://git.gitlab.arm.com/kleidi/kleidiai.git/': The requested URL returned error: 502
fatal: clone of 'https://git.gitlab.arm.com/kleidi/kleidiai.git' into submodule path '/root/pytorch-main/third_party/kleidiai' failed
Failed to clone 'third_party/kleidiai'. Retry scheduled
Cloning into '/root/pytorch-main/third_party/kleidiai'...
remote: GitLab is not responding
fatal: unable to access 'https://git.gitlab.arm.com/kleidi/kleidiai.git/': The requested URL returned error: 502
fatal: clone of 'https://git.gitlab.arm.com/kleidi/kleidiai.git' into submodule path '/root/pytorch-main/third_party/kleidiai' failed
Failed to clone 'third_party/kleidiai' a second time, aborting
```
Doesn't relying on an Arm GitLab server for PyTorch to clone successfully sound a bit risky?
### Versions
Happens for latest pytorch main (hash 803017f3cb73bb115eda5ec0e0a19688ccafbf4e)
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @malfet @seemethere @pytorch/pytorch-dev-infra
| true
|
2,801,883,039
|
Remove unnecessary HPUHooksInterface method
|
moksiuc
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 10
|
CONTRIBUTOR
|
getDefaultHPUGenerator is no longer necessary
| true
|
2,801,851,934
|
[Test][Inductor] Fix test_tma_graph_breaks
|
Aidyn-A
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 6
|
COLLABORATOR
|
Per title. Before these changes, the following tests:
```
test_triton_kernels.py::KernelTests::test_tma_graph_breaks_after_data_ptr_False_after_create_desc_False
test_triton_kernels.py::KernelTests::test_tma_graph_breaks_after_data_ptr_False_after_create_desc_True
test_triton_kernels.py::KernelTests::test_tma_graph_breaks_after_data_ptr_True_after_create_desc_False
test_triton_kernels.py::KernelTests::test_tma_graph_breaks_after_data_ptr_True_after_create_desc_True
```
fail with the following message:
```
__________________________________________________________________ KernelTests.test_tma_graph_breaks_after_data_ptr_True_after_create_desc_True ___________________________________________________________________
Traceback (most recent call last):
File "/usr/lib/python3.12/unittest/case.py", line 58, in testPartExecutor
yield
File "/usr/lib/python3.12/unittest/case.py", line 634, in run
self._callTestMethod(testMethod)
File "/usr/lib/python3.12/unittest/case.py", line 589, in _callTestMethod
if method() is not None:
^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/testing/_internal/common_utils.py", line 3114, in wrapper
method(*args, **kwargs)
File "/usr/local/lib/python3.12/dist-packages/torch/testing/_internal/common_utils.py", line 557, in instantiated_test
test(self, **param_kwargs)
File "~/git/pytorch/test/inductor/test_triton_kernels.py", line 1760, in test_tma_graph_breaks
eager_out = f(a, b)
^^^^^^^
File "~/git/pytorch/test/inductor/test_triton_kernels.py", line 1740, in f
t.element_size(),
^
UnboundLocalError: cannot access local variable 't' where it is not associated with a value
To execute this test, run the following from the base repo dir:
python test/inductor/test_triton_kernels.py KernelTests.test_tma_graph_breaks_after_data_ptr_True_after_create_desc_True
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,801,803,076
|
[NVIDIA] RTX50 Blackwell Support codegen
|
johnnynunez
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"release notes: cuda",
"topic: new features"
] | 6
|
CONTRIBUTOR
|
cc @ptrblck @msaroufim @eqy
| true
|
2,801,785,959
|
Improve the caching allocator test for raw alloc
|
1274085042
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 31
|
CONTRIBUTOR
|
1. Prevent blocks allocated by torch._C._cuda_cudaCachingAllocator_raw_alloc from affecting torch.cuda.empty_cache() in other unit tests.
2. Additionally, tested the changes to raw_delete in https://github.com/pytorch/pytorch/pull/131114.
@jeffdaily @albanD @houseroad @eqy @aaronenyeshi
| true
|
2,801,780,958
|
`torch.compile` may produce wrong result with `torch.nn.functional.interpolate`.
|
Zoeeeeey
|
closed
|
[
"oncall: pt2"
] | 1
|
NONE
|
### 🐛 Describe the bug
Hi! I found that the following model gives different results after compilation.
This inconsistency appears in torch 2.7.0, while version 2.5.1 behaves correctly.
```python
import torch
inp_args = [
torch.nn.Parameter(torch.randn([23, 1, 1, 1, 1], dtype=torch.float32), requires_grad=True)
]
def fn():
getitem = inp_args[0]
interpolate = torch.nn.functional.interpolate(getitem, size=[1, 1, 1], scale_factor=None, mode='trilinear',
align_corners=None, recompute_scale_factor=None, antialias=False)
linear_layer = torch.nn.Linear(in_features=1, out_features=34, bias=True)
m2 = linear_layer(getitem)
interpolate_1 = torch.nn.functional.interpolate(m2, size=[1, 39, 34], scale_factor=None, mode='trilinear',
align_corners=None, recompute_scale_factor=None, antialias=False)
mean = interpolate_1.mean(0)
gt = torch.gt(m2, interpolate_1)
return (interpolate, mean, gt)
ret_eager = fn()
compiled = torch.compile(fn)
ret_compiled = compiled()
torch.testing.assert_close(ret_eager[1], ret_compiled[1])
# assert torch.allclose(ret_eager[1], ret_compiled[1]), '\n'.join(map(str, ["", ret_eager[1], ret_compiled[1]]))
# torch.testing.assert_close(ret_eager[2], ret_compiled[2])
# assert torch.allclose(ret_eager[2], ret_compiled[2]), '\n'.join(map(str, ["", ret_eager[2], ret_compiled[2]]))
```
### Error logs
```python
# AssertionError: Tensor-likes are not close!
#
# Mismatched elements: 1326 / 1326 (100.0%)
# Greatest absolute difference: 1.6459496021270752 at index (0, 0, 0, 8) (up to 1e-05 allowed)
# Greatest relative difference: 12.975427627563477 at index (0, 0, 0, 33) (up to 1.3e-06 allowed)
# ...
# AssertionError:
# tensor([[[[-0.0624, 0.5362, -0.5600, ..., 0.7437, -0.3323, -0.2239],
# [-0.0624, 0.5362, -0.5600, ..., 0.7437, -0.3323, -0.2239],
# [-0.0624, 0.5362, -0.5600, ..., 0.7437, -0.3323, -0.2239],
# ...,
# [-0.0624, 0.5362, -0.5600, ..., 0.7437, -0.3323, -0.2239],
# [-0.0624, 0.5362, -0.5600, ..., 0.7437, -0.3323, -0.2239],
# [-0.0624, 0.5362, -0.5600, ..., 0.7437, -0.3323, -0.2239]]]],
# grad_fn=<MeanBackward1>)
# tensor([[[[ 0.4851, 0.7255, 0.5718, ..., -0.0023, 0.5366, -0.8882],
# [ 0.4851, 0.7255, 0.5718, ..., -0.0023, 0.5366, -0.8882],
# [ 0.4851, 0.7255, 0.5718, ..., -0.0023, 0.5366, -0.8882],
# ...,
# [ 0.4851, 0.7255, 0.5718, ..., -0.0023, 0.5366, -0.8882],
# [ 0.4851, 0.7255, 0.5718, ..., -0.0023, 0.5366, -0.8882],
# [ 0.4851, 0.7255, 0.5718, ..., -0.0023, 0.5366, -0.8882]]]],
# grad_fn=<CompiledFunctionBackward>)
# ...
# AssertionError: Tensor-likes are not equal!
#
# Mismatched elements: 698 / 30498 (2.3%)
# Greatest absolute difference: 1 at index (0, 0, 0, 28, 10)
# Greatest relative difference: inf at index (0, 0, 0, 28, 10)
```
### Versions
```bash
Collecting environment information...
PyTorch version: 2.7.0.dev20250116+cpu
Is debug build: False
CUDA used to build PyTorch: Could not collect
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.16.3
Libc version: glibc-2.31
Python version: 3.10.16 (main, Dec 11 2024, 16:24:50) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.4.0-100-generic-x86_64-with-glibc2.31
Is CUDA available: False
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3080 Ti
Nvidia driver version: 535.104.05
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.6.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==2.2.1
[pip3] torch==2.7.0.dev20250116+cpu
[pip3] torchaudio==2.6.0.dev20250116+cpu
[pip3] torchvision==0.22.0.dev20250116+cpu
[conda] numpy 2.2.1 pypi_0 pypi
[conda] torch 2.7.0.dev20250116+cpu pypi_0 pypi
[conda] torchaudio 2.6.0.dev20250116+cpu pypi_0 pypi
[conda] torchvision 0.22.0.dev20250116+cpu pypi_0 pypi
```
cc @chauhang @penguinwu
| true
|
2,801,663,496
|
`torch.ops.aten.embedding_dense_backward` Crashes with Out-of-Bounds Indices On CPU
|
WLFJ
|
open
|
[
"module: crash",
"module: error checking",
"triaged",
"module: embedding",
"topic: fuzzer"
] | 1
|
NONE
|
### 🐛 Describe the bug
An error occurs in the `torch.ops.aten.embedding_dense_backward` function because the indices tensor contains values that exceed the num_weights parameter. In the provided code, num_weights is set to 0, causing all index accesses to be out of bounds. This results in a crash due to out-of-bounds access during memory allocation and indexing operations.
provided code example:
```python
import torch
print(torch.__version__)
sym_0 = (1,)
sym_1 = 'cpu'
sym_2 = False
sym_6 = 0
sym_7 = -4611686018427387905
sym_8 = True
var_726 = torch.randn(size=sym_0, dtype=None, layout=None, device=sym_1, pin_memory=sym_2)
var_75 = torch.tensor([100000000000000000], dtype=torch.long)
torch.ops.aten.embedding_dense_backward(grad_output=var_726, indices=var_75, num_weights=sym_6, padding_idx=sym_7, scale_grad_by_freq=sym_8)
```
result:
```
2.7.0.dev20250116+cu124
fish: Job 3, 'python3 test.py' terminated by signal SIGSEGV (Address boundary error)
```
The reason is related to [here](https://github.com/pytorch/pytorch/blob/803017f3cb73bb115eda5ec0e0a19688ccafbf4e/aten/src/ATen/native/Embedding.cpp#L143):
```cpp
AT_DISPATCH_INDEX_TYPES(indices.scalar_type(), "embedding_dense_backward_cpu", [&] () {
auto indices_data = indices_contig.const_data_ptr<index_t>();
// NOLINTNEXTLINE(modernize-avoid-c-arrays,cppcoreguidelines-avoid-c-arrays)
std::unique_ptr<index_t[]> counts;
if (scale_grad_by_freq) {
counts.reset(new index_t[num_weights]); // num_weights not checked, assume 0 here.
for (const auto i : c10::irange(numel)) {
counts[indices_data[i]] = 0; // memory access indices should be checked in range [0, num_weights).
}
for (const auto i : c10::irange(numel)) {
counts[indices_data[i]]++;
}
}
// ...
```
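For reference, a minimal sketch (in Python, for brevity) of the kind of guard the kernel appears to assume but never enforces; the actual fix would presumably be a `TORCH_CHECK` in the C++ code above, and the variable names below are only illustrative.
```python
import torch

indices = torch.tensor([100000000000000000], dtype=torch.long)
num_weights = 0

# every index must lie in [0, num_weights) before it is used to address `counts`
if bool((indices < 0).any()) or bool((indices >= num_weights).any()):
    raise IndexError(
        f"embedding_dense_backward: index out of range "
        f"(num_weights={num_weights}, max index={int(indices.max())})"
    )
```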
### Versions
Collecting environment information...
PyTorch version: 2.7.0.dev20250116+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Manjaro Linux (x86_64)
GCC version: (GCC) 14.2.1 20240805
Clang version: 18.1.8
CMake version: version 3.30.2
Libc version: glibc-2.40
Python version: 3.11.11 | packaged by conda-forge | (main, Dec 5 2024, 14:17:24) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-6.6.47-1-MANJARO-x86_64-with-glibc2.40
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4070
Nvidia driver version: 550.107.02
cuDNN version: Probably one of the following:
/usr/lib/libcudnn.so.9.2.1
/usr/lib/libcudnn_adv.so.9.2.1
/usr/lib/libcudnn_cnn.so.9.2.1
/usr/lib/libcudnn_engines_precompiled.so.9.2.1
/usr/lib/libcudnn_engines_runtime_compiled.so.9.2.1
/usr/lib/libcudnn_graph.so.9.2.1
/usr/lib/libcudnn_heuristic.so.9.2.1
/usr/lib/libcudnn_ops.so.9.2.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Vendor ID: GenuineIntel
Model name: 13th Gen Intel(R) Core(TM) i5-13400F
CPU family: 6
Model: 191
Thread(s) per core: 2
Core(s) per socket: 10
Socket(s): 1
Stepping: 2
CPU(s) scaling MHz: 25%
CPU max MHz: 4600.0000
CPU min MHz: 800.0000
BogoMIPS: 4993.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 416 KiB (10 instances)
L1i cache: 448 KiB (10 instances)
L2 cache: 9.5 MiB (7 instances)
L3 cache: 20 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-15
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.1.2
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] pytorch-triton==3.2.0+git0d4682f0
[pip3] torch==2.7.0.dev20250116+cu124
[pip3] torchaudio==2.6.0.dev20250116+cu124
[pip3] torchvision==0.22.0.dev20250116+cu124
[conda] numpy 2.1.2 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] pytorch-triton 3.2.0+git0d4682f0 pypi_0 pypi
[conda] torch 2.7.0.dev20250116+cu124 pypi_0 pypi
[conda] torchaudio 2.6.0.dev20250116+cu124 pypi_0 pypi
[conda] torchvision 0.22.0.dev20250116+cu124 pypi_0 pypi
cc @malfet
| true
|
2,801,648,413
|
Calculation Results Become NaN After Using `torch.compile` with `Matmul+Concat4+Mul+Linear+Tan`.
|
Zoeeeeey
|
closed
|
[
"oncall: pt2"
] | 1
|
NONE
|
### 🐛 Describe the bug
Hi! I found that the following model gives NaN results after `torch.compile`.
```python
import torch
inp_args = [
torch.nn.Parameter(torch.randn(size, dtype=torch.float32), requires_grad=True)
for size in [[1], [33, 1, 1, 1, 1], [33, 1, 1, 1, 1]]
]
def fn():
v10_0 = torch.nn.Parameter(torch.empty([1], dtype=torch.float32), requires_grad=True)
v8_0 = torch.nn.Parameter(torch.empty([33, 1, 1, 1, 1], dtype=torch.float32), requires_grad=True)
v7_0 = torch.nn.Parameter(torch.empty([33, 1, 1, 1, 1], dtype=torch.float32), requires_grad=True)
getitem = inp_args[0]
getitem_1 = inp_args[1]
getitem_2 = inp_args[2]
matmul = torch.matmul(getitem, v10_0)
cat = torch.cat((getitem_2, getitem_1, v7_0, v8_0), dim=3)
mul = torch.mul(matmul, cat)
linear_layer = torch.nn.Linear(in_features=1, out_features=36, bias=True)
m9 = linear_layer(mul)
tan = torch.tan(m9)
return (tan,)
ret_eager = fn()
compiled = torch.compile(fn)
ret_compiled = compiled()
torch.testing.assert_close(ret_eager[0], ret_compiled[0])
# assert torch.allclose(ret_eager[0], ret_compiled[0]), '\n'.join(map(str, ["", ret_eager[0], ret_compiled[0]]))
```
### Error logs
```python
# AssertionError: Tensor-likes are not close!
#
# Mismatched elements: 4752 / 4752 (100.0%)
# Greatest absolute difference: nan at index (0, 0, 0, 0, 0) (up to 1e-05 allowed)
# Greatest relative difference: nan at index (0, 0, 0, 0, 0) (up to 1.3e-06 allowed)
# ...
# AssertionError:
# tensor([[[[[-9.6969e+00, 7.8080e-02, -9.2353e-01, ..., 1.9866e+00,
# 3.4518e-01, 4.8450e+00],
# [-1.1124e+00, -8.5060e-01, 3.2113e+00, ..., -3.0666e-01,
# -1.7086e-01, -6.7271e+00],
# [-3.0328e-01, -7.8792e-01, 6.5650e-01, ..., -9.0377e-01,
# -6.4501e-01, 5.9481e+00],
# [ 1.0956e-01, 6.0032e-01, 5.1952e-01, ..., 1.4848e-01,
# 1.4129e-01, -4.0779e-01]]]] ...], )
# tensor([[[[[nan, nan, nan, ..., nan, nan, nan],
# [nan, nan, nan, ..., nan, nan, nan],
# [nan, nan, nan, ..., nan, nan, nan],
# [nan, nan, nan, ..., nan, nan, nan]]]] ...],)
```
### Versions
```bash
Collecting environment information...
PyTorch version: 2.7.0.dev20250116+cpu
Is debug build: False
CUDA used to build PyTorch: Could not collect
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.16.3
Libc version: glibc-2.31
Python version: 3.10.16 (main, Dec 11 2024, 16:24:50) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.4.0-100-generic-x86_64-with-glibc2.31
Is CUDA available: False
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3080 Ti
Nvidia driver version: 535.104.05
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.6.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==2.2.1
[pip3] torch==2.7.0.dev20250116+cpu
[pip3] torchaudio==2.6.0.dev20250116+cpu
[pip3] torchvision==0.22.0.dev20250116+cpu
[conda] numpy 2.2.1 pypi_0 pypi
[conda] torch 2.7.0.dev20250116+cpu pypi_0 pypi
[conda] torchaudio 2.6.0.dev20250116+cpu pypi_0 pypi
[conda] torchvision 0.22.0.dev20250116+cpu pypi_0 pypi
```
cc @chauhang @penguinwu
| true
|
2,801,555,566
|
Fix SEGFAULT when None arg was passed in GraphContext.op(..)
|
Tytskiy
|
closed
|
[
"oncall: distributed",
"module: onnx",
"module: cpu",
"triaged",
"open source",
"module: amp (automated mixed precision)",
"onnx-triaged",
"release notes: onnx",
"release notes: quantization",
"topic: bug fixes",
"module: inductor",
"module: dynamo"
] | 7
|
NONE
|
Fixes #145261
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @mcarilli @ptrblck @leslie-fang-intel @voznesenskym @penguinwu @EikanWang @Guobing-Chen @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov @LucasLLC @MeetVadakkanchery @mhorowitz @pradeepfn @ekr0
| true
|
2,801,551,056
|
`torch.compile` may produce wrong result with `BicubicInterp+Neg+Linear+Tan`.
|
Zoeeeeey
|
closed
|
[
"oncall: pt2"
] | 1
|
NONE
|
### 🐛 Describe the bug
Hi! I found that the following model gives different results after `torch.compile`.
```python
import torch
inp = torch.nn.Parameter(torch.randn([8, 1, 4, 1], dtype=torch.float32), requires_grad=True)
def fn():
v5_0 = torch.nn.functional.interpolate(inp, size=[36, 1], scale_factor=None, mode='bicubic',
align_corners=None, recompute_scale_factor=None, antialias=False)
v3_0 = torch.neg(v5_0)
linear_layer = torch.nn.Linear(in_features=1, out_features=25, bias=True)
v2_0 = linear_layer(v3_0)
v1_0 = v2_0.to(torch.float64)
tan = torch.tan(v1_0)
return (tan,)
ret_eager = fn()
compiled = torch.compile(fn)
ret_compiled = compiled()
torch.testing.assert_close(ret_eager[0], ret_compiled[0])
```
### Error logs
```python
# AssertionError: Tensor-likes are not close!
#
# Mismatched elements: 7200 / 7200 (100.0%)
# Greatest absolute difference: 4132.387735664385 at index (7, 0, 14, 5) (up to 1e-07 allowed)
# Greatest relative difference: 28269.87316096882 at index (7, 0, 31, 20) (up to 1e-07 allowed)
```
### Versions
```bash
Collecting environment information...
PyTorch version: 2.7.0.dev20250116+cpu
Is debug build: False
CUDA used to build PyTorch: Could not collect
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.16.3
Libc version: glibc-2.31
Python version: 3.10.16 (main, Dec 11 2024, 16:24:50) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.4.0-100-generic-x86_64-with-glibc2.31
Is CUDA available: False
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3080 Ti
Nvidia driver version: 535.104.05
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.6.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==2.2.1
[pip3] torch==2.7.0.dev20250116+cpu
[pip3] torchaudio==2.6.0.dev20250116+cpu
[pip3] torchvision==0.22.0.dev20250116+cpu
[conda] numpy 2.2.1 pypi_0 pypi
[conda] torch 2.7.0.dev20250116+cpu pypi_0 pypi
[conda] torchaudio 2.6.0.dev20250116+cpu pypi_0 pypi
[conda] torchvision 0.22.0.dev20250116+cpu pypi_0 pypi
```
cc @chauhang @penguinwu
| true
|
2,801,486,683
|
[Pipelining] Problem using `torch.distributed.pipelining` on `Gemma2ForCausalLM`
|
sadra-barikbin
|
open
|
[
"oncall: distributed",
"triaged"
] | 3
|
CONTRIBUTOR
|
Hi there!
I have a problem testing torch pipelining against the Gemma2 model. Here is the code snippet:
```python
import torch
from transformers import AutoConfig, Gemma2ForCausalLM
from torch.distributed.pipelining import pipeline, SplitPoint

config = AutoConfig.from_pretrained("google/gemma-2-2b-it")
model = Gemma2ForCausalLM(config)
pipeline(model, mb_args=(torch.LongTensor([[1,2,3],[4,5,6]]),), split_spec={"model.layers.13": SplitPoint.BEGINNING})
```
Output:
```
/home/sbarikbin/.venv/lib/python3.12/site-packages/torch/export/_unlift.py:60: UserWarning: Attempted to insert a get_attr Node with no underlying reference in the owning GraphModule! Call GraphModule.add_submodule to add the necessary submodule, GraphModule.add_parameter to add the necessary Parameter, or nn.Module.register_buffer to add the necessary buffer
getattr_node = gm.graph.get_attr(lifted_node)
/home/sbarikbin/.venv/lib/python3.12/site-packages/torch/fx/graph.py:1586: UserWarning: Node model_lifted_tensor_0 target model.lifted_tensor_0 lifted_tensor_0 of model does not reference an nn.Module, nn.Parameter, or buffer, which is what 'get_attr' Nodes typically target
warnings.warn(f'Node {node} target {node.target} {atom} of {seen_qualname} does '
Traceback (most recent call last):
File "/home/sbarikbin/test_pp.py", line 42, in <module>
test1()
File "/home/sbarikbin/test_pp.py", line 18, in test1
p = pipeline(model, mb_args=(torch.LongTensor([[1,2,3],[4,5,6]]),), split_spec={"model.layers.13": SplitPoint.BEGINNING})
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/sbarikbin/.venv/lib/python3.12/site-packages/torch/distributed/pipelining/_IR.py", line 1231, in pipeline
return Pipe.from_tracing(
^^^^^^^^^^^^^^^^^^
File "/home/sbarikbin/.venv/lib/python3.12/site-packages/torch/distributed/pipelining/_IR.py", line 1051, in from_tracing
pipe = Pipe._from_traced(
^^^^^^^^^^^^^^^^^^
File "/home/sbarikbin/.venv/lib/python3.12/site-packages/torch/distributed/pipelining/_IR.py", line 750, in _from_traced
new_submod = _outline_submodules(submodule.graph)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/sbarikbin/.venv/lib/python3.12/site-packages/torch/distributed/pipelining/_unflatten.py", line 24, in _outline_submodules
).run_outer()
^^^^^^^^^^^
File "/home/sbarikbin/.venv/lib/python3.12/site-packages/torch/export/unflatten.py", line 1014, in run_outer
self.run_from(node_idx)
File "/home/sbarikbin/.venv/lib/python3.12/site-packages/torch/export/unflatten.py", line 1094, in run_from
).run_from(node_idx)
^^^^^^^^^^^^^^^^^^
File "/home/sbarikbin/.venv/lib/python3.12/site-packages/torch/export/unflatten.py", line 1094, in run_from
).run_from(node_idx)
^^^^^^^^^^^^^^^^^^
File "/home/sbarikbin/.venv/lib/python3.12/site-packages/torch/export/unflatten.py", line 1094, in run_from
).run_from(node_idx)
^^^^^^^^^^^^^^^^^^
File "/home/sbarikbin/.venv/lib/python3.12/site-packages/torch/export/unflatten.py", line 1071, in run_from
self.finalize_outputs()
File "/home/sbarikbin/.venv/lib/python3.12/site-packages/torch/export/unflatten.py", line 993, in finalize_outputs
_verify_graph_equivalence(self.cached_graph_module, self.module)
File "/home/sbarikbin/.venv/lib/python3.12/site-packages/torch/export/unflatten.py", line 655, in _verify_graph_equivalence
assert graph_dump(x.graph) == graph_dump(y.graph)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError
```
I would be grateful for any help with this. For example, I couldn't find any information about what `torch.export.unflatten._ModuleFrame` (which `_outline_submodules` uses) is for.
@H-Huang , @kwen2501
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,801,483,220
|
Missing create_graph arguments in torch.func apis
|
Luciennnnnnn
|
open
|
[
"triaged",
"module: functorch"
] | 3
|
NONE
|
### 🐛 Describe the bug
Hi, I want to replace `torch.autograd.functional.jvp`/`torch.autograd.functional.vjp` with `torch.func.jvp`/`torch.func.vjp`, since `torch.func.jvp` is more efficient than the autograd ops.
However, I noticed these APIs are not equivalent: I compute a jvp to build a loss, so a second-order gradient is required and I have to set `create_graph=True`. There is no `create_graph` argument in the `torch.func` APIs, so is `torch.func` not intended for my use case, or does it support higher-order gradients by default?
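A minimal sketch of what I am trying to express with `torch.func` (untested; it assumes the transforms compose, which is exactly what I am asking about):
```python
import torch
from torch.func import grad, jvp

def f(x):
    return x.sin().sum()

x = torch.randn(3)
v = torch.ones(3)

# the jvp value is my loss, and I need its gradient w.r.t. x,
# i.e. what torch.autograd.functional.jvp(..., create_graph=True) gives me today
def jvp_loss(p):
    _, jvp_out = jvp(f, (p,), (v,))
    return jvp_out

second_order = grad(jvp_loss)(x)  # does this compose as intended?
```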
### Versions
N/A
cc @zou3519 @Chillee @samdow @kshitij12345
| true
|
2,801,477,785
|
Custom symbolic functions for ONNX export with None args causes SEGFAULT
|
Tytskiy
|
open
|
[
"module: crash",
"module: onnx",
"triaged"
] | 0
|
NONE
|
### 🐛 Describe the bug
`torch.onnx.export` with a custom symbolic function produces a SEGFAULT.
It happens when I make some arguments optional by passing `None`. The ONNX Runtime contrib opset allows it: https://github.com/microsoft/onnxruntime/blob/3e4c5e64877c6d9814e4ebce5dcbb1fe71588ec5/docs/ContribOperators.md#commicrosoftpackedmultiheadattention
**Repro:**
```python
import torch
class Function(torch.autograd.Function):
@staticmethod
def forward(ctx, x: torch.Tensor, cu_seqlens: torch.Tensor, token_offset: torch.Tensor) -> torch.Tensor:
if torch.onnx.is_in_onnx_export():
# doesn't matter – onnx only checks the number of outputs and their shapes
return x
# do something
...
@staticmethod
def symbolic(g: torch.Graph, x: torch.Value, cu_seqlens: torch.Value, token_offset: torch.Value) -> torch.Value:
return g.op(
'com.microsoft::PackedMultiHeadAttention',
x,
None,
None,
None,
token_offset,
cu_seqlens,
None,
num_heads_i=1,
).setType(x.type())
class Net(torch.nn.Module):
def forward(self, x: torch.Tensor, cu_seqlens: torch.Tensor, token_offset: torch.Tensor) -> torch.Tensor:
return Function.apply(x, cu_seqlens, token_offset)
net = Net()
embeddings = torch.tensor([10])
token_offset = torch.tensor([10])
cu_seqlens = torch.tensor([10])
torch.onnx.export(
net,
(embeddings, cu_seqlens, token_offset),
'graph.onnx',
input_names=['embeddings', 'cu_seqlens', 'lengths'],
dynamo=False,
verbose=True,
opset_version=20,
)
```
**Output:**
As is:
```bash
❯ python example.py
[1] 4159913 segmentation fault python example.py
```
with GDB:
```bash
❯ gdb python
GNU gdb (Ubuntu 12.1-0ubuntu1~22.04.2) 12.1
Copyright (C) 2022 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Type "show copying" and "show warranty" for details.
This GDB was configured as "x86_64-linux-gnu".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
<https://www.gnu.org/software/gdb/bugs/>.
Find the GDB manual and other documentation resources online at:
<http://www.gnu.org/software/gdb/documentation/>.
For help, type "help".
Type "apropos word" to search for commands related to "word"...
Reading symbols from python...
(gdb) run example.py
Starting program: /home/tytskiy/miniforge3/bin/python example.py
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/usr/lib/x86_64-linux-gnu/libthread_db.so.1".
Program received signal SIGSEGV, Segmentation fault.
0x00007fffe641d36e in torch::jit::Node::addInput(torch::jit::Value*) () from /home/tytskiy/miniforge3/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so
(gdb)
```
**Expected**:
Get onnx file with PackedMultiHeadAttention:
[packed_attention.onnx.zip](https://github.com/user-attachments/files/18489000/packed_attention.onnx.zip)
www.netron.app shows it:
<img width="323" alt="Image" src="https://github.com/user-attachments/assets/2a0f8864-687d-4e97-9a0e-abdd97d20c74" />
**Why:**
I guess it happens because we pass None here
https://github.com/pytorch/pytorch/blob/b5655d9821b7214af200d0b8796a10ad34b85229/torch/onnx/_internal/jit_utils.py#L290-L292
and interpret it as Value* here
https://github.com/pytorch/pytorch/blob/803017f3cb73bb115eda5ec0e0a19688ccafbf4e/torch/csrc/jit/python/python_ir.cpp#L585-L589
**Possible fix 1:**
Replace all `None` arguments with the magic line `g.op('prim::Constant').setType(_C.OptionalType.ofTensor())`. But this seems unclear and is not covered by the docs.
**Possible fix 2:**
Just replace this line
https://github.com/pytorch/pytorch/blob/b5655d9821b7214af200d0b8796a10ad34b85229/torch/onnx/_internal/jit_utils.py#L265-L267
with
```python
def _const_if_tensor(graph_context: GraphContext, arg):
    if arg is None:
        return graph_context.op('prim::Constant').setType(_C.OptionalType.ofTensor())
    # otherwise fall through to the existing constant-if-tensor handling for non-None args
    ...
```
### Versions
It seems to happen in any environment
...
Versions of relevant libraries:
[pip3] onnx==1.17.0
[pip3] onnxscript==0.1.0.dev20241203
[pip3] onnxsim==0.4.36
[pip3] torch==2.5.1
[pip3] torchaudio==2.5.1
[pip3] torchvision==0.20.1
| true
|
2,801,465,890
|
[ARM] Add test_ops and test_memory_profiler to aarch64 tests
|
robert-hardwick
|
open
|
[
"triaged",
"open source",
"module: arm",
"topic: not user facing"
] | 13
|
COLLABORATOR
|
Fixes #142371
cc @malfet @snadampal @milpuz01 @aditew01 @nikhil-arm @fadara01
| true
|
2,801,238,281
|
No Range Check for `storage_offset` in `as_strided` Function
|
WLFJ
|
open
|
[
"module: crash",
"module: error checking",
"triaged",
"topic: fuzzer"
] | 2
|
NONE
|
### 🐛 Describe the bug
An issue has been identified in the `as_strided` function of PyTorch related to the `storage_offset` parameter (referred to as sym_3 in the provided example). The function does not perform a range check on storage_offset, which leads to memory access errors when the value is out of bounds. In the example provided, this results in a segmentation fault (SIGSEGV) during execution.
```python
import torch
print(torch.__version__)
sym_0 = 2
sym_1 = [7, 8, 1, 8]
sym_2 = [1, 1, 2, 0]
sym_3 = 9223369837831520255 # Invalid storage_offset value
var_161 = torch.randperm(sym_0, dtype=torch.long, device='cpu')
var_26 = torch.as_strided(var_161, sym_1, sym_2, sym_3)
print(var_26)
```
Running result:
```
$ python3 test.py
fish: Job 2, 'python3 test.py' terminated by signal SIGSEGV (Address boundary error)
```
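For reference, a minimal sketch of the kind of range check that appears to be missing (the bound below is the usual dense-layout formula; the function name is illustrative, not the actual ATen code):
```python
import torch

def required_storage_size(sizes, strides, storage_offset):
    # smallest storage length (in elements) that makes every element reachable
    if any(s == 0 for s in sizes):
        return storage_offset
    return storage_offset + sum((sz - 1) * st for sz, st in zip(sizes, strides)) + 1

t = torch.randperm(2, dtype=torch.long)
needed = required_storage_size([7, 8, 1, 8], [1, 1, 2, 0], 9223369837831520255)
available = t.untyped_storage().nbytes() // t.element_size()
# with the repro values this check fails loudly instead of segfaulting
assert needed <= available, f"as_strided out of bounds: need {needed}, have {available}"
```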
### Versions
Collecting environment information...
PyTorch version: 2.7.0.dev20250116+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Manjaro Linux (x86_64)
GCC version: (GCC) 14.2.1 20240805
Clang version: 18.1.8
CMake version: version 3.30.2
Libc version: glibc-2.40
Python version: 3.11.11 | packaged by conda-forge | (main, Dec 5 2024, 14:17:24) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-6.6.47-1-MANJARO-x86_64-with-glibc2.40
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4070
Nvidia driver version: 550.107.02
cuDNN version: Probably one of the following:
/usr/lib/libcudnn.so.9.2.1
/usr/lib/libcudnn_adv.so.9.2.1
/usr/lib/libcudnn_cnn.so.9.2.1
/usr/lib/libcudnn_engines_precompiled.so.9.2.1
/usr/lib/libcudnn_engines_runtime_compiled.so.9.2.1
/usr/lib/libcudnn_graph.so.9.2.1
/usr/lib/libcudnn_heuristic.so.9.2.1
/usr/lib/libcudnn_ops.so.9.2.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Vendor ID: GenuineIntel
Model name: 13th Gen Intel(R) Core(TM) i5-13400F
CPU family: 6
Model: 191
Thread(s) per core: 2
Core(s) per socket: 10
Socket(s): 1
Stepping: 2
CPU(s) scaling MHz: 25%
CPU max MHz: 4600.0000
CPU min MHz: 800.0000
BogoMIPS: 4993.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 416 KiB (10 instances)
L1i cache: 448 KiB (10 instances)
L2 cache: 9.5 MiB (7 instances)
L3 cache: 20 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-15
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.1.2
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] pytorch-triton==3.2.0+git0d4682f0
[pip3] torch==2.7.0.dev20250116+cu124
[pip3] torchaudio==2.6.0.dev20250116+cu124
[pip3] torchvision==0.22.0.dev20250116+cu124
[conda] numpy 2.1.2 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] pytorch-triton 3.2.0+git0d4682f0 pypi_0 pypi
[conda] torch 2.7.0.dev20250116+cu124 pypi_0 pypi
[conda] torchaudio 2.6.0.dev20250116+cu124 pypi_0 pypi
[conda] torchvision 0.22.0.dev20250116+cu124 pypi_0 pypi
cc @malfet
| true
|
2,801,183,921
|
Missing Length Check for `reflection_pad3d` `padding` Argument
|
WLFJ
|
open
|
[
"module: crash",
"module: cpp",
"module: error checking",
"triaged",
"topic: fuzzer"
] | 0
|
NONE
|
### 🐛 Describe the bug
An issue was identified with the `reflection_pad3d` function in PyTorch regarding the handling of its `padding` argument. The function expects padding to be an array of length 6, but currently, there is no validation check for this length. This oversight can cause undefined behavior when padding is of incorrect length.
```cpp
struct test_31948B271C680B9_args {
std::array<long, 4> sym_0;
at::ScalarType sym_1;
at::DeviceType sym_2;
bool sym_3;
std::array<long, 1> sym_4;
};
void test_31948B271C680B9(const test_31948B271C680B9_args &args) {
try {
auto var_87 = at::rand(args.sym_0, args.sym_1, std::nullopt, args.sym_2, args.sym_3);
auto var_643 = at::reflection_pad3d(var_87, args.sym_4); // Incorrect padding length
} catch (std::exception &e) {
// Exception handling
}
}
```
The reflection_pad3d function accesses the padding array assuming it is of length 6. However, if padding is shorter, the function reads out-of-bounds, leading to potential memory issues. This behavior is present in ReflectionPad.cpp (line 106):
https://github.com/pytorch/pytorch/blob/main/aten/src/ATen/native/ReflectionPad.cpp#L106
```cpp
TORCH_META_FUNC(reflection_pad3d)(const Tensor& input, IntArrayRef padding) {
// TORCH_CHECK(padding.size() == 6, "Expected padding of length 6, but got ", padding.size()); // missed check here.
int64_t pad_left = padding[0];
int64_t pad_right = padding[1];
int64_t pad_top = padding[2];
int64_t pad_bottom = padding[3];
int64_t pad_front = padding[4];
int64_t pad_back = padding[5];
// Further processing
}
```
This issue can be reproduced using AddressSanitizer. Would it be possible to submit a PR to add this validation check? A similar scenario was addressed here: https://github.com/pytorch/pytorch/blob/main/aten/src/ATen/native/ReflectionPad.cpp#L164
CC: @malfet
### Versions
Collecting environment information...
PyTorch version: 2.7.0.dev20250116+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Manjaro Linux (x86_64)
GCC version: (GCC) 14.2.1 20240805
Clang version: 18.1.8
CMake version: version 3.30.2
Libc version: glibc-2.40
Python version: 3.11.11 | packaged by conda-forge | (main, Dec 5 2024, 14:17:24) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-6.6.47-1-MANJARO-x86_64-with-glibc2.40
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4070
Nvidia driver version: 550.107.02
cuDNN version: Probably one of the following:
/usr/lib/libcudnn.so.9.2.1
/usr/lib/libcudnn_adv.so.9.2.1
/usr/lib/libcudnn_cnn.so.9.2.1
/usr/lib/libcudnn_engines_precompiled.so.9.2.1
/usr/lib/libcudnn_engines_runtime_compiled.so.9.2.1
/usr/lib/libcudnn_graph.so.9.2.1
/usr/lib/libcudnn_heuristic.so.9.2.1
/usr/lib/libcudnn_ops.so.9.2.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Vendor ID: GenuineIntel
Model name: 13th Gen Intel(R) Core(TM) i5-13400F
CPU family: 6
Model: 191
Thread(s) per core: 2
Core(s) per socket: 10
Socket(s): 1
Stepping: 2
CPU(s) scaling MHz: 25%
CPU max MHz: 4600.0000
CPU min MHz: 800.0000
BogoMIPS: 4993.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 416 KiB (10 instances)
L1i cache: 448 KiB (10 instances)
L2 cache: 9.5 MiB (7 instances)
L3 cache: 20 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-15
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.1.2
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] pytorch-triton==3.2.0+git0d4682f0
[pip3] torch==2.7.0.dev20250116+cu124
[pip3] torchaudio==2.6.0.dev20250116+cu124
[pip3] torchvision==0.22.0.dev20250116+cu124
[conda] numpy 2.1.2 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] pytorch-triton 3.2.0+git0d4682f0 pypi_0 pypi
[conda] torch 2.7.0.dev20250116+cu124 pypi_0 pypi
[conda] torchaudio 2.6.0.dev20250116+cu124 pypi_0 pypi
[conda] torchvision 0.22.0.dev20250116+cu124 pypi_0 pypi
cc @jbschlosser @malfet
| true
|
2,801,132,393
|
nn.Embedding backwards pass for nested tensors
|
kkj15dk
|
closed
|
[
"triaged",
"module: nestedtensor"
] | 4
|
NONE
|
### 🚀 The feature, motivation and pitch
It seems there is no backward pass for `nn.Embedding` implemented for nested tensors, while the forward pass works. This could be a very useful feature.
```
import torch
import torch.nn as nn
def packed_tensor_from_jagged(tensor):
offsets = tensor.offsets()
return torch.cat([t for t in tensor.unbind()], dim = 0), offsets
class test_model(nn.Module):
def __init__(self, dim):
super().__init__()
self.embedding = nn.Embedding(20, dim)
def forward(self, x):
x = self.embedding(x)
return x
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = test_model(64).to(device)
### Bug here, when using nested tensors
batch_list = [torch.randint(0, 20, (l,), dtype=torch.long) for l in [64, 128, 256, 512]]
batch = torch.nested.nested_tensor(batch_list, layout=torch.jagged, device=device)
output = model(batch)
output, offsets = packed_tensor_from_jagged(output)
loss = output.sum(dim=-1).mean()
loss.backward()
```
Error:
```
(.venv) kkj@KKJ:~/axolotl$ /home/kkj/axolotl/.venv/bin/python /home/kkj/axolotl/nested_embedding.py
Traceback (most recent call last):
File "/home/kkj/axolotl/nested_embedding.py", line 29, in <module>
loss.backward()
File "/home/kkj/axolotl/.venv/lib/python3.10/site-packages/torch/_tensor.py", line 581, in backward
torch.autograd.backward(
File "/home/kkj/axolotl/.venv/lib/python3.10/site-packages/torch/autograd/__init__.py", line 347, in backward
_engine_run_backward(
File "/home/kkj/axolotl/.venv/lib/python3.10/site-packages/torch/autograd/graph.py", line 825, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
File "/home/kkj/axolotl/.venv/lib/python3.10/site-packages/torch/nested/_internal/nested_tensor.py", line 295, in __torch_dispatch__
raise NotImplementedError(func)
NotImplementedError: aten.embedding_dense_backward.default
```
### Alternatives
Using an `nn.Parameter` and `nested_tensor.unbind()` to pass each sub-tensor through the parameter layer, as sketched below.
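A rough sketch of that workaround (untested; plain advanced indexing into a parameter is differentiable, so the backward pass works on the per-sub-tensor lookups):
```python
import torch

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
emb_weight = torch.nn.Parameter(torch.randn(20, 64, device=device))

batch_list = [torch.randint(0, 20, (l,), dtype=torch.long) for l in [64, 128, 256, 512]]
batch = torch.nested.nested_tensor(batch_list, layout=torch.jagged, device=device)

# unbind the jagged batch, index the parameter per sub-tensor, and work on the packed result
packed = torch.cat([emb_weight[idx] for idx in batch.unbind()], dim=0)  # (sum of lengths, 64)
loss = packed.sum(dim=-1).mean()
loss.backward()
assert emb_weight.grad is not None  # gradients reach the parameter
```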
### Additional context
Might be related to https://github.com/pytorch/pytorch/issues/93843
cc @cpuhrsch @jbschlosser @bhosmer @drisspg @soulitzer @davidberard98 @YuqingJ
| true
|
2,801,054,155
|
loss.backward() breaking somewhere when modulating a nested tensor using scale and shift (RuntimeError: Function AddBackward0 returned an invalid gradient at index 0 - got [1, 4, 64] but expected shape compatible with [4, 1, 64])
|
kkj15dk
|
closed
|
[
"module: autograd",
"triaged",
"module: nestedtensor"
] | 2
|
NONE
|
### 🐛 Describe the bug
When using a label to modulate a nested tensor, the backward pass breaks. I am not entirely sure of the nature of the bug, but it happens because of the line `x = x * (1 + scale)`, as the error disappears when this line is commented out.
```
import torch
import torch.nn as nn
def packed_tensor_from_jagged(tensor):
offsets = tensor.offsets()
return torch.cat([t for t in tensor.unbind()], dim = 0), offsets
def modulate(x, shift, scale):
if scale is not None: # comment out this line => no error
x = x * (1 + scale) # comment out this line => no error
if shift is not None:
x = x + shift
return x
class test_model(nn.Module):
def __init__(self, dim):
super().__init__()
self.modulation = nn.Linear(dim, 2 * dim)
def forward(self, x, c): # x is a ragged tensor (batch_size=4, j, dim=64), c is a regular tensor (batch_size=4, dim=64)
shift, scale = self.modulation(c).chunk(2, dim=-1)
shift, scale = shift.unsqueeze(1), scale.unsqueeze(1) # I think it has something to do with this unsqueeze
return modulate(x, shift, scale)
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = test_model(64).to(device)
### This seems to work fine
batch =torch.randn(4, 512, 64, device=device, requires_grad=True) # batch_size=4, j=512, dim=64
c = torch.randn(4, 64, device=device, requires_grad=True) # batch_size=4, dim=64
output = model(batch, c)
loss = output.sum(dim=-1).mean()
loss.backward()
###
### Bug here, when using nested tensors
batch = torch.nested.nested_tensor([torch.randn(64, 64), torch.randn(128, 64), torch.randn(256, 64), torch.randn(512, 64)], device=device, requires_grad=True, layout=torch.jagged) # batch_size=4, j=jagged, dim=64
c = torch.randn(4, 64, device=device, requires_grad=True) # batch_size=4, dim=64
output = model(batch, c)
output, offsets = packed_tensor_from_jagged(output)
loss = output.sum(dim=-1).mean()
loss.backward()
# This last line throws an error (Function AddBackward0 returned an invalid gradient at index 0 - got [1, 4, 64] but expected shape compatible with [4, 1, 64])
###
```
Error:
```
(.venv) kkj@KKJ:~/axolotl$ /home/kkj/axolotl/.venv/bin/python /home/kkj/axolotl/modulation_bug.py
Traceback (most recent call last):
File "/home/kkj/axolotl/modulation_bug.py", line 47, in <module>
loss.backward()
File "/home/kkj/axolotl/.venv/lib/python3.10/site-packages/torch/_tensor.py", line 581, in backward
torch.autograd.backward(
File "/home/kkj/axolotl/.venv/lib/python3.10/site-packages/torch/autograd/__init__.py", line 347, in backward
_engine_run_backward(
File "/home/kkj/axolotl/.venv/lib/python3.10/site-packages/torch/autograd/graph.py", line 825, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
RuntimeError: Function AddBackward0 returned an invalid gradient at index 0 - got [1, 4, 64] but expected shape compatible with [4, 1, 64]
```
### Versions
Collecting environment information...
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.10.12 (main, Nov 6 2024, 20:22:13) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4050 Laptop GPU
Nvidia driver version: 556.13
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Vendor ID: GenuineIntel
Model name: 13th Gen Intel(R) Core(TM) i5-13500H
CPU family: 6
Model: 186
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 1
Stepping: 2
BogoMIPS: 6374.40
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology tsc_reliable nonstop_tsc cpuid pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves avx_vnni umip waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize flush_l1d arch_capabilities
Virtualization: VT-x
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 384 KiB (8 instances)
L1i cache: 256 KiB (8 instances)
L2 cache: 10 MiB (8 instances)
L3 cache: 18 MiB (1 instance)
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.2.1
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] rotary-embedding-torch==0.8.6
[pip3] torch==2.5.1
[pip3] torchviz==0.0.3
[pip3] triton==3.1.0
[conda] _anaconda_depends 2024.10 py312_mkl_0
[conda] blas 1.0 mkl
[conda] mkl 2023.1.0 h213fc3f_46344
[conda] mkl-service 2.4.0 py312h5eee18b_1
[conda] mkl_fft 1.3.10 py312h5eee18b_0
[conda] mkl_random 1.2.7 py312h526ad5a_0
[conda] numpy 1.26.4 py312hc5e2394_0
[conda] numpy-base 1.26.4 py312h0da6c21_0
[conda] numpydoc 1.7.0 py312h06a4308_0
cc @ezyang @albanD @gqchen @pearu @nikitaved @soulitzer @Varal7 @xmfan @cpuhrsch @jbschlosser @bhosmer @drisspg @davidberard98 @YuqingJ
| true
|
2,801,011,568
|
[CD] Disable Kineto for XPU Windows CD
|
chuanqi129
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/binaries_wheel"
] | 13
|
COLLABORATOR
|
Due to issue #145155, disable Kineto for XPU Windows CD temporarily.
| true
|
2,800,976,931
|
[TEST] tmp storage with CONSTANTHANDLE
|
muchulee8
|
closed
|
[
"Stale",
"ciflow/trunk",
"module: inductor",
"ciflow/inductor"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145254
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @ColinPeppler @amjames @desertfire @chauhang @aakhundov
Differential Revision: [D68430118](https://our.internmc.facebook.com/intern/diff/D68430118)
| true
|
2,800,890,099
|
PyObject preservation does not prevent weakrefs being cleared by Python garbage collector
|
soulitzer
|
open
|
[
"triaged",
"module: python frontend"
] | 8
|
CONTRIBUTOR
|
### 🐛 Describe the bug
If a dead cycle holds the last reference to a resurrectable Tensor, gc will clear its weakrefs. One implication of this interaction is that entries can spuriously vanish from WeakTensorKeyDictionary, which is used in various places across the code base.
Repro:
```python
import gc; gc.disable()
import weakref
import torch
a = torch.tensor(1.)
param = torch.tensor(2., requires_grad=True)
param.grad = a
b = [a]
b.append(b) # create a cycle
def callback(x):
print("callback called!")
a_ref = weakref.ref(a, callback)
del a, b
gc.collect()
print("done collecting")
assert a_ref() is None
```
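And a sketch of how the same mechanism can surface through `WeakTensorKeyDictionary` (hypothetical illustration following the repro above, not separately verified):
```python
import gc; gc.disable()
import torch
from torch.utils.weak import WeakTensorKeyDictionary

d = WeakTensorKeyDictionary()
a = torch.tensor(1.)
param = torch.tensor(2., requires_grad=True)
param.grad = a           # `a` stays alive at the C++ level via param.grad
d[a] = "some metadata"
b = [a]
b.append(b)              # dead cycle holding the last Python reference to `a`
del a, b
gc.collect()
print(len(d))            # expected 1 (param.grad still exists); comes out as 0 per the above
```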
https://github.com/pytorch/pytorch/issues/75932 is a good read for more background; this issue is the same as that one except we also check weakrefs. The TLDR:
Today, PyObject preservation logic already has special handling for interaction with Python's garbage collector in two ways:
1) [In the original PR to add PyObject preservation](https://github.com/pytorch/pytorch/pull/56017), @ezyang and @albanD foresaw that if gc detected an unreachable cycle that included a resurrectable Tensor, `tp_clear` may be called on it, preventing a clean resurrection. The fix here is to tell `tp_traverse` not to traverse its members in the case we are resurrectable (thus preventing any cycle involving the resurrectable Tensor from being detected).
2) Later @Chillee [investigated](https://github.com/pytorch/pytorch/issues/75932) a separate case where the resurrectable Tensor is not directly part of the cycle, but is instead kept alive by a reference from a dead cycle. The `tp_traverse` fix did not address this because `tp_clear` will be called on all unreachable objects whether they are directly part of the cycle or not. The resolution here is to bail from `tp_clear` if the Tensor is resurrectable.
The `tp_clear` fix, however, does not prevent weakrefs from already being cleared because gc clears weakrefs before calling `tp_clear`!
https://github.com/python/cpython/blob/e65a1eb93ae35f9fbab1508606e3fbc89123629f/Python/gc.c#L1725-L1744
https://github.com/python/cpython/blob/main/InternalDocs/garbage_collector.md#destroying-unreachable-objects
### Versions
main
cc @albanD
| true
|
2,800,828,393
|
change the test wheel to release wheel when release wheel available
|
ZhaoqiongZ
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: docs",
"release notes: xpu"
] | 13
|
CONTRIBUTOR
|
Change the test wheel to the release wheel when the release wheel is available.
| true
|
2,800,812,703
|
Fix IdentationError of code example
|
ZhaoqiongZ
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 6
|
CONTRIBUTOR
|
I found there is an IndentationError when trying to copy-paste the example of inference with torch.compile.
This PR fixes the formatting.
| true
|
2,800,804,995
|
[Inductor][CPU] Add a lowering pass for _weight_int4pack_mm_for_cpu
|
Xia-Weiwen
|
closed
|
[
"module: cpu",
"open source",
"Merged",
"ciflow/trunk",
"release notes: quantization",
"topic: not user facing",
"intel",
"module: inductor",
"ciflow/inductor"
] | 3
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #146756
* __->__ #145250
* #145245
**Summary**
It's part of the task to enable max-autotune with GEMM template for WoQ INT4 GEMM on CPU.
This PR adds a lowering pass for `torch.ops.aten._weight_int4pack_mm_for_cpu`. This op is used for WoQ int4 in Torchao. The lowering pass is a prerequisite for max-autotune, which is planned to be enabled for this op in subsequent PRs.
**Test plan**
```
python test/inductor/test_mkldnn_pattern_matcher.py -k test_woq_int4
python test/inductor/test_cpu_cpp_wrapper.py -k test_woq_int4
```
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @voznesenskym @penguinwu @EikanWang @Guobing-Chen @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,800,798,784
|
fake PR for validating oneDNN upgrade to v3.7 on windows
|
LifengWang
|
closed
|
[
"module: mkldnn",
"open source",
"release notes: releng",
"ciflow/binaries_wheel",
"module: inductor"
] | 1
|
CONTRIBUTOR
|
cc @gujinghui @PenghuiCheng @XiaobingSuper @jianyuh @jgong5 @mingfeima @sanchitintel @ashokei @jingxu10 @min-jean-cho @yanbing-j @Guobing-Chen @Xia-Weiwen @snadampal @voznesenskym @penguinwu @EikanWang @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,800,794,015
|
[Break XPU][Inductor UT] Set input tensors to corresponding device for test case in test_aot_indutor.py
|
etaf
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 5
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #146763
* #146880
* __->__ #145248
* #146762
Fix #145247
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,800,786,116
|
[Break XPU] device type in test_aot_inductor.py is not passed correctly to cpp_builder.
|
etaf
|
closed
|
[
"triaged",
"module: xpu"
] | 2
|
COLLABORATOR
|
### 🐛 Describe the bug
#### Description
Currently XPU CI is broken by the newly added test case `AOTInductorTestABICompatibleGpu.test_assert_tensor_meta_xpu`.
The CI log: https://github.com/pytorch/pytorch/actions/runs/12869396818/job/35880652376
Error msg:
```
=================================== FAILURES ===================================
_________ AOTInductorTestABICompatibleGpu.test_assert_tensor_meta_xpu __________
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/inductor/test_torchinductor.py", line 12376, in new_test
return value(self)
File "/var/lib/jenkins/pytorch/test/inductor/test_aot_inductor.py", line 4252, in test_assert_tensor_meta
self.check_model(
File "/var/lib/jenkins/pytorch/test/inductor/test_aot_inductor_utils.py", line 185, in check_model
actual = AOTIRunnerUtil.run(
File "/var/lib/jenkins/pytorch/test/inductor/test_aot_inductor_utils.py", line 137, in run
optimized = AOTIRunnerUtil.load(device, so_path)
File "/var/lib/jenkins/pytorch/test/inductor/test_aot_inductor_utils.py", line 119, in load
return torch._export.aot_load(so_path, device)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_export/__init__.py", line 167, in aot_load
runner = torch._C._aoti.AOTIModelContainerRunnerXpu(so_path, 1, device) # type: ignore[assignment, call-arg]
RuntimeError: create_func_( &container_handle_, num_models, device_str.c_str(), cubin_dir.empty() ? nullptr : cubin_dir.c_str()) API call failed at /var/lib/jenkins/workspace/torch/csrc/inductor/aoti_runner/model_container_runner.cpp, line 81
To execute this test, run the following from the base repo dir:
python test/inductor/test_aot_inductor.py AOTInductorTestABICompatibleGpu.test_assert_tensor_meta_xpu
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
#### Root cause analysis
The test case raises a runtime error at:
https://github.com/pytorch/pytorch/blob/00ffeca1b1fe753571412778996fe78deef49059/torch/csrc/inductor/aoti_runtime/model.h#L104-L109
Here it should take the USE_XPU branch, but it does not, because the model passed to `torch.compile` is not set to XPU.
The model in the newly added test case:
https://github.com/pytorch/pytorch/blob/00ffeca1b1fe753571412778996fe78deef49059/test/inductor/test_aot_inductor.py#L4237-L4257
In `check_model`, only the model is moved to `xpu`; the input tensors are not converted to `xpu`. Since the model has no internal tensors, `torch.compile` cannot find any `xpu` tensor, so it is still treated as a CPU model.
Besides moving the model to the device, we should also move the input tensors to the corresponding device.
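A minimal sketch of the intended fix on the test side (module and shapes are illustrative, not the actual patch):
```python
import torch

device = "xpu" if torch.xpu.is_available() else "cpu"

class Model(torch.nn.Module):  # stand-in for the test's module
    def forward(self, x):
        return x + 1

model = Model().to(device)
# The fix amounts to also moving the inputs, not only the model:
example_inputs = tuple(t.to(device) for t in (torch.randn(4, 4),))
out = torch.compile(model)(*example_inputs)
```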
### Versions
PyTorch version: 2.7.0a0+git225a10f
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.31.2
Libc version: glibc-2.35
Python version: 3.10.15 | packaged by conda-forge | (main, Oct 16 2024, 01:24:24) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-5.15.47+prerelease24.6.13-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
cc @gujinghui @EikanWang @fengyuan14 @guangyey
| true
|
2,800,737,013
|
[dynamo] Support types.MappingProxyType
|
anijain2305
|
closed
|
[
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145246
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,800,716,149
|
[Quant][CPU] add a wrapper op for _weight_int4pack_mm_for_cpu with tensor args
|
Xia-Weiwen
|
closed
|
[
"module: cpu",
"open source",
"Merged",
"ciflow/trunk",
"release notes: quantization",
"release notes: linalg_frontend",
"intel",
"module: inductor",
"ciflow/inductor"
] | 7
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #146756
* #145250
* __->__ #145245
**Summary**
It's part of the task to enable max-autotune with GEMM template for WoQ INT4 GEMM on CPU.
This PR adds a wrapper op in the `quantized` namespace for `torch.ops.aten._weight_int4pack_mm_for_cpu`, whose arguments are all tensors (see the sketch below). It will be used in Inductor lowering with max-autotune, where scalar arguments are difficult to handle.
The new op is not registered to
- `aten` because it will require changing `native_functions.yaml`, which is not recommended.
- `quantized_decomposed` because it will only have a Python implementation, which cannot be used for cpp wrapper in Inductor.
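The sketch below is a hypothetical Python-only illustration of the "all-tensor arguments" idea (the PR registers a C++ op instead; the namespace, op name, and argument layout here are assumptions):
```python
import torch

@torch.library.custom_op("my_quantized::int4pack_mm_tensor_args", mutates_args=())
def int4pack_mm_tensor_args(
    x: torch.Tensor,
    w_packed: torch.Tensor,
    q_group_size: torch.Tensor,  # scalar carried as a 0-dim tensor
    q_scale_and_zeros: torch.Tensor,
) -> torch.Tensor:
    # Convert the tensor-carried scalar back to an int at the op boundary.
    return torch.ops.aten._weight_int4pack_mm_for_cpu(
        x, w_packed, int(q_group_size.item()), q_scale_and_zeros
    )
```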
**Test plan**
```
python test/test_linalg.py -k test__int4_mm
```
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @voznesenskym @penguinwu @EikanWang @Guobing-Chen @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,800,697,414
|
Improve typing by using bool and int
|
cyyever
|
closed
|
[
"triaged",
"open source",
"Stale",
"topic: not user facing"
] | 6
|
COLLABORATOR
|
Fixes #ISSUE_NUMBER
| true
|
2,800,686,989
|
[inductor] Make serialized inductor patterns path configurable instead of using …
|
kareemshaik80
|
open
|
[
"triaged",
"open source",
"module: inductor",
"release notes: inductor"
] | 9
|
CONTRIBUTOR
|
…fixed path in the inductor module
Fixes [145242](https://github.com/pytorch/pytorch/issues/145242)
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,800,686,289
|
Expose configurable path instead of using fixed path in the inductor module for serialized pattern generation
|
kareemshaik80
|
open
|
[
"triaged",
"oncall: pt2",
"module: inductor"
] | 0
|
CONTRIBUTOR
|
### 🚀 The feature, motivation and pitch
Expose a configurable path instead of using a fixed path in the inductor module for serialized pattern generation.
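One possible shape of the change, sketched as an environment-variable override (the variable name and default directory are assumptions, not an existing PyTorch knob):
```python
import os

# Hypothetical knob: let users redirect where serialized patterns are written
# and read, defaulting to the current in-tree location.
SERIALIZED_PATTERNS_DIR = os.environ.get(
    "TORCHINDUCTOR_SERIALIZED_PATTERNS_PATH",
    os.path.join(os.path.dirname(__file__), "fx_passes", "serialized_patterns"),
)
```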
### Alternatives
_No response_
### Additional context
_No response_
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,800,677,357
|
add grad_output shape check for adaptive_avg_pool2d_backward
|
jiayisunx
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 6
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145241
Fix https://github.com/pytorch/pytorch/issues/145070.
| true
|
2,800,610,784
|
Teach dynamo to handle GenericAlias without a graph break
|
aorenste
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Dynamo wasn't handling the new PEP585 type annotations:
```
x = list[Foo]
```
Although this worked in py3.9, it caused an `unimplemented` error (Unexpected type in sourceless builder) in py3.12.
This fixes it to treat them as a BuiltinVariable.
Fixes #145226
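A hypothetical repro in the spirit of the issue (not the PR's test case):
```python
import torch

class Foo:
    pass

@torch.compile(backend="eager")
def fn(x):
    tp = list[Foo]  # PEP 585 generic alias constructed inside the compiled function
    return x + 1 if tp is not None else x

fn(torch.randn(4))
```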
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145240
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,800,593,844
|
Turn Stream into protocol and improve typing in torch/_C/__init__.pyi.in
|
cyyever
|
open
|
[
"oncall: jit",
"triaged",
"open source",
"Stale",
"topic: not user facing"
] | 7
|
COLLABORATOR
|
Fixes #ISSUE_NUMBER
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| true
|
2,800,582,103
|
Improve typing in torch/__init__.py
|
cyyever
|
closed
|
[
"triaged",
"open source",
"ciflow/trunk",
"topic: not user facing"
] | 5
|
COLLABORATOR
|
Fixes #ISSUE_NUMBER
| true
|
2,800,573,890
|
Improve typing in torch/types.py
|
cyyever
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 18
|
COLLABORATOR
|
Fixes #ISSUE_NUMBER
| true
|
2,800,560,375
|
[Don't Review] test CI
|
guangyey
|
closed
|
[
"topic: not user facing"
] | 1
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145236
| true
|
2,800,524,531
|
Fix broken gpt_fast micro benchmark after #144315
|
huydhn
|
closed
|
[
"Merged",
"release notes: releng",
"ciflow/inductor-micro-benchmark",
"ciflow/inductor-micro-benchmark-cpu-x86"
] | 5
|
CONTRIBUTOR
|
The benchmark is failing with the following error
```
File "/var/lib/jenkins/workspace/benchmarks/gpt_fast/benchmark.py", line 333, in <module>
main(output_file=args.output, only_model=args.only)
File "/var/lib/jenkins/workspace/benchmarks/gpt_fast/benchmark.py", line 308, in main
lst = func(device)
File "/var/lib/jenkins/workspace/benchmarks/gpt_fast/benchmark.py", line 66, in run_mlp_layer_norm_gelu
us_per_iter = benchmarker.benchmark(compiled_mod, (x,)) * 1000
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_inductor/runtime/benchmarking.py", line 39, in wrapper
return fn(self, *args, **kwargs)
TypeError: benchmark() missing 1 required positional argument: 'fn_kwargs'
```
An example error is https://github.com/pytorch/pytorch/actions/runs/12862761823/job/35858912555
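The traceback suggests the call site just needs the (empty) kwargs passed explicitly; a minimal sketch of that shape of fix (the `benchmarker` singleton and its import path are assumed from the traceback, and this is not necessarily the PR's exact change):
```python
import torch
from torch._inductor.runtime.benchmarking import benchmarker

compiled_mod = torch.compile(torch.nn.Linear(8, 8))
x = torch.randn(4, 8)

# benchmark(fn, fn_args, fn_kwargs): the explicit empty dict is what the old
# call site was missing, hence the TypeError above.
us_per_iter = benchmarker.benchmark(compiled_mod, (x,), {}) * 1000
```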
I also assign `oncall: pt2` as the owner of this job going forward.
| true
|
2,800,479,096
|
Binaries Python 3.13t failing linux-aarch64-binary-manywheel and linux-binary-manywheel
|
atalman
|
closed
|
[
"module: binaries",
"oncall: releng"
] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Binaries Python 3.13t failing linux-aarch64-binary-manywheel and linux-binary-manywheel since 01.17.2025
Linux manywheel-py3_13t-cpu-build : https://github.com/pytorch/pytorch/actions/runs/12863127095/job/35859167010
```
pip install -qr requirements.txt
error: subprocess-exited-with-error
× pip subprocess to install build dependencies did not run successfully.
│ exit code: 1
╰─> [85 lines of output]
Ignoring cffi: markers 'python_version <= "3.12"' don't match your environment
Collecting cffi==1.17.0rc1
Downloading cffi-1.17.0rc1.tar.gz (516 kB)
Installing build dependencies: started
Installing build dependencies: finished with status 'done'
Getting requirements to build wheel: started
Getting requirements to build wheel: finished with status 'done'
Preparing metadata (pyproject.toml): started
Preparing metadata (pyproject.toml): finished with status 'done'
Collecting setuptools<69.0.0
Downloading setuptools-68.2.2-py3-none-any.whl.metadata (6.3 kB)
Collecting pycparser (from cffi==1.17.0rc1)
Downloading pycparser-2.22-py3-none-any.whl.metadata (943 bytes)
Downloading setuptools-68.2.2-py3-none-any.whl (807 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 807.9/807.9 kB 72.7 MB/s eta 0:00:00
Downloading pycparser-2.22-py3-none-any.whl (117 kB)
Building wheels for collected packages: cffi
Building wheel for cffi (pyproject.toml): started
Building wheel for cffi (pyproject.toml): finished with status 'error'
error: subprocess-exited-with-error
× Building wheel for cffi (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [55 lines of output]
Package libffi was not found in the pkg-config search path.
Perhaps you should add the directory containing `libffi.pc'
to the PKG_CONFIG_PATH environment variable
Package 'libffi', required by 'virtual:world', not found
Package libffi was not found in the pkg-config search path.
Perhaps you should add the directory containing `libffi.pc'
to the PKG_CONFIG_PATH environment variable
Package 'libffi', required by 'virtual:world', not found
Package libffi was not found in the pkg-config search path.
Perhaps you should add the directory containing `libffi.pc'
to the PKG_CONFIG_PATH environment variable
Package 'libffi', required by 'virtual:world', not found
Package libffi was not found in the pkg-config search path.
Perhaps you should add the directory containing `libffi.pc'
to the PKG_CONFIG_PATH environment variable
Package 'libffi', required by 'virtual:world', not found
Package libffi was not found in the pkg-config search path.
Perhaps you should add the directory containing `libffi.pc'
to the PKG_CONFIG_PATH environment variable
Package 'libffi', required by 'virtual:world', not found
running bdist_wheel
running build
running build_py
creating build/lib.linux-x86_64-cpython-313t/cffi
copying src/cffi/__init__.py -> build/lib.linux-x86_64-cpython-313t/cffi
copying src/cffi/_imp_emulation.py -> build/lib.linux-x86_64-cpython-313t/cffi
copying src/cffi/_shimmed_dist_utils.py -> build/lib.linux-x86_64-cpython-313t/cffi
copying src/cffi/api.py -> build/lib.linux-x86_64-cpython-313t/cffi
copying src/cffi/backend_ctypes.py -> build/lib.linux-x86_64-cpython-313t/cffi
copying src/cffi/cffi_opcode.py -> build/lib.linux-x86_64-cpython-313t/cffi
copying src/cffi/commontypes.py -> build/lib.linux-x86_64-cpython-313t/cffi
copying src/cffi/cparser.py -> build/lib.linux-x86_64-cpython-313t/cffi
copying src/cffi/error.py -> build/lib.linux-x86_64-cpython-313t/cffi
copying src/cffi/ffiplatform.py -> build/lib.linux-x86_64-cpython-313t/cffi
copying src/cffi/lock.py -> build/lib.linux-x86_64-cpython-313t/cffi
copying src/cffi/model.py -> build/lib.linux-x86_64-cpython-313t/cffi
copying src/cffi/pkgconfig.py -> build/lib.linux-x86_64-cpython-313t/cffi
copying src/cffi/recompiler.py -> build/lib.linux-x86_64-cpython-313t/cffi
copying src/cffi/setuptools_ext.py -> build/lib.linux-x86_64-cpython-313t/cffi
copying src/cffi/vengine_cpy.py -> build/lib.linux-x86_64-cpython-313t/cffi
copying src/cffi/vengine_gen.py -> build/lib.linux-x86_64-cpython-313t/cffi
copying src/cffi/verifier.py -> build/lib.linux-x86_64-cpython-313t/cffi
copying src/cffi/_cffi_include.h -> build/lib.linux-x86_64-cpython-313t/cffi
copying src/cffi/parse_c_type.h -> build/lib.linux-x86_64-cpython-313t/cffi
copying src/cffi/_embedding.h -> build/lib.linux-x86_64-cpython-313t/cffi
copying src/cffi/_cffi_errors.h -> build/lib.linux-x86_64-cpython-313t/cffi
running build_ext
building '_cffi_backend' extension
creating build/temp.linux-x86_64-cpython-313t/src/c
gcc -pthread -fno-strict-overflow -Wsign-compare -DNDEBUG -g -O3 -Wall -fPIC -DFFI_BUILDING=1 -DUSE__THREAD -DHAVE_SYNC_SYNCHRONIZE -I/usr/include/ffi -I/usr/include/libffi -I/opt/_internal/cpython-3.13.1-nogil/include/python3.13t -c src/c/_cffi_backend.c -o build/temp.linux-x86_64-cpython-313t/src/c/_cffi_backend.o
src/c/_cffi_backend.c:15:10: fatal error: ffi.h: No such file or directory
15 | #include <ffi.h>
| ^~~~~~~
compilation terminated.
error: command '/opt/rh/gcc-toolset-11/root/usr/bin/gcc' failed with exit code 1
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for cffi
Failed to build cffi
ERROR: ERROR: Failed to build installable wheels for some pyproject.toml based projects (cffi)
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error
```
Linux aarch64 failures: https://github.com/pytorch/pytorch/actions/runs/12824671471/job/35761211905
```
5h Created wheel for psutil: filename=psutil-6.1.1-cp313-cp313t-linux_aarch64.whl size=318360 sha256=0bafcd3871dff1d5775f2700461f10840da1346820984eb743f6eee11a96db45
Stored in directory: /root/.cache/pip/wheels/ff/fe/eb/59cac25690b1a9600e50b007a414ddabb88c04e3ca5df008d9
Building wheel for zstandard (pyproject.toml) ... error
error: subprocess-exited-with-error
× Building wheel for zstandard (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [25 lines of output]
<string>:41: DeprecationWarning: distutils Version classes are deprecated. Use packaging.version instead.
<string>:42: DeprecationWarning: distutils Version classes are deprecated. Use packaging.version instead.
<frozen importlib._bootstrap>:488: RuntimeWarning: The global interpreter lock (GIL) has been enabled to load module '_cffi_backend', which has not declared that it can run safely without the GIL. To override this behavior and keep the GIL disabled (at your own risk), run with PYTHON_GIL=0 or -Xgil=0.
not modified: 'build/zstandard/_cffi.c'
generating build/zstandard/_cffi.c
(already up-to-date)
running bdist_wheel
running build
running build_py
creating build/lib.linux-aarch64-cpython-313
creating build/lib.linux-aarch64-cpython-313/zstandard
copying zstandard/__init__.py -> build/lib.linux-aarch64-cpython-313/zstandard
copying zstandard/backend_cffi.py -> build/lib.linux-aarch64-cpython-313/zstandard
copying zstandard/__init__.pyi -> build/lib.linux-aarch64-cpython-313/zstandard
copying zstandard/py.typed -> build/lib.linux-aarch64-cpython-313/zstandard
running build_ext
building 'zstandard.backend_c' extension
creating build/temp.linux-aarch64-cpython-313
creating build/temp.linux-aarch64-cpython-313/c-ext
gcc -pthread -fno-strict-overflow -Wsign-compare -DNDEBUG -g -O3 -Wall -fPIC -Ic-ext -Izstd -I/opt/_internal/cpython-3.13.1-nogil/include/python3.13t -c c-ext/backend_c.c -o build/temp.linux-aarch64-cpython-313/c-ext/backend_c.o -DZSTD_SINGLE_FILE -DZSTDLIB_VISIBLE= -DZDICTLIB_VISIBLE= -DZSTDERRORLIB_VISIBLE= -fvisibility=hidden
c-ext/backend_c.c: In function ‘safe_pybytes_resize’:
c-ext/backend_c.c:316:15: error: ‘PyObject’ {aka ‘struct _object’} has no member named ‘ob_refcnt’
316 | if ((*obj)->ob_refcnt == 1) {
| ^~
error: command '/opt/rh/gcc-toolset-11/root/usr/bin/gcc' failed with exit code 1
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for zstandard
Successfully built psutil
Failed to build zstandard
ERROR: ERROR: Failed to build installable wheels for some pyproject.toml based projects (zstandard)
```
### Versions
2.7.0
cc @seemethere @malfet @osalpekar
| true
|
2,800,470,058
|
Not using set_num_threads results in very slow .all()
|
arogozhnikov
|
open
|
[
"module: performance",
"module: cpu",
"triaged"
] | 5
|
NONE
|
### 🐛 Describe the bug
```python
import torch
import time
# print(f'{torch.get_num_threads()}') # default is 256, it isn't important how many we set, but the fact that we set num_threads is critical
# torch.set_num_threads(32)
x = torch.zeros(13456, 4) # starts at some size of first dim
z_out = torch.zeros(1024, 4, dtype=torch.uint8)
start = time.time()
for _ in range(64):
(x == x).all()
print(time.time() - start)
```
With the two lines commented out (the default thread count): > 6 seconds.
With them uncommented (an explicit `set_num_threads`): < 0.1 seconds.
I initially thought this was linked to https://github.com/pytorch/pytorch/issues/90760, but that issue mentions only matmul and Cholesky (and, because of that, I was initially searching for those operations), nothing about `.all()`, and the difference observed in this minimal setup is more dramatic.
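A workaround sketch based on the observation above (the specific cap of 32 is arbitrary; any explicit setting avoids the slow path):
```python
import os
import torch

torch.set_num_threads(min(32, os.cpu_count() or 1))

x = torch.zeros(13456, 4)
for _ in range(64):
    (x == x).all()
```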
### Versions
```
Collecting environment information...
PyTorch version: 2.3.1+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.11.11 (main, Dec 4 2024, 08:55:07) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.5.0-1025-oracle-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA A100-SXM4-80GB
Nvidia driver version: 535.183.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 256
On-line CPU(s) list: 0-254
Off-line CPU(s) list: 255
Vendor ID: AuthenticAMD
Model name: AMD EPYC 7J13 64-Core Processor
CPU family: 25
Model: 1
Thread(s) per core: 2
Core(s) per socket: 64
Socket(s): 2
Stepping: 1
Frequency boost: enabled
CPU max MHz: 3673.0950
CPU min MHz: 0.0000
BogoMIPS: 4900.16
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin brs arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca fsrm
Virtualization: AMD-V
L1d cache: 4 MiB (128 instances)
L1i cache: 4 MiB (128 instances)
L2 cache: 64 MiB (128 instances)
L3 cache: 512 MiB (16 instances)
NUMA node(s): 8
NUMA node0 CPU(s): 0-15,128-143
NUMA node1 CPU(s): 16-31,144-159
NUMA node2 CPU(s): 32-47,160-175
NUMA node3 CPU(s): 48-63,176-191
NUMA node4 CPU(s): 64-79,192-207
NUMA node5 CPU(s): 80-95,208-223
NUMA node6 CPU(s): 96-111,224-239
NUMA node7 CPU(s): 112-127,240-254
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Vulnerable
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable; IBPB: disabled; STIBP: disabled; PBRSB-eIBRS: Not affected; BHI: Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy==1.9.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.1.3.1
[pip3] nvidia-cuda-cupti-cu12==12.1.105
[pip3] nvidia-cuda-nvrtc-cu12==12.1.105
[pip3] nvidia-cuda-runtime-cu12==12.1.105
[pip3] nvidia-cudnn-cu12==8.9.2.26
[pip3] nvidia-cufft-cu12==11.0.2.54
[pip3] nvidia-curand-cu12==10.3.2.106
[pip3] nvidia-cusolver-cu12==11.4.5.107
[pip3] nvidia-cusparse-cu12==12.1.0.106
[pip3] nvidia-nccl-cu12==2.20.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.1.105
[pip3] pytorch-lightning==2.2.1
[pip3] torch==2.3.1
[pip3] torchmetrics==1.3.2
[pip3] triton==2.3.1
[conda] Could not collect
```
cc @msaroufim @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| true
|
2,800,463,213
|
[inductor] Simplify mode options, only apply CompilerBisector changes once
|
jansel
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145232
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,800,460,550
|
Flaky Dynamo test: TestAutograd.test_gradcheck_nondeterministic
|
yanboliang
|
open
|
[
"module: autograd",
"triaged",
"module: testing",
"oncall: pt2"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Reproduce command:
```
PYTORCH_TEST_WITH_DYNAMO=1 python test/test_autograd.py TestAutograd.test_gradcheck_nondeterministic
```
This issue was discovered while I was working on #142830. The core problem is that this unit test produces different results on CI compared to running it locally. The test passes on the main branch, but if I remove the `assertRaisesRegex` assertion (#145205), the test still passes locally yet fails on CI. I suspect that even though it currently passes on CI, that is likely coincidental.
After making changes in #142830, this test began failing. However, I’ve been unable to reproduce the failure locally.
There is a similar flaky test marked at #127115.
### Versions
main
cc @ezyang @albanD @gqchen @pearu @nikitaved @soulitzer @Varal7 @xmfan @chauhang @penguinwu
| true
|
2,800,441,148
|
[BE] [mps] Refactor UnaryConstants to be its own kernel.
|
dcci
|
closed
|
[
"Merged",
"topic: not user facing",
"ciflow/mps",
"module: inductor"
] | 7
|
MEMBER
|
In preparation for using this file for inductor (for erfinv).
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,800,397,882
|
CPU-only PyTorch on M1 MacBook always gets "RuntimeError: Placeholder storage has not been allocated on MPS device!"
|
gregchapman-dev
|
closed
|
[
"needs reproduction",
"triaged",
"module: macos"
] | 1
|
NONE
|
### 🐛 Describe the bug
I'm trying to run CPU-only on an M1 MacBook. I installed the CPU-only build: pip install torch --extra-index-url https://download.pytorch.org/whl/cpu
I always get "RuntimeError: Placeholder storage has not been allocated on MPS device!", but of course that's true, I'm running CPU only. (Note: I am also specifying 'cpu' device everywhere I can.). Honestly, I'm not sure why this mps-specific code is even present, much less running, in the CPU-only build of PyTorch.
Expected: no complaint, since I'm putting everything on the CPU, and running the CPU-only build of PyTorch.
Got: Consistent failure with "RuntimeError: Placeholder storage has not been allocated on MPS device!"
### Versions
$ python collect_env.py
Collecting environment information...
PyTorch version: 2.5.1
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 14.7.1 (arm64)
GCC version: Could not collect
Clang version: 16.0.0 (clang-1600.0.26.6)
CMake version: Could not collect
Libc version: N/A
Python version: 3.10.8 (main, Dec 6 2024, 15:32:23) [Clang 16.0.0 (clang-1600.0.26.4)] (64-bit runtime)
Python platform: macOS-14.7.1-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M1 Max
Versions of relevant libraries:
[pip3] flake8==7.1.1
[pip3] mypy==1.14.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==2.1.3
[pip3] pytorch-lightning==2.4.0
[pip3] torch==2.5.1
[pip3] torchinfo==1.8.0
[pip3] torchmetrics==1.6.0
[pip3] torchvision==0.20.1
[conda] Could not collect
cc @malfet @albanD
| true
|
2,800,383,395
|
Expose the rendezvous keepalive arguments
|
carmocca
|
closed
|
[
"oncall: distributed",
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"release notes: distributed (torchelastic)"
] | 3
|
CONTRIBUTOR
|
Enables support for this:
```python
from torch.distributed.launcher.api import LaunchConfig
config = LaunchConfig(
...,
rdzv_configs={"keep_alive_interval": 1122, "heartbeat_timeout": 321, "keep_alive_max_attempt" 5},
)
```
These arguments are currently hard-coded inside torchrun. The default values are not suitable for jobs with thousands of ranks.
Today, `rdzv_configs` only allows the keys `join_timeout`, `last_call_timeout`, `close_timeout`
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,800,375,916
|
[ROCm] fix test_cublas_workspace_explicit_allocation for gfx12
|
dnikolaev-amd
|
closed
|
[
"module: rocm",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"rocm",
"ciflow/rocm"
] | 10
|
CONTRIBUTOR
|
gfx12 passes the condition `torch.cuda.get_device_capability() >= (9, 4)` and therefore uses `default_workspace_size=128MB`, but that size is required only for MI300.
Fix the condition to use `("gfx94" in gcn_arch)` instead of the `torch.cuda.get_device_capability()` check to detect MI300.
Now `default_workspace_size=32MB` is used for gfx12 and the test passes.
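A sketch of the revised check (the `gcnArchName` property and the workspace sizes are taken from the description above; not necessarily the exact diff):
```python
import torch

if torch.cuda.is_available():
    # gcnArchName is a ROCm-side property; guard with getattr for non-ROCm builds.
    gcn_arch = getattr(torch.cuda.get_device_properties(0), "gcnArchName", "")
    default_workspace_size = (
        128 * 1024 * 1024 if "gfx94" in gcn_arch else 32 * 1024 * 1024  # MI300 vs others
    )
```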
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|