| id (int64, 2.74B–3.05B) | title (string, 1–255 chars) | user (string, 2–26 chars) | state (2 classes) | labels (list, 0–24 items) | comments (int64, 0–206) | author_association (4 classes) | body (string, 7–62.5k chars, nullable ⌀) | is_title (bool, 1 class) |
|---|---|---|---|---|---|---|---|---|
2,748,565,642
|
Build jobs intermittently timeout
|
malfet
|
closed
|
[
"module: ci",
"triaged"
] | 2
|
CONTRIBUTOR
|
### 🐛 Describe the bug
For example https://github.com/pytorch/pytorch/actions/runs/12392465209/job/34591708343
The following jobs are also slow.
For example, https://github.com/pytorch/pytorch/actions/runs/12392605222/job/34592144299 took 3.5h to finish, and its sccache stats are:
```
+ sccache --show-stats
Compile requests 8563
Compile requests executed 8671
Cache hits 7063
Cache hits (C/C++) 6699
Cache hits (CUBIN) 203
Cache hits (CUDA) 79
Cache hits (PTX) 82
Cache misses 1289
Cache misses (C/C++) 704
Cache misses (CUBIN) 87
Cache misses (CUDA) 290
Cache misses (PTX) 208
Cache hits rate 84.57 %
Cache hits rate (C/C++) 90.49 %
Cache hits rate (CUBIN) 70.00 %
Cache hits rate (CUDA) 21.41 %
Cache hits rate (PTX) 28.28 %
```
The next [job](https://github.com/pytorch/pytorch/actions/runs/12393382203/job/34594520498) finished in 2.3h, and its sccache stats are:
```
+ sccache --show-stats
Compile requests 8563
Compile requests executed 8665
Cache hits 7489
Cache hits (C/C++) 7134
Cache hits (CUBIN) 197
Cache hits (CUDA) 81
Cache hits (PTX) 77
Cache misses 859
Cache misses (C/C++) 269
Cache misses (CUBIN) 91
Cache misses (CUDA) 288
Cache misses (PTX) 211
Cache hits rate 89.71 %
Cache hits rate (C/C++) 96.37 %
Cache hits rate (CUBIN) 68.40 %
Cache hits rate (CUDA) 21.95 %
Cache hits rate (PTX) 26.74 %
```
The next [job](https://github.com/pytorch/pytorch/actions/runs/12394446893/job/34597937899) finished in under 2 hours, with the following cache stats:
```
+ sccache --show-stats
Compile requests 8563
Compile requests executed 7942
Cache hits 7501
Cache hits (C/C++) 7168
Cache hits (CUBIN) 7
Cache hits (CUDA) 322
Cache hits (PTX) 4
Cache misses 365
Cache misses (C/C++) 235
Cache misses (CUBIN) 40
Cache misses (CUDA) 47
Cache misses (PTX) 43
Cache hits rate 95.36 %
Cache hits rate (C/C++) 96.83 %
Cache hits rate (CUBIN) 14.89 %
Cache hits rate (CUDA) 87.26 %
Cache hits rate (PTX) 8.51 %
Cache timeouts 0
Cache read errors 0
Forced recaches 0
Cache write errors 1
Cache errors 23
Cache errors (C/C++) 23
Compilations 412
Compilation failures 6
Non-cacheable compilations 0
Non-cacheable calls 423
Non-compilation calls 339
Unsupported compiler calls 0
Average cache write 0.127 s
Average compiler 33.825 s
Average cache read hit 0.032 s
```
### Versions
CI
cc @seemethere @pytorch/pytorch-dev-infra
| true
|
2,748,517,773
|
[BE] Move Mac BB test to its own step
|
malfet
|
closed
|
[
"Merged",
"release notes: releng",
"ciflow/binaries_wheel",
"ciflow/binaries_libtorch"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143513
| true
|
2,748,504,186
|
[BE] Delete `install sccache` step from MacBB
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing",
"ciflow/binaries_wheel"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #143513
* __->__ #143512
* #143511
To the best of my knowledge, this step never executed, and no MacOS binary builds have been running on trunk for a while.
| true
|
2,748,503,733
|
[BE] Integrate 5 line build script into template
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing",
"ciflow/binaries_wheel"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #143513
* #143512
* __->__ #143511
| true
|
2,748,456,907
|
Add support for differentiable LR in SGD + test v2.0
|
EmmettBicker
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"release notes: optim"
] | 10
|
CONTRIBUTOR
|
Second PR in a larger project with @janeyx99 to broaden support for differentiable optimizers! The first one hit an issue near the end, so this is the second PR on that subject. See #143122 for the development up until this point.
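For context, "differentiable LR" means the learning rate participates in autograd; here is a minimal conceptual sketch (plain autograd, not the optimizer API this PR touches):
```python
import torch

# Conceptual sketch only (not the SGD API this PR adds): a manual SGD step in
# which the learning rate is itself a tensor, so a downstream loss can be
# differentiated with respect to it.
w = torch.randn(3, requires_grad=True)
lr = torch.tensor(0.1, requires_grad=True)

loss = (w ** 2).sum()
(g,) = torch.autograd.grad(loss, w, create_graph=True)  # keep the graph for the step
w_new = w - lr * g                                       # differentiable update

meta_loss = (w_new ** 2).sum()
meta_loss.backward()  # gradients flow back into the learning rate
print(lr.grad)
```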
| true
|
2,748,403,650
|
leaking c++ singleton specifically
|
duduyi2013
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 10
|
CONTRIBUTOR
|
Summary:
Fix forward for S477887 by intentionally leaking the C++ singleton.
During C++ shutdown, the runtime tries to destruct the singleton and acquire the GIL; at that point the Python runtime has already exited, causing undefined behavior.
Leaking the singleton here ensures we never try to destroy it during the shutdown phase.
Test Plan: n/a
Differential Revision: D67400633
| true
|
2,748,386,329
|
Upgrade submodule ideep for bf16f32 matmul changes
|
aditew01
|
closed
|
[
"module: cpu",
"triaged",
"module: mkldnn",
"open source",
"module: arm",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/linux-aarch64"
] | 3
|
COLLABORATOR
|
This change will enable PR #140159 to pick the proper kernels in bf16 mode for the SDPA layer.
cc: @yanbing-j
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @gujinghui @PenghuiCheng @jianyuh @min-jean-cho @yanbing-j @Guobing-Chen @Xia-Weiwen @snadampal @malfet @milpuz01
| true
|
2,748,379,403
|
[ROCm] Fix unit test: matmul_offline_mgpu_tunableop
|
naromero77amd
|
closed
|
[
"module: rocm",
"open source",
"Merged",
"ciflow/trunk",
"release notes: linalg_frontend",
"ciflow/periodic"
] | 12
|
COLLABORATOR
|
Fixes #141652
This PR contains:
- Fix for `matmul_offline_mgpu_tunableop`
- Modifications to `_checking_tuning_assertions` to enable TunableOp if it is disabled; also moved it into the concurrent futures initializer.
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang
| true
|
2,748,320,790
|
Prevent torch.jit.load path in torch.load when weights_only=True
|
pytorchbot
|
closed
|
[
"open source"
] | 1
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #143403
* __->__ #143326
| true
|
2,748,286,378
|
[Dynamo] torch._dynamo.exc.Unsupported: sort with non-constant keys
|
SamGinzburg
|
closed
|
[
"triaged",
"oncall: pt2",
"module: dynamo",
"dynamo-triage-jan2025"
] | 5
|
CONTRIBUTOR
|
### 🐛 Describe the bug
This error was encountered while trying to implement a version of [Autotuner.prune_configs](https://github.com/triton-lang/triton/blob/137bc62102f4a261cc921998221cea2b046a6c1b/python/triton/runtime/autotuner.py#L214) from Triton.
The function was modified to operate on a list instead of a dict (a dict with Config keys is also not supported).
A minimal repro would look something like:
```python
est_timing: List[Tuple[triton.runtime.Config, float]]
est_timing = [
(config, perf_model(**named_args, **kwargs, **config.all_kwargs()))
for config in configs
]
configs = sorted(est_timing, key=lambda x: est_timing[1])[:top_k]
```
Here is the complete function which triggered the error (for reproducibility):
```python
def call_prune_configs( # type: ignore[no-untyped-def]
autotuner,
early_config_prune: Callable,
perf_model: Callable,
top_k: float,
is_top_k_float: bool,
configs: List,
named_args: Dict,
kwargs: Dict,
):
if early_config_prune:
configs = early_config_prune(configs, named_args, **kwargs)
if perf_model:
# we assert top_k is a float before calling this
if is_top_k_float and top_k <= 1.0:
top_k = int(len(configs) * top_k)
if len(configs) > top_k:
est_timing = [
(config, perf_model(**named_args, **kwargs, **config.all_kwargs()))
for config in configs
]
configs = sorted(est_timing, key=lambda x: est_timing[1])[:top_k]
return configs
# Called in torch/_higher_order_ops/triton_kernel_wrap.py
pruned_configs = self.call_user_defined_fn(
call_prune_configs,
[
variable,
wrapped_early_configs_prune,
wrapped_perf_model,
wrapped_configs_top_k,
wrapped_is_top_k_float,
wrapped_configs,
named_args,
kwargs,
],
{},
tx,
variable.source,
)
```
Here is a stack trace of the generated bytecode leading up to the error:
```bash
"/data/users/ginzburg/pytorch/torch/_higher_order_ops/triton_kernel_wrap.py", line 1023> [BuiltinVariable(sorted), ListVariable(length=2), TupleVariable(length=1)]
V1218 08:22:05.910000 934875 torch/_dynamo/symbolic_convert.py:956] [0/0] [__trace_bytecode] TRACE LOAD_CONST call_prune_configs.<locals>.<lambda> [BuiltinVariable(sorted), ListVariable(length=2), TupleVariable(length=1), ConstantVariable(code: <code object <lambda> at 0x7f9e3c5fbb50, file "/data/users/ginzburg/pytorch/torch/_higher_order_ops/triton_kernel_wrap.py", line 1023>)]
V1218 08:22:05.910000 934875 torch/_dynamo/symbolic_convert.py:956] [0/0] [__trace_bytecode] TRACE MAKE_FUNCTION 8 [BuiltinVariable(sorted), ListVariable(length=2), TupleVariable(length=1), ConstantVariable(code: <code object <lambda> at 0x7f9e3c5fbb50, file "/data/users/ginzburg/pytorch/torch/_higher_order_ops/triton_kernel_wrap.py", line 1023>), ConstantVariable(str: 'call_prune_configs.<locals>.<lambda>')]
V1218 08:22:05.910000 934875 torch/_dynamo/symbolic_convert.py:956] [0/0] [__trace_bytecode] TRACE LOAD_CONST ('key',) [BuiltinVariable(sorted), ListVariable(length=2), NestedUserFunctionVariable()]
V1218 08:22:05.911000 934875 torch/_dynamo/symbolic_convert.py:956] [0/0] [__trace_bytecode] TRACE CALL_FUNCTION_KW 2 [BuiltinVariable(sorted), ListVariable(length=2), NestedUserFunctionVariable(), TupleVariable(length=1)]
V1218 08:22:05.911000 934875 torch/_dynamo/symbolic_convert.py:956] [0/0] [__trace_bytecode] TRACE LOAD_DEREF est_timing []
V1218 08:22:05.911000 934875 torch/_dynamo/symbolic_convert.py:956] [0/0] [__trace_bytecode] TRACE LOAD_CONST 1 [ListVariable(length=2)]
V1218 08:22:05.911000 934875 torch/_dynamo/symbolic_convert.py:956] [0/0] [__trace_bytecode] TRACE BINARY_SUBSCR None [ListVariable(length=2), ConstantVariable(int: 1)]
V1218 08:22:05.911000 934875 torch/_dynamo/symbolic_convert.py:956] [0/0] [__trace_bytecode] TRACE RETURN_VALUE None [TupleVariable(length=2)]
V1218 08:22:05.911000 934875 torch/_dynamo/symbolic_convert.py:956] [0/0] [__trace_bytecode] TRACE LOAD_DEREF est_timing []
V1218 08:22:05.911000 934875 torch/_dynamo/symbolic_convert.py:956] [0/0] [__trace_bytecode] TRACE LOAD_CONST 1 [ListVariable(length=2)]
V1218 08:22:05.911000 934875 torch/_dynamo/symbolic_convert.py:956] [0/0] [__trace_bytecode] TRACE BINARY_SUBSCR None [ListVariable(length=2), ConstantVariable(int: 1)]
V1218 08:22:05.912000 934875 torch/_dynamo/symbolic_convert.py:956] [0/0] [__trace_bytecode] TRACE RETURN_VALUE None [TupleVariable(length=2)]
inline_call [('sort with non-constant keys', 1)]
```
### Versions
PyTorch version: 2.6.0a0+git28e242f
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: CentOS Stream 9 (x86_64)
GCC version: (GCC) 11.5.0 20240719 (Red Hat 11.5.0-2)
Clang version: 18.1.8 (CentOS 18.1.8-3.el9)
CMake version: version 3.26.4
Libc version: glibc-2.34
Python version: 3.10.15 (main, Oct 3 2024, 07:27:34) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.4.3-0
Versions of relevant libraries:
[pip3] flake8==6.1.0
[pip3] flake8-bugbear==23.3.23
[pip3] flake8-comprehensions==3.15.0
[pip3] flake8-executable==2.1.3
[pip3] flake8-logging-format==0.9.0
[pip3] flake8-pyi==23.3.1
[pip3] flake8-simplify==0.19.3
[pip3] mypy==1.13.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] onnx==1.16.1
[pip3] onnxscript==0.1.0.dev20240817
[pip3] optree==0.13.0
[pip3] pytorch-triton==3.2.0+git35c6c7c6
[pip3] torch==2.6.0a0+git28e242f
[conda] blas 1.0 mkl
[conda] magma-cuda121 2.6.1 1 pytorch
[conda] mkl 2023.1.0 h213fc3f_46344
[conda] mkl-include 2023.1.0 h06a4308_46344
[conda] mkl-service 2.4.0 py310h5eee18b_1
[conda] mkl_fft 1.3.10 py310h5eee18b_0
[conda] mkl_random 1.2.7 py310h1128e8f_0
[conda] numpy 1.26.4 py310h5f9d8c6_0
[conda] numpy-base 1.26.4 py310hb5e798b_0
[conda] optree 0.13.0 pypi_0 pypi
[conda] pytorch-triton 3.2.0+git35c6c7c6 pypi_0 pypi
[conda] torch 2.6.0a0+git28e242f dev_0 <develop>
[conda] torchfix 0.4.0 pypi_0 pypi
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
| true
|
2,748,239,659
|
Fix old-compiler-unfriendly zero init of bfloat16_t array
|
swolchok
|
closed
|
[
"module: cpu",
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: cpp",
"ciflow/linux-aarch64"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* (to be filled)
clang versions before 17 don't like to assign 0 to a bfloat16_t. gcc versions before 13 also won't assign 0.0 to a bfloat16_t. (Citation: https://godbolt.org/z/Gzs5ebdej)
Differential Revision: [D67396740](https://our.internmc.facebook.com/intern/diff/D67396740/)
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| true
|
2,748,201,659
|
fix: resolve recursion overflow issue by hashing weak references
|
aeeeeeep
|
closed
|
[
"triaged",
"open source",
"Stale"
] | 6
|
NONE
|
**Issue**
Using `weakref` with recursive objects in PyTorch causes a recursion overflow because the `__hash__` method uses `id(key)`.
**Fix**
Changed `self._id = id(key)` to `self._id = id(ref(key))` in the `__hash__` method to base the hash on the weak reference, preventing the recursion overflow.
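For illustration, a minimal sketch of the kind of weak-key wrapper this change concerns (hypothetical class and names, not the actual PyTorch code); the only point is that the hash is derived from the weak reference object rather than from the referent:
```python
import weakref

# Hypothetical sketch of a weak-key wrapper; the class and names are made up
# and do not match the real PyTorch code.
class WeakKey:
    def __init__(self, key):
        self._ref = weakref.ref(key)
        # before: self._id = id(key)
        self._id = id(self._ref)  # hash derived from the weak reference itself

    def __hash__(self):
        return self._id

    def __eq__(self, other):
        if not isinstance(other, WeakKey):
            return NotImplemented
        a, b = self._ref(), other._ref()
        return a is not None and a is b
```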
Fixes #ISSUE_NUMBER
| true
|
2,748,153,149
|
AssertionError: not bool like VR[1.00000000000000, 1.00000000000000]
|
ezyang
|
closed
|
[
"triage review",
"oncall: pt2",
"module: dynamic shapes",
"module: inductor"
] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
I triggered this bug while bisecting; it is not blocking me.
Backtrace:
```
Traceback (most recent call last):
File "/data/users/ezyang/fbsource/buck-out/v2/gen/fbcode/1b080a82294b728e/bento_kernels/pytorch/__bento_kernel_pytorch_binary__/bento_kernel_pytorch_binary#link-tree/torch/_dynamo/convert_frame.py", line 979, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/data/users/ezyang/fbsource/buck-out/v2/gen/fbcode/1b080a82294b728e/bento_kernels/pytorch/__bento_kernel_pytorch_binary__/bento_kernel_pytorch_binary#link-tree/torch/_dynamo/convert_frame.py", line 709, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/data/users/ezyang/fbsource/buck-out/v2/gen/fbcode/1b080a82294b728e/bento_kernels/pytorch/__bento_kernel_pytorch_binary__/bento_kernel_pytorch_binary#link-tree/torch/_utils_internal.py", line 306, in wrapper_function
result = StrobelightCompileTimeProfiler.profile_compile_time(
File "/data/users/ezyang/fbsource/buck-out/v2/gen/fbcode/1b080a82294b728e/bento_kernels/pytorch/__bento_kernel_pytorch_binary__/bento_kernel_pytorch_binary#link-tree/caffe2/fb/strobelight/compile_time_profiler.py", line 162, in profile_compile_time
return func(*args, **kwargs)
File "/data/users/ezyang/fbsource/buck-out/v2/gen/fbcode/1b080a82294b728e/bento_kernels/pytorch/__bento_kernel_pytorch_binary__/bento_kernel_pytorch_binary#link-tree/torch/_dynamo/convert_frame.py", line 744, in _compile_inner
out_code = transform_code_object(code, transform)
File "/data/users/ezyang/fbsource/buck-out/v2/gen/fbcode/1b080a82294b728e/bento_kernels/pytorch/__bento_kernel_pytorch_binary__/bento_kernel_pytorch_binary#link-tree/torch/_dynamo/bytecode_transformation.py", line 1348, in transform_code_object
transformations(instructions, code_options)
File "/data/users/ezyang/fbsource/buck-out/v2/gen/fbcode/1b080a82294b728e/bento_kernels/pytorch/__bento_kernel_pytorch_binary__/bento_kernel_pytorch_binary#link-tree/torch/_dynamo/convert_frame.py", line 233, in _fn
return fn(*args, **kwargs)
File "/data/users/ezyang/fbsource/buck-out/v2/gen/fbcode/1b080a82294b728e/bento_kernels/pytorch/__bento_kernel_pytorch_binary__/bento_kernel_pytorch_binary#link-tree/torch/_dynamo/convert_frame.py", line 661, in transform
tracer.run()
File "/data/users/ezyang/fbsource/buck-out/v2/gen/fbcode/1b080a82294b728e/bento_kernels/pytorch/__bento_kernel_pytorch_binary__/bento_kernel_pytorch_binary#link-tree/torch/_dynamo/symbolic_convert.py", line 2900, in run
super().run()
File "/data/users/ezyang/fbsource/buck-out/v2/gen/fbcode/1b080a82294b728e/bento_kernels/pytorch/__bento_kernel_pytorch_binary__/bento_kernel_pytorch_binary#link-tree/torch/_dynamo/symbolic_convert.py", line 1120, in run
while self.step():
File "/data/users/ezyang/fbsource/buck-out/v2/gen/fbcode/1b080a82294b728e/bento_kernels/pytorch/__bento_kernel_pytorch_binary__/bento_kernel_pytorch_binary#link-tree/torch/_dynamo/symbolic_convert.py", line 1032, in step
self.dispatch_table[inst.opcode](self, inst)
File "/data/users/ezyang/fbsource/buck-out/v2/gen/fbcode/1b080a82294b728e/bento_kernels/pytorch/__bento_kernel_pytorch_binary__/bento_kernel_pytorch_binary#link-tree/torch/_dynamo/symbolic_convert.py", line 3091, in RETURN_VALUE
self._return(inst)
File "/data/users/ezyang/fbsource/buck-out/v2/gen/fbcode/1b080a82294b728e/bento_kernels/pytorch/__bento_kernel_pytorch_binary__/bento_kernel_pytorch_binary#link-tree/torch/_dynamo/symbolic_convert.py", line 3076, in _return
self.output.compile_subgraph(
File "/data/users/ezyang/fbsource/buck-out/v2/gen/fbcode/1b080a82294b728e/bento_kernels/pytorch/__bento_kernel_pytorch_binary__/bento_kernel_pytorch_binary#link-tree/torch/_dynamo/output_graph.py", line 1110, in compile_subgraph
self.compile_and_call_fx_graph(tx, pass2.graph_output_vars(), root)
File "/data/users/ezyang/fbsource/buck-out/v2/gen/fbcode/1b080a82294b728e/bento_kernels/pytorch/__bento_kernel_pytorch_binary__/bento_kernel_pytorch_binary#link-tree/torch/_dynamo/output_graph.py", line 1349, in compile_and_call_fx_graph
compiled_fn = self.call_user_compiler(gm)
File "/data/users/ezyang/fbsource/buck-out/v2/gen/fbcode/1b080a82294b728e/bento_kernels/pytorch/__bento_kernel_pytorch_binary__/bento_kernel_pytorch_binary#link-tree/torch/_dynamo/output_graph.py", line 1399, in call_user_compiler
return self._call_user_compiler(gm)
File "/data/users/ezyang/fbsource/buck-out/v2/gen/fbcode/1b080a82294b728e/bento_kernels/pytorch/__bento_kernel_pytorch_binary__/bento_kernel_pytorch_binary#link-tree/torch/_dynamo/output_graph.py", line 1448, in _call_user_compiler
raise BackendCompilerFailed(self.compiler_fn, e).with_traceback(
File "/data/users/ezyang/fbsource/buck-out/v2/gen/fbcode/1b080a82294b728e/bento_kernels/pytorch/__bento_kernel_pytorch_binary__/bento_kernel_pytorch_binary#link-tree/torch/_dynamo/output_graph.py", line 1429, in _call_user_compiler
compiled_fn = compiler_fn(gm, self.example_inputs())
File "/data/users/ezyang/fbsource/buck-out/v2/gen/fbcode/1b080a82294b728e/bento_kernels/pytorch/__bento_kernel_pytorch_binary__/bento_kernel_pytorch_binary#link-tree/torch/_dynamo/repro/after_dynamo.py", line 130, in __call__
compiled_gm = compiler_fn(gm, example_inputs)
File "/data/users/ezyang/fbsource/buck-out/v2/gen/fbcode/1b080a82294b728e/bento_kernels/pytorch/__bento_kernel_pytorch_binary__/bento_kernel_pytorch_binary#link-tree/torch/__init__.py", line 2301, in __call__
return compile_fx(model_, inputs_, config_patches=self.config)
File "/data/users/ezyang/fbsource/buck-out/v2/gen/fbcode/1b080a82294b728e/bento_kernels/pytorch/__bento_kernel_pytorch_binary__/bento_kernel_pytorch_binary#link-tree/torch/_inductor/compile_fx.py", line 1449, in compile_fx
return compile_fx(
File "/data/users/ezyang/fbsource/buck-out/v2/gen/fbcode/1b080a82294b728e/bento_kernels/pytorch/__bento_kernel_pytorch_binary__/bento_kernel_pytorch_binary#link-tree/torch/_inductor/compile_fx.py", line 1733, in compile_fx
return aot_autograd(
File "/data/users/ezyang/fbsource/buck-out/v2/gen/fbcode/1b080a82294b728e/bento_kernels/pytorch/__bento_kernel_pytorch_binary__/bento_kernel_pytorch_binary#link-tree/torch/_dynamo/backends/common.py", line 72, in __call__
cg = aot_module_simplified(gm, example_inputs, **self.kwargs)
File "/data/users/ezyang/fbsource/buck-out/v2/gen/fbcode/1b080a82294b728e/bento_kernels/pytorch/__bento_kernel_pytorch_binary__/bento_kernel_pytorch_binary#link-tree/torch/_functorch/aot_autograd.py", line 1103, in aot_module_simplified
compiled_fn = dispatch_and_compile()
File "/data/users/ezyang/fbsource/buck-out/v2/gen/fbcode/1b080a82294b728e/bento_kernels/pytorch/__bento_kernel_pytorch_binary__/bento_kernel_pytorch_binary#link-tree/torch/_functorch/aot_autograd.py", line 1079, in dispatch_and_compile
compiled_fn, _ = create_aot_dispatcher_function(
File "/data/users/ezyang/fbsource/buck-out/v2/gen/fbcode/1b080a82294b728e/bento_kernels/pytorch/__bento_kernel_pytorch_binary__/bento_kernel_pytorch_binary#link-tree/torch/_functorch/aot_autograd.py", line 527, in create_aot_dispatcher_function
return _create_aot_dispatcher_function(
File "/data/users/ezyang/fbsource/buck-out/v2/gen/fbcode/1b080a82294b728e/bento_kernels/pytorch/__bento_kernel_pytorch_binary__/bento_kernel_pytorch_binary#link-tree/torch/_functorch/aot_autograd.py", line 778, in _create_aot_dispatcher_function
compiled_fn, fw_metadata = compiler_fn(
File "/data/users/ezyang/fbsource/buck-out/v2/gen/fbcode/1b080a82294b728e/bento_kernels/pytorch/__bento_kernel_pytorch_binary__/bento_kernel_pytorch_binary#link-tree/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py", line 655, in aot_dispatch_autograd
compiled_fw_func = aot_config.fw_compiler(fw_module, adjusted_flat_args)
File "/data/users/ezyang/fbsource/buck-out/v2/gen/fbcode/1b080a82294b728e/bento_kernels/pytorch/__bento_kernel_pytorch_binary__/bento_kernel_pytorch_binary#link-tree/torch/_inductor/compile_fx.py", line 1546, in fw_compiler_base
return _fw_compiler_base(model, example_inputs, is_inference)
File "/data/users/ezyang/fbsource/buck-out/v2/gen/fbcode/1b080a82294b728e/bento_kernels/pytorch/__bento_kernel_pytorch_binary__/bento_kernel_pytorch_binary#link-tree/torch/_inductor/compile_fx.py", line 1615, in _fw_compiler_base
return inner_compile(
File "/data/users/ezyang/fbsource/buck-out/v2/gen/fbcode/1b080a82294b728e/bento_kernels/pytorch/__bento_kernel_pytorch_binary__/bento_kernel_pytorch_binary#link-tree/runtime/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/data/users/ezyang/fbsource/buck-out/v2/gen/fbcode/1b080a82294b728e/bento_kernels/pytorch/__bento_kernel_pytorch_binary__/bento_kernel_pytorch_binary#link-tree/torch/_inductor/compile_fx.py", line 599, in compile_fx_inner
return wrap_compiler_debug(_compile_fx_inner, compiler_name="inductor")(
File "/data/users/ezyang/fbsource/buck-out/v2/gen/fbcode/1b080a82294b728e/bento_kernels/pytorch/__bento_kernel_pytorch_binary__/bento_kernel_pytorch_binary#link-tree/torch/_dynamo/repro/after_aot.py", line 102, in debug_wrapper
inner_compiled_fn = compiler_fn(gm, example_inputs)
File "/data/users/ezyang/fbsource/buck-out/v2/gen/fbcode/1b080a82294b728e/bento_kernels/pytorch/__bento_kernel_pytorch_binary__/bento_kernel_pytorch_binary#link-tree/torch/_inductor/fb/utils.py", line 167, in newFunction
return old_func(*args, **kwargs)
File "/data/users/ezyang/fbsource/buck-out/v2/gen/fbcode/1b080a82294b728e/bento_kernels/pytorch/__bento_kernel_pytorch_binary__/bento_kernel_pytorch_binary#link-tree/torch/_inductor/compile_fx.py", line 756, in _compile_fx_inner
compiled_graph = FxGraphCache.load(
File "/data/users/ezyang/fbsource/buck-out/v2/gen/fbcode/1b080a82294b728e/bento_kernels/pytorch/__bento_kernel_pytorch_binary__/bento_kernel_pytorch_binary#link-tree/torch/_inductor/codecache.py", line 1515, in load
compiled_graph = compile_fx_fn(
File "/data/users/ezyang/fbsource/buck-out/v2/gen/fbcode/1b080a82294b728e/bento_kernels/pytorch/__bento_kernel_pytorch_binary__/bento_kernel_pytorch_binary#link-tree/torch/_inductor/compile_fx.py", line 663, in codegen_and_compile
compiled_graph = fx_codegen_and_compile(gm, example_inputs, **fx_kwargs)
File "/data/users/ezyang/fbsource/buck-out/v2/gen/fbcode/1b080a82294b728e/bento_kernels/pytorch/__bento_kernel_pytorch_binary__/bento_kernel_pytorch_binary#link-tree/torch/_inductor/compile_fx.py", line 974, in fx_codegen_and_compile
compiled_fn = graph.compile_to_fn()
File "/data/users/ezyang/fbsource/buck-out/v2/gen/fbcode/1b080a82294b728e/bento_kernels/pytorch/__bento_kernel_pytorch_binary__/bento_kernel_pytorch_binary#link-tree/torch/_inductor/graph.py", line 2043, in compile_to_fn
return self.compile_to_module().call
File "/data/users/ezyang/fbsource/buck-out/v2/gen/fbcode/1b080a82294b728e/bento_kernels/pytorch/__bento_kernel_pytorch_binary__/bento_kernel_pytorch_binary#link-tree/torch/_inductor/graph.py", line 1954, in compile_to_module
return self._compile_to_module()
File "/data/users/ezyang/fbsource/buck-out/v2/gen/fbcode/1b080a82294b728e/bento_kernels/pytorch/__bento_kernel_pytorch_binary__/bento_kernel_pytorch_binary#link-tree/torch/_inductor/graph.py", line 1960, in _compile_to_module
self.codegen_with_cpp_wrapper() if self.cpp_wrapper else self.codegen()
File "/data/users/ezyang/fbsource/buck-out/v2/gen/fbcode/1b080a82294b728e/bento_kernels/pytorch/__bento_kernel_pytorch_binary__/bento_kernel_pytorch_binary#link-tree/torch/_inductor/graph.py", line 1896, in codegen
self.scheduler.codegen()
File "/data/users/ezyang/fbsource/buck-out/v2/gen/fbcode/1b080a82294b728e/bento_kernels/pytorch/__bento_kernel_pytorch_binary__/bento_kernel_pytorch_binary#link-tree/torch/_inductor/scheduler.py", line 3489, in codegen
return self._codegen()
File "/data/users/ezyang/fbsource/buck-out/v2/gen/fbcode/1b080a82294b728e/bento_kernels/pytorch/__bento_kernel_pytorch_binary__/bento_kernel_pytorch_binary#link-tree/torch/_inductor/scheduler.py", line 3568, in _codegen
self.get_backend(device).codegen_node(node)
File "/data/users/ezyang/fbsource/buck-out/v2/gen/fbcode/1b080a82294b728e/bento_kernels/pytorch/__bento_kernel_pytorch_binary__/bento_kernel_pytorch_binary#link-tree/torch/_inductor/codegen/cuda_combined_scheduling.py", line 80, in codegen_node
return self._triton_scheduling.codegen_node(node)
File "/data/users/ezyang/fbsource/buck-out/v2/gen/fbcode/1b080a82294b728e/bento_kernels/pytorch/__bento_kernel_pytorch_binary__/bento_kernel_pytorch_binary#link-tree/torch/_inductor/codegen/simd.py", line 1195, in codegen_node
return self.codegen_node_schedule(
File "/data/users/ezyang/fbsource/buck-out/v2/gen/fbcode/1b080a82294b728e/bento_kernels/pytorch/__bento_kernel_pytorch_binary__/bento_kernel_pytorch_binary#link-tree/torch/_inductor/codegen/simd.py", line 1263, in codegen_node_schedule
self.codegen_node_schedule_with_kernel(node_schedule, kernel)
File "/data/users/ezyang/fbsource/buck-out/v2/gen/fbcode/1b080a82294b728e/bento_kernels/pytorch/__bento_kernel_pytorch_binary__/bento_kernel_pytorch_binary#link-tree/torch/_inductor/codegen/simd.py", line 1353, in codegen_node_schedule_with_kernel
node.codegen(index_vars)
File "/data/users/ezyang/fbsource/buck-out/v2/gen/fbcode/1b080a82294b728e/bento_kernels/pytorch/__bento_kernel_pytorch_binary__/bento_kernel_pytorch_binary#link-tree/torch/_inductor/scheduler.py", line 1020, in codegen
with V.set_ops_handler(
File "/data/users/ezyang/fbsource/buck-out/v2/gen/fbcode/1b080a82294b728e/bento_kernels/pytorch/__bento_kernel_pytorch_binary__/bento_kernel_pytorch_binary#link-tree/runtime/lib/python3.10/contextlib.py", line 135, in __enter__
return next(self.gen)
File "/data/users/ezyang/fbsource/buck-out/v2/gen/fbcode/1b080a82294b728e/bento_kernels/pytorch/__bento_kernel_pytorch_binary__/bento_kernel_pytorch_binary#link-tree/torch/_inductor/codegen/common.py", line 2071, in set_current_node
self.node_to_bounds = node._body.bounds().get_bounds()
File "<string>", line 5, in get_bounds_cache_on_self
File "/data/users/ezyang/fbsource/buck-out/v2/gen/fbcode/1b080a82294b728e/bento_kernels/pytorch/__bento_kernel_pytorch_binary__/bento_kernel_pytorch_binary#link-tree/torch/_inductor/bounds.py", line 74, in get_bounds
interpreter.run(V.get_ops_handler(), initial_env=self._bounds)
File "/data/users/ezyang/fbsource/buck-out/v2/gen/fbcode/1b080a82294b728e/bento_kernels/pytorch/__bento_kernel_pytorch_binary__/bento_kernel_pytorch_binary#link-tree/torch/_inductor/loop_body.py", line 60, in run
return super().run(*args, **kwargs)
File "/data/users/ezyang/fbsource/buck-out/v2/gen/fbcode/1b080a82294b728e/bento_kernels/pytorch/__bento_kernel_pytorch_binary__/bento_kernel_pytorch_binary#link-tree/torch/fx/interpreter.py", line 167, in run
self.env[node] = self.run_node(node)
File "/data/users/ezyang/fbsource/buck-out/v2/gen/fbcode/1b080a82294b728e/bento_kernels/pytorch/__bento_kernel_pytorch_binary__/bento_kernel_pytorch_binary#link-tree/torch/_inductor/loop_body.py", line 56, in run_node
return super().run_node(n)
File "/data/users/ezyang/fbsource/buck-out/v2/gen/fbcode/1b080a82294b728e/bento_kernels/pytorch/__bento_kernel_pytorch_binary__/bento_kernel_pytorch_binary#link-tree/torch/fx/interpreter.py", line 228, in run_node
return getattr(self, n.op)(n.target, args, kwargs)
File "/data/users/ezyang/fbsource/buck-out/v2/gen/fbcode/1b080a82294b728e/bento_kernels/pytorch/__bento_kernel_pytorch_binary__/bento_kernel_pytorch_binary#link-tree/torch/fx/interpreter.py", line 332, in call_method
return getattr(self_obj, target)(*args_tail, **kwargs)
File "/data/users/ezyang/fbsource/buck-out/v2/gen/fbcode/1b080a82294b728e/bento_kernels/pytorch/__bento_kernel_pytorch_binary__/bento_kernel_pytorch_binary#link-tree/torch/utils/_sympy/value_ranges.py", line 853, in where
a = a.boolify()
File "/data/users/ezyang/fbsource/buck-out/v2/gen/fbcode/1b080a82294b728e/bento_kernels/pytorch/__bento_kernel_pytorch_binary__/bento_kernel_pytorch_binary#link-tree/torch/utils/_sympy/value_ranges.py", line 223, in boolify
raise AssertionError(f"not bool like {self}")
torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised:
AssertionError: not bool like VR[1.00000000000000, 1.00000000000000]
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
```
This tlparse isn't great, I need to make a cleaner one.
tlparse: https://manifold.edge.x2p.facebook.net/v0/read/tree/logs/.tmpOsBxLd/index.html?bucketName=tlparse_reports&apiKey=tlparse_reports-key&withPayload=1&timeoutMsec=100#[7/0]
I think https://manifold.edge.x2p.facebook.net/v0/read/tree/logs/.tmpOsBxLd/7_0_0/inductor_post_grad_graph_900.txt?bucketName=tlparse_reports&apiKey=tlparse_reports-key&withPayload=1&timeoutMsec=100 is what induces the problem though.
I'm working on the internal repro instructions.
### Versions
main
cc @chauhang @penguinwu @bobrenjc93 @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov
| true
|
2,748,132,812
|
Fix docs load state dict
|
joldov
|
closed
|
[
"triaged",
"open source",
"Stale",
"module: dynamo",
"release notes: AO frontend"
] | 2
|
NONE
|
Fixes #141364:
- Added proper indentation and formatting
- Improved readability of the `assign` description by breaking the text into shorter sentences
- Added "NamedTuple:" before the return description to clarify the type for Sphinx
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,748,078,907
|
[dynamo] add two-point iter test
|
williamwen42
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 5
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143500
Implements the last checkbox for https://github.com/pytorch/pytorch/issues/112532.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,747,950,887
|
[Reland 2.6][dynamo][pytree] make CXX pytree traceable: `tree_{flatten,unflatten,structure,map,map_}`
|
XuehaiPan
|
closed
|
[
"open source",
"module: dynamo",
"ciflow/inductor",
"ci-no-td"
] | 4
|
COLLABORATOR
|
Reland PRs:
- #137398
- #137399
These two PRs are part of a series where the first one is in the release branch before the branch cut.
- 78543e60020b9fabd73d32ee7b1d5803a07d5e94
- #137397
This PR tries to add the follow-ups into the release branch as well.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,747,762,194
|
AOTI_TORCH_CHECK failed in aot_compile-d model
|
mstebelev
|
closed
|
[
"triaged",
"oncall: pt2",
"ciflow/inductor",
"oncall: export",
"oncall: cpu inductor",
"module: aotinductor"
] | 10
|
NONE
|
### 🐛 Describe the bug
I exported a model using `torch.export` with `strict=False`. The exported model itself works well, but if I compile it with `torch._inductor.aot_compile`, the process crashes on an internal check in the generated code.
Reproducer:
https://colab.research.google.com/drive/1U8fe9k85_S4fRurxz_M7g9kYf7Yq2CRy?usp=sharing
Exported program to reproduce is here:
[exported_program.pt2.zip](https://github.com/user-attachments/files/18181526/exported_program.pt2.zip)
Another reproducer with model generation:
https://colab.research.google.com/drive/1W930GmsJEDVMdsBKHuTQruqV6IyoLqBa?usp=sharing
The same happens with the PyTorch nightly build:
https://colab.research.google.com/drive/1WcRHyac8K2G6Ed4v1NywCoHSGDAMEstq?usp=sharing
The generated code is here:
[c5dq6ajvevkzbzmo54sijfiqey4wp7tumw7wdk34ethlfoqcf2by.cpp.zip](https://github.com/user-attachments/files/18181571/c5dq6ajvevkzbzmo54sijfiqey4wp7tumw7wdk34ethlfoqcf2by.cpp.zip)
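For reference, the overall flow being exercised is roughly the following (a generic sketch with a toy module standing in for the reporter's model):
```python
import torch

# Generic sketch of the export + AOTInductor flow described above; the toy
# module is only a placeholder for the reporter's model.
class M(torch.nn.Module):
    def forward(self, x):
        return torch.relu(x) + 1

example_inputs = (torch.randn(4),)
ep = torch.export.export(M(), example_inputs, strict=False)

# aot_compile returns the path to the compiled shared library
so_path = torch._inductor.aot_compile(ep.module(), example_inputs)
print(so_path)
```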
### Error logs
```
prediction/deployable_modules/tests/test_something.py !!! Uncaught exception: index out of bounds: 0 <= tmp7 < ks1
Exception raised from cpp_fused_index_index_put_stack_zeros_4 at /tmp/torchinductor_vscode/c5dq6ajvevkzbzmo54sijfiqey4wp7tumw7wdk34ethlfoqcf2by.cpp:1084 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x9e (0x7ff115c286de in /home/vscode/.cache/bazel/_bazel_vscode/93fd2cd9b3c5d87ae416561bff883334/execroot/__main__/bazel-out/k8-opt/bin/prediction/deployable_modules/tests/test_something.runfiles/pytorch/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: c10::detail::torchCheckFail(char const*, char const*, unsigned int, char const*) + 0x80 (0x7ff115bcd0a8 in /home/vscode/.cache/bazel/_bazel_vscode/93fd2cd9b3c5d87ae416561bff883334/execroot/__main__/bazel-out/k8-opt/bin/prediction/deployable_modules/tests/test_something.runfiles/pytorch/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #2: <unknown function> + 0x5024044 (0x7ff173653044 in /home/vscode/.cache/bazel/_bazel_vscode/93fd2cd9b3c5d87ae416561bff883334/execroot/__main__/bazel-out/k8-opt/bin/prediction/deployable_modules/tests/test_something.runfiles/pytorch/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
frame #3: cpp_fused_index_index_put_stack_zeros_4 + 0xdcd (0x7ff0bf4a4acd in /tmp/torchinductor_vscode/cxe2kfjxcuzlrkqbjd6yr6psu3h364iewxfja7zwiewr56krpm3n.so)
frame #4: torch::aot_inductor::AOTInductorModel::run_impl(AtenTensorOpaque**, AtenTensorOpaque**, void*, AOTIProxyExecutorOpaque*) + 0xded (0x7ff0bf4a621d in /tmp/torchinductor_vscode/cxe2kfjxcuzlrkqbjd6yr6psu3h364iewxfja7zwiewr56krpm3n.so)
frame #5: torch::aot_inductor::AOTInductorModelContainer::run(AtenTensorOpaque**, AtenTensorOpaque**, void*, AOTIProxyExecutorOpaque*) + 0xf1 (0x7ff0bf4b4c01 in /tmp/torchinductor_vscode/cxe2kfjxcuzlrkqbjd6yr6psu3h364iewxfja7zwiewr56krpm3n.so)
frame #6: AOTInductorModelContainerRun + 0x86 (0x7ff0bf4a9686 in /tmp/torchinductor_vscode/cxe2kfjxcuzlrkqbjd6yr6psu3h364iewxfja7zwiewr56krpm3n.so)
frame #7: torch::inductor::AOTIModelContainerRunner::run(std::vector<at::Tensor, std::allocator<at::Tensor> >&, AOTInductorStreamOpaque*) + 0x115 (0x7ff1736394e5 in /home/vscode/.cache/bazel/_bazel_vscode/93fd2cd9b3c5d87ae416561bff883334/execroot/__main__/bazel-out/k8-opt/bin/prediction/deployable_modules/tests/test_something.runfiles/pytorch/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
frame #8: torch::inductor::AOTIModelContainerRunnerCpu::run(std::vector<at::Tensor, std::allocator<at::Tensor> >&) + 0x22 (0x7ff17363a192 in /home/vscode/.cache/bazel/_bazel_vscode/93fd2cd9b3c5d87ae416561bff883334/execroot/__main__/bazel-out/k8-opt/bin/prediction/deployable_modules/tests/test_something.runfiles/pytorch/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
frame #9: <unknown function> + 0x8cbcbf (0x7ff17c609cbf in /home/vscode/.cache/bazel/_bazel_vscode/93fd2cd9b3c5d87ae416561bff883334/execroot/__main__/bazel-out/k8-opt/bin/prediction/deployable_modules/tests/test_something.runfiles/pytorch/lib/python3.10/site-packages/torch/lib/libtorch_python.so)
frame #10: <unknown function> + 0x48edbf (0x7ff17c1ccdbf in /home/vscode/.cache/bazel/_bazel_vscode/93fd2cd9b3c5d87ae416561bff883334/execroot/__main__/bazel-out/k8-opt/bin/prediction/deployable_modules/tests/test_something.runfiles/pytorch/lib/python3.10/site-packages/torch/lib/libtorch_python.so)
<omitting python frames>
, terminating !!!
```
### Versions
I first hit the problem in torch 2.5.1 built from nixpkgs, but it also reproduces in vanilla torch 2.5.1+cu121 on Google Colab.
I also checked it on the 2.6 nightly build; the difference is that I get a Python error instead, and the ipynb kernel doesn't crash.
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 @desertfire @chenyang78
| true
|
2,747,683,610
|
No reproducibility after ONNX export of fully converted QAT model
|
onnxruntime-user
|
open
|
[
"module: onnx",
"oncall: quantization",
"triaged"
] | 0
|
NONE
|
### 🐛 Describe the bug
When I use quantization-aware training, results are not reproducible between:
1. fake quantized model
2. real quantized model
3. exported ONNX model
### Code example
```python
import torch
import onnxruntime as ort
torch.manual_seed(42)
def dummy_training(model):
model.train()
image = torch.rand(1, 3, 224, 224)
model(image)
# ...
class DummyModel(torch.nn.Module):
def __init__(self):
super().__init__()
self.quant = torch.ao.quantization.QuantStub()
self.conv = torch.nn.Conv2d(3, 1, 1)
self.bn = torch.nn.BatchNorm2d(1)
self.relu = torch.nn.ReLU()
self.dequant = torch.ao.quantization.DeQuantStub()
def forward(self, x):
x = self.quant(x)
x = self.conv(x)
x = self.bn(x)
x = self.relu(x)
x = self.dequant(x)
return x
def fuse_model(self):
torch.ao.quantization.fuse_modules(self, ['conv', 'bn', 'relu'], inplace=True)
model_fp32 = DummyModel()
# prepare qat
model_fp32.eval()
model_fp32.qconfig = torch.ao.quantization.get_default_qat_qconfig('x86')
model_fp32.fuse_model()
model_fp32.train()
model_fake_quant = torch.ao.quantization.prepare_qat(model_fp32)
# run training
dummy_training(model_fake_quant)
# quantize
model_fake_quant.eval()
model_fake_quant.apply(torch.ao.quantization.disable_observer)
model_fake_quant.apply(torch.nn.intrinsic.qat.freeze_bn_stats)
model_real_quant = torch.ao.quantization.convert(model_fake_quant)
# create onnx model
torch.onnx.export(model_real_quant, torch.rand(1, 3, 224, 224), "quantized.onnx", input_names=["input"], output_names=["output"])
model_onnx = ort.InferenceSession("quantized.onnx", providers=["CPUExecutionProvider"])
# test reproducibility
x = torch.rand(1, 3, 224, 224)
res_fake_quant = model_fake_quant(x)
res_real_quant = model_real_quant(x)
res_onnx = model_onnx.run(None, {"input": x.numpy()})[0]
# all asserts will fail!!!
# torch.testing.assert_close(res_fake_quant, res_real_quant)
torch.testing.assert_close(res_real_quant, torch.tensor(res_onnx))
# torch.testing.assert_close(res_fake_quant, torch.tensor(res_onnx))
```
### Error
```
Traceback (most recent call last):
File "D:\Projekte\Quantization 2.0\qat_reproduce3.py", line 62, in <module>
torch.testing.assert_close(res_real_quant, torch.tensor(res_onnx))
File "D:\Tools\Venv\Pytorch_2.5\lib\site-packages\torch\testing\_comparison.py", line 1524, in assert_close
raise error_metas[0].to_error(msg)
AssertionError: Tensor-likes are not close!
Mismatched elements: 49 / 50176 (0.1%)
Greatest absolute difference: 0.011374831199645996 at index (0, 0, 105, 1) (up to 1e-05 allowed)
Greatest relative difference: 0.021739039570093155 at index (0, 0, 116, 173) (up to 1.3e-06 allowed)
```
### Expected Behavior
According to https://discuss.pytorch.org/t/173684 we cannot expect a fake quantized model to behave the same as the quantized model. However, I would expect that once the model is fully quantized, it behaves the same between PyTorch and ONNX Runtime. This was the case for all float32 models I tested.
In this minimal example, the problem is not severe, but for a real model (ResNet18), the number of mismatched elements grows to over 40% and the greatest absolute difference to 0.05 (greatest relative difference to infinity).
### Versions
```
Collecting environment information...
PyTorch version: 2.4.1+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 10 Pro (10.0.19045 64-Bit)
GCC version: Could not collect
Clang version: Could not collect
CMake version: Could not collect
Libc version: N/A
Python version: 3.8.10 (tags/v3.8.10:3d8993a, May 3 2021, 11:48:03) [MSC v.1928 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.19045-SP0
Is CUDA available: True
CUDA runtime version: 11.6.55
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: Quadro RTX 5000
GPU 1: Quadro RTX 5000
Nvidia driver version: Could not collect
cuDNN version: C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin\cudnn_ops_train64_8.dll
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Name: Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz
Manufacturer: GenuineIntel
Family: 179
Architecture: 9
ProcessorType: 3
DeviceID: CPU0
CurrentClockSpeed: 2201
MaxClockSpeed: 2201
L2CacheSize: 3072
L2CacheSpeed: None
Revision: 20225
Versions of relevant libraries:
[pip3] numpy==1.24.1
[pip3] onnx==1.17.0
[pip3] onnxruntime-gpu==1.19.2
[pip3] torch==2.4.1+cu118
[pip3] torchvision==0.19.1+cu118
[conda] Could not collect
```
cc @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @jgong5 @Xia-Weiwen @leslie-fang-intel @msaroufim
| true
|
2,747,451,048
|
infer_size(a, b) fails when it could return a value
|
xadupre
|
open
|
[
"triaged",
"oncall: pt2",
"module: fakeTensor"
] | 3
|
COLLABORATOR
|
### 🐛 Describe the bug
In the function [infer_size](https://github.com/pytorch/pytorch/blob/main/torch/_subclasses/fake_impls.py#L845), when both conditions `sizeA == 1` and `sizeB == 1` are unknown, and assuming the model is valid, the function could set ``expandedSizes[i]`` instead of raising an exception:
```python
if (
guard_size_oblivious(sizeA == 1)
or guard_size_oblivious(sizeB == 1)
or sizeA == sizeB
):
expandedSizes[i] = sizeB if guard_size_oblivious(sizeA == 1) else sizeA
else:
expandedSizes[i] = torch.sym_max(sizeA, sizeB)
```
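For context, the eager broadcasting rule this code mirrors looks roughly like the pure-Python sketch below (illustration only, not the FakeTensor implementation); the question in this issue is what to do when neither size is known to equal 1 at trace time:
```python
# Illustrative pure-Python sketch of broadcast shape inference (not the
# FakeTensor implementation): two sizes broadcast when they are equal or
# when one of them is 1.
def infer_size(a, b):
    ndim = max(len(a), len(b))
    out = [0] * ndim
    for i in range(ndim):
        size_a = a[i - ndim + len(a)] if i - ndim + len(a) >= 0 else 1
        size_b = b[i - ndim + len(b)] if i - ndim + len(b) >= 0 else 1
        if size_a == 1 or size_b == 1 or size_a == size_b:
            out[i] = size_b if size_a == 1 else size_a
        else:
            raise RuntimeError(f"shapes {a} and {b} are not broadcastable")
    return out

print(infer_size([9, 1, 4], [3, 1]))  # -> [9, 3, 4]
```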
### Versions
```
Collecting environment information...
PyTorch version: 2.6.0.dev20241216+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.31.2
Libc version: glibc-2.35
Python version: 3.10.12 (main, Nov 6 2024, 20:22:13) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.6.68
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4060 Laptop GPU
Nvidia driver version: 538.92
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.3.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 20
On-line CPU(s) list: 0-19
Vendor ID: GenuineIntel
Model name: 13th Gen Intel(R) Core(TM) i7-13800H
CPU family: 6
Model: 186
Thread(s) per core: 2
Core(s) per socket: 10
Socket(s): 1
Stepping: 2
BogoMIPS: 5836.79
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology tsc_reliable nonstop_tsc cpuid pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves avx_vnni umip waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize flush_l1d arch_capabilities
Virtualization: VT-x
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 480 KiB (10 instances)
L1i cache: 320 KiB (10 instances)
L2 cache: 12.5 MiB (10 instances)
L3 cache: 24 MiB (1 instance)
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] bert_pytorch==0.0.1a4
[pip3] clip-anytorch==2.6.0
[pip3] CoCa-pytorch==0.1.0
[pip3] dalle2-pytorch==1.15.6
[pip3] ema-pytorch==0.7.0
[pip3] executorch==0.4.0
[pip3] flake8==7.1.1
[pip3] mypy==1.11.2
[pip3] mypy-extensions==1.0.0
[pip3] numpy==2.2.0
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] onnx==1.18.0
[pip3] onnx-extended==0.3.0
[pip3] onnxconverter-common==1.14.0
[pip3] onnxruntime==1.20.1
[pip3] onnxruntime-genai==0.5.2
[pip3] onnxruntime-gpu==1.21.0
[pip3] onnxruntime-training==1.21.0+cu121
[pip3] onnxscript==0.1.0.dev20240905
[pip3] open_clip_torch==2.26.1
[pip3] pytorch-triton==3.2.0+git35c6c7c6
[pip3] pytorch-warmup==0.1.1
[pip3] rotary-embedding-torch==0.8.4
[pip3] torch==2.6.0.dev20241216+cu126
[pip3] torch-fidelity==0.3.0
[pip3] torch_geometric==2.4.0
[pip3] torchao==0.5.0
[pip3] torchaudio==2.6.0.dev20241216+cu126
[pip3] torchmetrics==1.4.3
[pip3] torchvision==0.22.0.dev20241216+cu126
[pip3] triton==3.1.0
[pip3] vector-quantize-pytorch==1.18.1
[conda] Could not collect
```
cc @chauhang @penguinwu @eellison @zou3519 @bdhirsh @yf225
| true
|
2,747,434,072
|
sympy.C.ConstantInteger has no method name
|
xadupre
|
open
|
[
"needs reproduction",
"triaged",
"module: fx",
"oncall: pt2",
"module: dynamic shapes"
] | 3
|
COLLABORATOR
|
### 🐛 Describe the bug
At https://github.com/pytorch/pytorch/blob/main/torch/fx/experimental/symbolic_shapes.py#L1652, the instruction ``src.name()`` fails when `src` is One or Zero (`sympy.S.One` or `sympy.S.Zero`) because `name` does not exist for these singletons.
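A quick illustration of the failure mode (small sketch, plain sympy):
```python
import sympy

# Symbol exposes .name, but numeric singletons like S.One / S.Zero do not,
# which is what makes the unguarded src.name() access fail.
s = sympy.Symbol("s0")
print(s.name)                          # 's0'
print(hasattr(sympy.S.One, "name"))    # False
print(hasattr(sympy.S.Zero, "name"))   # False
```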
### Versions
```
Collecting environment information...
PyTorch version: 2.6.0.dev20241216+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.31.2
Libc version: glibc-2.35
Python version: 3.10.12 (main, Nov 6 2024, 20:22:13) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.6.68
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4060 Laptop GPU
Nvidia driver version: 538.92
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.3.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 20
On-line CPU(s) list: 0-19
Vendor ID: GenuineIntel
Model name: 13th Gen Intel(R) Core(TM) i7-13800H
CPU family: 6
Model: 186
Thread(s) per core: 2
Core(s) per socket: 10
Socket(s): 1
Stepping: 2
BogoMIPS: 5836.79
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology tsc_reliable nonstop_tsc cpuid pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves avx_vnni umip waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize flush_l1d arch_capabilities
Virtualization: VT-x
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 480 KiB (10 instances)
L1i cache: 320 KiB (10 instances)
L2 cache: 12.5 MiB (10 instances)
L3 cache: 24 MiB (1 instance)
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] bert_pytorch==0.0.1a4
[pip3] clip-anytorch==2.6.0
[pip3] CoCa-pytorch==0.1.0
[pip3] dalle2-pytorch==1.15.6
[pip3] ema-pytorch==0.7.0
[pip3] executorch==0.4.0
[pip3] flake8==7.1.1
[pip3] mypy==1.11.2
[pip3] mypy-extensions==1.0.0
[pip3] numpy==2.2.0
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] onnx==1.18.0
[pip3] onnx-extended==0.3.0
[pip3] onnxconverter-common==1.14.0
[pip3] onnxruntime==1.20.1
[pip3] onnxruntime-genai==0.5.2
[pip3] onnxruntime-gpu==1.21.0
[pip3] onnxruntime-training==1.21.0+cu121
[pip3] onnxscript==0.1.0.dev20240905
[pip3] open_clip_torch==2.26.1
[pip3] pytorch-triton==3.2.0+git35c6c7c6
[pip3] pytorch-warmup==0.1.1
[pip3] rotary-embedding-torch==0.8.4
[pip3] torch==2.6.0.dev20241216+cu126
[pip3] torch-fidelity==0.3.0
[pip3] torch_geometric==2.4.0
[pip3] torchao==0.5.0
[pip3] torchaudio==2.6.0.dev20241216+cu126
[pip3] torchmetrics==1.4.3
[pip3] torchvision==0.22.0.dev20241216+cu126
[pip3] triton==3.1.0
[pip3] vector-quantize-pytorch==1.18.1
[conda] Could not collect
```
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @chauhang @penguinwu @bobrenjc93
| true
|
2,747,251,771
|
Fix torch.histogramdd description
|
zeshengzong
|
closed
|
[
"open source",
"Stale",
"release notes: python_frontend"
] | 2
|
CONTRIBUTOR
|
Fixes #124435
| true
|
2,747,233,054
|
[Inductor UT] Mark test case test_linear_and_cel as requires_cuda as
|
etaf
|
closed
|
[
"open source",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ciflow/xpu"
] | 1
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #142322
* __->__ #143492
* #143491
It's only for CUDA now.
Fix #143479
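Conceptually, the change amounts to gating the test on CUDA availability; a generic sketch using standard `unittest` (the actual PR uses the test suite's `requires_cuda` marker rather than this):
```python
import unittest
import torch

# Generic illustration of gating a test on CUDA availability; not the
# actual inductor test code.
class TestLinearAndCEL(unittest.TestCase):
    @unittest.skipUnless(torch.cuda.is_available(), "requires CUDA")
    def test_linear_and_cel(self):
        x = torch.randn(8, 16, device="cuda")
        self.assertEqual(x.device.type, "cuda")
```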
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,747,232,941
|
[Inductor XPU] Add XPU check for `is_big_gpu()`.
|
etaf
|
closed
|
[
"open source",
"Merged",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 1
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #142322
* __->__ #143491
Fix #143472
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,747,230,673
|
Segmentation fault (core dumped) in `replication_pad2d`
|
LongZE666
|
open
|
[
"module: crash",
"module: error checking",
"triaged",
"topic: fuzzer"
] | 2
|
NONE
|
### 🐛 Describe the bug
Under specific inputs, `replication_pad2d` triggered a crash.
```python
import torch
self = torch.full((9, 9, 2, 4, 3,), 1.251e+12, dtype=torch.float)
padding = [0, 0, 0, 0]
torch._C._nn.replication_pad2d(self, padding)
```
Output
```
Segmentation fault (core dumped)
```
### Versions
PyTorch version: 2.5.0a0+git32f585d
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.13.0 | packaged by Anaconda, Inc. | (main, Oct 7 2024, 21:29:38) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-124-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 80
On-line CPU(s) list: 0-79
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 5218R CPU @ 2.10GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 20
Socket(s): 2
Stepping: 7
CPU max MHz: 4000.0000
CPU min MHz: 800.0000
BogoMIPS: 4200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 1.3 MiB (40 instances)
L1i cache: 1.3 MiB (40 instances)
L2 cache: 40 MiB (40 instances)
L3 cache: 55 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64,66,68,70,72,74,76,78
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63,65,67,69,71,73,75,77,79
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] numpy==2.1.3
[pip3] torch==2.5.0a0+git32f585d
[conda] numpy 2.1.3 pypi_0 pypi
[conda] torch 2.5.0a0+git32f585d pypi_0 pypi
cc @malfet
| true
|
2,747,218,170
|
Floating point exception (core dumped) in `thnn_conv2d`
|
LongZE666
|
closed
|
[
"module: crash",
"module: nn",
"module: error checking",
"module: convolution",
"triaged",
"topic: fuzzer"
] | 1
|
NONE
|
### 🐛 Describe the bug
Under specific inputs, `thnn_conv2d` triggered a crash.
```python
import torch
self = torch.full((9, 2, 3, 9,), 1e+13, dtype=torch.float)
weight = torch.full((8, 2, 3, 3,), 7.89645e+16, dtype=torch.float)
kernel_size = [36028797018963968, 36028797018963968]
bias = None
stride = [1048576, 1048576]
padding = [36028797018963968, 36028797018963968]
torch._C._nn.thnn_conv2d(self, weight, kernel_size, bias, stride, padding)
```
Output
```
Floating point exception (core dumped)
```
### Versions
PyTorch version: 2.5.0a0+git32f585d
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.13.0 | packaged by Anaconda, Inc. | (main, Oct 7 2024, 21:29:38) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-124-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 80
On-line CPU(s) list: 0-79
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 5218R CPU @ 2.10GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 20
Socket(s): 2
Stepping: 7
CPU max MHz: 4000.0000
CPU min MHz: 800.0000
BogoMIPS: 4200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 1.3 MiB (40 instances)
L1i cache: 1.3 MiB (40 instances)
L2 cache: 40 MiB (40 instances)
L3 cache: 55 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64,66,68,70,72,74,76,78
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63,65,67,69,71,73,75,77,79
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] numpy==2.1.3
[pip3] torch==2.5.0a0+git32f585d
[conda] numpy 2.1.3 pypi_0 pypi
[conda] torch 2.5.0a0+git32f585d pypi_0 pypi
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki @malfet
| true
|
2,747,211,871
|
Aborted (core dumped) in `replication_pad3d`
|
LongZE666
|
closed
|
[
"module: crash",
"module: nn",
"module: error checking",
"triaged",
"topic: fuzzer"
] | 2
|
NONE
|
### 🐛 Describe the bug
Under specific inputs, `replication_pad3d` triggered a crash.
```python
import torch
self = torch.full((9, 1, 1, 9, 1, 8, 8, 7, 8,), 1.4013e-45, dtype=torch.float)
padding = [0, 0, 0, 0, 0, 0]
torch.ops.aten.replication_pad3d(self, padding)
```
Output
```
double free or corruption (out)
Aborted (core dumped)
```
### Versions
PyTorch version: 2.5.0a0+git32f585d
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.13.0 | packaged by Anaconda, Inc. | (main, Oct 7 2024, 21:29:38) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-124-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 80
On-line CPU(s) list: 0-79
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 5218R CPU @ 2.10GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 20
Socket(s): 2
Stepping: 7
CPU max MHz: 4000.0000
CPU min MHz: 800.0000
BogoMIPS: 4200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 1.3 MiB (40 instances)
L1i cache: 1.3 MiB (40 instances)
L2 cache: 40 MiB (40 instances)
L3 cache: 55 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64,66,68,70,72,74,76,78
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63,65,67,69,71,73,75,77,79
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] numpy==2.1.3
[pip3] torch==2.5.0a0+git32f585d
[conda] numpy 2.1.3 pypi_0 pypi
[conda] torch 2.5.0a0+git32f585d pypi_0 pypi
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki @malfet
| true
|
2,747,204,933
|
Aborted (core dumped) in `replication_pad1d`
|
LongZE666
|
open
|
[
"module: crash",
"module: error checking",
"triaged",
"actionable",
"topic: fuzzer"
] | 2
|
NONE
|
### 🐛 Describe the bug
Under specific inputs, `replication_pad1d` triggered a crash.
```python
import torch
self = torch.full((9, 9, 7, 1,), 3.5e+35, dtype=torch.float)
padding = [-2, -2]
torch.ops.aten.replication_pad1d(self, padding)
```
Output
```
corrupted size vs. prev_size
Aborted (core dumped)
```
### Versions
PyTorch version: 2.5.0a0+git32f585d
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.13.0 | packaged by Anaconda, Inc. | (main, Oct 7 2024, 21:29:38) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-124-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 80
On-line CPU(s) list: 0-79
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 5218R CPU @ 2.10GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 20
Socket(s): 2
Stepping: 7
CPU max MHz: 4000.0000
CPU min MHz: 800.0000
BogoMIPS: 4200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 1.3 MiB (40 instances)
L1i cache: 1.3 MiB (40 instances)
L2 cache: 40 MiB (40 instances)
L3 cache: 55 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64,66,68,70,72,74,76,78
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63,65,67,69,71,73,75,77,79
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] numpy==2.1.3
[pip3] torch==2.5.0a0+git32f585d
[conda] numpy 2.1.3 pypi_0 pypi
[conda] torch 2.5.0a0+git32f585d pypi_0 pypi
cc @malfet
| true
|
2,747,201,678
|
torch cumsum gives incorrect output for large tensors
|
mzaidi59
|
closed
|
[
"high priority",
"module: cuda",
"triaged",
"module: 64-bit"
] | 6
|
NONE
|
### 🐛 Describe the bug
We (@akhilkedia @anshmn) observed that `torch.cumsum()` returns incorrect output for large tensors.
Correct Case with small tensor -
```
import torch
a = torch.ones((4096*8, 100000), dtype=torch.float, device='cuda')
a /= 100000
c = a.cumsum(dim=-1)
print(c[0,-5:])
```
Output
```
tensor([1.0000, 1.0000, 1.0000, 1.0000, 1.0000], device='cuda:0')
```
Incorrect Case with large tensor -
```
import torch
b = torch.ones((2*4096*8, 100000), dtype=torch.float, device='cuda')
b /= 100000
d = b.cumsum(dim=-1)
print(d[0,-5:])
```
Output
```
tensor([0.6729, 0.6729, 0.6729, 0.6730, 0.6730], device='cuda:0')
```
In the incorrect case, the cumulative sum is wrong starting from index 32702 (the cumsum value there is replaced by the original tensor value). The incorrect sum then propagates to the end.
```
print(b[0, 32702:32702+4])
print(d[0, 32702:32702+4])
```
Output
```
tensor([1.0000e-05, 1.0000e-05, 1.0000e-05, 1.0000e-05], device='cuda:0')
tensor([3.2703e-01, 3.2704e-01, 1.0000e-05, 2.0000e-05], device='cuda:0')
```
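For reference, a possible workaround sketch until this is fixed (the `chunked_cumsum` helper below is only illustrative, not a PyTorch API): split along the batch dimension so each `cumsum` call stays under 2**31 elements.
```python
# Workaround sketch (illustrative only): keep every cumsum launch below 2**31
# elements by splitting along the batch dimension, which is valid here because
# rows are independent when reducing over the last dim.
import torch

def chunked_cumsum(t: torch.Tensor, dim: int = -1, max_elems: int = 2**31 - 1) -> torch.Tensor:
    rows_per_chunk = max(1, max_elems // t[0].numel())
    return torch.cat([c.cumsum(dim=dim) for c in t.split(rows_per_chunk, dim=0)], dim=0)

b = torch.ones((2 * 4096 * 8, 100000), dtype=torch.float, device="cuda") / 100000
print(chunked_cumsum(b)[0, -5:])  # prints values close to 1.0
```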
### Versions
Collecting environment information...
PyTorch version: 2.2.0a0+81ea7a4
Is debug build: False
CUDA used to build PyTorch: 12.3
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.27.9
Libc version: glibc-2.35
Python version: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-78-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.3.107
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA H100 80GB HBM3
Nvidia driver version: 535.129.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.7
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8462Y+
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
Stepping: 8
BogoMIPS: 5600.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 3 MiB (64 instances)
L1i cache: 2 MiB (64 instances)
L2 cache: 128 MiB (64 instances)
L3 cache: 120 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64,66,68,70,72,74,76,78,80,82,84,86,88,90,92,94,96,98,100,102,104,106,108,110,112,114,116,118,120,122,124,126
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63,65,67,69,71,73,75,77,79,81,83,85,87,89,91,93,95,97,99,101,103,105,107,109,111,113,115,117,119,121,123,125,127
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.24.4
[pip3] nvtx==0.2.5
[pip3] onnx==1.15.0rc2
[pip3] optree==0.10.0
[pip3] pytorch-quantization==2.1.2
[pip3] torch==2.2.0a0+81ea7a4
[pip3] torch-tensorrt==2.2.0a0
[pip3] torchdata==0.7.0a0
[pip3] torchtext==0.17.0a0
[pip3] torchvision==0.17.0a0
[pip3] triton==2.1.0+6e4932c
[conda] Could not collect
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @ptrblck @eqy
| true
|
2,747,199,702
|
Aborted (core dumped) in `mkldnn_rnn_layer`
|
LongZE666
|
open
|
[
"module: crash",
"module: error checking",
"triaged",
"module: mkldnn",
"topic: fuzzer"
] | 1
|
NONE
|
### 🐛 Describe the bug
Under specific inputs, `mkldnn_rnn_layer` triggered a crash.
```python
import torch
input = torch.full((1, 8, 1,), 4.13506, dtype=torch.float)
weight0 = torch.full((5, 8,), 2.47475, dtype=torch.float)
weight1 = torch.full((5, 8,), 8.52373, dtype=torch.float)
weight2 = torch.full((5,), 5.73429, dtype=torch.float)
weight3 = torch.full((5,), 6.42933, dtype=torch.float)
hx_ = torch.full((1, 8,), 9.12846, dtype=torch.float)
cx_ = torch.full((1, 1,), 6.00218, dtype=torch.float)
reverse = False
batch_sizes = []
mode = 2
hidden_size = 8
num_layers = 2
has_biases = True
bidirectional = False
batch_first = False
train = False
torch.mkldnn_rnn_layer(input, weight0, weight1, weight2, weight3, hx_, cx_, reverse, batch_sizes, mode, hidden_size, num_layers, has_biases, bidirectional, batch_first, train)
```
Output
```
double free or corruption (!prev)
Aborted (core dumped)
```
### Versions
PyTorch version: 2.5.0a0+git32f585d
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.13.0 | packaged by Anaconda, Inc. | (main, Oct 7 2024, 21:29:38) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-124-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 80
On-line CPU(s) list: 0-79
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 5218R CPU @ 2.10GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 20
Socket(s): 2
Stepping: 7
CPU max MHz: 4000.0000
CPU min MHz: 800.0000
BogoMIPS: 4200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 1.3 MiB (40 instances)
L1i cache: 1.3 MiB (40 instances)
L2 cache: 40 MiB (40 instances)
L3 cache: 55 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64,66,68,70,72,74,76,78
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63,65,67,69,71,73,75,77,79
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] numpy==2.1.3
[pip3] torch==2.5.0a0+git32f585d
[conda] numpy 2.1.3 pypi_0 pypi
[conda] torch 2.5.0a0+git32f585d pypi_0 pypi
cc @malfet @gujinghui @PenghuiCheng @XiaobingSuper @jianyuh @jgong5 @mingfeima @sanchitintel @ashokei @jingxu10 @min-jean-cho @yanbing-j @Guobing-Chen @Xia-Weiwen @snadampal
| true
|
2,747,195,247
|
Segmentation fault (core dumped) in `gru_cell`
|
LongZE666
|
open
|
[
"module: crash",
"module: error checking",
"triaged",
"actionable",
"module: empty tensor",
"topic: fuzzer"
] | 1
|
NONE
|
### 🐛 Describe the bug
Under specific inputs, `gru_cell` triggered a crash.
```python
import torch
input = torch.full((0, 8,), 0, dtype=torch.float)
hx = torch.full((0, 9,), 0, dtype=torch.float)
w_ih = torch.full((1, 8,), 1.251e+12, dtype=torch.float)
w_hh = torch.full((1, 9,), 1.4013e-45, dtype=torch.float)
b_ih = None
b_hh = None
torch.gru_cell(input, hx, w_ih, w_hh, b_ih, b_hh)
```
Output
```
Segmentation fault (core dumped)
```
### Versions
PyTorch version: 2.5.0a0+git32f585d
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.13.0 | packaged by Anaconda, Inc. | (main, Oct 7 2024, 21:29:38) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-124-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 80
On-line CPU(s) list: 0-79
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 5218R CPU @ 2.10GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 20
Socket(s): 2
Stepping: 7
CPU max MHz: 4000.0000
CPU min MHz: 800.0000
BogoMIPS: 4200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 1.3 MiB (40 instances)
L1i cache: 1.3 MiB (40 instances)
L2 cache: 40 MiB (40 instances)
L3 cache: 55 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64,66,68,70,72,74,76,78
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63,65,67,69,71,73,75,77,79
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] numpy==2.1.3
[pip3] torch==2.5.0a0+git32f585d
[conda] numpy 2.1.3 pypi_0 pypi
[conda] torch 2.5.0a0+git32f585d pypi_0 pypi
cc @malfet
| true
|
2,747,188,022
|
Segmentation fault (core dumped) in `embedding_bag.padding_idx`
|
LongZE666
|
open
|
[
"module: crash",
"module: error checking",
"triaged",
"module: embedding",
"module: empty tensor",
"topic: fuzzer"
] | 0
|
NONE
|
### 🐛 Describe the bug
Under specific inputs, `embedding_bag.padding_idx` triggered a crash.
```python
import torch
weight = torch.full((3, 4,), 1.11111e+15, dtype=torch.float)
indices = torch.full((5,), -2147483648, dtype=torch.long)
offsets = torch.full((0,), 0, dtype=torch.long)
scale_grad_by_freq = False
mode = 3046875451
sparse = False
per_sample_weights = None
include_last_offset = False
padding_idx = None
torch.ops.aten.embedding_bag.padding_idx(weight, indices, offsets, scale_grad_by_freq, mode, sparse, per_sample_weights, include_last_offset, padding_idx)
```
Output
```
Segmentation fault (core dumped)
```
### Versions
PyTorch version: 2.5.0a0+git32f585d
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.13.0 | packaged by Anaconda, Inc. | (main, Oct 7 2024, 21:29:38) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-124-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 80
On-line CPU(s) list: 0-79
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 5218R CPU @ 2.10GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 20
Socket(s): 2
Stepping: 7
CPU max MHz: 4000.0000
CPU min MHz: 800.0000
BogoMIPS: 4200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 1.3 MiB (40 instances)
L1i cache: 1.3 MiB (40 instances)
L2 cache: 40 MiB (40 instances)
L3 cache: 55 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64,66,68,70,72,74,76,78
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63,65,67,69,71,73,75,77,79
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] numpy==2.1.3
[pip3] torch==2.5.0a0+git32f585d
[conda] numpy 2.1.3 pypi_0 pypi
[conda] torch 2.5.0a0+git32f585d pypi_0 pypi
cc @malfet
| true
|
2,747,179,041
|
Segmentation fault (core dumped) in `embedding_backward`
|
LongZE666
|
open
|
[
"module: crash",
"module: error checking",
"triaged",
"module: embedding",
"module: empty tensor",
"topic: fuzzer"
] | 0
|
NONE
|
### 🐛 Describe the bug
Under specific inputs, `embedding_backward` triggered a crash.
```python
import torch
grad = torch.full((8, 0, 3, 7, 6, 1, 0,), 0, dtype=torch.float)
indices = torch.full((2,), 1250999896764, dtype=torch.long)
num_weights = 536870912
padding_idx = 4194304
scale_grad_by_freq = True
sparse = False
torch.ops.aten.embedding_backward(grad, indices, num_weights, padding_idx, scale_grad_by_freq, sparse)
```
Output
```
Segmentation fault (core dumped)
```
### Versions
PyTorch version: 2.5.0a0+git32f585d
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.13.0 | packaged by Anaconda, Inc. | (main, Oct 7 2024, 21:29:38) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-124-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 80
On-line CPU(s) list: 0-79
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 5218R CPU @ 2.10GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 20
Socket(s): 2
Stepping: 7
CPU max MHz: 4000.0000
CPU min MHz: 800.0000
BogoMIPS: 4200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 1.3 MiB (40 instances)
L1i cache: 1.3 MiB (40 instances)
L2 cache: 40 MiB (40 instances)
L3 cache: 55 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64,66,68,70,72,74,76,78
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63,65,67,69,71,73,75,77,79
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] numpy==2.1.3
[pip3] torch==2.5.0a0+git32f585d
[conda] numpy 2.1.3 pypi_0 pypi
[conda] torch 2.5.0a0+git32f585d pypi_0 pypi
cc @malfet
| true
|
2,747,174,664
|
Segmentation fault (core dumped) in `conv3d`
|
LongZE666
|
open
|
[
"module: crash",
"module: nn",
"module: error checking",
"module: convolution",
"triaged",
"topic: fuzzer"
] | 1
|
NONE
|
### 🐛 Describe the bug
Under specific inputs, `conv3d` triggered a crash.
```python
import torch
input = torch.full((3, 1, 3, 4, 3,), 4.44444e+12, dtype=torch.float)
weight = torch.full((3, 1, 3, 1, 3,), 1e+13, dtype=torch.float)
bias = None
stride = [1, 1, 1]
padding = "same"
dilation = [3046875451, 3046875451, 3046875451]
groups = 1
#torch.ops.aten.conv3d.padding(input, weight, bias, stride, padding, dilation, groups)
torch.nn.functional.conv3d(input, weight, bias, stride, padding, dilation, groups)
```
Output
```
Segmentation fault (core dumped)
```
### Versions
PyTorch version: 2.5.0a0+git32f585d
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.13.0 | packaged by Anaconda, Inc. | (main, Oct 7 2024, 21:29:38) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-124-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 80
On-line CPU(s) list: 0-79
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 5218R CPU @ 2.10GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 20
Socket(s): 2
Stepping: 7
CPU max MHz: 4000.0000
CPU min MHz: 800.0000
BogoMIPS: 4200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 1.3 MiB (40 instances)
L1i cache: 1.3 MiB (40 instances)
L2 cache: 40 MiB (40 instances)
L3 cache: 55 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64,66,68,70,72,74,76,78
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63,65,67,69,71,73,75,77,79
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] numpy==2.1.3
[pip3] torch==2.5.0a0+git32f585d
[conda] numpy 2.1.3 pypi_0 pypi
[conda] torch 2.5.0a0+git32f585d pypi_0 pypi
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki @malfet
| true
|
2,747,167,862
|
Segmentation fault (core dumped) in `conv1d`
|
LongZE666
|
open
|
[
"module: crash",
"module: nn",
"module: error checking",
"triaged",
"module: edge cases",
"topic: fuzzer"
] | 2
|
NONE
|
### 🐛 Describe the bug
Under specific inputs, `conv1d` triggered a crash.
```python
import torch
input = torch.full((10, 10, 9,), 0, dtype=torch.float)
weight = torch.full((2, 10, 9,), 9.0072e+15, dtype=torch.float)
bias = None
stride = [1]
padding = "same"
dilation = [2147483648]
groups = 1
# torch.ops.aten.conv1d.padding(input, weight, bias, stride, padding, dilation, groups)
torch.nn.functional.conv1d(input, weight, bias, stride, padding, dilation, groups)
```
Output
```
Segmentation fault (core dumped)
```
### Versions
PyTorch version: 2.5.0a0+git32f585d
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.13.0 | packaged by Anaconda, Inc. | (main, Oct 7 2024, 21:29:38) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-124-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 80
On-line CPU(s) list: 0-79
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 5218R CPU @ 2.10GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 20
Socket(s): 2
Stepping: 7
CPU max MHz: 4000.0000
CPU min MHz: 800.0000
BogoMIPS: 4200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 1.3 MiB (40 instances)
L1i cache: 1.3 MiB (40 instances)
L2 cache: 40 MiB (40 instances)
L3 cache: 55 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64,66,68,70,72,74,76,78
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63,65,67,69,71,73,75,77,79
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] numpy==2.1.3
[pip3] torch==2.5.0a0+git32f585d
[conda] numpy 2.1.3 pypi_0 pypi
[conda] torch 2.5.0a0+git32f585d pypi_0 pypi
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki @malfet
| true
|
2,747,158,019
|
[Break XPU] Newly added test case with CUDA hard code failed on XPU.
|
etaf
|
closed
|
[
"triaged",
"module: xpu"
] | 0
|
COLLABORATOR
|
### 🐛 Describe the bug
The newly added test case `test_linear_and_cel` in test/inductor/test_inplace_padding.py hard-codes "cuda" but runs on XPU: https://hud.pytorch.org/pr/pytorch/pytorch/142322#34573031104
```
2024-12-18T04:27:31.8569895Z =================================== FAILURES ===================================
2024-12-18T04:27:31.8570211Z ____________________ InplacePaddingTest.test_linear_and_cel ____________________
2024-12-18T04:27:31.8570515Z Traceback (most recent call last):
2024-12-18T04:27:31.8570900Z File "/var/lib/jenkins/pytorch/test/inductor/test_inplace_padding.py", line 146, in test_linear_and_cel
2024-12-18T04:27:31.8571333Z x = torch.randn(B * T, C, requires_grad=True).cuda().bfloat16()
2024-12-18T04:27:31.8571744Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/cuda/__init__.py", line 311, in _lazy_init
2024-12-18T04:27:31.8572183Z raise AssertionError("Torch not compiled with CUDA enabled")
2024-12-18T04:27:31.8572499Z AssertionError: Torch not compiled with CUDA enabled
2024-12-18T04:27:31.8572673Z
2024-12-18T04:27:31.8572814Z To execute this test, run the following from the base repo dir:
2024-12-18T04:27:31.8573196Z python test/inductor/test_inplace_padding.py InplacePaddingTest.test_linear_and_cel
```
It seems the test case is only suitable for CUDA, as it contains device-biased code like `os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"`.
We should mark it as `requires_cuda`; a minimal sketch of what we mean is below.
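The exact decorator used in test/inductor may differ; the plain `unittest.skipUnless` here is only illustrative.
```python
# Illustrative sketch only: skip the test when CUDA is unavailable, so XPU/CPU
# builds do not try to lazy-init torch.cuda. A shared requires_cuda decorator
# from the test utilities, if available, would be the tidier choice.
import unittest

import torch

@unittest.skipUnless(torch.cuda.is_available(), "test_linear_and_cel needs CUDA")
def test_linear_and_cel(self):
    ...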
### Versions
PyTorch version: 2.6.0a0+gite6c7400
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.31.2
Libc version: glibc-2.35
Python version: 3.10.15 | packaged by conda-forge | (main, Oct 16 2024, 01:24:24) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-5.15.47+prerelease24.6.13-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
cc @gujinghui @EikanWang @fengyuan14 @guangyey
| true
|
2,747,116,491
|
[DONT MERGE]Xpu win whl
|
chuanqi129
|
closed
|
[
"open source",
"ciflow/binaries"
] | 2
|
COLLABORATOR
|
Fixes #ISSUE_NUMBER
| true
|
2,747,092,342
|
Tensor size for `masked_fill` exceeds the limit supported by the MPS backend: must be less than 2**32 elements
|
rusnov
|
closed
|
[
"module: crash",
"triaged",
"module: mps"
] | 8
|
NONE
|
### 🐛 Describe the bug
I get the following error when using `masked_fill` on larger tensors. See the error and the minimal repro below.
**Error:**
```
/AppleInternal/Library/BuildRoots/.../Library/Caches/com.apple.xbs/Sources/MetalPerformanceShaders/MPSCore/Types/MPSNDArray.mm:850: failed assertion `[MPSNDArray initWithDevice:descriptor:isTextureBacked:] Error: total bytes of NDArray > 2**32'
```
**Code:**
```python
import torch
device = torch.device("mps")
mask_bool = torch.triu(torch.ones(1024, 1024, device=device), diagonal=1).bool()
attn_scores = torch.rand(48, 25, 1024, 1024, device=device)
attn_scores.masked_fill_(mask_bool, 0)
```
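For completeness, a slice-by-slice workaround (just a sketch, not a fix; it reuses the tensors from the snippet above) avoids the assertion because each kernel launch then stays under 2**32 bytes:
```python
# Workaround sketch: fill slice by slice along the first dimension so the MPS
# NDArray created for each call stays below the 2**32-byte limit.
for i in range(attn_scores.size(0)):
    attn_scores[i].masked_fill_(mask_bool, 0)
```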
### Versions
```
PyTorch version: 2.6.0.dev20241217
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 15.2 (arm64)
GCC version: Could not collect
Clang version: 16.0.0 (clang-1600.0.26.6)
CMake version: Could not collect
Libc version: N/A
Python version: 3.11.10 | packaged by conda-forge | (main, Oct 16 2024, 01:26:25) [Clang 17.0.6 ] (64-bit runtime)
Python platform: macOS-15.2-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M4 Max
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] optree==0.13.1
[pip3] torch==2.6.0.dev20241217
[pip3] torchaudio==2.6.0.dev20241217
[pip3] torchvision==0.22.0.dev20241217
[conda] numpy 1.26.4 pypi_0 pypi
[conda] optree 0.13.1 pypi_0 pypi
[conda] torch 2.6.0.dev20241217 pypi_0 pypi
[conda] torchaudio 2.6.0.dev20241217 pypi_0 pypi
[conda] torchvision 0.22.0.dev20241217 pypi_0 pypi
```
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen
| true
|
2,747,056,176
|
Address source code building command for Intel GPU support
|
ZailiWang
|
closed
|
[
"triaged",
"open source",
"Merged",
"topic: not user facing"
] | 14
|
CONTRIBUTOR
|
As the title
| true
|
2,747,022,409
|
reduce import torch time.
|
xuhancn
|
closed
|
[
"open source",
"Stale",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 2
|
COLLABORATOR
|
Fixes #140970
Original code:
<img width="413" alt="Image" src="https://github.com/user-attachments/assets/8035580c-f261-4b4c-a652-61d1666da894" />
It takes 2.1s
With this PR, loading the `torch_cpu` modules replaces the plain `import torch`:
<img width="438" alt="Image" src="https://github.com/user-attachments/assets/d6d5fe31-ae67-4cbe-b3cb-38187405b1f5" />
It takes 1.18s
The import is sped up.
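For reference, one rough way to check the wall-clock import time locally (illustration only, not how the numbers above were produced):
```python
# Rough timing sketch: measure wall-clock time of `import torch` in a fresh process.
import time

t0 = time.perf_counter()
import torch  # noqa: F401, E402

print(f"import torch took {time.perf_counter() - t0:.2f}s")
```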
But,
1. The code is hard to maintain.
2. macOS still does not work.
@gjong5
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,746,904,505
|
[ONNX] Failed to export PyTorch-2-Export-Quantized model to onnx
|
veritas-Qiu
|
open
|
[
"module: onnx",
"triaged"
] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
I tried to quantize a model following [this link](https://pytorch.org/tutorials/prototype/pt2e_quant_qat.html) (differing only in the model structure and dataset),
then exported the quantized model to ONNX via `torch.onnx.export` (the original, unquantized model exports fine), and got:
```
Traceback (most recent call last):
File "d:\my_project\train_quantized.py", line 798, in <module>
onnx_program = torch.onnx.export(model, torch_input, "my_quantized.onnx")
File "D:\anaconda3\envs\hm\lib\site-packages\torch\onnx\__init__.py", line 375, in export
export(
File "D:\anaconda3\envs\hm\lib\site-packages\torch\onnx\utils.py", line 502, in export
_export(
File "D:\anaconda3\envs\hm\lib\site-packages\torch\onnx\utils.py", line 1564, in _export
graph, params_dict, torch_out = _model_to_graph(
File "D:\anaconda3\envs\hm\lib\site-packages\torch\onnx\utils.py", line 1117, in _model_to_graph
graph = _optimize_graph(
File "D:\anaconda3\envs\hm\lib\site-packages\torch\onnx\utils.py", line 639, in _optimize_graph
graph = _C._jit_pass_onnx(graph, operator_export_type)
File "D:\anaconda3\envs\hm\lib\site-packages\torch\onnx\utils.py", line 1848, in _run_symbolic_function
raise errors.UnsupportedOperatorError(
torch.onnx.errors.UnsupportedOperatorError: ONNX export failed on an operator with unrecognized namespace quantized_decomposed::quantize_per_tensor. If you are trying to export a custom operator, make sure you registered it with the right domain and version.
```
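For what it's worth, the TorchScript-based exporter has no symbolic for the `quantized_decomposed` ops. A possible alternative worth trying is the dynamo-based exporter (available in 2.5 via the `dynamo=True` flag); whether it fully supports PT2E-quantized graphs may depend on the model and opset, so treat this as a sketch:
```python
# Possibly worth trying: the dynamo-based exporter, which goes through
# torch.export instead of TorchScript tracing. Support for PT2E-quantized
# graphs may still be incomplete, so this is only a sketch.
onnx_program = torch.onnx.export(model, torch_input, "my_quantized.onnx", dynamo=True)
```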
### Versions
Collecting environment information...
PyTorch version: 2.5.1
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 11 专业版 (10.0.22631 64 位)
GCC version: Could not collect
Clang version: Could not collect
CMake version: Could not collect
Libc version: N/A
Python version: 3.10.14 | packaged by conda-forge | (main, Mar 20 2024, 12:40:08) [MSC v.1938 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.22631-SP0
Is CUDA available: True
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4090 D
Nvidia driver version: 560.94
cuDNN version: C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\bin\cudnn_ops_train64_8.dll
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Name: Intel(R) Core(TM) i9-14900KF
Manufacturer: GenuineIntel
Family: 207
Architecture: 9
ProcessorType: 3
DeviceID: CPU0
CurrentClockSpeed: 3200
MaxClockSpeed: 3200
L2CacheSize: 32768
L2CacheSpeed: None
Revision: None
Versions of relevant libraries:
[pip3] efficientnet_pytorch==0.7.1
[pip3] flake8==7.1.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==2.1.3
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] onnx==1.16.0
[pip3] onnx-tool==0.9.0
[pip3] onnxruntime==1.17.1
[pip3] onnxscript==0.1.0.dev20241218
[pip3] onnxsim==0.4.36
[pip3] optree==0.12.1
[pip3] pytorch-lightning==2.4.0
[pip3] segmentation-models-pytorch==0.3.4
[pip3] torch==2.5.1
[pip3] torch-pruning==1.5.1
[pip3] torch-tb-profiler==0.4.3
[pip3] torch_tensorrt==2.5.0
[pip3] torchaudio==2.5.1
[pip3] torchmetrics==1.4.2
[pip3] torchvision==0.20.1
[conda] blas 1.0 mkl defaults
[conda] cuda-cudart 11.8.89 0 nvidia
[conda] cuda-cudart-dev 11.8.89 0 nvidia
[conda] cuda-cupti 11.8.87 0 nvidia
[conda] cuda-libraries 11.8.0 0 nvidia
[conda] cuda-libraries-dev 11.8.0 0 nvidia
[conda] cuda-nvrtc 11.8.89 0 nvidia
[conda] cuda-nvrtc-dev 11.8.89 0 nvidia
[conda] cuda-nvtx 11.8.86 0 nvidia
[conda] cuda-opencl 12.5.39 he0c23c2_1 conda-forge
[conda] cuda-opencl-dev 12.5.39 he0c23c2_1 conda-forge
[conda] cuda-runtime 11.8.0 0 nvidia
[conda] efficientnet-pytorch 0.7.1 pypi_0 pypi
[conda] libcublas 11.11.3.6 0 nvidia
[conda] libcublas-dev 11.11.3.6 0 nvidia
[conda] libcufft 10.9.0.58 0 nvidia
[conda] libcufft-dev 10.9.0.58 0 nvidia
[conda] libcurand 10.3.6.82 he0c23c2_0 conda-forge
[conda] libcurand-dev 10.3.6.82 he0c23c2_0 conda-forge
[conda] libcusolver 11.4.1.48 0 nvidia
[conda] libcusolver-dev 11.4.1.48 0 nvidia
[conda] libcusparse 11.7.5.86 0 nvidia
[conda] libcusparse-dev 11.7.5.86 0 nvidia
[conda] libnvjitlink 12.4.127 0 nvidia
[conda] libnvjitlink-dev 12.4.127 0 nvidia
[conda] mkl 2021.4.0 pypi_0 pypi
[conda] mkl-service 2.4.0 py310h2bbff1b_1 defaults
[conda] mkl_fft 1.3.11 py310h827c3e9_0 defaults
[conda] mkl_random 1.2.8 py310hc64d2fc_0 defaults
[conda] numpy 2.1.3 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.6.77 pypi_0 pypi
[conda] optree 0.12.1 pypi_0 pypi
[conda] pytorch 2.5.1 py3.10_cuda11.8_cudnn9_0 pytorch
[conda] pytorch-cuda 11.8 h24eeafa_6 pytorch
[conda] pytorch-lightning 2.4.0 pypi_0 pypi
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] segmentation-models-pytorch 0.3.4 pypi_0 pypi
[conda] torch-pruning 1.5.1 pypi_0 pypi
[conda] torch-tb-profiler 0.4.3 pypi_0 pypi
[conda] torch-tensorrt 2.5.0 pypi_0 pypi
[conda] torchaudio 2.5.1 pypi_0 pypi
[conda] torchmetrics 1.4.2 pypi_0 pypi
[conda] torchvision 0.20.1 pypi_0 pypi
| true
|
2,746,898,464
|
Fix space typo in warning message
|
SilverSoldier
|
closed
|
[
"oncall: distributed",
"triaged",
"open source",
"Merged",
"Stale",
"release notes: distributed (fsdp)"
] | 15
|
CONTRIBUTOR
|
The warning currently shows up like this (note "willbe" with no space):
```
/home/xxx/.local/lib/python3.11/site-packages/torch/distributed/fsdp/_state_dict_utils.py:827:
UserWarning: When using ``NO_SHARD`` for ``ShardingStrategy``, full_state_dict willbe returned.
```
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,746,897,879
|
[Break XPU] The device-biased hard code in `is_big_gpu` causes case failures on XPU.
|
etaf
|
closed
|
[
"triaged",
"module: xpu"
] | 1
|
COLLABORATOR
|
### 🐛 Describe the bug
We found the recent XPU CI failure https://hud.pytorch.org/pr/pytorch/pytorch/142322#34573031104 which is caused by #143339
```
_______________ AOTInductorTestABICompatibleGpu.test_conv3d_xpu ________________
2024-12-18T04:17:23.7324890Z Traceback (most recent call last):
2024-12-18T04:17:23.7325205Z File "/var/lib/jenkins/pytorch/test/inductor/test_torchinductor.py", line 11965, in new_test
2024-12-18T04:17:23.7325511Z return value(self)
2024-12-18T04:17:23.7325794Z File "/var/lib/jenkins/pytorch/test/inductor/test_aot_inductor.py", line 4116, in test_conv3d
2024-12-18T04:17:23.7326123Z if self.device != GPU_TYPE or not is_big_gpu():
2024-12-18T04:17:23.7326472Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_inductor/utils.py", line 1134, in is_big_gpu
2024-12-18T04:17:23.7326825Z prop = DeviceProperties.create(device)
2024-12-18T04:17:23.7327170Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_inductor/runtime/hints.py", line 139, in create
2024-12-18T04:17:23.7327539Z props = device_interface.get_device_properties(device)
2024-12-18T04:17:23.7327919Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/cuda/__init__.py", line 524, in get_device_properties
2024-12-18T04:17:23.7328298Z _lazy_init() # will define _get_device_properties
2024-12-18T04:17:23.7328643Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/cuda/__init__.py", line 311, in _lazy_init
2024-12-18T04:17:23.7329071Z raise AssertionError("Torch not compiled with CUDA enabled")
2024-12-18T04:17:23.7329345Z AssertionError: Torch not compiled with CUDA enabled
2024-12-18T04:17:23.7329493Z
2024-12-18T04:17:23.7329613Z To execute this test, run the following from the base repo dir:
2024-12-18T04:17:23.7330007Z python test/inductor/test_aot_inductor.py AOTInductorTestABICompatibleGpu.test_conv3d_xpu
```
The root cause is the hard-coded "cuda" device type in `is_big_gpu`.
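A device-agnostic direction could be to resolve the device type through the runtime device interface rather than assuming CUDA. A rough sketch (the threshold, the property name, and the device-detection logic are illustrative, not the actual fix):
```python
import torch
from torch._dynamo.device_interface import get_interface_for_device

def is_big_gpu_sketch(index: int = 0) -> bool:
    # Resolve the accelerator type at runtime instead of hard-coding "cuda".
    device_type = "xpu" if torch.xpu.is_available() else "cuda"
    device_interface = get_interface_for_device(device_type)
    props = device_interface.get_device_properties(index)
    # Threshold and attribute name are illustrative; XPU properties may expose
    # a different field than multi_processor_count.
    return getattr(props, "multi_processor_count", 0) >= 68
```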
### Versions
PyTorch version: 2.6.0a0+gite6c7400
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.31.2
Libc version: glibc-2.35
Python version: 3.10.15 | packaged by conda-forge | (main, Oct 16 2024, 01:24:24) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-5.15.47+prerelease24.6.13-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
cc @gujinghui @EikanWang @fengyuan14 @guangyey
| true
|
2,746,892,469
|
NFS errors during DataLoader shutdown when num_workers > 1 when temporary directory is on NFS
|
edoyango
|
open
|
[
"triaged",
"module: data"
] | 0
|
NONE
|
### 🐛 Describe the bug
Hi,
This is more of a mild annoyance than a show-stopping issue. It occurs on Linux when using an NFS-mounted directory as the temporary directory.
When finished iterating over a DataLoader object, I get the following errors:
```
Traceback (most recent call last):
File "/usr/lib64/python3.9/multiprocessing/util.py", line 300, in _run_finalizers
finalizer()
File "/usr/lib64/python3.9/multiprocessing/util.py", line 224, in __call__
res = self._callback(*self._args, **self._kwargs)
File "/usr/lib64/python3.9/multiprocessing/util.py", line 133, in _remove_temp_dir
rmtree(tempdir)
File "/usr/lib64/python3.9/shutil.py", line 734, in rmtree
_rmtree_safe_fd(fd, path, onerror)
File "/usr/lib64/python3.9/shutil.py", line 690, in _rmtree_safe_fd
onerror(os.unlink, fullname, sys.exc_info())
File "/usr/lib64/python3.9/shutil.py", line 688, in _rmtree_safe_fd
os.unlink(entry.name, dir_fd=topfd)
OSError: [Errno 16] Device or resource busy: '.nfs8b2479d03841bd4400015e16'
Traceback (most recent call last):
File "/usr/lib64/python3.9/multiprocessing/util.py", line 300, in _run_finalizers
finalizer()
File "/usr/lib64/python3.9/multiprocessing/util.py", line 224, in __call__
res = self._callback(*self._args, **self._kwargs)
File "/usr/lib64/python3.9/multiprocessing/util.py", line 133, in _remove_temp_dir
rmtree(tempdir)
File "/usr/lib64/python3.9/shutil.py", line 734, in rmtree
_rmtree_safe_fd(fd, path, onerror)
File "/usr/lib64/python3.9/shutil.py", line 690, in _rmtree_safe_fd
onerror(os.unlink, fullname, sys.exc_info())
File "/usr/lib64/python3.9/shutil.py", line 688, in _rmtree_safe_fd
os.unlink(entry.name, dir_fd=topfd)
OSError: [Errno 16] Device or resource busy: '.nfs17203ac1c489d74f00015e15'
```
Code to reproduce:
```
from torch.utils.data import DataLoader, Dataset

class ExampleDataset(Dataset):
    def __len__(self):
        return 100

    def __getitem__(self, index):
        return index

dataset = ExampleDataset()
dl = DataLoader(dataset, num_workers=2)

for i in dl:
    print(i)
```
I believe this is related to shutdown/cleanup of multiprocessing managers/workers https://github.com/python/cpython/issues/58186. The error occurs precisely when shutting down the workers https://github.com/pytorch/pytorch/blob/main/torch/utils/data/dataloader.py#L1582, but I don't understand enough about how the dataloader works to suggest a fix.
I know in most cases it's easier to just use a local directory as tmp, but our cluster (academic HPC) is set up such that each node has minimal local disk space and local disk space is shared by multiple users.
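One possible workaround sketch, assuming each node has some local scratch path such as `/dev/shm` (the path here is purely illustrative): point Python's temporary directory at local storage before the DataLoader workers are spawned, since multiprocessing derives its temp dir from `tempfile`.
```python
import os
import tempfile

# Illustrative workaround: use node-local storage for Python's temp files so the
# multiprocessing cleanup at DataLoader shutdown does not touch NFS.
os.environ["TMPDIR"] = "/dev/shm"  # assumed to exist and be local on the node
tempfile.tempdir = None  # force tempfile to re-read TMPDIR on next use
```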
Thanks,
Ed
### Versions
Collecting environment information...
PyTorch version: 1.13.1+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Red Hat Enterprise Linux 9.1 (Plow) (x86_64)
GCC version: (GCC) 11.4.1 20231218 (Red Hat 11.4.1-3)
Clang version: Could not collect
CMake version: version 3.26.5
Libc version: glibc-2.34
Python version: 3.9.18 (main, Jul 3 2024, 00:00:00) [GCC 11.4.1 20231218 (Red Hat 11.4.1-3)] (64-bit runtime)
Python platform: Linux-5.14.0-162.23.1.el9_1.x86_64-x86_64-with-glibc2.34
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 45 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 48
On-line CPU(s) list: 0-47
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz
CPU family: 6
Model: 79
Thread(s) per core: 1
Core(s) per socket: 1
Socket(s): 48
Stepping: 1
BogoMIPS: 5187.98
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon nopl xtopology tsc_reliable nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 avx2 smep bmi2 invpcid rdseed adx smap xsaveopt arat md_clear flush_l1d arch_capabilities
Hypervisor vendor: VMware
Virtualization type: full
L1d cache: 1.5 MiB (48 instances)
L1i cache: 1.5 MiB (48 instances)
L2 cache: 12 MiB (48 instances)
L3 cache: 1.6 GiB (48 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-23
NUMA node1 CPU(s): 24-47
Vulnerability Itlb multihit: KVM: Mitigation: VMX unsupported
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu11==11.10.3.66
[pip3] nvidia-cuda-nvrtc-cu11==11.7.99
[pip3] nvidia-cuda-runtime-cu11==11.7.99
[pip3] nvidia-cudnn-cu11==8.5.0.96
[pip3] onnx==1.17.0
[pip3] onnxruntime==1.19.2
[pip3] torch==1.13.1
[pip3] torchvision==0.14.1
[conda] Could not collect
cc @andrewkho @divyanshk @VitalyFedyunin @dzhulgakov
| true
|
2,746,885,941
|
[c10d] thread safety issue with CUDAEventCache
|
suo
|
closed
|
[
"oncall: distributed",
"module: c10d"
] | 4
|
MEMBER
|
The following race can happen if we ever schedule NCCL work from a different thread than the original Python thread, and that thread dies before process shutdown.
1. The CUDAEventCache is [thread-local](https://github.com/pytorch/pytorch/blob/main/torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp#L839-L841).
2. WorkNCCL [stores a cached Event](https://github.com/pytorch/pytorch/blob/main/torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp#L479-L484).
3. The cached Event holds a reference to the cache that created it, via a [captured `this` pointer](https://github.com/pytorch/pytorch/blob/main/torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp#L810).
4. The thread that created the WorkNCCL could die at any time, destructing its thread-local CUDAEventCache and leaving the reference in (3) dangling.
5. On destruction, we [attempt to drain](https://github.com/pytorch/pytorch/blob/main/torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp#L2245) completed `Work`s, and try to dereference this dangling reference and explode.
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,746,853,880
|
Larger numerical divergence after applying torch.compile on a batch-linear model
|
maybeLee
|
closed
|
[
"triaged",
"oncall: pt2",
"module: inductor"
] | 7
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Hi, I am trying to use torch.compile to optimize a model's performance. However, I notice that the optimized model has a larger numerical divergence than the original one.
Here is the simplified reproducible script:
```
import torch
from torch import nn

torch.manual_seed(0)

NUM_INPUT = 50
INPUT_SIZE = 500
NUM_LINEAR = 2
DEVICE = "cuda"


class SimpleModel(nn.Module):
    def __init__(self, device=DEVICE):
        super().__init__()
        self.device = device
        self.weights = torch.nn.ParameterList(
            [torch.nn.Parameter(torch.randn(NUM_INPUT, INPUT_SIZE // NUM_LINEAR)) for _ in range(NUM_LINEAR)]
        ).to(self.device)
        self.biases = torch.nn.ParameterList(
            [torch.randn(NUM_INPUT).to(self.device) for _ in range(NUM_LINEAR)]
        ).to(self.device)

    def to_float(self):
        for layer in [self.weights, self.biases]:
            layer = layer.cpu().float().to(self.device)

    def to_double(self):
        for layer in [self.weights, self.biases]:
            layer = layer.cpu().double().to(self.device)

    def forward(self, x):
        l1_out = torch.split(x.to(self.device), INPUT_SIZE // NUM_LINEAR, dim=1)
        l1_linear = []
        for i in range(len(l1_out)):
            l1_linear.append(
                torch.nn.functional.linear(
                    l1_out[i], self.weights[i], self.biases[i])
            )
        l1_out = torch.cat(l1_linear, dim=1)
        return l1_out


arg = torch.randn(NUM_INPUT, INPUT_SIZE, device=DEVICE)
arg = arg + torch.randn(NUM_INPUT, INPUT_SIZE, device=DEVICE) + torch.tensor(100, dtype=torch.float32, device=DEVICE)
low_input = arg.to(torch.float32)
high_input = arg.to(torch.float64)

model = SimpleModel()
fp32_origin = model(low_input)
model.to_double()
fp64_ref = model(high_input)

optimized_model = torch.compile(model).to(DEVICE)
optimized_model.to_float()
fp32_compiled = optimized_model(low_input)

print("Eager divergence", torch.max(torch.abs(fp32_origin - fp64_ref)))
print("Compile divergence", torch.max(torch.abs(fp32_compiled - fp64_ref)))
```
Output:
```
Eager divergence tensor(0.0008, device='cuda:0', dtype=torch.float64, grad_fn=<MaxBackward1>)
Compile divergence tensor(0.0018, device='cuda:0', dtype=torch.float64, grad_fn=<MaxBackward1>)
```
Essentially, the model is quite simple (only two linear layers), yet the numerical divergence (i.e., fp64 vs. fp32) increases from 0.0008 to 0.0018.
Here I did some simple checks:
1. This issue only occurs in GPU but not CPU.
2. If the NUM_INPUT is small (e.g., 30), this issue does not occur.
I am very interested in your opinion on this issue. In particular, what extent of precision degradation introduced by torch.compile do you think is acceptable?
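For what it's worth, a small helper sketch for framing the divergence in relative terms against the same fp64 reference (variable names reuse the ones from the repro above):
```python
import torch

def rel_error(approx: torch.Tensor, ref: torch.Tensor) -> torch.Tensor:
    # Max absolute error normalized by the magnitude of the fp64 reference.
    return (approx.double() - ref).abs().max() / ref.abs().max()

# Usage with the tensors from the repro:
# print("Eager rel. error  ", rel_error(fp32_origin, fp64_ref).item())
# print("Compile rel. error", rel_error(fp32_compiled, fp64_ref).item())
```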
### Versions
Versions of relevant libraries:
[pip3] numpy==2.0.2
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] onnx==1.17.0
[pip3] onnxscript==0.1.0.dev20241112
[pip3] optree==0.13.1
[pip3] torch==2.5.0
[pip3] torchvision==0.20.1
[pip3] torchviz==0.0.2
[pip3] triton==3.1.0
[conda] numpy 2.0.2 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] optree 0.13.1 pypi_0 pypi
[conda] torch 2.5.0 pypi_0 pypi
[conda] torchvision 0.20.1 pypi_0 pypi
[conda] torchviz 0.0.2 pypi_0 pypi
[conda] triton 3.1.0 pypi_0 pypi
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov
| true
|
2,746,738,299
|
dummy pr
|
xuhancn
|
closed
|
[
"open source",
"topic: not user facing",
"ciflow/xpu"
] | 1
|
COLLABORATOR
|
Fixes #ISSUE_NUMBER
| true
|
2,746,731,051
|
log guard_size_oblivious call sites
|
bobrenjc93
|
closed
|
[
"release notes: fx",
"topic: not user facing",
"fx",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143467
This makes it much easier to know what's going on when we guard on data-dependent operations. Currently, if we throw a guard-on-data-dependent error, we only show the Python invocation that caused it (not the underlying leaf C++ call that truly causes it).
For reviewers: I was torn on whether or not to make this a special thing separate from TORCH_LOGS="dynamic" but decided against it since dynamic is already opt-in. The benefit is that we now get much more visibility when users run with this flag on, but the logs do get a bit more spewy. I'm still open to the idea of moving these logs somewhere else and maybe building a UI on top of it to make it easier to manage the information overload.
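For completeness, the same logging can be enabled from Python as well; a minimal sketch using the documented `dynamic` artifact of `torch._logging.set_logs`:
```python
import logging
import torch

# Python-level equivalent of running with TORCH_LOGS="dynamic", which is where
# the new guard_size_oblivious call-site logs will show up.
torch._logging.set_logs(dynamic=logging.DEBUG)
```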
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
2,746,685,478
|
Support Dict Parameter Type for custom_op
|
xinyu-intel
|
open
|
[
"triaged",
"module: custom-operators",
"oncall: pt2",
"module: pt2-dispatcher"
] | 4
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Is it possible to support `infer_schema` for a custom_op that has a Dict input parameter? I think the op schema can support a signature such as `(Tensor t, Dict(str, Any) meta) -> Tensor`. Also, can such inputs be mutated?
```python
import torch
from typing import Dict, Any

@torch.library.custom_op("host_code::collect_max", mutates_args=(), device_types="cpu")
def fn(t: torch.Tensor, meta: Dict[str, Any]) -> torch.Tensor:
    meta["max"] = t.max().item()
    return t.clone()

@torch.library.register_fake("host_code::collect_max")
def fn_fake(t: torch.Tensor, meta: Dict[str, Any]) -> torch.Tensor:
    return t

t = torch.randn((3, 3))
meta = {}
fn(t, meta)
print(meta)
```
### Error logs
```
Traceback (most recent call last):
File "/Users/chenxiny/workspace/dynamo_case/custom_op.py", line 5, in <module>
def fn(t: torch.Tensor, meta: Dict[str, Any]):
File "/Users/chenxiny/miniforge3/envs/torch-metal/lib/python3.10/site-packages/torch/_library/custom_ops.py", line 121, in inner
schema_str = torch.library.infer_schema(fn, mutates_args=mutates_args)
File "/Users/chenxiny/miniforge3/envs/torch-metal/lib/python3.10/site-packages/torch/_library/infer_schema.py", line 106, in infer_schema
error_fn(
File "/Users/chenxiny/miniforge3/envs/torch-metal/lib/python3.10/site-packages/torch/_library/infer_schema.py", line 58, in error_fn
raise ValueError(
ValueError: infer_schema(func): Parameter meta has unsupported type typing.Dict[str, typing.Any]. The valid types are: dict_keys([<class 'torch.Tensor'>, typing.Optional[torch.Tensor], typing.Sequence[torch.Tensor], typing.List[torch.Tensor], typing.Sequence[typing.Optional[torch.Tensor]], typing.List[typing.Optional[torch.Tensor]], <class 'int'>, typing.Optional[int], typing.Sequence[int], typing.List[int], typing.Optional[typing.Sequence[int]], typing.Optional[typing.List[int]], <class 'float'>, typing.Optional[float], typing.Sequence[float], typing.List[float], typing.Optional[typing.Sequence[float]], typing.Optional[typing.List[float]], <class 'bool'>, typing.Optional[bool], typing.Sequence[bool], typing.List[bool], typing.Optional[typing.Sequence[bool]], typing.Optional[typing.List[bool]], <class 'str'>, typing.Optional[str], typing.Union[int, float, bool], typing.Union[int, float, bool, NoneType], typing.Sequence[typing.Union[int, float, bool]], typing.List[typing.Union[int, float, bool]], <class 'torch.dtype'>, typing.Optional[torch.dtype], <class 'torch.device'>, typing.Optional[torch.device]]). Got func with signature (t: torch.Tensor, meta: Dict[str, Any]))
```
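As a stopgap while Dict parameters are unsupported, a workaround sketch that stays within the currently supported schema types is to return the collected value as a tensor and build the dict outside the op (the op name here is illustrative):
```python
import torch

@torch.library.custom_op("host_code::collect_max_v2", mutates_args=(), device_types="cpu")
def collect_max_v2(t: torch.Tensor) -> torch.Tensor:
    # Return the statistic as a tensor; the Python dict is populated by the caller.
    return t.max().reshape(1)

@torch.library.register_fake("host_code::collect_max_v2")
def _(t: torch.Tensor) -> torch.Tensor:
    return t.new_empty(1)

t = torch.randn(3, 3)
meta = {"max": collect_max_v2(t).item()}
print(meta)
```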
### Versions
```
PyTorch version: 2.6.0.dev20241126
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 15.1.1 (arm64)
GCC version: Could not collect
Clang version: 15.0.0 (clang-1500.3.9.4)
CMake version: Could not collect
Libc version: N/A
Python version: 3.10.15 | packaged by conda-forge | (main, Oct 16 2024, 01:24:20) [Clang 17.0.6 ] (64-bit runtime)
Python platform: macOS-15.1.1-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M1 Pro
Versions of relevant libraries:
[pip3] numpy==2.1.2
[pip3] torch==2.6.0.dev20241126
[pip3] torchaudio==2.5.0.dev20241126
[pip3] torchvision==0.20.0.dev20241126
[conda] numpy 2.1.2 pypi_0 pypi
[conda] torch 2.6.0.dev20241126 pypi_0 pypi
[conda] torchaudio 2.5.0.dev20241126 pypi_0 pypi
[conda] torchvision 0.20.0.dev20241126 pypi_0 pypi
```
cc @chauhang @penguinwu @zou3519 @bdhirsh @yf225
| true
|
2,746,631,749
|
[ROCm] MI300X FP8 scaled_mm is extremely slow on nightly
|
OrenLeung
|
open
|
[
"module: performance",
"module: rocm",
"triaged"
] | 22
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Hi AMD Team,
`torch._scaled_mm` is extremely slow on MI300X at ~100 TFLOP/s versus ~1200 TFLOP/s on H100.
Can you look into this?
cc: @hliuca
## ROCm
```
m=16384 n=8192 k=1280: 108.07154472843483
m=16384 n=1024 k=8192: 110.56206220309926
m=16384 n=8192 k=7168: 109.66662842248034
m=16384 n=3584 k=8192: 110.59228182207659
m=8192 n=8192 k=8192: 109.86138366796457
```
## H100
```
m=16384 n=8192 k=1280: 1239.4133451945781
m=16384 n=1024 k=8192: 1347.0844475792383
m=16384 n=8192 k=7168: 1332.2623882545472
m=16384 n=3584 k=8192: 1309.4453003269748
m=8192 n=8192 k=8192: 1304.5406858844613
```
## Reprod
```
import time
import torch
from triton.testing import do_bench

torch.manual_seed(0)

repeats = 200
warmup = 30
timeout = 0.5
device = 'cuda'

# GEMM Shapes
shapes = [
    (16384, 8192, 1280),
    (16384, 1024, 8192),
    (16384, 8192, 7168),
    (16384, 3584, 8192),
    (8192, 8192, 8192)
]

results = []

for (m, n, k) in shapes:
    # FLOPS
    nFLOPS = 2 * m * n * k

    a_fp8_e5m2 = torch.randn(m, k, device=device).to(torch.float8_e5m2fnuz)
    b_fp8_e5m2 = torch.randn(n, k, device=device).to(torch.float8_e4m3fnuz).transpose(-1, -2)

    scale_a = torch.tensor(1.0, device=device, dtype=torch.float32)
    scale_b = torch.tensor(1.0, device=device, dtype=torch.float32)

    ms_fp8_scaled_mm_e4m3 = do_bench(lambda: torch._scaled_mm(a_fp8_e5m2, b_fp8_e5m2, scale_a, scale_b), warmup=warmup, rep=repeats)
    tflops_fp8_scaled_mm_e4m3 = nFLOPS / ms_fp8_scaled_mm_e4m3 * 1e-9

    time.sleep(timeout)

    print(f"{m=} {n=} {k=}: {tflops_fp8_scaled_mm_e4m3}")
```
cc: @hliuca
### Versions
```bash
pip list | grep torch
pytorch-triton-rocm 3.2.0+git35c6c7c6
torch 2.6.0.dev20241216+rocm6.2.4
```
cc @msaroufim @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,746,624,804
|
Add a register_replacement to fix float8 delayed scaling kernel fusion issues
|
y-sq
|
closed
|
[
"fb-exported",
"Stale",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 8
|
CONTRIBUTOR
|
Summary:
We previously tried the `defer_reduction_split_after_fusion` approach to fix the fusion issue.
However, since we agree that the longer-term solution is cooperative reduction + tiled reduction, the defer-reduction-split approach would also only be a shorter-term solution, and we want to keep the shorter-term solution simpler.
This PR uses the `pattern_matcher` to match the fp8 delayed scaling pattern and simply replaces every `max(abs(x))` with `max(abs(x), dim=-1)` followed by a final `max()`. It generates the same result as the defer-reduction-split approach.
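A minimal sketch of the intended rewrite (illustrative functions, not the actual pattern-matcher registration):
```python
import torch

def amax_pattern(x: torch.Tensor) -> torch.Tensor:
    # Original pattern: a single full reduction.
    return torch.max(torch.abs(x))

def amax_replacement(x: torch.Tensor) -> torch.Tensor:
    # Replacement: reduce over the last dim first, then finish the reduction,
    # which exposes a reduction split that the scheduler can fuse with the cast.
    return torch.max(torch.abs(x), dim=-1).values.max()
```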
Test Plan:
Run float8 training script. Amax and cast are fused in delayed scaling; dynamic scaling is not affected.
The delayed scaling kernel also looks reasonable to me, https://fburl.com/phabricator/iqmlollk
```
TORCH_LOGS="fusion" TORCHINDUCTOR_LOOP_ORDERING_AFTER_FUSION=1 buck run mode/opt scripts/shuqiyang/test_inductor:test_float8 -- ~/local/tmp/20241120_test --dtype_filter float8 --scaling_type_input delayed --scaling_type_weight delayed --scaling_type_grad_output delayed 2>&1 | tee ~/test_compile.txt
```
```
TORCH_LOGS="fusion" TORCHINDUCTOR_LOOP_ORDERING_AFTER_FUSION=1 buck run mode/opt scripts/shuqiyang/test_inductor:test_float8 -- ~/local/tmp/20241120_test --dtype_filter float8 --scaling_type_input dynamic --scaling_type_weight dynamic --scaling_type_grad_output dynamic 2>&1 | tee ~/test_compile.txt
```
Differential Revision: D67135795
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,746,613,893
|
unreasonable ConstraintViolationError when using torch dynamo to compile torch model
|
Jason3900
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamic shapes",
"module: dynamo",
"oncall: export"
] | 3
|
NONE
|
### 🐛 Describe the bug
I'm using the torch dynamo backend to compile a model for export to TensorRT.
```python
inputs = [torch.randn(1, 3, 28, 288, 512).cuda().to(torch.float16)]

dynamic_h = torch.export.Dim("dim_3", min=224, max=640)
dynamic_w = torch.export.Dim("dim_4", min=224, max=640)
dynamic_t = torch.export.Dim("dim_2", min=1, max=200)
dynamic_shapes = {"x": {2: dynamic_t, 3: dynamic_h, 4: dynamic_w}}

exp_program = torch.export.export(encoder_model, args=tuple(inputs), dynamic_shapes=dynamic_shapes, strict=True)

trt_model = torch_tensorrt.dynamo.compile(
    exported_program=exp_program,
    assume_dynamic_shape_support=True,
    inputs=inputs,
    make_refitable=True,
    disable_tf32=True,
    debug=True,
    enabled_precisions={torch.half, torch.float},
    torch_executed_ops={},
    min_block_size=17,
    truncate_double=True,
    use_python_runtime=False)
```
I've confirmed that the h and w dims can be dynamic without a constraint error, but dim_2 cannot, which is quite strange.
I traced the cause and found it's here:
```python
class GroupNormSpatial(nn.Module):
    """GroupNorm with spatial dimensions ignored."""

    def __init__(self, num_groups, num_channels, epsilon: float = 1e-5, affine=True):
        super().__init__()
        # affine=False # TODO: for tensorrt only
        self.norm_fn = nn.GroupNorm(num_groups=num_groups, num_channels=num_channels, eps=epsilon, affine=affine)

    def forward(self, inputs: torch.Tensor) -> torch.Tensor:
        if int(inputs.ndim) == 5:  # video
            b, c, t, h, w = inputs.shape
            inputs = inputs.permute(0, 2, 1, 3, 4).flatten(start_dim=0, end_dim=1)  # ERROR
            out = self.norm_fn(inputs)
            out = out.reshape(b, t, c, h, w).permute(0, 2, 1, 3, 4)  # ERROR
            return out
        else:  # Image, b c h w -> b c h w
            out = self.norm_fn(inputs)
            return out
```
```
I1218 02:40:31.595000 155463 torch/_utils_internal.py:116] [0/0] CompilationMetrics(compile_id='0/0', frame_key='1', co_name='forward', co_filename='xxx,py', co_firstlineno=50, cache_size=0, accumulated_cache_size=0, guard_count=None, shape_env_guard_count=None, graph_op_count=None, graph_node_count=None, graph_input_count=None, start_time=1734489622.2484767, entire_frame_compile_time_s=None, backend_compile_time_s=None, inductor_compile_time_s=None, code_gen_time_s=None, fail_type="<class 'torch.fx.experimental.symbolic_shapes.ConstraintViolationError'>", fail_reason='Constraints violated (dim_2)! For more information, run with TORCH_LOGS="+dynamic".\n - Not all values of dim_2 = L[\'x\'].size()[2] in the specified range dim_2 <= 200 satisfy the generated guard Ne(Mod(1, ((L[\'x\'].size()[2] - 1)//2) + 1), 0).\n - Not all values of dim_2 = L[\'x\'].size()[2] in the specified range dim_2 <= 200 satisfy the generated guard Ne(Mod(1, ((L[\'x\'].size()[2] - 1)//4) + 1), 0).\n - Not all values of dim_2 = L[\'x\'].size()[2] in the specified range dim_2 <= 200 satisfy the generated guard 9 <= L[\'x\'].size()[2] and L[\'x\'].size()[2] <= 200', fail_user_frame_filename=None, fail_user_frame_lineno=None, non_compliant_ops=set(), compliant_custom_ops=set(), restart_reasons=set(), dynamo_time_before_restart_s=9.347490787506104, has_guarded_code=False, possibly_missed_reinplacing_opportunities=None)
```
### Versions
I'm using ngc torch `nvcr.io/nvidia/pytorch:24.10-py3`
cc @chauhang @penguinwu @ezyang @bobrenjc93 @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4
| true
|
2,746,588,868
|
Fix torch._refs.tensor error with empty list
|
zeshengzong
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 10
|
CONTRIBUTOR
|
Fixes #143216
**Test Result**
**Before**
```python
>>> import torch
>>> torch._refs.tensor([])
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/zong/code/pytorch/torch/_refs/__init__.py", line 6614, in tensor
new_tensor = _internal_new_from_data(
^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/zong/code/pytorch/torch/_refs/__init__.py", line 6596, in _internal_new_from_data
tensor = _recursive_build(inferred_scalar_type, data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/zong/code/pytorch/torch/_refs/__init__.py", line 6545, in _recursive_build
return torch.stack([_recursive_build(scalarType, item) for item in seq])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: stack expects a non-empty TensorList
```
**After**
```python
>>> torch._refs.tensor([])
tensor([])
>>> torch._refs.tensor([], device='cuda')
tensor([], device='cuda:0')
```
```bash
$ pytest test/test_tensor_creation_ops.py -k test_refs_tensor
```

```bash
$ lintrunner
```

cc @ezyang @albanD
| true
|
2,746,555,992
|
[Inductor][CPU] disable bernoulli_p decomposition
|
blzheng
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 5
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143460
Fix https://github.com/pytorch/pytorch/issues/142853
`fallback_random=True` should cause RNG to match between compile/eager (by having compile fall back to eager for RNG ops), but the `bernoulli_p` decompose function is not fully consistent with the eager CPU implementation.
We remove the decomp and keep the version for `fallback_random=False`.
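For reference, a minimal sketch of how the fallback is enabled from user code (the config attribute is the one referenced above):
```python
import torch
import torch._inductor.config as inductor_config

# With fallback_random=True, compiled RNG ops fall back to eager so that
# random numbers match between eager and torch.compile.
inductor_config.fallback_random = True
```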
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,746,520,622
|
Add save_config and load_config arguments to torch.save/load
|
mikaylagawarecki
|
closed
|
[
"Stale",
"release notes: python_frontend",
"topic: improvements"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143459
* #143342
* #143324
| true
|
2,746,502,522
|
[Inductor] move custom pre pass
|
Valentine233
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
COLLABORATOR
|
Fixes #143363.
Move `joint_custom_pre` pass after `remove_noop_ops`/`constant_folding`, in order to get the same behavior as `pattern_matcher`.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,746,476,513
|
[while_loop][jit inductor] auto-unspecialize int input and output to unbacked symints
|
ydwu4
|
open
|
[
"Stale",
"topic: not user facing",
"module: inductor",
"module: dynamo",
"ciflow/inductor"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143457
cpp_wrapper codegen doesn't work yet because:
1. the wrapper codegen logic assumes tensor outputs; we need to support int outputs
2. since cpp is strongly typed, we must declare the variable to be either tensor or int and assign the output to the outer declared variable, which requires the code refactoring mentioned in https://github.com/pytorch/pytorch/blob/main/torch/_inductor/codegen/wrapper.py#L2577-L2582.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,746,476,059
|
[hop][inductor] track the dependency on unbacked symbols correctly with constant_args for hops
|
ydwu4
|
closed
|
[
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ci-no-td"
] | 11
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143456
Before this PR, we got an undefined symbol error in the output code when an unbacked symint was **only** used in the hop, because we didn't correctly record the hop's dependency on the unbacked symbols and it got DCEed accidentally.
This PR adds the symbol arguments to `constant_args`, where the dependencies can be correctly constructed when `get_unbacked_symbol_uses` is called to check constant_args.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,746,464,964
|
Add strict kwarg to `nn.Module.set_submodule` and fix bug for non dot delineated strings
|
mariovas3
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"release notes: nn",
"topic: improvements"
] | 16
|
CONTRIBUTOR
|
Before this fix, `set_submodule` used to create new leaf modules when the target was not a dot-delimited string. After the fix, it will not create a new attribute if the target is a non-dot-delimited string. If you want to create leaf nodes under `nn.Module` parent nodes, you can use `replace_or_create_new_leaf_module`.
Fixes https://github.com/pytorch/pytorch/issues/143441
| true
|
2,746,453,932
|
[foreach_map] Add foreach_map Adam impl to compiled optimizer tests
|
mlazos
|
closed
|
[
"Merged",
"ciflow/trunk",
"module: inductor",
"ciflow/inductor",
"release notes: inductor"
] | 3
|
CONTRIBUTOR
|
Adds a foreach_map backed Adam to compiled optimizer tests
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,746,444,346
|
Compiler Bisector Improvements
|
eellison
|
open
|
[
"triaged",
"module: inductor"
] | 6
|
CONTRIBUTOR
|
### 🚀 The feature, motivation and pitch
@ezyang has been using Compiler Bisector internally and ran into a few feature requests.
- [ ] Query for backend, subsystems
- [ ] Config option to check meta stride for all ops, not just custom ops
- [ ] Option to specify particular backend/subsystem to iterate over
- [ ] Better print outs of how to manually run a particular bisect - for instance, if we bisected lowerings it should inform user to compare: `TORCH_BISECT_SUBSYSTEM=lowerings TORCH_BISECT_BACKEND=inductor TORCH_BISECT_MAX=21` and `TORCH_BISECT_SUBSYSTEM=lowerings TORCH_BISECT_BACKEND=inductor TORCH_BISECT_MAX=20`
Other requests
- [ ] Option to bisect which compiled graph is causing the issue first. Potentially we would bisect to the bad fwd/backward. then see if fwd, back, or joint graph passes/partitioner is the issue.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov @exclamaforte who was interested
### Alternatives
_No response_
### Additional context
_No response_
| true
|
2,746,429,384
|
[Inductor] Fix _can_be_inplace function (#143279)
|
jiayisunx
|
closed
|
[
"open source",
"module: inductor",
"ciflow/inductor"
] | 1
|
COLLABORATOR
|
Summary:
Modify _can_be_inplace function: return False if `_other.data` is an instance of `ir.BaseView`.
Fix https://github.com/pytorch/pytorch/issues/143280.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/143279
Approved by: https://github.com/leslie-fang-intel, https://github.com/jansel, https://github.com/jgong5
Fixes #ISSUE_NUMBER
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,746,381,128
|
[MTIA] (4/n) Implement PyTorch APIs to query/reset device peak memory usage
|
chaos5958
|
closed
|
[
"fb-exported",
"Stale",
"ciflow/trunk",
"topic: not user facing"
] | 6
|
CONTRIBUTOR
|
Summary: This diff implements the "reset_peak_memory_stats" PyTorch API for MTIA devices, which resets the peak device DRAM usage
Test Plan:
```
buck2 test //mtia/host_runtime/torch_mtia/tests:test_torch_mtia_api -- -r test_reset_peak_memory_stats
```
https://www.internalfb.com/intern/testinfra/testrun/281475371812293
Reviewed By: yuhc, egienvalue
Differential Revision: D67120168
| true
|
2,746,356,265
|
Make Inductor cpp backend enable_floating_point_contract_flag to take string
|
hl475
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 18
|
CONTRIBUTOR
|
Differential Revision: D66269001
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,746,323,174
|
[MPS] Add `aten::angle`
|
sezelt
|
closed
|
[
"triaged",
"open source",
"Merged",
"release notes: mps",
"ciflow/mps"
] | 6
|
CONTRIBUTOR
|
This adds an MPS backend implementation for `aten::angle` and `aten::angle_out` (mentioned in issue #77764), following the example #78408.
| true
|
2,746,310,552
|
Enable CPP/CUDAExtension with py_limited_api for python agnosticism
|
pytorchbot
|
closed
|
[
"open source"
] | 1
|
COLLABORATOR
|
Getting tested with ao, but now there is a real test I added.
## What does this PR do?
We want to allow custom PyTorch extensions to be able to build one wheel for multiple Python versions, in other words, achieve python agnosticism. It turns out that there is such a way that setuptools/Python provides already! Namely, if the user promises to use only the Python limited API in their extension, they can pass in `py_limited_api` to their Extension class and to the bdist_wheel command (with a min python version) in order to build 1 wheel that will suffice across multiple Python versions.
Sounds lovely! Why don't people do that already with PyTorch? Well 2 things. This workflow is hardly documented (even searching for python agnostic specifically does not reveal many answers) so I'd expect that people simply don't know about it. But even if they did, _PyTorch_ custom Extensions would still not work because we always link torch_python, which does not abide by py_limited_api rules.
So this is where this PR comes in! We respect when the user specifies py_limited_api and skip linking torch_python under that condition, allowing users to enroll in the provided functionality I just described.
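For illustration, a minimal `setup.py` sketch of the workflow this PR enables (the package name and source path are made up; the `bdist_wheel` option pins the minimum CPython version the wheel claims to support):
```python
from setuptools import setup
from torch.utils.cpp_extension import BuildExtension, CppExtension

setup(
    name="my_limited_api_ext",  # illustrative name
    ext_modules=[
        CppExtension(
            "my_limited_api_ext._C",
            ["csrc/ext.cpp"],  # illustrative source path
            py_limited_api=True,  # promise to only use the Python limited API
        )
    ],
    cmdclass={"build_ext": BuildExtension},
    options={"bdist_wheel": {"py_limited_api": "cp39"}},  # one wheel for 3.9+
)
```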
## How do I know this PR works?
I manually tested my silly little ultra_norm locally (with `import python_agnostic`) and wrote a test case for the extension showing that
- torch_python doesn't show up in the ldd tree
- no Py- symbols show up
It may be a little confusing that our test case is actually python-free (more clean than python-agnostic) but it is sufficient (and not necessary) towards showing that this change works.
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #138088
| true
|
2,746,306,961
|
[dynamo] Properly model root frame globals during inlining
|
StrongerXi
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143447
This patch updates `InliningInstructionTranslator.STORE_GLOBAL` to
properly check whether `self.f_globals` is the same as root frame
`f_globals`. See added comments for why this is important.
Fixes #143425.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,746,283,733
|
[c10d][fr] flight recorder improvements
|
c-p-i-o
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"release notes: distributed (c10d)"
] | 12
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143446
Summary:
1. Flight recorder traces are now dumped automatically by default upon
timeout or exception. Users don't need to opt in.
2. Changed the default dump location to the `.cache` folder in the running
user's home directory.
Test Plan:
1. Tested locally by running the crash program from flight recorder
tutorial page.
https://pytorch.org/tutorials/prototype/flight_recorder_tutorial.html#an-end-to-end-example
2. Noted that flight recorder files were correctly created.
```
❯ pwd
/home/cpio/.cache/fr_trace
❯ ls
nccl_trace_rank_0 nccl_trace_rank_1
```
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
Differential Revision: [D67363720](https://our.internmc.facebook.com/intern/diff/D67363720)
| true
|
2,746,267,089
|
update kineto to XPU Windows fixed PR. [submodule kineto]
|
xuhancn
|
closed
|
[
"module: windows",
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"intel",
"ciflow/xpu"
] | 15
|
COLLABORATOR
|
Include XPU Windows Fixed PR: https://github.com/pytorch/kineto/pull/1012
cc @peterjc123 @mszhanyi @skyline75489 @nbcsm @iremyux @Blackhex @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| true
|
2,746,230,091
|
[ONNX] Save dynamic shapes constraints to ONNX metadata
|
titaiwangms
|
closed
|
[
"module: onnx",
"triaged",
"onnx-triaged"
] | 5
|
COLLABORATOR
|
We should include shape constraints in ONNX metadata to provide more information to users. This can also reveal to users why certain axes should remain static, so they can further debug their models.
| true
|
2,746,225,890
|
[ONNX] Rename dynamic shapes produced by ExportedProgram to dynamic_axes
|
titaiwangms
|
closed
|
[
"module: onnx",
"triaged",
"onnx-triaged"
] | 3
|
COLLABORATOR
|
`torch.export.export` names dynamic shapes s0, s1, s2, s3, etc. However, in ONNX, users can pass in their own naming through `dynamic_axes` and `input_names`. We need to rename the dynamic shapes to what users request.
| true
|
2,746,223,503
|
fix checking non-trivial input constraints
|
avikchaudhuri
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"ciflow/inductor",
"release notes: export"
] | 5
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143442
A bunch of auto dynamic shape tests would fail non-strict retraceability because when checking input constraints, we'd compare non-trivial expressions, which would require / affect shape env.
```
... is not tracked with proxy for <torch.fx.experimental.proxy_tensor._ModuleStackTracer object ...
```
I've also observed this bug internally.
This PR does an early check on whether args passed have concrete shapes, and only then proceeds: as before, we
1. try to unify / solve with the arg dim when the corresponding placeholder node dim is symbolic in one symbol
2. check directly if the placeholder node dim is concrete
3. otherwise defer to run time.
Differential Revision: [D67359596](https://our.internmc.facebook.com/intern/diff/D67359596/)
| true
|
2,746,190,350
|
Bug-set-submodule-assigns-module-to-new-attribute
|
mariovas3
|
closed
|
[
"module: nn",
"triaged"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Based on the docstring of `nn.Module.set_submodule` - https://pytorch.org/docs/stable/generated/torch.nn.Module.html#torch.nn.Module.set_submodule
we have `Set the submodule given by target if it exists, otherwise throw an error.`
This is violated when passing non-dot-delimited strings.
This is because calling `.split('.')` on a non-dot-delimited string results in a singleton list - `'0'.split('.') -> ['0']`. Currently you pop the last element of the resulting list, making it an empty list, so the for loop that follows (where all the validation checks are done) gets skipped, and you jump directly to setting the attribute even though it might not exist. https://github.com/pytorch/pytorch/blob/main/torch/nn/modules/module.py#L773
E.g., see the example below:
```python
from torch import nn

class SimpleModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Linear(3, 4)

model = SimpleModel()
model.set_submodule('0', nn.Conv2d(2, 3, 3))
# this will work, despite there being no module named '0' in model
assert isinstance(getattr(model, '0'), nn.Conv2d)

try:
    model.set_submodule('foo.bar', nn.Conv2d(2, 3, 3))
except AttributeError as e:
    message = str(e)
    assert message == 'SimpleModel has no attribute `foo`'
```
I tested the above using the nightly release environment - created using the `pytorch/tools/nightly.py` script.
I am available to work on this, if you believe it is of value.
### Versions
output of the `collect_env.py` script:
```
Collecting environment information...
PyTorch version: 2.6.0.dev20241216+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: Could not collect
CMake version: version 3.31.2
Libc version: glibc-2.31
Python version: 3.11.11 (main, Dec 11 2024, 16:28:39) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-125-generic-x86_64-with-glibc2.31
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 39 bits physical, 48 bits virtual
CPU(s): 8
On-line CPU(s) list: 0-7
Thread(s) per core: 2
Core(s) per socket: 4
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 142
Model name: Intel(R) Core(TM) i5-8265U CPU @ 1.60GHz
Stepping: 12
CPU MHz: 1800.000
CPU max MHz: 3900.0000
CPU min MHz: 400.0000
BogoMIPS: 3600.00
Virtualisation: VT-x
L1d cache: 128 KiB
L1i cache: 128 KiB
L2 cache: 1 MiB
L3 cache: 6 MiB
NUMA node0 CPU(s): 0-7
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Mitigation; Microcode
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust sgx bmi1 avx2 smep bmi2 erms invpcid mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp md_clear flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] mypy==1.13.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==2.2.0
[conda] No relevant packages
```
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki
| true
|
2,746,177,352
|
Locale issues in colab: after tensor(1j).cuda().abs() !commands cannot be executed.
|
fzimmermann89
|
open
|
[
"triaged",
"module: third_party",
"module: python frontend"
] | 7
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Running the following in colab (T4 runtime):
```
import torch
a=torch.tensor(1j,device="cuda")
a.abs()
!echo "cake is a lie"
```
results in a `NotImplementedError: A UTF-8 locale is required. Got ANSI_X3.4-1968`.
It has to be a) complex, b) abs, c) on cuda;
otherwise, the final command succeeds.
It seems like the complex abs CUDA kernel modifies the locale?
Related: https://github.com/googlecolab/colabtools/issues/3409
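As a stopgap in the notebook, one workaround sketch is to reset the process locale after the offending call (whether this fully satisfies Colab's UTF-8 check, and whether `C.UTF-8` is available in the image, are assumptions):
```python
import locale

# Restore a UTF-8 locale after the complex abs() CUDA call appears to clobber it.
locale.setlocale(locale.LC_ALL, "C.UTF-8")
```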
### Versions
default google colab
torch 2.5.1+cu121
python 3.10.12 (main, Nov 6 2024, 20:22:13) [GCC 11.4.0]
cc @albanD
| true
|
2,746,174,157
|
remove allow-untyped-defs for torch/fx/experimental/debug.py
|
bobrenjc93
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: fx",
"topic: not user facing",
"fx"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143439
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
2,746,174,081
|
remove allow-untyped-defs for torch/_functorch/batch_norm_replacement.py
|
bobrenjc93
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143438
| true
|
2,746,173,937
|
remove allow-untyped-defs for torch/nn/parallel/__init__.py
|
bobrenjc93
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143437
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,746,173,856
|
remove allow-untyped-defs for torch/_inductor/test_operators.py
|
bobrenjc93
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143436
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,746,173,783
|
remove allow-untyped-defs for torch/_export/passes/remove_runtime_assertions.py
|
bobrenjc93
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor",
"release notes: export"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #143439
* #143438
* #143437
* #143436
* __->__ #143435
| true
|
2,746,173,145
|
Missing nightly 20241217 on x86_64
|
Jack-Khuu
|
open
|
[
"module: binaries",
"triaged"
] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
I'm looking at bumping the nightly pin in torchchat to dev20241217, but it looks like the nightly isn't being found.
Was there a wheel failure, or was there an install support change recently (< 1 week)?
Looking at the [download.pytorch.org](https://download.pytorch.org/whl/nightly/torch/) listing, 1217 seems a bit sparse (66 matches vs 80+).
https://github.com/pytorch/torchchat/actions/runs/12381837837/job/34561243200?pr=1426
```
+ pip3 install --extra-index-url https://download.pytorch.org/whl/nightly/cpu torch==2.6.0.dev20241217 torchvision==0.22.0.dev20241217 torchtune==0.5.0.dev20241126
Looking in indexes: https://pypi.org/simple, https://download.pytorch.org/whl/nightly/cpu
ERROR: Could not find a version that satisfies the requirement torch==2.6.0.dev20241217 (from versions: 1.11.0, 1.12.0, 1.12.1, 1.13.0, 1.13.1, 2.0.0, 2.0.1, 2.1.0, 2.1.1, 2.1.2, 2.2.0, 2.2.1, 2.2.2, 2.3.0, 2.3.1, 2.4.0, 2.4.1, 2.5.0, 2.5.1, 2.6.0.dev20241019+cpu, 2.6.0.dev20241020+cpu, 2.6.0.dev20241021+cpu, 2.6.0.dev20241022+cpu, 2.6.0.dev20241023+cpu, 2.6.0.dev20241024+cpu, 2.6.0.dev20241025+cpu, 2.6.0.dev20241026+cpu, 2.6.0.dev20241027+cpu, 2.6.0.dev20241028+cpu, 2.6.0.dev20241029+cpu, 2.6.0.dev20241030+cpu, 2.6.0.dev20241031+cpu, 2.6.0.dev20241101+cpu, 2.6.0.dev20241102+cpu, 2.6.0.dev20241103+cpu, 2.6.0.dev20241104+cpu, 2.6.0.dev20241105+cpu, 2.6.0.dev20241106+cpu, 2.6.0.dev20241107+cpu, 2.6.0.dev20241108+cpu, 2.6.0.dev20241109+cpu, 2.6.0.dev20241111+cpu, 2.6.0.dev20241112+cpu, 2.6.0.dev20241113+cpu, 2.6.0.dev20241114+cpu, 2.6.0.dev20241115+cpu, 2.6.0.dev20241116+cpu, 2.6.0.dev20241117+cpu, 2.6.0.dev20241118+cpu, 2.6.0.dev20241119+cpu, 2.6.0.dev20241120+cpu, 2.6.0.dev20241121+cpu, 2.6.0.dev20241122+cpu, 2.6.0.dev20241124+cpu, 2.6.0.dev20241125+cpu, 2.6.0.dev20241126+cpu, 2.6.0.dev20241127+cpu, 2.6.0.dev20241128+cpu, 2.6.0.dev20241129+cpu, 2.6.0.dev20241130+cpu, 2.6.0.dev20241201+cpu, 2.6.0.dev20241202+cpu, 2.6.0.dev20241203+cpu, 2.6.0.dev20241204+cpu, 2.6.0.dev20241205+cpu, 2.6.0.dev20241206+cpu, 2.6.0.dev20241207+cpu, 2.6.0.dev20241208+cpu, 2.6.0.dev20241209+cpu, 2.6.0.dev20241210+cpu, 2.6.0.dev20241211+cpu, 2.6.0.dev20241212+cpu, 2.6.0.dev20241213+cpu, 2.6.0.dev20241214+cpu, 2.6.0.dev20241215+cpu, 2.6.0.dev20241216+cpu)
ERROR: No matching distribution found for torch==2.6.0.dev20241217
```
### Versions
Linux runner 6.5.0-1025-azure #26~22.04.1-Ubuntu SMP Thu Jul 11 22:33:04 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
cc @seemethere @malfet @osalpekar @atalman
| true
|
2,746,155,774
|
Backout D66648013
|
mlazos
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"module: inductor",
"ciflow/inductor",
"release notes: inductor"
] | 7
|
CONTRIBUTOR
|
Summary:
backing out https://www.internalfb.com/diff/D66648013 (see comments there for justification)
I will reland and disallow the bfloat16 atomics behavior on A100 because it causes a pretty significant performance regression.
Test Plan: This is a revert
Differential Revision: D67357485
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,746,123,852
|
Eager style export V0 API.
|
zhxchen17
|
closed
|
[
"fb-exported",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 10
|
CONTRIBUTOR
|
Summary:
Prototype of an end-to-end export workflow to call a torch.compiled model eagerly and package every single compiled model in the wrapped region of the code.
Code sample:
```
@torch.compile(fullgraph=True)
def f(x, y):
    return x + y

# Compile the model and save it on disk
with torch.compiler._fullgraph_package(mode="package", path="/tmp/model.pt2", frontend="dynamo"):
    f(torch.randn(3), torch.randn(3))

# Load the saved model back and call it in-place.
with torch.compiler._fullgraph_package(mode="load", path="/tmp/model.pt2"):
    f(torch.randn(3), torch.randn(3))
```
Internally, fullgraph_package will call export and aoti step by step and finally package all the compiled models into a single package when we exit the context. (see test_fullgraph_package.py for more information)
Since this is an AOT workflow, we have to assume that the package file(s) saved from this API are likely to be used in a different environment, despite the fact that this may or may not cause soundness issues. To make this BC-safe, we need to give each compilation a unique name so that the models being loaded are addressable from a different process.
(Right now we just assign the unique compilation name/id using the function's FQN. Ideally the user should be able to specify the compilation name directly on torch.compile().)
For now, the plan of record is to make this ctx manager API standalone because we need to put at least three arguments here:
1. `mode`, which indicates whether we're saving or loading the packages.
2. `path`, to specify where we store the compiled package.
3. `frontend`, to specify the programming model.
We haven't made a final decision on where to put this functionality, and the purpose of this diff is to demonstrate an ideal workflow for doing partial model capture for torchnative.
Test Plan:
buck test mode/opt caffe2/test:test_export -- -r FullgraphPackage
OSS: test_fullgraph_package.py
Differential Revision: D67353701
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,746,123,503
|
aot_eager causes CPU RNG behavior to change
|
ezyang
|
closed
|
[
"triaged",
"oncall: pt2",
"module: aotdispatch",
"module: pt2-dispatcher"
] | 5
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Repro
```
import torch
def f(image_latent):
    B = 2
    num_ref = 3
    num_tar = 3
    x = torch.rand(B, 12)
    indices = torch.argsort(torch.rand(*x.shape), dim=-1)[:, :num_ref + num_tar]
    return image_latent[torch.arange(B).unsqueeze(-1), indices][:, :num_ref]
torch.manual_seed(54321)
torch.cuda.manual_seed_all(54321)
print(torch.compile(backend="aot_eager", fullgraph=True)(f)(torch.randn((2, 12, 16, 32, 32), device='cuda')).sum())
torch.manual_seed(54321)
torch.cuda.manual_seed_all(54321)
print(torch.compile(backend="aot_eager", fullgraph=True)(f)(torch.randn((2, 12, 16, 32, 32), device='cuda')).sum())
torch.manual_seed(54321)
torch.cuda.manual_seed_all(54321)
print(torch.compile(backend="eager", fullgraph=True)(f)(torch.randn((2, 12, 16, 32, 32), device='cuda')).sum())
torch.manual_seed(54321)
torch.cuda.manual_seed_all(54321)
print(torch.compile(backend="eager", fullgraph=True)(f)(torch.randn((2, 12, 16, 32, 32), device='cuda')).sum())
```
This prints
```
tensor(209.5920, device='cuda:0')
tensor(209.5920, device='cuda:0')
tensor(300.4904, device='cuda:0')
tensor(300.4904, device='cuda:0')
```
From the duplicate runs, you can see that it is internally deterministic, but aot_eager perturbs the randomness somehow.
### Versions
main
cc @chauhang @penguinwu @zou3519 @bdhirsh @yf225
| true
|
2,746,107,833
|
[pytorch/et] Allow ET to save additional resources for completing a trace like generated kernels and index tensor data
|
sanrise
|
closed
|
[
"fb-exported",
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"ci-no-td"
] | 14
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143430
The resources directory lets the ET observer dump additional data, such as Triton kernels, while capturing the ET.
This allows us to use the ET trace to replay PT2 workloads and get visibility into data like generated kernels and their usage in a model, index tensor data, etc.
We also added a few ways to enable ET and ET Resources through OS environment variables.
Setting `ENABLE_PYTORCH_EXECUTION_TRACE` enables default Execution Tracing in PyTorch.
Additionally, setting `ENABLE_PYTORCH_EXECUTION_TRACE_EXTRAS` enables ET to collect extra resources from the ET run, such as Triton kernels.
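For illustration, here is a minimal sketch of setting these variables from Python before running a workload; the workload itself is a placeholder and not part of this PR:
```python
import os

# Env var names come from the description above; they must be set before
# the profiled workload starts so the Execution Trace observer sees them.
os.environ["ENABLE_PYTORCH_EXECUTION_TRACE"] = "1"
os.environ["ENABLE_PYTORCH_EXECUTION_TRACE_EXTRAS"] = "1"

import torch

# Placeholder PT2 workload whose execution trace (and, on GPU, generated
# Triton kernels) would be captured.
model = torch.compile(torch.nn.Linear(8, 8))
out = model(torch.randn(4, 8))
```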
Differential Revision: [D58707846](https://our.internmc.facebook.com/intern/diff/D58707846/)
| true
|
2,746,076,900
|
[BE] Update triton repo link
|
malfet
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
It should be https://github.com/triton-lang/triton rather than https://github.com/openai/triton shouldn't it?
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,746,074,800
|
[pytorch/et] Allow ET to save additional resources for completing a trace like generated kernels and index tensor data (#142521)
|
sanrise
|
closed
|
[
"fb-exported"
] | 3
|
CONTRIBUTOR
|
Summary:
The resources directory lets the ET observer dump additional data, such as Triton kernels, while capturing the ET.
This allows us to use the ET trace to replay PT2 workloads and get visibility into data like generated kernels and their usage in a model, index tensor data, etc.
We also added a few ways to enable ET and ET Resources through OS environment variables.
Setting `ENABLE_PYTORCH_EXECUTION_TRACE` enables default Execution Tracing in PyTorch.
Additionally, setting `ENABLE_PYTORCH_EXECUTION_TRACE_EXTRAS` enables ET to collect extra resources from the ET run, such as Triton kernels.
Test Plan: `export ENABLE_PYTORCH_EXECUTION_TRACE=1;export ENABLE_PYTORCH_EXECUTION_TRACE_EXTRAS=1;buck2 run @//mode/opt //kineto/libkineto/fb/integration_tests:e2e_integration -- --run_resnet --enable_profiling --trace_handler=auto_trace --ngpus=2 --num_iters=150`
Reviewed By: briancoutinho, shengfukevin
Differential Revision: D58707846
| true
|
2,746,068,307
|
Implement increment and add_to_set for CompileEventLogger
|
jamesjwu
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"module: dynamo",
"ciflow/inductor"
] | 13
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143427
This diff implements `increment` and `add_to_set`, which are features of MetricsContext but not of ChromiumEventLogger. This allows us to convert a bunch of other MetricsContext call sites to use CompileEventLogger instead.
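A hypothetical sketch of the call sites this enables; the import path and signatures below are assumptions that mirror the MetricsContext methods, not the finalized API:
```python
# Hypothetical usage only -- the import path and signatures are assumed to
# mirror MetricsContext.increment(name, value) / add_to_set(name, value).
from torch._dynamo.utils import CompileEventLogger

CompileEventLogger.increment("num_graph_breaks", 1)
CompileEventLogger.add_to_set("compiled_backends", "inductor")
```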
Differential Revision: [D67354867](https://our.internmc.facebook.com/intern/diff/D67354867/)
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov @BoyuanFeng
| true
|
2,746,054,290
|
[reland] Kill capture_pre_autograd_graph API
|
yushangdi
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: quantization",
"release notes: releng",
"ci-no-td"
] | 4
|
CONTRIBUTOR
|
Summary:
Delete the following API:
- capture_pre_autograd_graph()
- capture_pre_autograd_graph_using_training_ir()
- gm_using_training_ir()
Update XLA pin to include https://github.com/pytorch/xla/pull/8398
There are no more call sites to `capture_pre_autograd_graph`, except for:
1) two test cases in coreml, guarded by a version check (PR to remove them: https://github.com/apple/coremltools/pull/2400)
2) a few call sites guarded by a version check (< 2.5.0)
Test Plan: CI
Differential Revision: D67354440
| true
|
2,746,035,623
|
Dynamo fails to propagate updates to global variable
|
guilhermeleobas
|
closed
|
[
"oncall: pt2",
"module: dynamo",
"dynamo-triage-june2024"
] | 0
|
COLLABORATOR
|
### 🐛 Describe the bug
I discovered this one while working on https://github.com/pytorch/pytorch/pull/136033. The reproducer without using `@contextmanager` is a bit tricky, but the idea is the same. To reproduce, one needs two files to have different globals.
```python
# main file
import torch
import other_file
z = 1
k = 2
def create_fn():
def fn(x):
global k
k = 100
return x.sin()
return fn
@torch.compile(backend="eager", fullgraph=True)
def foo(x):
fn = create_fn()
global z
other_file.run_fn(fn, x)
z = k + 10 # k is not updated to 100
x = torch.randn(2, 3)
foo(x)
print(f'{z=} - {k=}')
assert z == 110
assert k == 100
```
```python
# second file
def run_fn(fn, x):
fn(x)
```
The assignment `k = 100` is not propagated to the parent `InstructionTranslator` as `fn` is called by `second_file::run_fn`. There's a check in `STORE_GLOBAL` for `self.f_globals is self.parent.f_globals` to determine whether the `symbolic_globals` is updated or not.
Maybe the fix for this one is to use the same symbolic_globals object for all InstructionTranslators in the same module?
https://github.com/pytorch/pytorch/blob/9283c40ba8e6adf55db3d12f7451d86bb9c68632/torch/_dynamo/symbolic_convert.py#L3333-L3342
### Versions
main branch
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
| true
|
2,746,026,381
|
higher rank convolution
|
sycamoreoak
|
open
|
[
"module: nn",
"triaged"
] | 0
|
NONE
|
### 🚀 The feature, motivation and pitch
Would it be possible to add official PyTorch support for higher-rank convolution? Thanks!
### Alternatives
_No response_
### Additional context
Working at a higher rank can be useful, depending on the application!
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki
| true
|
2,746,015,275
|
Use Manylinux 2.28 for nightly build and cxx11-abi
|
atalman
|
closed
|
[
"Merged",
"ciflow/binaries",
"topic: not user facing"
] | 4
|
CONTRIBUTOR
|
As per: https://dev-discuss.pytorch.org/t/pytorch-linux-wheels-switching-to-new-wheel-build-platform-manylinux-2-28-on-november-12-2024/2581
Linux Builds: CPU, CUDA 11.8, CUDA 12.4 switched to Manylinux 2.28 and D_GLIBCXX_USE_CXX11_ABI=1 on the week of Dec 16
| true
|
2,745,929,853
|
cpp_builder.py: Build in -O2 to improve compilation time
|
benjaminglass1
|
closed
|
[
"open source",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 2
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143422
* #143421
* #143223
* #141371
This does not appear to affect performance substantively (benchmarks pending), since we already apply OMP optimizations to loops which should be tightly optimized.
This PR additionally applies the `noexecstack` linker flag, so that GCC on some platforms stops warning about executable stacks in the output shared libraries.
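As a rough illustration of the flag change (the file names and the command below are placeholders; the real flag plumbing lives in `cpp_builder.py`):
```python
# Placeholder command showing the two changes described above: -O2 instead
# of -O3 for faster wrapper compilation, plus the noexecstack linker flag
# that silences GCC's executable-stack warning on affected platforms.
cmd = [
    "g++", "-shared", "-fPIC", "-fopenmp",
    "-O2",                 # previously -O3
    "-Wl,-z,noexecstack",  # newly added linker flag
    "wrapper.cpp", "-o", "wrapper.so",
]
print(" ".join(cmd))
```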
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,745,929,519
|
AOTI fallback ops: remove ops that were never codegen'ed
|
benjaminglass1
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor",
"module: aotinductor"
] | 9
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144124
* #144123
* #144002
* #143909
* __->__ #143421
* #143223
* #141371
Removes 4 fallback ops that are currently not possible to codegen, which does not break ABI-compatibility.
1. `_cudnn_rnn_backward` and `_histogramdd_bin_edges` both return `Tensor[]`, which we cannot codegen with the current design.
2. `_sparse_coo_tensor_with_dims_and_tensors` only supplies a Sparse operator, which we don't support.
3. `zeros.names` requires a `Dimname` input, which we can't currently codegen.
Removing these ops from the list will improve test performance, since the fallback op generation will use the Python proxy executor instead of calling non-existent C functions.
cc @desertfire @chenyang78 @penguinwu
| true
|
2,745,918,870
|
Introduce CompileEventLogger, replace usages of metrics_context and chromium_event with it
|
jamesjwu
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"module: inductor",
"module: dynamo",
"ciflow/inductor",
"release notes: AO frontend"
] | 11
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #143427
* __->__ #143420
**Problem statement**: I want to be able to centralize and simplify the process by which people add columns/data to existing spans. We have MetricsContext and ChromiumEventLogger, and there's various choices you can make to decide where and when to log different levels of observability for your events. To resolve this, I want a central API for "adding to events under dynamo_timed".
**CompileEventLogger** is intended as a frontend for MetricsContext and ChromiumEventLogger so we can use the same class for handling everything.
CompileEventLogger is intended to be used within a `dynamo_timed()` context. Its purpose is to 1. log to existing events that are in progress (i.e. within dynamo_timed), and 2. log instant events to chromium that are independent of any specific span.
CompileEventLogger has three log levels:
- CHROMIUM: Log only to chromium events, visible via tlparse.
- PT2_COMPILE: Log to chromium_events + pt2_compile_events
- COMPILATION_METRIC: Log to compilation metrics in addition to the toplevel chromium and pt2_compile_event.
In addition, we have a function CompileEventLogger.add() that automagically chooses the correct log level. For now, it is conservative, and will never automagically choose to log CompilationMetrics (though I could imagine it figuring out that the metadata are all keys in CompilationMetrics and therefore loggable there).
The goal here is to make one single interface to log stuff for observability reasons, and make it as easy as possible.
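A hypothetical usage sketch of that frontend; the import path, the `dynamo_timed` key, and the keyword-style metadata are assumptions based on this description rather than the finalized API:
```python
# Hypothetical sketch only; import path and signatures are assumptions.
from torch._dynamo.utils import CompileEventLogger, dynamo_timed

def run_compile_phase():
    with dynamo_timed("example_compile_phase"):
        # Attach metadata to the span currently in progress; add() picks
        # the log level (chromium vs. pt2_compile_events) conservatively.
        CompileEventLogger.add(cache_state="miss", num_graphs=1)
```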
Not included in this diff:
- V1 of this diff will not have implementations of `increment` and `add_to_set` which MetricsContext has, so those usages are not replaced yet. But I'll add those in a followup.
- We don't handle `RuntimeMetricsContext`. It's unclear if I want that to be part of this, because under RuntimeMetricsContext there might not be a toplevel event to log to, so chromium events don't make sense in that context. So I might leave that separate for now.
Differential Revision: [D67346203](https://our.internmc.facebook.com/intern/diff/D67346203/)
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,745,909,540
|
OpenGL interoperability
|
cajoek
|
closed
|
[
"module: cuda"
] | 4
|
NONE
|
### 🚀 The feature, motivation and pitch
Zero-copy transfer of data between PyTorch and OpenGL on the GPU by including "OpenGL interoperability" from CUDA in PyTorch.
I am working on a real-time machine learning graphics project which uses OpenGL both as an intermediate processing step in the model and to visualize the output. Right now transfer of data between PyTorch and OpenGL is a problem for both training and inference.
Without any additional packages, I can copy data from PyTorch (CUDA) to the CPU and then back to OpenGL on the GPU; this is very simple but slow.
I can instead use CUDA bindings for Python and a separate CUDA Toolkit installation to avoid the data transfer, but this is quite complex, and there are many competing ways and tools for doing it, which makes it hard to navigate.
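For reference, a minimal sketch of the simple-but-slow CPU round trip described above; the final upload call assumes a moderngl-style texture API and is illustrative only:
```python
import torch

# Frame produced on the GPU by the model (placeholder RGBA image).
frame = torch.rand(512, 512, 4, device="cuda")

# GPU -> CPU copy, followed by a CPU -> GPU upload on the OpenGL side.
frame_cpu = frame.detach().to(torch.float32).cpu().contiguous()
raw_bytes = frame_cpu.numpy().tobytes()

# e.g. with moderngl (illustrative): texture.write(raw_bytes)
```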
Old similar issue: https://github.com/pytorch/pytorch/issues/20047
Crosspost: https://github.com/pytorch/vision/issues/8803
### Alternatives
_No response_
### Additional context
The 2 main ways I have been using OpenGL from python are with the packages `moderngl` and `PyOpenGL`.
cc @ptrblck @msaroufim @eqy
| true
|
2,745,907,760
|
[ODML] Make the ML feature provider thread safe
|
seanxiaoxiao
|
closed
|
[
"oncall: jit",
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: jit"
] | 73
|
CONTRIBUTOR
|
Summary:
This PR is generated from a meta internal Diff, aiming to resolve a crash from a race condition on the dictionary.
Test Plan:
Build and run
Print out the count/name/value of the dictionary and verify that get/set/remove operations behave correctly.
Observe the print statement on app start within IG
@diff-train-skip-merge
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| true
|
2,745,906,201
|
[compiled autograd] stop specializing on metadata during initial trace
|
zou3519
|
closed
|
[
"Merged",
"Reverted",
"ciflow/trunk",
"module: inductor",
"module: dynamo",
"ciflow/inductor",
"release notes: dynamo",
"keep-going",
"module: compiled autograd",
"ci-no-td"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144115
* __->__ #143417
* #143405
* #143387
* #143304
* #143296
The previous PRs built up to this. We change compiled autograd's initial
trace to stop baking in metadata.
While tracing, we allocate some weirdly shaped tensors that we can put
proxies on. The initial trace should not be accessing any metadata of
these tensors (it will likely error out if it does because of how weird
the shapes are).
This involved fixing some various sites where we do specialize on the
metadata, like:
- we change CopySlices's apply_with_saved to proxy some calls
into the graph (this change is fairly hard to split out by itself).
- we stop calling InputBuffer::add
- we delete the weird metadata from the graph so that no graph passes
can make use of it.
Test Plan:
- tests
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov @xmfan
| true
|
2,745,892,843
|
[ROCm] port CK rowwise F8 from fbgemm (#140856)
|
drisspg
|
closed
|
[
"module: rocm",
"fb-exported",
"Stale",
"ciflow/binaries",
"ciflow/trunk",
"topic: not user facing",
"skip-pr-sanity-checks",
"ciflow/rocm"
] | 11
|
CONTRIBUTOR
|
Summary:
author @jeffdaily
This ports (copies) FBGEMM's implementation from jwfromm.
https://github.com/pytorch/FBGEMM/tree/main/fbgemm_gpu/experimental/gen_ai/src/quantize/ck_extensions/fp8_rowwise
cc sunway513 jithunnair-amd pruthvistony ROCmSupport dllehr-amd jataylo hongxiayang naromero77amd yanbing-j vkuzo albanD kadeng penguinwu
Reviewed By: atalman
Differential Revision: D66797096
Pulled By: drisspg
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @albanD
| true
|
2,745,859,524
|
Fix sample inputs leaked from subtest
|
soulitzer
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143415
* #143333
| true
|
2,745,853,716
|
[PassRate] TorchBench training PassRate is less than 100
|
IvanKobzarev
|
open
|
[
"high priority",
"triaged",
"oncall: pt2"
] | 3
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Umbrella Task for the < 100 TorchBench PassRate
https://hud.pytorch.org/benchmark/compilers

### Versions
master
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @chauhang @penguinwu
| true
|
2,745,819,535
|
don't rethrow guard on data dependent errors
|
bobrenjc93
|
closed
|
[
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143413
As discussed offline, this makes errors much easier to read and understand.
| true
|