| repo | number | title | body | url | state | labels | created_at | updated_at | comments | user |
|---|---|---|---|---|---|---|---|---|---|---|
pytorch/pytorch
| 165,100
|
Header files not found during build
|
### 🐛 Describe the bug
I'm trying to build pytorch from source but getting the following error:
```
pytorch/aten/src/ATen/core/ivalue.h:4:10: fatal error: ATen/core/TensorBody.h: No such file or directory
```
These files seem to be generated, and I see the following line printed beforehand:
```
core header install: pytorch/build/aten/src/ATen/core/TensorBody.h
```
How can I get pytorch to build without these errors?
I'm running the following command
```
TORCH_CUDA_ARCH_LIST="8.0 9.0" BUILD_TEST=0 USE_DISTRIBUTED=1 USE_NCCL=1 USE_CUDA=1 python setup.py install
```
### Versions
```
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: CentOS Stream 9 (x86_64)
GCC version: (GCC) 11.5.0 20240719 (Red Hat 11.5.0-11)
Clang version: Could not collect
CMake version: version 3.27.0
Libc version: glibc-2.34
Python version: 3.12.11 | packaged by Anaconda, Inc. | (main, Jun 5 2025, 13:09:17) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.4.3-0_fbk15_hardened_2630_gf27365f948db-x86_64-with-glibc2.34
Is CUDA available: N/A
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration:
GPU 0: NVIDIA PG509-210
GPU 1: NVIDIA PG509-210
Nvidia driver version: 550.90.07
cuDNN version: Could not collect
Is XPU available: N/A
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 44
On-line CPU(s) list: 0-43
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8339HC CPU @ 1.80GHz
CPU family: 6
Model: 85
Thread(s) per core: 1
Core(s) per socket: 44
Socket(s): 1
Stepping: 11
BogoMIPS: 3591.76
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq vmx ssse3 fma cx16 pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx512_bf16 arat vnmi umip pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Virtualization: VT-x
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 1.4 MiB (44 instances)
L1i cache: 1.4 MiB (44 instances)
L2 cache: 176 MiB (44 instances)
L3 cache: 16 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-43
Vulnerability Gather data sampling: Unknown: Dependent on hypervisor status
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Vulnerable
Vulnerability Retbleed: Vulnerable
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable, IBPB: disabled, STIBP: disabled, PBRSB-eIBRS: Vulnerable
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] flake8==7.3.0
[pip3] flake8-bugbear==24.12.12
[pip3] flake8-comprehensions==3.16.0
[pip3] flake8-executable==2.1.3
[pip3] flake8-logging-format==2024.24.12
[pip3] flake8-pyi==25.5.0
[pip3] flake8_simplify==0.22.0
[pip3] mypy_extensions==1.1.0
[pip3] numpy==2.3.1
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.10.2.21
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.7.1
[pip3] nvidia-nccl-cu12==2.27.5
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] pytorch-triton==3.5.0+git27664085
[pip3] torch==2.10.0.dev20251008+cu126
[pip3] torchaudio==2.8.0.dev20251009+cu126
[pip3] torc
|
https://github.com/pytorch/pytorch/issues/165100
|
open
|
[
"module: build",
"triaged",
"has workaround"
] | 2025-10-09T20:51:23Z
| 2025-10-10T13:43:50Z
| 1
|
tushar00jain
|
vllm-project/vllm
| 26,530
|
[Bug]: Fix CVE-2023-48022 in docker image
|
### Your current environment
<details>
<summary>The output of <code>python collect_env.py</code></summary>
Not required for this.
</details>
### 🐛 Describe the bug
The vllm/vllm-openai:v0.10.2 image seems to be affected by the [CVE-2023-48022](https://avd.aquasec.com/nvd/2023/cve-2023-48022/) **Critical** CVE with `ray` (see scan results below). Is there any plan to address this?
```
grype vllm/vllm-openai:v0.10.2 --scope all-layers
```
```
NAME INSTALLED FIXED IN TYPE VULNERABILITY SEVERITY EPSS RISK
ray 2.49.1 python GHSA-6wgj-66m2-xxp2 Critical 91.9% (99th) 86.4
libgssapi-krb5-2 1.19.2-2ubuntu0.4 1.19.2-2ubuntu0.5 deb CVE-2024-3596 Medium 24.6% (95th) 12.3
libk5crypto3 1.19.2-2ubuntu0.4 1.19.2-2ubuntu0.5 deb CVE-2024-3596 Medium 24.6% (95th) 12.3
libkrb5-3 1.19.2-2ubuntu0.4 1.19.2-2ubuntu0.5 deb CVE-2024-3596 Medium 24.6% (95th) 12.3
libkrb5support0 1.19.2-2ubuntu0.4 1.19.2-2ubuntu0.5 deb CVE-2024-3596 Medium 24.6% (95th) 12.3
python3-pip 22.0.2+dfsg-1ubuntu0.6 22.0.2+dfsg-1ubuntu0.7 deb CVE-2023-32681 Medium 6.3% (90th) 3.1
libaom3 3.3.0-1ubuntu0.1 deb CVE-2019-2126 Low 8.1% (91st) 2.4
libcaca0 0.99.beta19-2.2ubuntu4 deb CVE-2022-0856 Low 4.9% (89th) 1.5
python3-httplib2 0.20.2-2 deb CVE-2021-21240 Low 4.5% (88th) 1.4
login 1:4.8.1-2ubuntu2.2 deb CVE-2024-56433 Low 3.6% (87th) 1.1
passwd 1:4.8.1-2ubuntu2.2 deb CVE-2024-56433 Low 3.6% (87th) 1.1
...
```
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/26530
|
closed
|
[
"bug"
] | 2025-10-09T20:16:02Z
| 2025-10-10T21:14:49Z
| 3
|
geodavic
|
huggingface/lerobot
| 2,156
|
How to reproduce lerobot/pi0_libero_finetuned?
|
Thanks for the great work!
I evaluated lerobot/pi0_libero_finetuned on libero goal datasets.
When using n_action_steps=50, the success rate is ~ 75%
When using n_action_steps=10, the success rate is ~ 90%
I tried to reproduce the training results, so I mainly referred to [train_config.json](https://huggingface.co/lerobot/pi0_libero_finetuned/blob/main/train_config.json) in the `lerobot/pi0_libero_finetuned` repo, which has one key-value pair in the config dict:
```
"pretrained_path": "pepijn223/pi0_libero_finetuned_extra"
```
So I also referred to the [train_config.json](https://huggingface.co/pepijn223/pi0_libero_finetuned_extra/blob/main/train_config.json) in the `pepijn223/pi0_libero_finetuned_extra` repo, which also has this key-value pair:
```
"pretrained_path": "lerobot/pi0_libero_finetuned"
```
This points back to the first checkpoint, so the two configs reference each other circularly.
My questions are: how were these checkpoints actually trained, and can anyone provide a train_config.json for the latest lerobot version that can reproduce lerobot/pi0_libero_finetuned?
Please also share some successful training configs if possible!
|
https://github.com/huggingface/lerobot/issues/2156
|
open
|
[
"question",
"policies",
"simulation"
] | 2025-10-09T18:11:47Z
| 2025-10-22T09:27:03Z
| null |
PuzhenYuan
|
pytorch/ao
| 3,137
|
README should highlight our huggingface models
|
We've got a few quantized models here and plan to keep adding to it: https://huggingface.co/pytorch. This should be highlighted close to the top of the README
|
https://github.com/pytorch/ao/issues/3137
|
open
|
[
"topic: documentation"
] | 2025-10-09T18:07:51Z
| 2025-10-09T18:08:06Z
| 0
|
andrewor14
|
huggingface/lerobot
| 2,153
|
Why can’t I find something like train_expert_only in the latest version of pi0? Do the current versions of pi0 and pi0.5 only support full-parameter training?
|
Why can’t I find something like “train_expert_only” in the latest version of pi0?
Do the current versions of pi0 and pi0.5 only support full-parameter training?
|
https://github.com/huggingface/lerobot/issues/2153
|
closed
|
[
"enhancement",
"question",
"policies",
"good first issue"
] | 2025-10-09T13:08:10Z
| 2025-12-31T14:54:29Z
| null |
ZHHhang
|
pytorch/pytorch
| 165,051
|
`[__recompiles] - 0/3: expected type of 'args[1]' to be a tensor type, ' but found <class 'torch.Tensor'>` cryptic recompilation cause
|
### 🐛 Describe the bug
Hello,
In some private workload I am running (unfortunately I don't have a minimal repro - I can try to get one if needed), the recompilation cause:
```
V1009 11:33:51.404000 3024 site-packages/torch/_dynamo/guards.py:3006] [0/5] [__recompiles] Recompiling function inner in /root/miniforge3/lib/python3.12/site-packages/torch/_dynamo/external_utils.py:68
V1009 11:33:51.404000 3024 site-packages/torch/_dynamo/guards.py:3006] [0/5] [__recompiles] triggered by the following guard failure(s):
V1009 11:33:51.404000 3024 site-packages/torch/_dynamo/guards.py:3006] [0/5] [__recompiles] - 0/1: expected type of 'args[1]' to be a tensor type, ' but found <class 'torch.Tensor'>
```
gets printed.
The log `expected type of 'args[1]' to be a tensor type, ' but found <class 'torch.Tensor'>` is surprising to me. What does it mean?
Thank you.
(this is on torch 2.7 - I'll test on 2.8 shortly)
### Versions
```
Collecting environment information...
PyTorch version: 2.7.1+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.1 LTS (x86_64)
GCC version: (Ubuntu 13.2.0-23ubuntu4) 13.2.0
Clang version: Could not collect
CMake version: version 3.31.6
Libc version: glibc-2.39
Python version: 3.12.10 | packaged by conda-forge | (main, Apr 10 2025, 22:21:13) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-6.8.0-59-generic-x86_64-with-glibc2.39
Is CUDA available: True
CUDA runtime version: 12.6.85
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100 80GB HBM3
GPU 1: NVIDIA H100 80GB HBM3
GPU 2: NVIDIA H100 80GB HBM3
GPU 3: NVIDIA H100 80GB HBM3
GPU 4: NVIDIA H100 80GB HBM3
GPU 5: NVIDIA H100 80GB HBM3
GPU 6: NVIDIA H100 80GB HBM3
GPU 7: NVIDIA H100 80GB HBM3
Nvidia driver version: 570.133.20
cuDNN version: Could not collect
Is XPU available: False
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 384
On-line CPU(s) list: 0-383
Vendor ID: AuthenticAMD
Model name: AMD EPYC 9654 96-Core Processor
CPU family: 25
Model: 17
Thread(s) per core: 2
Core(s) per socket: 96
Socket(s): 2
Stepping: 1
Frequency boost: enabled
CPU(s) scaling MHz: 46%
CPU max MHz: 3707.8120
CPU min MHz: 1500.0000
BogoMIPS: 4800.35
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good amd_lbr_v2 nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba perfmon_v2 ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local user_shstk avx512_bf16 clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin cppc amd_ibpb_ret arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif x2avic v_spec_ctrl vnmi avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq la57 rdpid overflow_recov succor smca fsrm flush_l1d debug_swap
Virtualization: AMD-V
L1d cache: 6 MiB (192 instances)
L1i cache: 6 MiB (192 instances)
L2 cache: 192 MiB (192 instances)
L3 cache: 768 MiB (24 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-95,192-287
NUMA node1 CPU(s): 96-191,288-383
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: M
|
https://github.com/pytorch/pytorch/issues/165051
|
open
|
[
"needs reproduction",
"triaged",
"oncall: pt2",
"module: dynamo"
] | 2025-10-09T11:43:27Z
| 2025-10-10T17:59:08Z
| 3
|
fxmarty-amd
|
huggingface/datasets
| 7,802
|
[Docs] Missing documentation for `Dataset.from_dict`
|
Documentation link: https://huggingface.co/docs/datasets/en/package_reference/main_classes
Link to method (docstring present): https://github.com/huggingface/datasets/blob/6f2502c5a026caa89839713f6f7c8b958e5e83eb/src/datasets/arrow_dataset.py#L1029
The docstring is present for the function, but seems missing from the official documentation for the `Dataset` class on HuggingFace.
The method in question:
```python
@classmethod
def from_dict(
    cls,
    mapping: dict,
    features: Optional[Features] = None,
    info: Optional[DatasetInfo] = None,
    split: Optional[NamedSplit] = None,
) -> "Dataset":
    """
    Convert `dict` to a `pyarrow.Table` to create a [`Dataset`].
    Important: a dataset created with from_dict() lives in memory
    and therefore doesn't have an associated cache directory.
    This may change in the future, but in the meantime if you
    want to reduce memory usage you should write it back on disk
    and reload using e.g. save_to_disk / load_from_disk.
    Args:
        mapping (`Mapping`):
            Mapping of strings to Arrays or Python lists.
        features ([`Features`], *optional*):
            Dataset features.
        info (`DatasetInfo`, *optional*):
            Dataset information, like description, citation, etc.
        split (`NamedSplit`, *optional*):
            Name of the dataset split.
    Returns:
        [`Dataset`]
    """
```
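For context, a minimal usage sketch of the documented method (values are illustrative):
```python
from datasets import Dataset

# Build an in-memory Dataset from a plain Python dict (illustrative values).
ds = Dataset.from_dict({"text": ["hello", "world"], "label": [0, 1]})
print(ds.num_rows)  # 2
print(ds[0])        # {'text': 'hello', 'label': 0}
```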
|
https://github.com/huggingface/datasets/issues/7802
|
open
|
[] | 2025-10-09T02:54:41Z
| 2025-10-19T16:09:33Z
| 2
|
aaronshenhao
|
pytorch/pytorch
| 164,971
|
[dynamo] Keep stack trace where mutations happened
|
### 🐛 Describe the bug
This is essential to figure out where we want to use strict-export but there is a side effect, and we want to inform the user about how to rewrite their code to remove the side-effect.
### Error logs
_No response_
### Versions
NA
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames @Lucaskabela @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4
|
https://github.com/pytorch/pytorch/issues/164971
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamo",
"oncall: export"
] | 2025-10-08T18:52:09Z
| 2025-10-09T17:23:32Z
| 1
|
anijain2305
|
pytorch/pytorch
| 164,966
|
XPU OOM when allocate tensor according to its reported available memory
|
### 🐛 Describe the bug
Run the code below:
```
import torch
torch.xpu.empty_cache()
## bring up the context, it may occupy memory
a = torch.rand(5).to("xpu:0")
free_memory_bytes = torch.xpu.mem_get_info("xpu:0")[0]
required_memory_bytes = 5000 * 5000 * (32 // 8)
# Leaving 50 MB of free memory for possible buffers, etc.
n_vals = (free_memory_bytes - required_memory_bytes - int(50e6)) // (32 // 8)
foo = torch.rand(n_vals, device="xpu:0")
```
You'll get exception as below:
> Traceback (most recent call last):
> File "/workspace/accelerate/./test.py", line 13, in <module>
> foo = torch.rand(n_vals, device="xpu:0")
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> torch.OutOfMemoryError: XPU out of memory. Tried to allocate 63.71 GiB. GPU 0 has a total capacity of 63.98 GiB. Of the allocated memory 512 bytes is allocated by PyTorch, and 2.00 MiB is reserved by PyTorch but unallocated. Please use `empty_cache` to release all unoccupied cached memory.
### Versions
latest xpu pytorch
cc @gujinghui @EikanWang @fengyuan14 @guangyey
|
https://github.com/pytorch/pytorch/issues/164966
|
open
|
[
"module: memory usage",
"triaged",
"module: xpu"
] | 2025-10-08T18:39:18Z
| 2025-10-11T01:40:46Z
| 3
|
yao-matrix
|
pytorch/pytorch
| 164,951
|
Docker checkouts take 30+ min on H100 runners
|
### 🐛 Describe the bug
See https://github.com/pytorch/pytorch/actions/runs/18344478781/job/52264153169 for an example where "Pull docker image" takes 37 min!!! Can we cache or slim down the Docker image, or connect those runners to a more powerful IO system?
### Versions
CI
cc @seemethere @pytorch/pytorch-dev-infra
|
https://github.com/pytorch/pytorch/issues/164951
|
open
|
[
"module: ci",
"triaged"
] | 2025-10-08T17:12:15Z
| 2025-10-08T17:12:25Z
| 0
|
malfet
|
pytorch/pytorch
| 164,922
|
`torch.compile` fails to trace `datetime.now()` with Dynamo guard check failure
|
### 🐛 Describe the bug
When compiling a model that uses `datetime.now()` function, `torch.compile` fails with a Dynamo guard check error. The warning message explicitly identifies this as a Python builtin that Dynamo cannot trace, and suggests filing an issue to add support.
```python
import torch
from datetime import datetime
class TestModel(torch.nn.Module):
    def forward(self, x):
        current_time = datetime.now()
        return x + current_time.second
x = torch.randn(5)
model = TestModel()
print("Eager output:", model(x))
print("Compiled output:", torch.compile(model)(x))
```
### Error logs
```
Eager output: tensor([51.0676, 52.0309, 52.6077, 50.6691, 53.6591])
D:\Programs\Python\virtualenvs\torch_code-afvE469o\lib\site-packages\torch\_dynamo\variables\functions.py:1598: UserWarning: Dynamo does not know how to trace the builtin `<unknown module>.datetime.now.` This function is either a Python builtin (e.g. _warnings.warn) or a third-party C/C++ Python extension (perhaps created with pybind).
If it is a Python builtin, please file an issue on GitHub so the PyTorch team can add support for it and see the next case for a workaround.
If it is a third-party C/C++ Python extension, please either wrap it into a PyTorch-understood custom operator (see https://pytorch.org/tutorials/advanced/custom_ops_landing_page.html for more details) or, if it is traceable, use `torch.compiler.allow_in_graph`.
torch._dynamo.utils.warn_once(explanation + "\n" + "\n".join(hints))
Traceback (most recent call last):
File "E:\DL_Compiler_Test\torch_code\test.py", line 12, in <module>
print("Compiled output:", torch.compile(model)(x))
File "D:\Programs\Python\virtualenvs\torch_code-afvE469o\lib\site-packages\torch\_dynamo\eval_frame.py", line 418, in __call__
return super().__call__(*args, **kwargs)
File "D:\Programs\Python\virtualenvs\torch_code-afvE469o\lib\site-packages\torch\nn\modules\module.py", line 1777, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "D:\Programs\Python\virtualenvs\torch_code-afvE469o\lib\site-packages\torch\nn\modules\module.py", line 1788, in _call_impl
return forward_call(*args, **kwargs)
File "D:\Programs\Python\virtualenvs\torch_code-afvE469o\lib\site-packages\torch\_dynamo\eval_frame.py", line 886, in compile_wrapper
return fn(*args, **kwargs)
File "D:\Programs\Python\virtualenvs\torch_code-afvE469o\lib\site-packages\torch\nn\modules\module.py", line 1777, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "D:\Programs\Python\virtualenvs\torch_code-afvE469o\lib\site-packages\torch\nn\modules\module.py", line 1788, in _call_impl
return forward_call(*args, **kwargs)
File "D:\Programs\Python\virtualenvs\torch_code-afvE469o\lib\site-packages\torch\_dynamo\convert_frame.py", line 2010, in __call__
result = self._torchdynamo_orig_backend(
File "D:\Programs\Python\virtualenvs\torch_code-afvE469o\lib\site-packages\torch\_dynamo\convert_frame.py", line 1760, in __call__
result = self._inner_convert(
File "D:\Programs\Python\virtualenvs\torch_code-afvE469o\lib\site-packages\torch\_dynamo\convert_frame.py", line 691, in __call__
result = _compile(
File "D:\Programs\Python\virtualenvs\torch_code-afvE469o\lib\site-packages\torch\_dynamo\convert_frame.py", line 1569, in _compile
guarded_code, tracer_output = compile_inner(code, one_graph, hooks)
File "D:\Programs\Python\virtualenvs\torch_code-afvE469o\lib\site-packages\torch\_utils_internal.py", line 97, in wrapper_function
return function(*args, **kwargs)
File "D:\Programs\Python\virtualenvs\torch_code-afvE469o\lib\site-packages\torch\_dynamo\convert_frame.py", line 1251, in compile_inner
return _compile_inner(code, one_graph, hooks)
File "D:\Programs\Python\virtualenvs\torch_code-afvE469o\lib\site-packages\torch\_dynamo\convert_frame.py", line 1385, in _compile_inner
check_fn = dynamo_output.build_guards(
File "D:\Programs\Python\virtualenvs\torch_code-afvE469o\lib\site-packages\torch\_dynamo\convert_frame.py", line 860, in build_guards
return CheckFunctionManager(
File "D:\Programs\Python\virtualenvs\torch_code-afvE469o\lib\site-packages\torch\_dynamo\guards.py", line 3593, in __init__
raise AssertionError(f"Guard check failed: {reasons}")
AssertionError: Guard check failed: 0/0: ___check_obj_id(G['datetime'].now, 2702242757856) # current_time = datetime.now() # E:\DL_Compiler_Test\torch_code\test.py:6 in forward
```
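A possible workaround sketch (not from the issue): compute the timestamp in eager code and pass it into the compiled forward as a tensor, so Dynamo never has to trace `datetime.now()` itself:
```python
import torch
from datetime import datetime

class TestModel(torch.nn.Module):
    def forward(self, x, second):
        # `second` is computed outside the compiled region and passed in.
        return x + second

x = torch.randn(5)
model = torch.compile(TestModel())
second = torch.tensor(float(datetime.now().second))
print("Compiled output:", model(x, second))
```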
### Versions
Collecting environment information...
PyTorch version: 2.10.0.dev20251005+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 11
GCC version: Could not collect
Clang version: Could not collect
CMake version: version 4.0.2
Libc version: N/A
Python version: 3.10.10 (tags/v3.10.10:aad5f6a, Feb 7 2023, 17:20:36) [MSC v.1929 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.26100-SP0
Is CUDA available: Fal
|
https://github.com/pytorch/pytorch/issues/164922
|
open
|
[
"triaged",
"function request",
"oncall: pt2",
"module: dynamo"
] | 2025-10-08T10:19:41Z
| 2025-10-14T20:25:33Z
| 9
|
LiSsHhUuAaIi
|
huggingface/transformers
| 41,431
|
gradient scaling occurs even though total gradient remains < max_grad_norm in trainer.py
|
Even though gradients remain below max_grad_norm throughout training, they still go through the clipping machinery. For instance, I set max_grad_norm = 1, and grad_norm consistently stays <= 0.33. Because the trainer runs the grad-clip step whenever max_grad_norm is not None and > 0, this operation always gets executed inside torch's clip function: `clip_coef = max_norm / (total_norm + 1e-6)`. Is there a way to prevent this? Thanks.
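For reference, a small standalone check (not part of the original report): torch's `clip_grad_norm_` clamps the clip coefficient to at most 1.0, so although the multiply still runs, gradients below `max_norm` are left numerically unchanged:
```python
import torch

p = torch.nn.Parameter(torch.ones(3))
p.grad = torch.tensor([0.1, 0.2, 0.2])           # total norm 0.3 < max_norm
before = p.grad.clone()
torch.nn.utils.clip_grad_norm_([p], max_norm=1.0)
print(torch.allclose(p.grad, before))             # True: effectively unchanged
```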
|
https://github.com/huggingface/transformers/issues/41431
|
closed
|
[] | 2025-10-07T22:13:08Z
| 2025-11-15T08:02:51Z
| 7
|
lorsonblair
|
pytorch/pytorch
| 164,878
|
Ban and remove plain asserts with no message in our python code
|
In a similar spirit to https://github.com/pytorch/pytorch/issues/148114
We should remove asserts without any message explaining what is happening.
On top of that, we should move them to proper errors to avoid any issue with python -O.
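For illustration, a hedged sketch of the kind of rewrite this implies (hypothetical code, not taken from the repository):
```python
import torch

def check_matrix(tensor: torch.Tensor) -> None:
    # Before: `assert tensor.dim() == 2` -- silently skipped under `python -O`
    # and gives no context when it fails.
    # After: always checked, with a message explaining what went wrong.
    if tensor.dim() != 2:
        raise ValueError(f"expected a 2D tensor, got {tensor.dim()} dimensions")

check_matrix(torch.zeros(3, 4))  # passes silently
```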
There are two parts here:
- [x] Enable Ruff lint for this https://docs.astral.sh/ruff/rules/assert/ (with appropriate skips)
- [ ] Remove all the existing ones
cc @malfet
|
https://github.com/pytorch/pytorch/issues/164878
|
open
|
[
"module: error checking",
"triaged",
"actionable",
"module: python frontend"
] | 2025-10-07T21:36:50Z
| 2025-12-16T20:02:43Z
| 26
|
albanD
|
huggingface/candle
| 3,120
|
AutoModel / PreTrainedModel equivalent magic ?
|
Hello all, first, thanks a lot for this wonderful crate.
I was wondering whether it's on the roadmap, or whether there is already a way to get the same magic as in Python with `AutoModel.from_pretrained("the_model_name_string")`.
I'm prototyping and often changing models, which requires changing the architecture every time, so having this "auto load" would save time.
Alternatives : https://github.com/lucasjinreal/Crane or https://docs.rs/kalosm/latest/kalosm/
Thanks in advance,
Have a nice day.
|
https://github.com/huggingface/candle/issues/3120
|
open
|
[] | 2025-10-07T21:27:31Z
| 2025-10-09T13:02:35Z
| 2
|
ierezell
|
huggingface/lerobot
| 2,134
|
what is the transformers version for latest lerobot pi0?
|
### System Info
```Shell
- lerobot version: 0.3.4
- Platform: Linux-5.4.0-148-generic-x86_64-with-glibc2.31
- Python version: 3.10.18
- Huggingface Hub version: 0.35.3
- Datasets version: 4.1.1
- Numpy version: 1.26.4
- PyTorch version: 2.7.1+cu126
- Is PyTorch built with CUDA support?: True
- Cuda version: 12.6
- GPU model: NVIDIA A800-SXM4-80GB
- Using GPU in script?:
lerobot-eval --policy.path="lerobot/pi0_libero_finetuned" --env.type=libero --env.task=libero_goal --eval.batch_size=1 --eval.n_episodes=2 --seed=1000
```
### Information
- [x] One of the scripts in the examples/ folder of LeRobot
- [ ] My own task or dataset (give details below)
### Reproduction
Clone latest LeRobot repository and install dependencies and run lerobot_eval.py
```
lerobot-eval --policy.path="lerobot/pi0_libero_finetuned" --env.type=libero --env.task=libero_goal --eval.batch_size=1 --eval.n_episodes=2 --seed=1000
```
```
Traceback (most recent call last):
File "/cephfs/yuanpuzhen/conda_data/envs/libero/bin/lerobot-eval", line 7, in <module>
sys.exit(main())
File "/cephfs/yuanpuzhen/project/pi_space/lerobot/src/lerobot/scripts/lerobot_eval.py", line 750, in main
eval_main()
File "/cephfs/yuanpuzhen/project/pi_space/lerobot/src/lerobot/configs/parser.py", line 225, in wrapper_inner
response = fn(cfg, *args, **kwargs)
File "/cephfs/yuanpuzhen/project/pi_space/lerobot/src/lerobot/scripts/lerobot_eval.py", line 495, in eval_main
policy = make_policy(
File "/cephfs/yuanpuzhen/project/pi_space/lerobot/src/lerobot/policies/factory.py", line 386, in make_policy
policy = policy_cls.from_pretrained(**kwargs)
File "/cephfs/yuanpuzhen/project/pi_space/lerobot/src/lerobot/policies/pi0/modeling_pi0.py", line 923, in from_pretrained
model = cls(config, **kwargs)
File "/cephfs/yuanpuzhen/project/pi_space/lerobot/src/lerobot/policies/pi0/modeling_pi0.py", line 872, in __init__
self.model = PI0Pytorch(config)
File "/cephfs/yuanpuzhen/project/pi_space/lerobot/src/lerobot/policies/pi0/modeling_pi0.py", line 545, in __init__
raise ValueError(msg) from None
ValueError: An incorrect transformer version is used, please create an issue on https://github.com/huggingface/lerobot/issues
Exception ignored in: <function MjRenderContext.__del__ at 0x7fb47e108ee0>
Traceback (most recent call last):
File "/cephfs/yuanpuzhen/conda_data/envs/libero/lib/python3.10/site-packages/robosuite/utils/binding_utils.py", line 199, in __del__
self.gl_ctx.free()
File "/cephfs/yuanpuzhen/conda_data/envs/libero/lib/python3.10/site-packages/robosuite/renderers/context/egl_context.py", line 150, in free
EGL.eglDestroyContext(EGL_DISPLAY, self._context)
File "/cephfs/yuanpuzhen/conda_data/envs/libero/lib/python3.10/site-packages/OpenGL/error.py", line 230, in glCheckError
raise self._errorClass(
OpenGL.raw.EGL._errors.EGLError: EGLError(
err = EGL_NOT_INITIALIZED,
baseOperation = eglDestroyContext,
cArguments = (
<OpenGL._opaque.EGLDisplay_pointer object at 0x7fb47c6805c0>,
<OpenGL._opaque.EGLContext_pointer object at 0x7fb47c6804c0>,
),
result = 0
)
```
### Expected behavior
Expect to evaluate the given checkpoint, output eval videos and eval_info.json
Can you provide stable transformers and numpy versions for the latest lerobot?
And what version of transformers satisfies the check in PI0Pytorch?
```
try:
    from transformers.models.siglip import check
    if not check.check_whether_transformers_replace_is_installed_correctly():
        raise ValueError(msg)
except ImportError:
    raise ValueError(msg) from None
```
|
https://github.com/huggingface/lerobot/issues/2134
|
closed
|
[] | 2025-10-07T12:06:52Z
| 2025-11-14T20:04:50Z
| null |
PuzhenYuan
|
pytorch/torchtitan
| 1,805
|
TP gradient update is wrong during MoE backward
|
### Bug description
https://github.com/pytorch/torchtitan/blob/main/torchtitan/experiments/llama4/infra/parallelize.py#L454
TP uses the DTensor's local tensor by calling to_local(), but the local tensor's gradient cannot be correctly propagated back to the DTensor, because we didn't set grad_placements to tell autograd how to back-propagate the gradients. So we miss a reduce_scatter() during backward at this line.
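A minimal sketch of the mechanism being described (a hedged illustration, not the torchtitan fix itself):
```python
# Illustrative only; assumes a torch >= 2.5-style public DTensor API and a
# `torchrun --nproc_per_node=N` launch. The placements are made up for the demo.
import torch
import torch.distributed as dist
from torch.distributed.device_mesh import init_device_mesh
from torch.distributed.tensor import Partial, Shard, distribute_tensor

dist.init_process_group("gloo")
mesh = init_device_mesh("cpu", (dist.get_world_size(),))

w = distribute_tensor(torch.randn(8, 4, requires_grad=True), mesh, [Shard(0)])

# Plain w.to_local() gives autograd no information about how the local gradient
# maps back onto the DTensor; declaring the expected gradient placement lets
# DTensor interpret (and, where needed, reduce) the local grad during backward.
local_w = w.to_local(grad_placements=[Partial()])
local_w.sum().backward()
print(w.grad)

dist.destroy_process_group()
```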
### Versions
Current main torchtitan
|
https://github.com/pytorch/torchtitan/issues/1805
|
closed
|
[
"high priority",
"triage review"
] | 2025-10-07T03:43:55Z
| 2025-10-15T03:32:04Z
| 1
|
wwwjn
|
pytorch/pytorch
| 164,786
|
How should we handle PyTorch build flags in torch/headeronly for custom ops?
|
### 🐛 Describe the bug
This isn't exactly a bug, per se, but it is misleading. Thanks to @mikaylagawarecki pointing out the following phenomenon in a parallel file, I'm realizing we have the following behavior in torch/headeronly/util/Half.h today:
Consider the following ifdef
https://github.com/pytorch/pytorch/blob/6861fa43e5fee7fedc0213e352fa983edea8aa78/torch/headeronly/util/Half.h#L44-L47
When libtorch is compiling Half.h, it will properly generate the fast vectorization logic depending on how CPU_CAPABILITY_AVX2 and CPU_CAPABILITY_AVX512 is set. Great. This is expected.
What may be unexpected is that custom ops including the headeronly Half.h will _not_ have CPU_CAPABILITY_AVX2 or CPU_CAPABILITY_AVX512 set and so will not have performant CPU code for `float2half_scalar` and `half2float_scalar` of Half.h.
### Versions
on main
cc @malfet @seemethere @chauhang @penguinwu @zou3519 @bdhirsh @swolchok
|
https://github.com/pytorch/pytorch/issues/164786
|
open
|
[
"module: build",
"triaged",
"module: custom-operators",
"oncall: pt2",
"module: pt2-dispatcher"
] | 2025-10-06T21:22:09Z
| 2025-10-07T15:26:28Z
| 1
|
janeyx99
|
huggingface/diffusers
| 12,441
|
Support Wan2.2-Animate
|
[Wan2.2-Animate-14B](https://humanaigc.github.io/wan-animate), it's a unified model for character animation and replacement, with holistic movement and expression replication.
https://github.com/user-attachments/assets/351227d0-4edc-4f6c-9bf9-053e53f218e4
We would like to open this to the community: if anyone is interested in integrating this model with Diffusers, just take these points into consideration:
1. Don't integrate the preprocessing; we can help with that using a modular custom block.
2. This issue is for more advanced users that know the diffusers library very well.
Just let me know that you're interested, and if you have any doubts, feel free to ask. If you open a PR we can help, but we are currently busy with other priorities, so we ask you to be patient.
|
https://github.com/huggingface/diffusers/issues/12441
|
closed
|
[
"help wanted",
"contributions-welcome"
] | 2025-10-06T18:08:21Z
| 2025-11-13T02:52:32Z
| 0
|
asomoza
|
huggingface/lerobot
| 2,124
|
Question regarding downsampling and resizing dataset
|
Hi,
Thank you for providing this wonderful library! I was curious about how one can take an existing dataset (collected or downloaded) and modify the fps (downsample), resize images, or delete specific episodes (for v3) prior to policy training. I'm finding this tricky to do, particularly when the dataset is not loaded in code but provided as a parameter to lerobot-train. I've spent time digging around the codebase but didn't see a way that doesn't involve loading the dataset in a script first and adjusting it there (for resizing; I'm not sure about downsampling fps). Does the codebase provide utility functions for this? Thanks!
|
https://github.com/huggingface/lerobot/issues/2124
|
open
|
[
"question",
"dataset",
"good first issue"
] | 2025-10-06T16:07:47Z
| 2025-10-07T20:25:20Z
| null |
karthikm-0
|
huggingface/transformers
| 41,363
|
RT-Detr docs should reflect fixed 640x640 input size
|
The authors of RT-Detr mention that the model was trained on 640x640 images and was meant to be used for inference on 640x640 images. Also, the current implementation has certain quirks that make training/inferring on images of different sizes problematic. For example, the pixel masks used for batching images of varying sizes are discarded.
https://github.com/huggingface/transformers/blob/0452f28544f3626273d25f07f83c0e5f7da2d47a/src/transformers/models/rt_detr/modeling_rt_detr.py#L1645
The above are not clear in the current docs. I'll open a PR which adds a few lines in the docs to notify users about these issues.
|
https://github.com/huggingface/transformers/issues/41363
|
closed
|
[
"Documentation"
] | 2025-10-06T11:04:37Z
| 2025-11-06T13:24:01Z
| 4
|
konstantinos-p
|
pytorch/ao
| 3,122
|
Access to compact internal representation for `target_dtype=torch.uint4`
|
Hello, for my use case, I need to access and store the internal representation of 4-bit quantization. This is because I'd like to quantize and write back part of the full buffer. Think about "add some new channels" or "overwrite content of a channel".
I have problems getting to the compressed representation. I wrote this:
```
import torch  # needed below for dtypes and randn
from torchao.quantization.observer import AffineQuantizedMinMaxObserver
from torchao.quantization.granularity import PerAxis
from torchao.quantization.quant_primitives import MappingType
from torchao.dtypes import to_affine_quantized_intx_static
from torchao.dtypes.affine_quantized_tensor import (
get_tensor_impl_constructor,
AffineQuantizedTensor,
)
from torchao.dtypes.utils import PlainLayout
source_dtype = torch.float32
target_dtype = torch.uint4
blocksize = 4096
num_slots = 128
input_float = torch.randn((num_slots, blocksize), dtype=source_dtype)
print(f"shape={(num_slots, blocksize)}: Compute scales, zero_points")
obs = AffineQuantizedMinMaxObserver(
mapping_type=MappingType.ASYMMETRIC,
target_dtype=target_dtype,
granularity=PerAxis(axis=0),
eps=torch.finfo(torch.float32).eps,
scale_dtype=torch.float32,
zero_point_dtype=torch.float32,
)
obs(input_float)
scales, zero_points = obs.calculate_qparams()
# Quantize
print("Quantize")
quant_tensor = to_affine_quantized_intx_static(
input_float=input_float,
scale=scales,
zero_point=zero_points,
block_size=(1, blocksize),
target_dtype=target_dtype,
)
int_data = quant_tensor.tensor_impl.get_plain()[0]
print(f"input_float {input_float.shape}, blocksize={blocksize}, int_data {int_data.shape}, int_data.dtype={int_data.dtype}")
print(f"int_data.min={int_data.min().item()}, int_data.max={int_data.max().item()}")
# Dequantize
print("Dequantize")
tensor_impl_ctr = get_tensor_impl_constructor(PlainLayout)
tensor_impl = tensor_impl_ctr(
int_data, scales, zero_points, PlainLayout(),
)
reconstructed = AffineQuantizedTensor(
tensor_impl,
block_size=(1, blocksize),
shape=int_data.shape,
dtype=source_dtype,
).dequantize(output_dtype=source_dtype)
print(f"reconstructed {reconstructed.shape}, dtype={reconstructed.dtype}")
```
From this, I get that `quant_tensor.tensor_impl.get_plain()[0]` returns an array of the correct shape, but with `dtype=torch.uint8`, yet values are in fact in the `uint4` range.
This cannot be how you store these things internally, otherwise this would not be 4-bit quantization.
Is there a way to get to the internal representation? I suppose this is something like a `(num_slots, blocksize // 2)` array of `uint8` type. I can compute this myself, but this seems like a detour.
I know it is not nice to have to use internal representations, but your external API just does not support what I need.
Essentially, I want to maintain the quantized version of a buffer of shape `(all_slots, blocksize)`, but be able to modify slices. Say `buffer[a:b, :]` changes, I want to only re-quantize this part and write it back. I don't want to compute and store your supported representations for every single slot, that would be slow. So, getting to the internal representation seems the way to go, unless you'd support such use cases directly.
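For what it's worth, a hedged sketch of the manual nibble-packing detour mentioned above (two uint4 values per uint8 byte); this is only an illustration, not how torchao lays out its internal storage:
```python
import torch

def pack_uint4(int_data: torch.Tensor) -> torch.Tensor:
    # Pack pairs of uint4 values (stored in uint8) into single bytes.
    assert int_data.dtype == torch.uint8 and int_data.shape[-1] % 2 == 0, "need uint8 values, even last dim"
    lo, hi = int_data[..., 0::2], int_data[..., 1::2]
    return lo | (hi << 4)

def unpack_uint4(packed: torch.Tensor) -> torch.Tensor:
    # Inverse of pack_uint4: split each byte back into two uint4 values.
    lo, hi = packed & 0x0F, (packed >> 4) & 0x0F
    out = torch.empty(*packed.shape[:-1], packed.shape[-1] * 2, dtype=torch.uint8)
    out[..., 0::2], out[..., 1::2] = lo, hi
    return out

x = torch.randint(0, 16, (4, 8), dtype=torch.uint8)
assert torch.equal(unpack_uint4(pack_uint4(x)), x), "round trip failed"
```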
|
https://github.com/pytorch/ao/issues/3122
|
open
|
[
"question",
"triaged"
] | 2025-10-06T11:02:12Z
| 2025-10-09T08:29:55Z
| null |
mseeger
|
pytorch/xla
| 9,670
|
`all_reduce` does not apply `scale` when `xr.world_size == 1`
|
## ❓ Questions and Help
Hi, I have noticed that when `world_size == 1`, `all_reduce` is a no-op and does not apply `scale`:
In `torch_xla.core.xla_model` in `def all_reduce`:
```
  # No-op if there is only one device
  if runtime.world_size() == 1 and not xu.getenv_as('XLA_ALWAYS_ALLREDUCE',
                                                    bool, False):
    if isinstance(inputs, torch.Tensor):
      return inputs.clone()
    else:
      return inputs
```
Is this intended behavior? If it is indeed intended, it makes the use of `all_reduce` inconsistent when using `world_size == 1` vs `world_size > 1`. The issue manifests, for example, when you are logging running average loss value:
```
epoch_loss = xm.all_reduce(xm.REDUCE_SUM, loss_accum, scale=1.0 / ((idx + 1) * world_size))
```
|
https://github.com/pytorch/xla/issues/9670
|
open
|
[
"question",
"distributed"
] | 2025-10-06T04:40:24Z
| 2025-10-17T06:31:12Z
| null |
afzalxo
|
pytorch/pytorch
| 164,696
|
Support torch._inductor.config.inplace_buffers for custom_op whenever possible
|
### 🚀 The feature, motivation and pitch
Is it possible to add this support to custom_op?
The user would annotate which buffers can be used in-place, and torch.compile should reuse buffers whenever possible (if they are not required by other ops, backward, etc.).
This is to reduce memory usage.
### Alternatives
_No response_
### Additional context
_No response_
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @aakhundov @coconutruben @zou3519 @bdhirsh
|
https://github.com/pytorch/pytorch/issues/164696
|
open
|
[
"triaged",
"module: custom-operators",
"function request",
"oncall: pt2",
"module: inductor",
"module: pt2-dispatcher"
] | 2025-10-05T08:30:21Z
| 2025-11-12T20:52:44Z
| 6
|
mayank31398
|
huggingface/tokenizers
| 1,873
|
Why is my Python implementation faster than the Rust implementation?
|
I am comparing the Python (transformers) and Rust (tokenizers) implementations of the tokenizer as follows
```python
import json
import time
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
[... Define and save the texts as data.json]
N = 500
with open('./data.json', 'w', encoding='utf-8') as f:
    json.dump(texts[:N], f, ensure_ascii=False)
start = time.time()
for text in texts[:N]:
    tokenizer(text)
end = time.time()
loop_time = end-start
print("Python in a loop: ",end-start, f"for {N} examples.")
# Python in a loop: 4.231077432632446 for 500 examples.
start = time.time()
results = tokenizer(texts[:N])
end = time.time()
batch_time = end-start
print("Python as a batch: ",batch_time, f"for {N} examples.")
# Python as a batch: 0.86988 for 500 examples.
```
and the rust implementation
```rust
use tokenizers::tokenizer::{Result as TokenizerResult, Tokenizer,Encoding};
use serde_json::Result as SerdeResult;
use std::time::Instant;
use std::fs::File;
use std::io::{BufReader,BufWriter, Write};
use std::any::type_name;
use rayon::prelude::*;
fn main() -> TokenizerResult<()> {
// needs http feature enabled
let tokenizer = Tokenizer::from_pretrained("bert-base-cased", None)?;
let file = File::open("./data.json")?;
let reader = BufReader::new(file);
let items: Vec<String> = serde_json::from_reader(reader)?;
let texts: Vec<&str> = items.iter().map(|s| s.as_str()).collect();
let start = Instant::now();
for name in texts.iter(){
let encoding = tokenizer.encode(*name, false)?;
}
let duration = start.elapsed();
println!("(1) Execution in loop: {:.6} seconds", duration.as_secs_f64());
// (1) Execution in loop: 29.867990 seconds
let start = Instant::now();
let encoded_items: Vec<_> = texts.par_iter().map(|name| tokenizer.encode(*name, false)).collect();
let duration = start.elapsed();
println!("(2) Execution with par_iter : time: {:.6} seconds", duration.as_secs_f64());
// (2) Execution with par_iter : 3.968467
let start = Instant::now();
let encoded_items: TokenizerResult<Vec<Encoding>> = tokenizer.encode_batch(items.clone(), false);
let duration = start.elapsed();
println!("(3) Execution with encode_batch : time: {:.6} seconds", duration.as_secs_f64());
// (3) Execution with encode_batch : 3.968467 seconds
let start = Instant::now();
let encoded_items: TokenizerResult<Vec<Encoding>> = tokenizer.encode_batch_char_offsets(items.clone(), false);
let duration = start.elapsed();
println!("(4) Execution with encode_batch_char_offsets : time: {:.6} seconds", duration.as_secs_f64());
// (4) Execution with encode_batch_char_offsets : 6.839765 seconds
let start = Instant::now();
let encoded_items: TokenizerResult<Vec<Encoding>> = tokenizer.encode_batch_fast(items.clone(), false);
let duration = start.elapsed();
println!("(5) Execution with encode_batch_fast : time: {:.6} seconds", duration.as_secs_f64());
// (5) Execution with encode_batch_fast : 5.758732 seconds
Ok(())
}
```
You see that Rust is 10 times slower in a loop and 3 times slower even when parallelization is used.
What is the trick here? How can I make my Rust code as fast as (or hopefully faster than) the Python code?
|
https://github.com/huggingface/tokenizers/issues/1873
|
closed
|
[] | 2025-10-05T08:02:47Z
| 2025-10-08T17:41:28Z
| 4
|
sambaPython24
|
pytorch/pytorch
| 164,662
|
Improper batch processing in torch.linalg.eig with cuda
|
### 🚀 The feature, motivation and pitch
When calculating eigenvalues of large non-symmetric matrices, I noticed that torch processes the matrices one by one, with only one core getting loaded. The processing time for multiple matrices is roughly the same whether I use a Python loop or a batched call to linalg.eig. I think parallelizing this could drastically improve the performance of eigenvalue calculations, roughly by a factor on the order of the number of available CPU cores.
I think the issue comes from torch relying solely on magma's geev implementation, which seems to be single-threaded for large parts of the execution. By parallelizing the execution across multiple GPUs (with none of them seeing any load) or using the CPU as an additional device, I saw speedups on the order of 2x for 2 simultaneous calls. Therefore, I think it would be beneficial to try to perform the eigenvalue decomposition of multiple matrices in parallel.
Please also see [discuss.pytorch.org](https://discuss.pytorch.org/t/torch-linalg-eig-parallelisation/223386/7) for the discussion on the matter and to see the code I have used in order to evaluate this.
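For reference, a minimal timing sketch of the loop-versus-batch comparison described above (sizes are illustrative; the reported effect concerns the CUDA backend):
```python
import time
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
b, n = 16, 512
mats = torch.randn(b, n, n, device=device)

def timed(fn):
    # Synchronize around the call so GPU work is fully counted.
    if device == "cuda":
        torch.cuda.synchronize()
    t0 = time.time()
    fn()
    if device == "cuda":
        torch.cuda.synchronize()
    return time.time() - t0

loop_t = timed(lambda: [torch.linalg.eig(mats[i]) for i in range(b)])
batch_t = timed(lambda: torch.linalg.eig(mats))
print(f"loop: {loop_t:.3f}s  batched: {batch_t:.3f}s")
```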
### Alternatives
Alternatively, it might also be beneficial to implement cuSolver's geev routine cusolverDnXgeev. I am trying to evaluate its performance compared to magma's geev implementation, but I only have limited experience in C++.
### Additional context
I would love to contribute myself, but I have only used C++ in the context of Arduino and ESP32 Microcontrollers, so this would probably only make sense if someone with some more experience could share some advice on how to tackle this.
cc @ptrblck @msaroufim @eqy @jerryzh168 @jianyuh @nikitaved @mruberry @walterddr @xwang233 @Lezcano
|
https://github.com/pytorch/pytorch/issues/164662
|
open
|
[
"module: cuda",
"triaged",
"module: linear algebra"
] | 2025-10-04T16:38:37Z
| 2025-10-07T21:39:03Z
| 0
|
johannesz-codes
|
huggingface/transformers
| 41,336
|
is there a bug in group_videos_by_shape for qwenvl video preprocessiong?
|
### System Info
In src/transformers/video_utils.py, `group_videos_by_shape` does
`grouped_videos = {shape: torch.stack(videos, dim=0) for shape, videos in grouped_videos.items()}`, where each video is of shape BTCHW. This creates a new leading dimension.
However, the Qwen-VL video preprocessing does
`batch_size, grid_t, channel = patches.shape[:3]`
and does not account for the additional dimension created in `group_videos_by_shape`.
I think we should use torch.cat, not torch.stack?
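A standalone illustration of the dimension difference in question (shapes are made up):
```python
import torch

videos = [torch.zeros(2, 8, 3, 32, 32) for _ in range(4)]  # each is (B, T, C, H, W)
print(torch.stack(videos, dim=0).shape)  # torch.Size([4, 2, 8, 3, 32, 32]) -> extra leading dim
print(torch.cat(videos, dim=0).shape)    # torch.Size([8, 8, 3, 32, 32])    -> merged along batch
```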
@yonigozlan @molbap
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
running video preprocessing with list of video inputs, each with different shape
### Expected behavior
run without error
|
https://github.com/huggingface/transformers/issues/41336
|
closed
|
[
"bug"
] | 2025-10-03T22:26:26Z
| 2025-10-03T22:44:43Z
| 1
|
dichencd
|
pytorch/ao
| 3,120
|
Question: How to implement my quantization algorithm?
|
The docs mention that one could ask here for help if unsure how to implement a new quantization algorithm with `torchao`, so I'll use that chance.
First, in general, the current situation around pytorch quantization seems a bit unclear to me. As far as I understand:
- there used to be two quantization APIs: "Eager" and "FX Graph Mode"
- now there is a third API: "PT2E"
- the quantization implementation moved from `torch` to `ao` package
- all of this is still in experimental phase
But the docs still seem very unpolished (broken links, missing explanations), so I'm confused about the current state of this. In particular, let's say I have a new quantization algorithm (say similar to GPTQ), and I want to make experiments to evaluate it on large models like Llama4, gpt-oss, etc. Could I already use PT2E for that or is it still too unstable? Would I rather use GPTQModel perhaps? Or something else?
And then, my question is how I would implement it, because I'm not sure if `torchao` supports the correct "flow". Let me explain the necessary flow based on a concrete example. Let's say we have a neural network with the following layers:
- `Linear(28*28, 512)`
- `ReLU`
- `Linear(512, 128)`
- `ReLU`
- `Linear(128, 10)`
Now I want to turn the linear layers (which use `float32`) into quantized linear layers whose scalars are 4-bit or so. My algorithm needs calibration data. Let's call the calibration data (i.e. sample inputs) `X`. Now the flow for quantization would look like this:
1. **Quantize `Linear(28*28, 512)` into `QuantizedLinear(28*28,512)`**.
This needs the calibration data `X`. (as well as the original `float32` weights of linear layer of course)
2. **Quantize `Linear(512, 128)` into `QuantizedLinear(512,128)`**.
Here comes the crux. Because I sort-of need two kinds of calibration data. First, I need the result of passing X through `Linear(28*28, 512)` and `ReLU`. (I guess that's already possible?!) But second, I also need the result of passing X through `QuantizedLinear(28*28,512)` and `ReLU`, i.e., the result of passing X through the already-quantized network.
The idea is that in the quantization of second layer one can "correct" some inaccuracies that the quantization of the first layer caused. For this one needs to know the difference of the calibration data passed through the original first layer versus passed through the quantized first layer.
3. **Quantize `Linear(128, 10)` into `QuantizedLinear(128,10)`**.
Again, I need both of the following (see the sketch after this list):
- X passed through the first 4 original units
- X passed through the first 4 quantized units
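For concreteness, a minimal sketch of the two-stream calibration flow from steps 1-3 above, written in plain PyTorch and independent of any torchao API; `quantize_layer` is a hypothetical stand-in for the actual algorithm:
```python
import torch
import torch.nn as nn

def quantize_layer(layer: nn.Linear, x_orig: torch.Tensor, x_quant: torch.Tensor) -> nn.Module:
    """Stand-in for the algorithm: build a 'quantized' layer from the float
    layer plus both calibration streams. Here it just returns the layer."""
    return layer

model = nn.Sequential(nn.Linear(28 * 28, 512), nn.ReLU(),
                      nn.Linear(512, 128), nn.ReLU(),
                      nn.Linear(128, 10))

X = torch.randn(256, 28 * 28)   # calibration data
x_orig, x_quant = X, X          # activations through the original / quantized net
quantized_layers = []
for layer in model:
    if isinstance(layer, nn.Linear):
        qlayer = quantize_layer(layer, x_orig, x_quant)  # sees both streams
        quantized_layers.append(qlayer)
        x_orig, x_quant = layer(x_orig), qlayer(x_quant)
    else:
        quantized_layers.append(layer)
        x_orig, x_quant = layer(x_orig), layer(x_quant)
quantized_model = nn.Sequential(*quantized_layers)
```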
Is it possible to implement that with pytorch quantization, either with the new PT2E API or with one of the older APIs?
Thank you very much in advance!
|
https://github.com/pytorch/ao/issues/3120
|
closed
|
[] | 2025-10-03T19:18:39Z
| 2025-10-04T19:56:39Z
| null |
jbirnick
|
huggingface/lerobot
| 2,111
|
frame deletion
|
Great work on this project! I have a quick question - does LeRobotDataset support frame deletion? For example, in the DROID_lerobot dataset, the first few frames have an action value of 0 and I need to remove them.
I'd appreciate any insights you can provide. Thank you for your time and help!
|
https://github.com/huggingface/lerobot/issues/2111
|
closed
|
[
"question",
"dataset"
] | 2025-10-03T13:05:12Z
| 2025-10-10T12:17:53Z
| null |
Yysrc
|
pytorch/pytorch
| 164,559
|
fwd_rng_state shows up in the aot_export_joint graph input
|
See https://github.com/pytorch/torchtitan/pull/1794
P1975157784: rank0_autograd_function_0fea2786.py
Setting `torch._functorch.config.graphsafe_rng_functionalization = False` doesn't work.
How can I prevent `fwd_rng_state` from showing up?
cc @chauhang @penguinwu @zou3519 @bdhirsh
|
https://github.com/pytorch/pytorch/issues/164559
|
open
|
[
"triaged",
"oncall: pt2",
"module: aotdispatch",
"module: pt2-dispatcher"
] | 2025-10-03T07:28:10Z
| 2025-10-06T19:10:58Z
| 1
|
SherlockNoMad
|
pytorch/pytorch
| 164,536
|
Very confused about conda-forge
|
### 🐛 Describe the bug
Is this the cpu or gpu version? https://anaconda.org/conda-forge/pytorch
What is this? https://anaconda.org/pytorch/pytorch-cuda
How should it be used? Is conda no longer a good way to install?
### Versions
|
https://github.com/pytorch/pytorch/issues/164536
|
closed
|
[] | 2025-10-03T01:25:26Z
| 2025-10-03T05:26:01Z
| 1
|
7735986
|
pytorch/pytorch
| 164,529
|
[RFC] Implement shrink_group API to expose ncclCommShrink
|
### 🚀 The feature, motivation and pitch
### PyTorch Process Group Shrink API
Authors: @brchang24 @spotluri @bosilca
#### Summary
This document outlines proposed API changes to improve fault tolerance and flexibility in PyTorch Process Groups.
#### Motivation
**Fault Tolerance support**
The API is designed to enhance fault tolerance capabilities in PyTorch by enabling exclusion of faulty ranks.
With the new API, malfunctioning participants should be excluded to avoid hangings, as their reliability can't be guaranteed. This API is not limited to hard faults (where processes disappear due to hardware failures) but allows the application to mold the execution environment as needed (for correctness or performance reasons).
**Performance**
The existing abort + init method entails a fixed cost for full initialization, as illustrated in the chart below, with rank shrink as an example.

We also explored alternatives like split, but that approach requires the participation of all ranks, including malfunctioning ones, to form a new group.
So, both above factors must be considered when designing the API.
#### Proposed API changes
To address these concerns above, we are proposing the following API changes:
New API: shrink_group()
```python
shrink_group(ranks_to_exclude: List[int],
             pg: Optional[ProcessGroup] = None,
             shrink_flags: int = NCCL_SHRINK_DEFAULT)

Shrink an existing distributed group. Only group members of the updated
ProcessGroup need to enter this function; the excluded ranks do not need to
call it. The scope of this call is therefore collective across the processes
that belong to the shrunk distributed group.

Args:
    ranks_to_exclude (list[int]): List of group ranks to be excluded from the
        updated ProcessGroup. This list must be consistent across all
        participating processes.
    pg (ProcessGroup, optional): The process group to work on. If None, the
        default process group will be used.
    shrink_flags (int): NCCL_SHRINK_DEFAULT (default) or NCCL_SHRINK_ABORT
        (attempt to terminate ongoing operations in the parent communicator
        before shrinking).
```
#### Implementation
PyTorch should directly use the shrink functionality if supported by the backend.
**Use Cases**
Rank Shrink
When one rank is detected as defective and needs to be excluded, the new shrink_group() can be invoked as in the example below:
Example
remove rank 2
**shrink_group**([2], pg)
<img width="729" height="432" alt="Image" src="https://github.com/user-attachments/assets/fa84fbab-abcc-4099-98c3-a9f2e1389f40" />
In the example above, only the original ranks 0, 1, and 3 need to invoke shrink_group(); rank 2 does not, to avoid a potential hang. After the shrink, the original rank 2 is excluded, so the original rank 3 is shifted down to become rank 2 in the updated group if no parameter key is passed in.
Errors could be reported by the backend or PyTorch. However, it is up to the upper layer to decide whether to exclude a rank from the group.
Ideally, the defective ranks should be excluded from all associated groups. However, that would amount to enforcing a policy in PyTorch. This design suggests delegating the decision to exclude ranks from a group, or to completely dissolve the group, to the Upper Layer (UL). Meanwhile, PyTorch should verify that no subgroups use the rank before it is removed from the default group.
**Things to Consider**
**Group Rank recalculation**
Shrink can lead to changes in the group's rank order. It will apply the default method to recalculate group ranks as detailed below.
When shrinking, ranks will be shifted to close any gaps left by excluded ranks.
**Metrics**
What are the main metrics to measure the value of this feature?
1. When one node/rank goes down completely
To compare with the existing solution.
2. Performance comparison with the existing solutions
1. Shrink
**Drawbacks**
Are there any reasons why we should not do this? Here we aim to evaluate risk and check ourselves.
Please consider:
* Is it a breaking change?
This change should be backward compatible.
* Impact on UX
No
* implementation cost, both in terms of code size and complexity
The assumption is that the PyTorch-side cost should not be high, but the backend support cost might be, especially for backends that do not already have support for fault management.
* integration of this feature with other existing and planned features
PyTorch layer needs to integrate with the backend when it has the support.
**Alternatives**
What other designs have been considered? What is the impact of not doing this?
It can be implemented using the abort + init method.
That needs the full init path, which potentially requires a broadcast for the NCCL bootstrap.
**Prior Art**
Discuss prior art (both good and bad) in relation to this proposal:
|
https://github.com/pytorch/pytorch/issues/164529
|
closed
|
[
"oncall: distributed"
] | 2025-10-03T00:26:11Z
| 2025-10-17T17:55:06Z
| 0
|
brchang24
|
huggingface/lerobot
| 2,108
|
HIL-SERL Transform order for (tanh → rescale) is reversed
|
In `TanhMultivariateNormalDiag`:
```
transforms = [TanhTransform(cache_size=1)]
if low is not None and high is not None:
    transforms.insert(0, RescaleFromTanh(low, high))  # puts Rescale *before* tanh
```
This applies RescaleFromTanh then Tanh, which is backwards. Should we change it to tanh first, then rescale?
Fix
```
transforms = [TanhTransform(cache_size=1)]
if low is not None and high is not None:
    transforms.append(RescaleFromTanh(low, high))  # tanh → rescale
```
Also, when I tried to assign values for low and high, I got this error:
```
torch/distributions/transforms.py", line 303, in domain
domain = self.parts[0].domain
AttributeError: 'RescaleFromTanh' object has no attribute 'domain'
```
This might be fixed by adding the following to `class RescaleFromTanh(Transform)`:
```
# Required attributes for PyTorch Transform (requires `from torch.distributions import constraints`)
self.domain = constraints.interval(-1.0, 1.0)
self.codomain = constraints.interval(low, high)
self.bijective = True
```
|
https://github.com/huggingface/lerobot/issues/2108
|
open
|
[
"question",
"policies"
] | 2025-10-02T21:44:22Z
| 2025-10-07T20:36:31Z
| null |
priest-yang
|
pytorch/torchtitan
| 1,790
|
Distributed training hangs on local error instead of exit
|
In our model, we have the following code
```python
if x.shape[2:] != y.shape[2:]:
print(f"RANK {torch.distributed.get_rank()}: SPATIAL DIM MISMATCH!")
raise ValueError(f"x.shape[2:] != y.shape[2:], {x.shape[2:]=}, {y.shape[2:]=}")
x = torch.cat([x, y], dim=1)
```
However, if one rank gets the mismatch error, it reaches the print statement but never the raise; the program hangs forever.
How can I debug this, and what is the potential reason? Thanks.
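One common pattern, as a hedged sketch (not from torchtitan): have all ranks agree on whether any rank hit the error before raising, so no rank is left waiting inside a later collective. This assumes an initialized process group:
```python
import torch
import torch.distributed as dist

def any_rank_has_mismatch(x: torch.Tensor, y: torch.Tensor) -> bool:
    # Every rank contributes a 0/1 flag; MAX tells everyone whether *any* rank
    # failed, so all ranks can raise together instead of one rank dying while
    # the others block in a collective.
    flag = torch.tensor([float(x.shape[2:] != y.shape[2:])], device=x.device)
    dist.all_reduce(flag, op=dist.ReduceOp.MAX)
    return bool(flag.item())
```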
|
https://github.com/pytorch/torchtitan/issues/1790
|
closed
|
[
"question"
] | 2025-10-02T21:18:54Z
| 2025-10-03T02:49:24Z
| null |
yzhao30
|
huggingface/lerobot
| 2,107
|
Low Success Rate When Training SmolVLA-0.24B on LIBERO
|
Hi folks, I'm trying to replicate the 0.24B SmolVLA model on the LIBERO dataset. Intuitively, I just changed the base model `vlm_model_name: str = "HuggingFaceTB/SmolVLM2-256M-Video-Instruct"`. Here is the command I used to train.
`lerobot-train --policy.type=smolvla --policy.load_vlm_weights=true --dataset.repo_id=HuggingFaceVLA/libero --env.type=libero --env.task=libero_10 --output_dir=./outputs/ --steps=100000 --batch_size=64 --eval.batch_size=1 --eval.n_episodes=1 --eval_freq=1000 --wandb.enable=true`
I trained on a single RTX 4090. However, I found that the success rate on the eval set is quite low: only 7.5%. Is there anything I did wrong? I'm attaching the training plots below.
<img width="1116" height="629" alt="Image" src="https://github.com/user-attachments/assets/9bbdcadb-e113-4d9f-b315-4f37b57bde37" />
<img width="1116" height="310" alt="Image" src="https://github.com/user-attachments/assets/23951a72-a374-4eda-9368-363367e4c746" />
|
https://github.com/huggingface/lerobot/issues/2107
|
open
|
[
"question",
"policies",
"simulation"
] | 2025-10-02T19:11:55Z
| 2025-12-20T09:30:58Z
| null |
zimgong
|
huggingface/optimum-onnx
| 66
|
How to export a stateless whisper model via optimum-cli?
|
I observe that when exporting a Whisper model via Python API, the resulting model is stateless, i.e. the decoder is split into two models.
```python
import os
from optimum.onnxruntime import ORTModelForSpeechSeq2Seq
ORTModelForSpeechSeq2Seq.from_pretrained("openai/whisper-tiny", export=True).save_pretrained("./whisper/python")
print(os.listdir("./whisper/python"))
# ['encoder_model.onnx', 'decoder_with_past_model.onnx', 'decoder_model.onnx', 'config.json', 'generation_config.json']
```
When I export this model via CLI, the decoder model is exported as stateful even if I provide the `--no-post-process` argument.
```bash
optimum-cli export onnx --task automatic-speech-recognition -m openai/whisper-tiny --no-post-process ./whisper/cli
ls ./whisper/cli
# added_tokens.json decoder_model.onnx generation_config.json normalizer.json special_tokens_map.json tokenizer.json
# config.json encoder_model.onnx merges.txt preprocessor_config.json tokenizer_config.json vocab.json
```
My environment:
```
certifi==2025.8.3
charset-normalizer==3.4.3
coloredlogs==15.0.1
filelock==3.19.1
flatbuffers==25.9.23
fsspec==2025.9.0
hf-xet==1.1.10
huggingface-hub==0.35.3
humanfriendly==10.0
idna==3.10
Jinja2==3.1.6
MarkupSafe==3.0.3
ml_dtypes==0.5.3
mpmath==1.3.0
networkx==3.4.2
numpy==2.2.6
nvidia-cublas-cu12==12.8.4.1
nvidia-cuda-cupti-cu12==12.8.90
nvidia-cuda-nvrtc-cu12==12.8.93
nvidia-cuda-runtime-cu12==12.8.90
nvidia-cudnn-cu12==9.10.2.21
nvidia-cufft-cu12==11.3.3.83
nvidia-cufile-cu12==1.13.1.3
nvidia-curand-cu12==10.3.9.90
nvidia-cusolver-cu12==11.7.3.90
nvidia-cusparse-cu12==12.5.8.93
nvidia-cusparselt-cu12==0.7.1
nvidia-nccl-cu12==2.27.3
nvidia-nvjitlink-cu12==12.8.93
nvidia-nvtx-cu12==12.8.90
onnx==1.19.0
onnxruntime==1.23.0
optimum @ git+https://github.com/huggingface/optimum@a813c95ac088c401547fe15e7a68ac5c6f00f9a7
optimum-onnx @ git+https://github.com/huggingface/optimum-onnx.git@671b84f78a244594dd21cb1a8a1f7abb8961ea60
packaging==25.0
protobuf==6.32.1
PyYAML==6.0.3
regex==2025.9.18
requests==2.32.5
safetensors==0.6.2
sympy==1.14.0
tokenizers==0.21.4
torch==2.8.0
tqdm==4.67.1
transformers==4.55.4
triton==3.4.0
typing_extensions==4.15.0
urllib3==2.5.0
```
How can I export this model as stateless via optimum-cli? And, conversely, how can I export it as stateful via the Python API?
Thanks!
|
https://github.com/huggingface/optimum-onnx/issues/66
|
closed
|
[
"question"
] | 2025-10-02T09:50:03Z
| 2025-10-13T05:33:25Z
| null |
nikita-savelyevv
|
huggingface/lerobot
| 2,104
|
Select the VLM backbone for SmolVLA
|
Hi, may I ask about `vlm_model_name`: is there any model more powerful than HuggingFaceTB/SmolVLM2-500M-Video-Instruct that can be used to train SmolVLA for the LeRobot SO101?
|
https://github.com/huggingface/lerobot/issues/2104
|
open
|
[
"question",
"policies",
"good first issue"
] | 2025-10-02T07:35:29Z
| 2025-10-11T16:53:59Z
| null |
Llkhhb
|
pytorch/torchtitan
| 1,781
|
How to add supervised finetuning mask in torchtitan?
|
How do I implement supervised fine-tuning (SFT) masking in TorchTitan for posttraining using a synthetic dataset?
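For context, the usual masking trick (a generic sketch, not TorchTitan-specific) is to set the label positions that belong to the prompt to -100 so the cross-entropy loss ignores them:
```python
import torch
import torch.nn.functional as F

def sft_labels(input_ids: torch.Tensor, prompt_len: int, pad_id: int) -> torch.Tensor:
    labels = input_ids.clone()
    labels[:, :prompt_len] = -100     # don't train on the prompt tokens
    labels[labels == pad_id] = -100   # don't train on padding
    return labels

def sft_loss(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    # standard next-token shift before cross-entropy
    shift_logits = logits[:, :-1, :].contiguous()
    shift_labels = labels[:, 1:].contiguous()
    return F.cross_entropy(
        shift_logits.view(-1, shift_logits.size(-1)),
        shift_labels.view(-1),
        ignore_index=-100,
    )
```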
|
https://github.com/pytorch/torchtitan/issues/1781
|
open
|
[
"post training"
] | 2025-10-01T23:36:12Z
| 2025-12-12T19:37:12Z
| null |
kailashg26
|
pytorch/pytorch
| 164,360
|
Would maintainers be open to a contribution that adds lightweight progress bar support (based on tqdm) in torch.utils?
|
### 🚀 The feature, motivation and pitch
Feature request:
Add a lightweight progress bar utility (based on tqdm) in torch.utils that users can optionally import to visualize training/validation/test loop progress.
Motivation:
PyTorch core currently does not provide any built-in progress tracking for long-running loops. While users can integrate tqdm manually, it requires repetitive boilerplate in tutorials and quick scripts. A small utility in torch.utils would lower the barrier for beginners and improve user experience without adding significant complexity to the core library.
Pitch:
The utility would remain optional, minimal, and import tqdm only if used. This way, PyTorch maintains its philosophy of flexibility while offering a small but meaningful quality-of-life improvement for users.
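As a sketch of what this could look like (the helper name is hypothetical, not an existing torch API):
```python
from typing import Iterable, Iterator, TypeVar

T = TypeVar("T")

def progress(iterable: Iterable[T], **tqdm_kwargs) -> Iterator[T]:
    """Wrap an iterable in tqdm if it is installed; otherwise pass it through unchanged."""
    try:
        from tqdm.auto import tqdm  # imported lazily, only when the helper is used
    except ImportError:
        return iter(iterable)
    return iter(tqdm(iterable, **tqdm_kwargs))
```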
### Alternatives
_No response_
### Additional context
_No response_
|
https://github.com/pytorch/pytorch/issues/164360
|
closed
|
[
"triaged",
"enhancement"
] | 2025-10-01T15:22:57Z
| 2025-10-06T17:16:15Z
| 2
|
wtfPrethiv
|
pytorch/xla
| 9,662
|
XLA mul with bf16×bf16 upcasts to f32 — op math type and option to disable?
|
## ❓ Questions and Help
Hi folks, I have a question about the XLA mul op.
When both inputs are bf16, the generated graph converts to f32, performs the multiply, then converts back to bf16. Two questions:
In this case, is the op math type effectively f32 (not bf16)?
If this upcast exists primarily for TPU accuracy/stability, would it be acceptable to gate it behind a flag (e.g., env option) so we can treat that path as a no-op and keep the op in native bf16 when desired?
Reference code path:
https://github.com/pytorch/xla/blob/master/torch_xla/csrc/aten_xla_type.cpp#L187-L211
If there’s a better approach please let me know. Thanks!
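For reference, a minimal repro sketch (assuming a working torch_xla install; the IR-dump helper is an internal debugging hook):
```python
import torch
import torch_xla
import torch_xla.core.xla_model as xm

device = xm.xla_device()
a = torch.randn(4, 4, dtype=torch.bfloat16, device=device)
b = torch.randn(4, 4, dtype=torch.bfloat16, device=device)
c = a * b
# Inspect the lazy IR to see the f32 convert around the multiply described above.
print(torch_xla._XLAC._get_xla_tensors_text([c]))
```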
|
https://github.com/pytorch/xla/issues/9662
|
closed
|
[
"enhancement",
"tracing"
] | 2025-10-01T14:12:53Z
| 2025-10-03T18:22:12Z
| 3
|
sshonTT
|
huggingface/diffusers
| 12,415
|
SVG 2 kernels
|
Can we support the new SVG2 sparse kernels (NeurIPS 2025)?
https://svg-project.github.io/v2/
|
https://github.com/huggingface/diffusers/issues/12415
|
open
|
[] | 2025-10-01T10:52:50Z
| 2025-10-01T10:52:50Z
| 0
|
bhack
|
pytorch/pytorch
| 164,342
|
Official support for sm_120 (RTX 50-series / Blackwell) in stable PyTorch builds
|
### 🐛 Describe the bug
Hello PyTorch team,
I would like to kindly request official support for sm_120 (RTX 50-series / Blackwell GPUs, e.g. RTX 5070 Ti) in the stable PyTorch builds.
Current situation:
- CUDA 12.8/12.9 already includes support for Blackwell architectures.
- PyTorch nightly builds (e.g., 2.10.0.dev + cu12.9) can detect sm_120, but they are not yet fully stable.
- In my case, I tested the nightly build on Windows 11 with an RTX 5070 Ti. PyTorch itself launches, but DeepLabCut (DLC GUI, which relies heavily on PyTorch) still fails to start properly.
- Interestingly, Annolid GUI works fine on the same PC with RTX 5070 Ti. This suggests the underlying CUDA/NVIDIA support is there, but stable PyTorch integration is still missing.
Problem:
- DLC (and many other research tools) depend strictly on stable PyTorch releases. Without official sm_120 support in the stable channel, we cannot run these applications on RTX 50-series GPUs.
- As a researcher, I purchased RTX 5070 Ti for deep learning workloads, but currently it cannot be used productively with DLC due to this gap.
Request:
- Please prioritize adding official sm_120 support into stable PyTorch builds.
- Even partial support in an upcoming stable release (e.g., wheels with cu12.9) would greatly help researchers and developers adopt RTX 50-series hardware.
- At minimum, could you provide an ETA or roadmap for when sm_120 will be supported in stable builds?
Thank you very much for your efforts and for maintaining this essential framework.
Best regards,
### Versions
RTX 5070 Ti requires CUDA 12.0+ for full support.
Multiple rebuilds of environments tested.
PyTorch, NumPy, and OpenCV work independently.
Failures appear specific to DLC’s internal module loading mechanism.
cc @seemethere @malfet @atalman @peterjc123 @mszhanyi @skyline75489 @nbcsm @iremyux @Blackhex @ptrblck @msaroufim @eqy @jerryzh168
|
https://github.com/pytorch/pytorch/issues/164342
|
open
|
[
"needs reproduction",
"module: windows",
"module: cuda",
"triaged"
] | 2025-10-01T07:21:36Z
| 2025-11-13T00:29:02Z
| 14
|
endvntgf-design
|
huggingface/lerobot
| 2,096
|
How can I change the task name of already recorded episodes?
|
I recorded the dataset using:
--dataset.single_task="slice the clay until it becomes 4 pieces"
Now I want to update those recorded episodes to a different task name. How can I do that?
|
https://github.com/huggingface/lerobot/issues/2096
|
open
|
[
"question",
"dataset",
"good first issue"
] | 2025-10-01T02:15:49Z
| 2025-10-30T03:48:47Z
| null |
pparkgyuhyeon
|
huggingface/transformers
| 41,235
|
Request: demo code for StatefulDataLoader to restore the dataloader state (not only the model state) when resuming training
|
I would like to request demo code for StatefulDataLoader. I want to use a data checkpoint to recover the training run's data state, not only the model state. How can I use StatefulDataLoader (or some other code) to achieve this?
To be clear: I want to recover the data state, not only the model state.
How can I use accelerate + the transformers Trainer to train a model so that, when training is interrupted, it can resume from both the data checkpoint and the model checkpoint? Thanks.
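For anyone landing here, a minimal sketch of the torchdata StatefulDataLoader usage (my own example, not an official transformers/accelerate recipe):
```python
import torch
from torchdata.stateful_dataloader import StatefulDataLoader

dataset = list(range(100))
loader = StatefulDataLoader(dataset, batch_size=8, num_workers=2)

for step, batch in enumerate(loader):
    if step == 5:
        # Save the dataloader state alongside the model/optimizer checkpoint.
        torch.save({"dataloader": loader.state_dict()}, "ckpt.pt")
        break

# On resume: rebuild the loader the same way, then restore its position.
resumed = StatefulDataLoader(dataset, batch_size=8, num_workers=2)
resumed.load_state_dict(torch.load("ckpt.pt")["dataloader"])
for batch in resumed:  # continues from where the saved loader left off
    pass
```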
|
https://github.com/huggingface/transformers/issues/41235
|
closed
|
[
"bug"
] | 2025-09-30T17:07:07Z
| 2025-11-08T08:04:40Z
| null |
ldh127
|
huggingface/accelerate
| 3,802
|
Request: demo code for StatefulDataLoader to restore the dataloader state (not only the model state) when resuming training
|
I would like to request demo code for StatefulDataLoader. I want to use a data checkpoint to recover the training run's data state, not only the model state. How can I use StatefulDataLoader (or some other code) to achieve this?
To be clear: I want to recover the data state, not only the model state.
How can I use accelerate + the transformers Trainer to train a model so that, when training is interrupted, it can resume from both the data checkpoint and the model checkpoint? Thanks.
|
https://github.com/huggingface/accelerate/issues/3802
|
closed
|
[] | 2025-09-30T15:58:32Z
| 2025-11-09T15:06:58Z
| null |
ldh127
|
pytorch/pytorch
| 164,247
|
Dynamo graph break on flex attention code
|
### 🐛 Describe the bug
```python
import torch
import torch.nn as nn
from torch.nn.attention.flex_attention import create_block_mask, flex_attention


class MixedFakeModeModel(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.dim = dim
        self.lin = torch.nn.Linear(64, 64)

    def forward(self, x):
        batch_size, seq_len, _ = x.shape
        # Process input first - this creates fake tensors in export's fake mode
        processed = self.lin(x)
        # Create some computation that depends on processed tensor
        intermediate = processed.sum(dim=-1).detach()  # Shape: (batch, seq_len)

        def dynamic_mask_function(batch_idx, head_idx, q_idx, kv_idx):
            threshold = intermediate[
                batch_idx, q_idx % seq_len
            ]  # Access the captured tensor
            return (kv_idx <= q_idx) & (threshold > 0)

        block_mask = create_block_mask(
            mask_mod=dynamic_mask_function,
            B=batch_size,
            H=None,
            Q_LEN=seq_len,
            KV_LEN=seq_len,
            device=x.device,
            _compile=False,  # HF sets this to True, which runs into the issue i am talking below
        )
        q = processed.view(batch_size, 1, seq_len, self.dim)
        k = processed.view(batch_size, 1, seq_len, self.dim)
        v = processed.view(batch_size, 1, seq_len, self.dim)
        # this doesn't work
        out = torch.compile(flex_attention)(q, k, v, block_mask=block_mask)
        # this works (flex attention internally calls torch.compile(backend=eager) which
        # has special handling similar to torch.cond
        out = flex_attention(q, k, v, block_mask=block_mask)
        return out


torch.compile(MixedFakeModeModel(), fullgraph=True)(torch.randn(2, 128, 64))
```
When we are tracing through create_block_mask, dynamo graph breaks with:
```
Unsupported: id() with unsupported args
Explanation: Dynamo doesn't know how to trace id() call with args (NestedUserFunctionVariable(),)
Hint: Supported args are Tensors, and functions/nn.Modules/user-defined objects from outside the compiled region.
Hint: It may be possible to write Dynamo tracing rules for this code. Please report an issue to PyTorch if you encounter this graph break often and it is causing performance issues.
Developer debug context: (NestedUserFunctionVariable(),)
For more details about this graph break, please visit: https://meta-pytorch.github.io/compile-graph-break-site/gb/gb0191.html
from user code:
File "/tmp/ipykernel_2970620/3759915601.py", line 27, in forward
block_mask = create_block_mask(
File "/data/users/tmanlaibaatar/.bento/kernels/bento_kernel_pytorch/2670/bento_kernel_pytorch_binary-inplace#link-tree/torch/nn/attention/flex_attention.py", line 1067, in create_block_mask
mod_type = _get_mod_type(mask_mod)
File "/data/users/tmanlaibaatar/.bento/kernels/bento_kernel_pytorch/2670/bento_kernel_pytorch_binary-inplace#link-tree/torch/nn/attention/flex_attention.py", line 244, in _get_mod_type
for param in inspect.signature(fn).parameters.values()
File "/data/users/tmanlaibaatar/.bento/kernels/bento_kernel_pytorch/2670/bento_kernel_pytorch_binary-inplace#link-tree/runtime/lib/python3.12/inspect.py", line 3348, in signature
return Signature.from_callable(obj, follow_wrapped=follow_wrapped,
File "/data/users/tmanlaibaatar/.bento/kernels/bento_kernel_pytorch/2670/bento_kernel_pytorch_binary-inplace#link-tree/runtime/lib/python3.12/inspect.py", line 3085, in from_callable
return _signature_from_callable(obj, sigcls=cls,
File "/data/users/tmanlaibaatar/.bento/kernels/bento_kernel_pytorch/2670/bento_kernel_pytorch_binary-inplace#link-tree/runtime/lib/python3.12/inspect.py", line 2538, in _signature_from_callable
obj = unwrap(obj, stop=(lambda f: hasattr(f, "__signature__")
File "/data/users/tmanlaibaatar/.bento/kernels/bento_kernel_pytorch/2670/bento_kernel_pytorch_binary-inplace#link-tree/runtime/lib/python3.12/inspect.py", line 773, in unwrap
memo = {id(f): f}
Set TORCHDYNAMO_VERBOSE=1 for the internal stack trace (please do this especially if you're reporting a bug to PyTorch). For even more developer context, set TORCH_LOGS="+dynamo"
```
### Versions
main
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @amjames @Lucaskabela @ydwu4 @bdhirsh @Chillee @drisspg @yanboliang @BoyuanFeng
|
https://github.com/pytorch/pytorch/issues/164247
|
closed
|
[
"high priority",
"triaged",
"oncall: pt2",
"module: dynamo",
"module: graph breaks",
"module: higher order operators",
"module: pt2-dispatcher",
"module: flex attention"
] | 2025-09-30T15:16:18Z
| 2025-10-17T17:44:48Z
| 7
|
tugsbayasgalan
|
pytorch/torchtitan
| 1,773
|
Unreachable code in `CheckpointManager`
|
Hi! I've noticed that `def maybe_wait_for_staging` basically never does anything as `self.staging` is set to `False` in `__init__` and never modified. Is there something wrong or is this code never supposed to run?
https://github.com/pytorch/torchtitan/blob/a3104201ba3a0fa19e9c3cc5ba748b0398551410/torchtitan/components/checkpoint.py#L616
|
https://github.com/pytorch/torchtitan/issues/1773
|
closed
|
[] | 2025-09-30T13:59:22Z
| 2025-10-02T16:43:43Z
| 3
|
antony-frolov
|
huggingface/transformers
| 41,211
|
Add DEIMv2
|
### Model description
It would be nice to integrate DEIMv2, a new state-of-the-art model for real-time object detection based on DINOv3. The weights are released under Apache 2.0.
Related thread: https://github.com/Intellindust-AI-Lab/DEIMv2/issues/20
### Open source status
- [x] The model implementation is available
- [x] The model weights are available
### Provide useful links for the implementation
Code: https://github.com/Intellindust-AI-Lab/DEIMv2
Weights (on Google Drive for now): https://github.com/Intellindust-AI-Lab/DEIMv2?tab=readme-ov-file#1-model-zoo
Ideally, the [AutoBackbone API](https://huggingface.co/docs/transformers/main_classes/backbones) can be leveraged to avoid re-implementing the entire DINOv3 backbone in `modular_deimv2.py` and `modeling_deimv2.py`. See an example of how this is leveraged for DETR [here](https://github.com/huggingface/transformers/blob/59035fd0e1876f9e526488b61fe43ff8829059f6/src/transformers/models/detr/modeling_detr.py#L280).
|
https://github.com/huggingface/transformers/issues/41211
|
open
|
[
"New model"
] | 2025-09-30T09:43:07Z
| 2025-10-04T18:44:06Z
| 4
|
NielsRogge
|
pytorch/torchtitan
| 1,771
|
Posttraining Library
|
# Posttraining Library Support
## Summary
I understand that torchtune is being phased out and the team announced in July 2025 that they are developing a new product in a new repo for end-to-end post-training with scale. It's now been several months since that announcement. Could you share an update on when this new library will be released?
## Motivation
In [Issue #2883](https://github.com/pytorch/torchtune/issues/2883), the torchtune team announced plans to develop a new product focused on end-to-end post-training with scale. That announcement was made several months ago in July 2025, and torchtune is now in maintenance mode (receiving only critical bug fixes and security patches during 2025).
## Questions
- **When will the new post-training library be released?** It's been several months since the July announcement, can you share a timeline or expected release date?
- **Will the new library be part of torchtitan or a separate repository?** The announcement mentioned a "new repo," but given torchtitan's focus on production-grade training, would it make sense to integrate?
- **What's the relationship between the new library and torchtitan?** Will they share infrastructure, or are they separate projects?
- **Which post-training techniques will be prioritized?** (eg SFT, RLHF/DPO, continued pretraining)
- **Is there a beta or early access program?** Many in the community are eager to start testing and contributing.
## Why I'm asking here (instead of torchtune)
I'm posting this question in the torchtitan repo rather than torchtune because:
1. **Architectural excellence**: The torchtitan team has demonstrated exceptional work in building a production-grade, PyTorch-native training system with modular composability and scale as a first-class citizen, exactly the qualities mentioned in the torchtune transition announcement.
2. **Natural evolution**: Given that torchtitan already handles pretraining at scale with features like 3D parallelism, distributed checkpointing, and native PyTorch integration, it seems like a natural foundation or model for a post-training library with similar scale requirements.
3. **Team expertise**: The torchtitan team's deep expertise in distributed training, parallelism techniques, and PyTorch internals makes them well-positioned to build or be involved with the successor to torchtune.
4. **Unified vision**: Both the torchtitan philosophy and the announced new post-training library share similar goals: hackable code, minimal abstraction, scale-first design, and native PyTorch.
## Additional Context
With torchtune entering maintenance mode and no longer accepting new features, many practitioners are in a transitional period waiting for the new post-training solution. Understanding the timeline and scope of the new library would help the community plan their training workflows accordingly.
Thank you for your excellent work on torchtitan and the broader PyTorch training ecosystem, we're excited to see what's coming!
|
https://github.com/pytorch/torchtitan/issues/1771
|
open
|
[
"post training"
] | 2025-09-30T09:42:49Z
| 2025-10-24T07:58:26Z
| 2
|
MarkLiLabs
|
huggingface/transformers
| 41,208
|
Integrate mamba SSM kernels from the hub
|
### Feature request
Currently, mamba kernels are imported via the main source package ex, for [GraniteMoeHybrid](https://github.com/huggingface/transformers/blob/main/src/transformers/models/granitemoehybrid/modeling_granitemoehybrid.py#L44-L46)
Can we migrate this to use the kernels-hub (`kernels-community/mamba-ssm`) variation instead?
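A rough sketch of what the migration could look like (the exact ops exposed by the hub repo are an assumption on my side and would need to be confirmed):
```python
# Load the kernel from the Hub instead of depending on the PyPI mamba_ssm package.
from kernels import get_kernel

mamba_ssm = get_kernel("kernels-community/mamba-ssm")
# The loaded module would then expose the fused ops, e.g. something along the lines of
# out = mamba_ssm.selective_scan_fn(u, delta, A, B, C, D)  # exact names to be confirmed
```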
### Motivation
Removes the external dependency. Kernel hub is also integrated at several other places throughout the library.
### Your contribution
I can submit a PR for migrating from the PyPi `mamba_ssm` package to the `kernels` package for mamba ops.
|
https://github.com/huggingface/transformers/issues/41208
|
closed
|
[
"Feature request"
] | 2025-09-30T07:50:52Z
| 2025-12-18T10:17:06Z
| 15
|
romitjain
|
huggingface/tokenizers
| 1,870
|
How can I convert a trained tokenizer into `transformers` format
|
Hi guys,
I have trained a tokenizer that works pretty well, and it is stored in a single `.json` file. Is there any method / API to convert it into a `transformers` tokenizer format?
If there's no such implementation I am happy to contribute.
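For reference, the usual approach is to wrap the `tokenizer.json` file with `PreTrainedTokenizerFast` and save it in `transformers` format (the special tokens below are placeholders; set them to match your training setup):
```python
from transformers import PreTrainedTokenizerFast

tok = PreTrainedTokenizerFast(
    tokenizer_file="tokenizer.json",
    unk_token="[UNK]",   # placeholder special tokens
    pad_token="[PAD]",
)
tok.save_pretrained("./my-tokenizer")  # writes tokenizer_config.json, special_tokens_map.json, ...
```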
|
https://github.com/huggingface/tokenizers/issues/1870
|
closed
|
[] | 2025-09-30T06:09:52Z
| 2025-09-30T13:53:53Z
| 1
|
dibbla
|
huggingface/lighteval
| 999
|
How to print all pass@k scores when generating 16 samples?
|
Hi,
I want to print all results of pass@k metrics when generating 16 samples. (e.g., k=1, 2, 4, 8, 16)
```python
math_500_pass_k_at_16 = LightevalTaskConfig(
    name="math_500_pass_k_at_16",
    suite=["custom"],
    prompt_function=math_500_prompt_fn,
    hf_repo="HuggingFaceH4/MATH-500",
    hf_subset="default",
    hf_avail_splits=["test"],
    evaluation_splits=["test"],
    few_shots_split=None,
    few_shots_select=None,
    generation_size=32768,
    metrics=[
        Metrics.pass_at_k_math(sample_params={"k": 1, "n": 16}),
        Metrics.pass_at_k_math(sample_params={"k": 2, "n": 16}),
        Metrics.pass_at_k_math(sample_params={"k": 4, "n": 16}),
        Metrics.pass_at_k_math(sample_params={"k": 8, "n": 16}),
        Metrics.pass_at_k_math(sample_params={"k": 16, "n": 16}),
    ],
    version=2,
)
```
But I can't see the full set of results I want. Does anyone know how to resolve this?
|
https://github.com/huggingface/lighteval/issues/999
|
open
|
[] | 2025-09-29T21:49:44Z
| 2025-10-14T08:04:17Z
| null |
passing2961
|
pytorch/pytorch
| 164,145
|
Improvements to profiler for bitwise equivalence use case
|
### 🐛 Describe the bug
Suppose that you want to verify that eager and aot_eager are numerically equivalent. The profiler can be a good tool for determining why there is a small numerical difference, as one might reasonably expect to get exactly the same kernels between the two. However, the profiler has obviously not been set up to handle this situation. Here are some obvious problems I ran into on the way:
- [ ] No documentation for FunctionEvent at https://docs.pytorch.org/docs/stable/profiler.html . We need to postprocess events() in an unusual way, but because there are no docs it's difficult to tell what the format of events are. In particular, there's a hierarchical structure that we need to know about.
- [ ] A convenient way to get all events in chronological order at a "consistent" level of abstraction, with no overlapping. For example, we might be interested specifically in what at:: kernel the dispatcher dispatches to at the CPU/CUDA key. When looking at this, we do NOT want internal redispatches (e.g., an at::empty call to perform an allocation). Similarly, we might want something equivalent to the top-level first dispatcher invocation. Call it "list_operators" (a rough sketch of the idea follows this list).
- [ ] There should be a convenient function for dumping a string trace at the highest level of abstraction, so you can quickly eyeball what code was run (in a similar niche to DebugMode, but "better" because it is guaranteed not to interfere with what exactly is executed in eager mode).
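A rough sketch of the "list_operators" idea (it leans on undocumented FunctionEvent attributes, which is exactly the problem the first bullet describes):
```python
import torch
from torch.profiler import profile, ProfilerActivity

def list_operators(prof):
    # Keep only top-level aten:: calls (drop internal redispatches such as at::empty).
    top = [
        e for e in prof.events()
        if e.name.startswith("aten::")
        and (e.cpu_parent is None or not e.cpu_parent.name.startswith("aten::"))
    ]
    return [e.name for e in sorted(top, key=lambda e: e.time_range.start)]

x = torch.randn(8, 8)
with profile(activities=[ProfilerActivity.CPU]) as prof:
    (x @ x).relu().sum()

print(list_operators(prof))
```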
### Versions
main
cc @robieta @chaekit @guotuofeng @guyang3532 @dzhulgakov @davidberard98 @briancoutinho @sraikund16 @sanrise
|
https://github.com/pytorch/pytorch/issues/164145
|
open
|
[
"oncall: profiler"
] | 2025-09-29T15:30:14Z
| 2025-10-26T03:18:33Z
| 2
|
ezyang
|
pytorch/pytorch
| 164,133
|
Use libtorch export onnx
|
### 🚀 The feature, motivation and pitch
How can I export a model to ONNX using libtorch after training it with libtorch?
### Alternatives
_No response_
### Additional context
_No response_
|
https://github.com/pytorch/pytorch/issues/164133
|
closed
|
[] | 2025-09-29T13:55:19Z
| 2025-09-29T14:43:24Z
| 1
|
yongxin3344520
|
pytorch/pytorch
| 164,124
|
torch.compile compiles multiple Triton autotune kernels, but uses the wrong ones
|
### 🐛 Describe the bug
When torch.compile autotunes a Triton kernel multiple times for different shapes, it uses the wrong kernel afterwards. Interestingly, this only happens when no torchinductor cache files exist; on the next run of the same program, it uses the correct kernels!
Here are the details:
I have adapted your example under "Advanced Usage" here, which explains how to use autotune with torch.compile:
https://docs.pytorch.org/tutorials/recipes/torch_compile_user_defined_triton_kernel_tutorial.html
Here is the test case:
[test.py](https://github.com/user-attachments/files/22595171/test.py)
Changes:
- I have added a time waster to the kernel, that clearly shows autotune which configuration is the inefficient one
- made the shape of `x` a key for autotuning. The use case for this in reality is using large block sizes for large tensors
- Call the kernel with different shapes and let it tune
- Call it normally - it uses the wrong kernel
It appears that it *has* compiled 2 separate kernels for the 2 shapes, but it consistently uses the wrong one for *both* shapes, as if it intentionally tried to use the wrong one.
But only until you run the program a second time. When it reads the kernels from the torchinductor cache, it uses the correct kernels!
### Error logs
- Without torch.compile:
```
TRITON_PRINT_AUTOTUNING=1 TORCH_COMPILE_DISABLE=1 python test.py
[...]
best config selected: BLOCK_SIZE: 2, num_warps: 4, num_ctas: 1, num_stages: 4, maxnreg: None;
[...]
best config selected: BLOCK_SIZE: 4, num_warps: 8, num_ctas: 1, num_stages: 3, maxnreg: None;
```
Best config for both shapes is selected, no errors.
- With torch.compile **the first time**: [make sure your torchinductor cache is deleted]
```
TORCH_COMPILE_DISABLE= python test.py
pid (2, 0, 0) idx () --- BADLY TUNED KERNEL ---: 2
```
- With torch.compile, ran **a second time**, with existing torchinductor cache:
```
TORCH_COMPILE_DISABLE= python test.py
```
No errors. It uses the correct kernels.
### Versions
```
PyTorch version: 2.8.0+cu128
Is debug build: False
CUDA used to build PyTorch: 12.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.3 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: Could not collect
CMake version: version 3.28.3
Libc version: glibc-2.39
Python version: 3.12.3 (main, Aug 14 2025, 17:47:21) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-6.8.0-36-generic-x86_64-with-glibc2.39
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4070 Ti SUPER
Nvidia driver version: 580.82.07
cuDNN version: Could not collect
Is XPU available: False
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 12
On-line CPU(s) list: 0-11
Vendor ID: GenuineIntel
Model name: 12th Gen Intel(R) Core(TM) i5-12400F
CPU family: 6
Model: 151
Thread(s) per core: 2
Core(s) per socket: 6
Socket(s): 1
Stepping: 5
CPU(s) scaling MHz: 54%
CPU max MHz: 4400.0000
CPU min MHz: 800.0000
BogoMIPS: 4992.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 288 KiB (6 instances)
L1i cache: 192 KiB (6 instances)
L2 cache: 7.5 MiB (6 instances)
L3 cache: 18 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-11
Vulnerability Gather data
|
https://github.com/pytorch/pytorch/issues/164124
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamic shapes",
"module: user triton"
] | 2025-09-29T10:19:35Z
| 2025-09-29T16:50:17Z
| 3
|
dxqb
|
huggingface/lerobot
| 2,083
|
How to train this RL model with my trained data
|
I want to load the trained model that I have already generated, so I modified `output_dir` and set `resume` to true, but then the problem shown in the figure occurred. How can I solve it?
`{ "output_dir": "outputs/train/2025-09-28/17-28-55_default", "job_name": "default", "resume": true, "seed": 1000, "num_workers": 4, "batch_size": 256, "steps": 100000,`
and the original config is:
`{ "output_dir": null, "job_name": "default", "resume": false, "seed": 1000, "num_workers": 4, "batch_size": 256, "steps": 100000,`
<img width="1515" height="717" alt="Image" src="https://github.com/user-attachments/assets/5f46acd3-9a72-41a5-8506-742f5c479c53" />
|
https://github.com/huggingface/lerobot/issues/2083
|
open
|
[] | 2025-09-29T07:22:08Z
| 2025-10-07T20:32:04Z
| null |
993984583
|
huggingface/lerobot
| 2,082
|
How to train this RL model with my model data
|
I want to load the trained model that I have already generated, so I modified `output_dir` and set `resume` to true, but then the problem shown in the figure occurred. How can I solve it?
`{ "output_dir": "outputs/train/2025-09-28/17-28-55_default", "job_name": "default", "resume": true, "seed": 1000, "num_workers": 4, "batch_size": 256, "steps": 100000,`
<img width="1515" height="717" alt="Image" src="https://github.com/user-attachments/assets/df121807-b309-4a5c-bee1-850b0fab2ae0" />
|
https://github.com/huggingface/lerobot/issues/2082
|
closed
|
[] | 2025-09-29T07:18:52Z
| 2025-10-07T20:33:11Z
| null |
993984583
|
pytorch/pytorch
| 164,094
|
Failed to change backward stream
|
in [pytorch cuda semantic ](https://docs.pytorch.org/docs/stable/notes/cuda.html#stream-semantics-of-backward-passes)
> Each backward CUDA op runs on the same stream that was used for its corresponding forward op. If your forward pass runs independent ops in parallel on different streams, this helps the backward pass exploit that same parallelism.
I currently have a requirement to run the backward pass on a different stream. I implemented an `autograd.Function` node and used `torch.cuda.set_stream()` in its backward method to switch streams, but I observed in the nsys timeline that the backward still runs on the same stream as the forward. Is there any way to force PyTorch’s backward to use a different CUDA stream than the forward?
```
class BackwardStream(torch.autograd.Function):
    @staticmethod
    def forward(ctx, input_tensor: torch.Tensor, stream: torch.cuda.Stream) -> torch.Tensor:
        ctx.stream = stream
        return input_tensor

    @staticmethod
    def backward(ctx, grad_output: torch.Tensor):
        stream = ctx.stream
        stream.wait_stream(torch.cuda.current_stream())
        torch.cuda.set_stream(stream)
        # one gradient per forward input; the stream argument gets None
        return grad_output, None
```
cc @ezyang @albanD @gqchen @nikitaved @soulitzer @Varal7 @xmfan
|
https://github.com/pytorch/pytorch/issues/164094
|
closed
|
[
"module: autograd",
"triaged"
] | 2025-09-29T02:14:19Z
| 2025-10-05T23:40:49Z
| 16
|
shadow150519
|
pytorch/pytorch
| 164,074
|
When will the version for ROCM 7 be released?
|
### 🚀 The feature, motivation and pitch
The homepage still shows version 6.4.
### Alternatives
_No response_
### Additional context
_No response_
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
|
https://github.com/pytorch/pytorch/issues/164074
|
closed
|
[
"module: rocm",
"triaged"
] | 2025-09-28T16:27:21Z
| 2025-09-30T00:40:11Z
| 3
|
mihongyu
|
huggingface/sentence-transformers
| 3,532
|
What is the proper way to use prompts? Do we have to format/render them ourselves?
|
Hi. First time using the Sentence Transformers library and I had a question regarding using prompts. Specifically, it seems like the [`SentenceTransformer.encode_document`](https://sbert.net/docs/package_reference/sentence_transformer/SentenceTransformer.html#sentence_transformers.SentenceTransformer.encode_document) method is a convenient wrapper for the [`SentenceTransformer.encode`](https://sbert.net/docs/package_reference/sentence_transformer/SentenceTransformer.html#sentence_transformers.SentenceTransformer.encode) method in the sense that the prompt `"document"` and the task `"document"` are selected automatically.
However, I'm noticing that the prompt is simply prepended to the provided text rather than having it be formatted. The prompt for `"document"` is `title: {title | "none"} | text: {content}` and inside the `encode` method simply prepends it: https://github.com/UKPLab/sentence-transformers/blob/7341bf155b4349b88690b78c84beb5aa658c439f/sentence_transformers/SentenceTransformer.py#L1040
Meaning that the resulting input to the embedding model would look like `title: none | text: {OUR_TEXT}`. But what if we wanted to include a `title` value? It seems like we'd have to pre-process the input ourselves. But then what is the point of using `encode_document`?
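One possible workaround (a sketch; the model name is just an example of a prompt-aware model) is to build the prompt string yourself and pass it via the `prompt` argument, so the default `"title: none | text: "` prefix is not prepended:
```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("google/embeddinggemma-300m")  # example model
title, content = "Quarterly report", "Revenue grew 12% year over year."
# Supply the fully rendered prefix ourselves instead of the default document prompt.
emb = model.encode(content, prompt=f"title: {title} | text: ")
```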
|
https://github.com/huggingface/sentence-transformers/issues/3532
|
closed
|
[] | 2025-09-28T06:32:51Z
| 2025-09-30T10:59:24Z
| null |
seanswyi
|
pytorch/pytorch
| 164,061
|
GPU Memory Leak due to distributions
|
I am using the [MixStyle](https://arxiv.org/abs/2104.02008) methodology for domain adaptation and it involves using a custom layer which is inserted after every encoder stage. However, it is causing VRAM to grow linearly, which causes OOM error. No memory leak occurs on disabling the layer. Any idea on why this is happening?
```
import random

import torch
import torch.nn as nn


class MixStyle(nn.Module):
    """MixStyle.

    Reference:
      Zhou et al. Domain Generalization with MixStyle. ICLR 2021.
    """

    def __init__(self, p=0.5, alpha=0.1, eps=1e-6, mix='random'):
        """
        Args:
          p (float): probability of using MixStyle.
          alpha (float): parameter of the Beta distribution.
          eps (float): scaling parameter to avoid numerical issues.
          mix (str): how to mix.
        """
        super().__init__()
        self.p = p
        self.beta = torch.distributions.Beta(alpha, alpha)
        self.eps = eps
        self.alpha = alpha
        self.mix = mix
        self._activated = True

    def __repr__(self):
        return f'MixStyle(p={self.p}, alpha={self.alpha}, eps={self.eps}, mix={self.mix})'

    def set_activation_status(self, status=True):
        self._activated = status

    def update_mix_method(self, mix='random'):
        self.mix = mix

    def forward(self, x):
        if not self.training or not self._activated:
            return x
        if random.random() > self.p:
            return x
        B = x.size(0)
        mu = x.mean(dim=[2, 3], keepdim=True)
        var = x.var(dim=[2, 3], keepdim=True)
        sig = (var + self.eps).sqrt()
        mu, sig = mu.detach(), sig.detach()
        x_normed = (x - mu) / sig
        lmda = self.beta.sample((B, 1, 1, 1))
        lmda = lmda.to(x.device)
        if self.mix == 'random':
            # random shuffle
            perm = torch.randperm(B)
        elif self.mix == 'crossdomain':
            # split into two halves and swap the order
            perm = torch.arange(B - 1, -1, -1)  # inverse index
            perm_b, perm_a = perm.chunk(2)
            perm_b = perm_b[torch.randperm(B // 2)]
            perm_a = perm_a[torch.randperm(B // 2)]
            perm = torch.cat([perm_b, perm_a], 0)
        else:
            raise NotImplementedError
        mu2, sig2 = mu[perm], sig[perm]
        mu_mix = mu * lmda + mu2 * (1 - lmda)
        sig_mix = sig * lmda + sig2 * (1 - lmda)
        return x_normed * sig_mix + mu_mix
```
cc @fritzo @neerajprad @alicanb @nikitaved
|
https://github.com/pytorch/pytorch/issues/164061
|
open
|
[
"module: distributions",
"triaged"
] | 2025-09-28T05:08:15Z
| 2025-09-29T14:54:42Z
| 1
|
vedantdalimkar
|
huggingface/transformers
| 41,186
|
Qwen2.5-VL restore tensor multi-image form
|
Hello, I have recently been experimenting with qwen2.5-vl (https://github.com/huggingface/transformers/blob/v4.52-release/src/transformers/models/qwen2_5_vl/modeling_qwen2_5_vl.py). I noticed that multiple images are pre-merged here,
```
image_embeds = self.get_image_features(pixel_values, image_grid_thw)
```
but I want to process each image individually, such as performing pooling on each image. I found that when I attempt operations like
```
image_embeds.view(n_img, image_embeds.shape[0]//n_img, -1)
```
I cannot correctly restore the multi-image format. Could you please advise on how to handle this?
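For context, the splitting I have been attempting looks roughly like this (a sketch continuing from the snippet above; it assumes `spatial_merge_size=2` and that the features are concatenated along dim 0 with a variable number of tokens per image):
```python
# image_embeds and image_grid_thw come from the snippet above.
merge = 2  # assumed spatial_merge_size from the vision config
tokens_per_image = (image_grid_thw.prod(dim=-1) // (merge ** 2)).tolist()
per_image_embeds = image_embeds.split(tokens_per_image, dim=0)
pooled = [e.mean(dim=0) for e in per_image_embeds]  # e.g. mean-pool each image
```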
|
https://github.com/huggingface/transformers/issues/41186
|
closed
|
[] | 2025-09-28T03:36:24Z
| 2025-11-05T08:02:55Z
| 2
|
NiFangBaAGe
|
huggingface/peft
| 2,802
|
Guide on training that requires both LoRA and base model forward calls ?
|
Hi, I'm working on some training variants that require hidden states from the base model and the hidden states produced with LoRA. I'm currently initializing two separate model objects:
```
from peft import get_peft_model
m1=AutoModelForCausalLM.from_pretrained(model_path)
m2=AutoModelForCausalLM.from_pretrained(model_path)
lora_config = LoraConfig(....)
m2 = get_peft_model(m2, lora_config)
```
Is there already an API to run a non-LoRA forward pass with the `m2` object? I believe that would be more memory efficient.
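For reference, one approach that seems to fit (assuming peft's `disable_adapter()` context manager does what I think; `batch` is a placeholder for the model inputs):
```python
# Base hidden states: run m2 with the LoRA weights temporarily disabled.
with m2.disable_adapter():
    base_out = m2(**batch, output_hidden_states=True)

# LoRA hidden states: the usual forward pass.
lora_out = m2(**batch, output_hidden_states=True)
```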
|
https://github.com/huggingface/peft/issues/2802
|
closed
|
[] | 2025-09-27T23:12:23Z
| 2025-10-15T10:26:15Z
| 3
|
thangld201
|
huggingface/lerobot
| 2,072
|
How to run lerobot with RTX 5090? If not possible, please add support
|
### System Info
```Shell
- lerobot version: 0.3.4
- Platform: Linux-6.14.0-32-generic-x86_64-with-glibc2.39
- Python version: 3.12.3
- Huggingface Hub version: 0.35.1
- Datasets version: 4.1.1
- Numpy version: 2.2.6
- PyTorch version: 2.8.0+cu128
- Is PyTorch built with CUDA support?: True
- Cuda version: 12.8
- GPU model: NVIDIA GeForce RTX 5090
- Using GPU in script?: Yes
```
### Information
- [x] One of the scripts in the examples/ folder of LeRobot
- [ ] My own task or dataset (give details below)
### Reproduction
I am trying to run the train script as shown in the examples
```
python -m lerobot.scripts.lerobot_train --policy.path=cijerezg/smolvla-test --dataset.repo_id=cijerezg/pick-up-train-v1 --batch_size=48 --steps=20000 --output_dir=outputs/train/my_smolvla_pickup_v9 --job_name=my_smolvla_training --policy.device=cuda --wandb.enable=true --policy.repo_id=pickup_policy_v5 --save_freq=1000
```
### Expected behavior
I expect it to run, but instead I get the following error:
```
Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "/home/user/Documents/Research/RL/LeRobot/lerobot/src/lerobot/scripts/lerobot_train.py", line 363, in <module>
main()
File "/home/user/Documents/Research/RL/LeRobot/lerobot/src/lerobot/scripts/lerobot_train.py", line 359, in main
train()
File "/home/user/Documents/Research/RL/LeRobot/lerobot/src/lerobot/configs/parser.py", line 225, in wrapper_inner
response = fn(cfg, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/Documents/Research/RL/LeRobot/lerobot/src/lerobot/scripts/lerobot_train.py", line 263, in train
batch = next(dl_iter)
^^^^^^^^^^^^^
File "/home/user/Documents/Research/RL/LeRobot/lerobot/src/lerobot/datasets/utils.py", line 917, in cycle
yield next(iterator)
^^^^^^^^^^^^^^
File "/home/user/Documents/Research/RL/LeRobot/.venv/lib/python3.12/site-packages/torch/utils/data/dataloader.py", line 734, in __next__
data = self._next_data()
^^^^^^^^^^^^^^^^^
File "/home/user/Documents/Research/RL/LeRobot/.venv/lib/python3.12/site-packages/torch/utils/data/dataloader.py", line 1516, in _next_data
return self._process_data(data, worker_id)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/Documents/Research/RL/LeRobot/.venv/lib/python3.12/site-packages/torch/utils/data/dataloader.py", line 1551, in _process_data
data.reraise()
File "/home/user/Documents/Research/RL/LeRobot/.venv/lib/python3.12/site-packages/torch/_utils.py", line 769, in reraise
raise exception
NotImplementedError: Caught NotImplementedError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "/home/user/Documents/Research/RL/LeRobot/.venv/lib/python3.12/site-packages/torch/utils/data/_utils/worker.py", line 349, in _worker_loop
data = fetcher.fetch(index) # type: ignore[possibly-undefined]
^^^^^^^^^^^^^^^^^^^^
File "/home/user/Documents/Research/RL/LeRobot/.venv/lib/python3.12/site-packages/torch/utils/data/_utils/fetch.py", line 52, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
~~~~~~~~~~~~^^^^^
File "/home/user/Documents/Research/RL/LeRobot/lerobot/src/lerobot/datasets/lerobot_dataset.py", line 874, in __getitem__
video_frames = self._query_videos(query_timestamps, ep_idx)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/Documents/Research/RL/LeRobot/lerobot/src/lerobot/datasets/lerobot_dataset.py", line 846, in _query_videos
frames = decode_video_frames(video_path, shifted_query_ts, self.tolerance_s, self.video_backend)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/Documents/Research/RL/LeRobot/lerobot/src/lerobot/datasets/video_utils.py", line 69, in decode_video_frames
return decode_video_frames_torchcodec(video_path, timestamps, tolerance_s)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/Documents/Research/RL/LeRobot/lerobot/src/lerobot/datasets/video_utils.py", line 248, in decode_video_frames_torchcodec
decoder = decoder_cache.get_decoder(str(video_path))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/Documents/Research/RL/LeRobot/lerobot/src/lerobot/datasets/video_utils.py", line 193, in get_decoder
decoder = VideoDecoder(file_handle, seek_mode="approximate")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/Documents/Research/RL/LeRobot/.venv/lib/python3.12/site-packages/torchcodec/decoders/_video_decoder.py", line 89, in __init__
self._decoder = create_decoder(source=source, seek_mode=seek_mode)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/Documents/Research/
|
https://github.com/huggingface/lerobot/issues/2072
|
closed
|
[] | 2025-09-27T19:52:42Z
| 2025-11-08T07:53:00Z
| null |
cijerezg
|
huggingface/text-generation-inference
| 3,333
|
How to use prefix caching
|
Hi
I can't find a way to turn on prefix caching.
When I run any model, I always get:
`Using prefix caching = False`
Thanks a lot
|
https://github.com/huggingface/text-generation-inference/issues/3333
|
open
|
[] | 2025-09-27T14:14:37Z
| 2025-09-29T11:52:48Z
| null |
Noha-Magdy
|
huggingface/smol-course
| 259
|
[QUESTION] Is this a bug in smollmv3's chat template?
|
Hi
I am reading this
https://huggingface.co/learn/smol-course/unit1/2#chat-templates-with-tools
I feel like there is a bug in `HuggingFaceTB/SmolLM3-3B` 's chat template
from the example
```
# Conversation with tool usage
messages = [
{"role": "system", "content": "You are a helpful assistant with access to tools."},
{"role": "user", "content": "What's the weather like in Paris?"},
{
"role": "assistant",
"content": "I'll check the weather in Paris for you.",
"tool_calls": [
{
"id": "call_1",
"type": "function",
"function": {
"name": "get_weather",
"arguments": '{"location": "Paris, France", "unit": "celsius"}'
}
}
]
},
{
"role": "tool",
"tool_call_id": "call_1",
"content": '{"temperature": 22, "condition": "sunny", "humidity": 60}'
},
{
"role": "assistant",
"content": "The weather in Paris is currently sunny with a temperature of 22°C and 60% humidity. It's a beautiful day!"
}
]
# Apply chat template with tools
formatted_with_tools = tokenizer.apply_chat_template(
messages,
tools=tools,
tokenize=False,
add_generation_prompt=False
)
print("Chat template with tools:")
print(formatted_with_tools)
```
I got this result
```
Chat template with tools:
<|im_start|>system
## Metadata
Knowledge Cutoff Date: June 2025
Today Date: 27 September 2025
Reasoning Mode: /think
## Custom Instructions
You are a helpful assistant with access to tools.
### Tools
You may call one or more functions to assist with the user query.
You are provided with function signatures within <tools></tools> XML tags:
<tools>
{'type': 'function', 'function': {'name': 'get_weather', 'description': 'Get the current weather for a location', 'parameters': {'type': 'object', 'properties': {'location': {'type': 'string', 'description': 'The city and state, e.g. San Francisco, CA'}, 'unit': {'type': 'string', 'enum': ['celsius', 'fahrenheit'], 'description': 'The temperature unit'}}, 'required': ['location']}}}
{'type': 'function', 'function': {'name': 'calculate', 'description': 'Perform mathematical calculations', 'parameters': {'type': 'object', 'properties': {'expression': {'type': 'string', 'description': 'Mathematical expression to evaluate'}}, 'required': ['expression']}}}
</tools>
For each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:
<tool_call>
{"name": <function-name>, "arguments": <args-json-object>}
</tool_call>
<|im_end|>
<|im_start|>user
What's the weather like in Paris?<|im_end|>
<|im_start|>assistant
I'll check the weather in Paris for you.<|im_end|>
<|im_start|>user
{"temperature": 22, "condition": "sunny", "humidity": 60}<|im_end|>
<|im_start|>assistant
The weather in Paris is currently sunny with a temperature of 22°C and 60% humidity. It's a beautiful day!<|im_end|>
```
Which is kind of weird.
The first thing is that there is no tool call in the message below:
```
<|im_start|>assistant
I'll check the weather in Paris for you.<|im_end|>
```
I expect it to have `<tool_call> ... </tool_call>` in it.
The second thing is that the `tool` role got replaced with the `user` role.
Shouldn't we explicitly specify the role?
Can someone help me with this, please?
|
https://github.com/huggingface/smol-course/issues/259
|
closed
|
[
"question"
] | 2025-09-27T10:19:37Z
| 2025-11-24T18:40:09Z
| null |
Nevermetyou65
|
pytorch/pytorch
| 163,982
|
Need to update Magma version in Pytorch
|
### 🐛 Describe the bug
We need to look into updating MAGMA for the PyTorch CUDA builds.
We need to understand what the performance increase would be.
Do we need MAGMA at all?
### Versions
2.10.0
cc @ptrblck @msaroufim @eqy @jerryzh168
|
https://github.com/pytorch/pytorch/issues/163982
|
open
|
[
"module: cuda",
"triaged"
] | 2025-09-26T19:21:26Z
| 2025-09-26T19:23:09Z
| 0
|
atalman
|
huggingface/accelerate
| 3,797
|
Question: ReduceLROnPlateau wrapped by AcceleratedScheduler in DDP may multiply LR by num_processes?
|
Hi,
I’m using ReduceLROnPlateau wrapped by AcceleratedScheduler in a multi-GPU / DDP setup (num_processes=8).
My main process calls:
```
lr_scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="min", factor=self.hyper_params['lr_decay_factor'], patience=self.hyper_params['lr_reduce_patient']
)
model, optimizer, train_loader, val_loader, lr_scheduler = accelerator.prepare(
    model_bundle.model, optimizer, data_loaders.train_loader, data_loaders.val_loader, lr_scheduler
)
for epoch in range(self.hyper_params['epochs']):
    # train...
    val_loss = self.eval()
    lr_scheduler.step(val_loss)
```
I noticed that AcceleratedScheduler.step() does:
```
num_processes = AcceleratorState().num_processes
for _ in range(num_processes):
    # Special case when using OneCycle and `drop_last` was not used
    if hasattr(self.scheduler, "total_steps"):
        if self.scheduler._step_count <= self.scheduler.total_steps:
            self.scheduler.step(*args, **kwargs)
    else:
        self.scheduler.step(*args, **kwargs)
```
Will this cause the LR to be reduced num_processes times for a single validation step?
Thanks!
|
https://github.com/huggingface/accelerate/issues/3797
|
closed
|
[] | 2025-09-26T10:02:20Z
| 2025-11-03T15:08:09Z
| 1
|
nicelulu
|
pytorch/pytorch
| 163,946
|
ModuleNotFoundError: No module named 'importlib_metadata'
|
### 🐛 Describe the bug
I encountered this error when I used torchrun.
Traceback (most recent call last):
File "xxx/bin/torchrun", line 5, in <module>
from torch.distributed.run import main
File "xxx/lib/python3.9/site-packages/torch/distributed/run.py", line 381, in <module>
from torch.distributed.elastic.rendezvous.utils import _parse_rendezvous_config
File "xxx/lib/python3.9/site-packages/torch/distributed/elastic/rendezvous/__init__.py", line 142, in <module>
from .registry import _register_default_handlers, _register_out_of_tree_handlers
File "xxx/lib/python3.9/site-packages/torch/distributed/elastic/rendezvous/registry.py", line 19, in <module>
from importlib_metadata import entry_points
ModuleNotFoundError: No module named 'importlib_metadata'
I saw the following code in the source code
if sys.version_info < (3, 10):
    from importlib_metadata import entry_points
else:
    from importlib.metadata import entry_points
Since Python 3.8, the importlib_metadata third-party library has been merged into CPython as its importlib.metadata module, so why does the code check for Python less than 3.10? Is there any special consideration?
If it is necessary, should it be added to requirements.txt?
### Versions
torch 2.6.0
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @pragupta @msaroufim @dcci
|
https://github.com/pytorch/pytorch/issues/163946
|
closed
|
[
"needs reproduction",
"oncall: distributed"
] | 2025-09-26T08:26:50Z
| 2025-11-06T07:20:57Z
| 6
|
yunyiyun
|
huggingface/lerobot
| 2,050
|
I wonder how to use RL on so101 within sim environment?
|
https://github.com/huggingface/lerobot/issues/2050
|
closed
|
[
"question",
"simulation",
"good first issue"
] | 2025-09-26T06:52:38Z
| 2025-10-08T18:04:44Z
| null |
Temmp1e
|
|
huggingface/lerobot
| 2,045
|
I would appreciate it if you could explain how to train the slicing clay model
|
I am planning to conduct a clay-cutting task using pi0. Since this type of task is not typically included among pi0’s foundation model tasks, I would like to inquire how many episodes (and the approximate duration of each) would generally be required for such a custom task.
The task I have in mind involves cutting clay in this manner, and I am uncertain whether it can be made to work effectively. I would greatly appreciate any realistic advice or guidance you could provide on this matter.
<img width="1333" height="1065" alt="Image" src="https://github.com/user-attachments/assets/cd474850-c09a-4ae0-9668-a2ce8c2b3b6e" />
|
https://github.com/huggingface/lerobot/issues/2045
|
open
|
[] | 2025-09-26T00:51:59Z
| 2025-09-26T00:51:59Z
| null |
pparkgyuhyeon
|
pytorch/pytorch
| 163,900
|
[Maintenance] MacOS runners update
|
## Current Status
*ongoing*.
## Error looks like
MacOS jobs might fail with infra errors
## Incident timeline (all times pacific)
*Include when the incident began, when it was detected, mitigated, root caused, and finally closed.*
## User impact
*How does this affect users of PyTorch CI?*
## Root cause
*What was the root cause of this issue?*
## Mitigation
*How did we mitigate the issue?*
## Prevention/followups
*How do we prevent issues like this in the future?*
|
https://github.com/pytorch/pytorch/issues/163900
|
closed
|
[
"ci: sev"
] | 2025-09-25T22:30:08Z
| 2025-09-26T11:27:33Z
| 3
|
malfet
|
pytorch/torchx
| 1,130
|
The hosted doc server is not working
|
## 📚 Documentation
## Link
We are now redirected from https://docs.pytorch.org/torchx/main/quickstart.html to https://meta-pytorch.org/torchxmain/quickstart.html
## What does it currently say?
```
404
File not found
The site configured at this address does not contain the requested file.
If this is your site, make sure that the filename case matches the URL as well as any file permissions.
For root URLs (like http://example.com/) you must provide an index.html file.
[Read the full documentation](https://help.github.com/pages/) for more information about using GitHub Pages.
```
It should redirect us to https://meta-pytorch.org/torchx/main/quickstart.html or https://meta-pytorch.org/torchx/latest/quickstart.html
Looks like instead of `/torchx/main/` it gets resolved to `/torchxmain/`.
## What should it say?
Should work as before
## Why?
Hosted docs are very useful
|
https://github.com/meta-pytorch/torchx/issues/1130
|
closed
|
[] | 2025-09-25T16:58:45Z
| 2025-09-25T20:14:43Z
| 2
|
clumsy
|
huggingface/lerobot
| 2,042
|
Question: How to train to get Task Recovery behavior?
|
We would need the robot to be able to detect a failure (like dropping an object) and attempt to correct it to continue with the task.
What would the training data need to look like for this?
Thanks
|
https://github.com/huggingface/lerobot/issues/2042
|
open
|
[] | 2025-09-25T15:52:55Z
| 2025-09-25T15:52:55Z
| null |
raul-machine-learning
|
huggingface/accelerate
| 3,794
|
Error when evaluating with multi-gpu
|
I ran into a problem when evaluating LLaDA-8B with multiple GPUs (**Nvidia V100**) using accelerate + lm_eval. The error occurs when **num_processes > 1**,
but there is no problem with a single GPU; all the other configs are the same.
How can I solve this problem?
I use this command to evaluate
accelerate launch --config_file config1.yaml eval_llada.py --tasks ${task} --num_fewshot ${num_fewshot} \
--confirm_run_unsafe_code --model llada_dist \
--model_args model_path='/raid/data/zhouy/model_data/LLaDA-8B-Instruct',
gen_length=${length},steps=${length},block_length=${block_length},show_speed=True
This is my config1.yaml
compute_environment: LOCAL_MACHINE
debug: false
distributed_type: MULTI_GPU
downcast_bf16: 'no'
enable_cpu_affinity: false
machine_rank: 0
main_process_ip: null
main_process_port: 5678
main_training_function: main
mixed_precision: fp16
num_machines: 1
num_processes: 2
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false
Here is the Error logs:
[rank1]: Traceback (most recent call last):
[rank1]: File "/home/zhouy/dllm/Fast-dLLM-main/llada/eval_llada.py", line 364, in <module>
[rank1]: cli_evaluate()
[rank1]: File "/raid/data/zhouy/anaconda3/envs/dllm/lib/python3.9/site-packages/lm_eval/__main__.py", line 389, in cli_evaluate
[rank1]: results = evaluator.simple_evaluate(
[rank1]: File "/raid/data/zhouy/anaconda3/envs/dllm/lib/python3.9/site-packages/lm_eval/utils.py", line 422, in _wrapper
[rank1]: return fn(*args, **kwargs)
[rank1]: File "/raid/data/zhouy/anaconda3/envs/dllm/lib/python3.9/site-packages/lm_eval/evaluator.py", line 308, in simple_evaluate
[rank1]: results = evaluate(
[rank1]: File "/raid/data/zhouy/anaconda3/envs/dllm/lib/python3.9/site-packages/lm_eval/utils.py", line 422, in _wrapper
[rank1]: return fn(*args, **kwargs)
[rank1]: File "/raid/data/zhouy/anaconda3/envs/dllm/lib/python3.9/site-packages/lm_eval/evaluator.py", line 528, in evaluate
[rank1]: resps = getattr(lm, reqtype)(cloned_reqs)
[rank1]: File "/home/zhouy/dllm/Fast-dLLM-main/llada/eval_llada.py", line 312, in generate_until
[rank1]: generated_answer, nfe = generate_with_dual_cache(self.model, input_ids, steps=self.steps, gen_length=self.gen_length, block_length=self.block_length,
[rank1]: File "/raid/data/zhouy/anaconda3/envs/dllm/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
[rank1]: return func(*args, **kwargs)
[rank1]: File "/home/zhouy/dllm/Fast-dLLM-main/llada/generate.py", line 208, in generate_with_dual_cache
[rank1]: output = model(x, use_cache=True)
[rank1]: File "/raid/data/zhouy/anaconda3/envs/dllm/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
[rank1]: return self._call_impl(*args, **kwargs)
[rank1]: File "/raid/data/zhouy/anaconda3/envs/dllm/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
[rank1]: return forward_call(*args, **kwargs)
[rank1]: File "/raid/data/zhouy/anaconda3/envs/dllm/lib/python3.9/site-packages/torch/nn/parallel/distributed.py", line 1643, in forward
[rank1]: else self._run_ddp_forward(*inputs, **kwargs)
[rank1]: File "/raid/data/zhouy/anaconda3/envs/dllm/lib/python3.9/site-packages/torch/nn/parallel/distributed.py", line 1459, in _run_ddp_forward
[rank1]: return self.module(*inputs, **kwargs) # type: ignore[index]
[rank1]: File "/raid/data/zhouy/anaconda3/envs/dllm/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
[rank1]: return self._call_impl(*args, **kwargs)
[rank1]: File "/raid/data/zhouy/anaconda3/envs/dllm/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
[rank1]: return forward_call(*args, **kwargs)
[rank1]: File "/raid/data/zhouy/anaconda3/envs/dllm/lib/python3.9/site-packages/accelerate/utils/operations.py", line 818, in forward
[rank1]: return model_forward(*args, **kwargs)
[rank1]: File "/raid/data/zhouy/anaconda3/envs/dllm/lib/python3.9/site-packages/accelerate/utils/operations.py", line 806, in __call__
[rank1]: return convert_to_fp32(self.model_forward(*args, **kwargs))
[rank1]: File "/raid/data/zhouy/anaconda3/envs/dllm/lib/python3.9/site-packages/torch/amp/autocast_mode.py", line 44, in decorate_autocast
[rank1]: return func(*args, **kwargs)
[rank1]: File "/home/zhouy/dllm/Fast-dLLM-main/llada/model/modeling_llada.py", line 1582, in forward
[rank1]: outputs = self.model.forward(
[rank1]: File "/home/zhouy/dllm/Fast-dLLM-main/llada/model/modeling_llada.py", line 1479, in forward
[rank1]: x, cache = block(x, attention_bias=attention_bias, layer_past=layer_past, use_ca
|
https://github.com/huggingface/accelerate/issues/3794
|
closed
|
[] | 2025-09-25T14:42:29Z
| 2025-11-03T15:08:12Z
| 1
|
adfad1
|
huggingface/text-embeddings-inference
| 728
|
Compile error in multiple environments for CPU backend
|
### System Info
TEI source code:
- Latest main branch(0c1009bfc49b759fe75eed4fd377b4fbad534ad5);
- Latest release `v1.8.2`;
- Release `v1.8.1`
Tested platform:
- Win: AMD 7950X+Windows 10 x64 Version 10.0.19045.6332;
- WSL2: AMD 7950X+Debian 13 on wsl2 (Linux DESKTOP 5.15.167.4-microsoft-standard-WSL2 # 1 SMP Tue Nov 5 00:21:55 UTC 2024 x86_64 GNU/Linux) @ Windows 10 x64 Version 10.0.19045.6332;
- Linux: Intel 6133*2+Ubuntu 20.04;
(GPUs are not mentioned since TEI is built for the CPU backend)
Tested rustup envs:
Freshly installed rustup: default rustup profile: cargo 1.85.1 (d73d2caf9 2024-12-31)
- Win: Freshly installed rustup & Freshly installed MSVC v143 -VS 2022 C++ build tools+Windows 11 SDK (10.0.22621.0)+cmake
- WSL: Freshly installed rustup & gcc (Debian 14.2.0-19) 14.2.0
- Linux: Freshly installed rustup & gcc (GCC) 10.5.0
### Information
- [ ] Docker
- [x] The CLI directly
### Tasks
- [x] An officially supported command
- [ ] My own modifications
### Reproduction
As the docs recommend, tested on the 3 different envs listed above:
1. `curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh`
2. `cargo install --path router -F mkl --verbose` (added `--verbose` for logging)
Compilation fails with about **25 undefined references / external symbols** (`'vsTanh', 'vsSub', 'vsSqrt', 'vsSin', 'vsMul', 'vsLn', 'vsFmin', 'vsExp', 'vsDiv', 'vsCos', 'vsAdd', 'vdTanh', 'vdSub', 'vdSqrt', 'vdSin', 'vdMul', 'vdLn', 'vdFmin', 'vdExp', 'vdDiv', 'vdCos', 'vdAdd', 'sgemm_', 'hgemm_', 'dgemm_'`)
### Expected behavior
Expected compilation to finish, but:
- Compiling v1.8.2/v1.8.1/main (similar error) on Win+MSVC+AMD CPU:
```
...
Running `C:\Users\nkh04\.rustup\toolchains\1.85.1-x86_64-pc-windows-msvc\bin\rustc.exe --crate-name text_embeddings_router --edition=2021 router\src\main.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts,future-incompat --diagnostic-width=115 --crate-type bin --emit=dep-info,link -C opt-level=3 -C panic=abort -C lto=fat -C codegen-units=1 --cfg "feature=\"candle\"" --cfg "feature=\"default\"" --cfg "feature=\"dynamic-linking\"" --cfg "feature=\"http\"" --cfg "feature=\"mkl\"" --check-cfg cfg(docsrs,test) --check-cfg "cfg(feature, values(\"accelerate\", \"candle\", \"candle-cuda\", \"candle-cuda-turing\", \"candle-cuda-volta\", \"default\", \"dynamic-linking\", \"google\", \"grpc\", \"http\", \"metal\", \"mkl\", \"ort\", \"python\", \"static-linking\"))" -C metadata=e1406d246b8c925f --out-dir F:\text-embeddings-inference-1.8.2\target\release\deps -C strip=symbols -L dependency=F:\text-embeddings-inference-1.8.2\target\release\deps --extern anyhow=F:\text-embeddings-inference-1.8.2\target\release\deps\libanyhow-5751be73768123a3.rlib --extern axum=F:\text-embeddings-inference-1.8.2\target\release\deps\libaxum-8bc59cf51b8d1ae2.rlib --extern axum_tracing_opentelemetry=F:\text-embeddings-inference-1.8.2\target\release\deps\libaxum_tracing_opentelemetry-6919ca207315f42e.rlib --extern base64=F:\text-embeddings-inference-1.8.2\target\release\deps\libbase64-20907aaabfa37a5c.rlib --extern clap=F:\text-embeddings-inference-1.8.2\target\release\deps\libclap-ded1b8a7f6da29a7.rlib --extern futures=F:\text-embeddings-inference-1.8.2\target\release\deps\libfutures-55e1ce906ca8ce43.rlib --extern hf_hub=F:\text-embeddings-inference-1.8.2\target\release\deps\libhf_hub-46162d037bf61d01.rlib --extern http=F:\text-embeddings-inference-1.8.2\target\release\deps\libhttp-721bb5a8d4ad5af4.rlib --extern init_tracing_opentelemetry=F:\text-embeddings-inference-1.8.2\target\release\deps\libinit_tracing_opentelemetry-1130e5d6b02b3c83.rlib --extern intel_mkl_src=F:\text-embeddings-inference-1.8.2\target\release\deps\libintel_mkl_src-7de47f7e38d141d5.rlib --extern metrics=F:\text-embeddings-inference-1.8.2\target\release\deps\libmetrics-f38f63f59a9e401d.rlib --extern metrics_exporter_prometheus=F:\text-embeddings-inference-1.8.2\target\release\deps\libmetrics_exporter_prometheus-3e83484daaaf9a40.rlib --extern mimalloc=F:\text-embeddings-inference-1.8.2\target\release\deps\libmimalloc-55786f97dafb497c.rlib --extern num_cpus=F:\text-embeddings-inference-1.8.2\target\release\deps\libnum_cpus-26f3f7fb7d16b825.rlib --extern opentelemetry=F:\text-embeddings-inference-1.8.2\target\release\deps\libopentelemetry-43ce590757d45ebb.rlib --extern opentelemetry_otlp=F:\text-embeddings-inference-1.8.2\target\release\deps\libopentelemetry_otlp-7adf99fb9a924955.rlib --extern opentelemetry_sdk=F:\text-embeddings-inference-1.8.2\target\release\deps\libopentelemetry_sdk-48d11cd15d38a406.rlib --extern reqwest=F:\text-embeddings-inference-1.8.2\target\release\deps\libreqwest-cdbb64c7917c22c9.rlib --extern serde=F:\text-embeddings-inference-1.8.2\target\release\deps\libserde-e13a1b310cb83bc5.rlib --extern serde_json=F:\text-embeddings-inference-1.8.2\target\release\deps\libserde_json-c2074a4721fb3f74.rlib --extern simsimd=F:\text-embeddings-inference-1.8.2\target\release\deps\libsimsimd-5bf7050b419eab84.rlib --extern text_embeddings_bac
|
https://github.com/huggingface/text-embeddings-inference/issues/728
|
open
|
[
"documentation",
"question"
] | 2025-09-25T11:52:16Z
| 2025-11-18T14:49:01Z
| null |
nkh0472
|
huggingface/transformers
| 41,141
|
Need a concise example of Tensor Parallelism (TP) training using Trainer/SFTTrainer.
|
### Feature request
I have checked the code and there are a few places that talk about TP. I saw that the model's from_pretrained method accepts tp_plan and device_mesh. I also checked that TrainingArguments can take a parallelism_config which defines the TP/CP plan along with FSDP. However, I am not able to stitch things together to make TP-only training work. Please help.
Ref:
- https://github.com/huggingface/transformers/blob/main/examples/3D_parallel.py
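For concreteness, here is the kind of minimal sketch I have in mind but have not been able to verify end to end (the checkpoint, dataset, and hyperparameters are placeholders; it assumes a recent transformers where `from_pretrained` accepts `tp_plan="auto"` and the script is launched with `torchrun` so every rank joins the TP mesh — whether Trainer needs extra `parallelism_config` wiring on top of this is exactly the open question):
```python
# Launch with: torchrun --nproc_per_node=8 tp_sft.py
# Untested sketch: tp_plan="auto" shards the model over all launched ranks.
import torch
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_id = "meta-llama/Llama-3.1-8B"  # placeholder checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    tp_plan="auto",  # tensor-parallel sharding across the torchrun processes
)

ds = load_dataset("imdb", split="train[:1%]")  # placeholder dataset
ds = ds.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True,
    remove_columns=ds.column_names,
)

args = TrainingArguments(output_dir="tp-out", per_device_train_batch_size=1, max_steps=10)
trainer = Trainer(
    model=model,
    args=args,
    train_dataset=ds,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```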
### Motivation
Need to enable only TP based training, but no tutorial or example is available.
### Your contribution
Given proper understanding and guidance, I can come up with a clean example and documentation for the same.
|
https://github.com/huggingface/transformers/issues/41141
|
open
|
[
"Documentation",
"Feature request",
"Tensor Parallel"
] | 2025-09-25T03:01:02Z
| 2026-01-04T14:05:36Z
| 10
|
meet-minimalist
|
pytorch/pytorch
| 163,801
|
[CUDA][Triton][PTXAS] Triton Wheel Missing CUDA13 PTXAS - Breakage exists for the environment where CTK is not present
|
### 🐛 Describe the bug
By default, triton release/3.5.x ships a PTXAS version that is based on CUDA 12.8.
**In environments where the latest CTK is NOT installed**
Compared to the PTXAS from CUDA 13.0, the CUDA 12.8 ptxas is not capable of handling the THOR device (which underwent a renaming; see https://github.com/llvm/llvm-project/issues/156096 for the related background issue. Note this llvm issue 156096 has been fixed in triton/3.5.x via https://github.com/triton-lang/llvm-project/pull/2, which can be verified with CTK 13.0. Referenced here just for the renaming context) or other newer devices.
Users on THOR would encounter:
ptxas fatal : Value 'sm_110a' is not defined for option 'gpu-name'
Users on SM_121 device (https://docs.nvidia.com/cuda/pdf/CUDA_Features_Archive.pdf) would encounter
ptxas fatal : Value 'sm_121a' is not defined for option 'gpu-name'
See also the report https://github.com/llvm/llvm-project/issues/156096#issuecomment-3319410046 from @[mcr-ksh](https://github.com/mcr-ksh)
**In environments that have the latest CTK installed**
Users may still need the explicit "export TRITON_PTXAS_PATH=/usr/local/cuda/bin/ptxas" to get Triton to pick up the right ptxas.
We have a few options:
1. According to @ptrblck, one workaround could be to ship ptxas12 as well as ptxas13 and use the appropriate one using a runtime check for the PyTorch/CUDA version, we did this in the past for Blackwell (using ptxas_blackwell) when ptxas==12.8.
2. PyTorch cu126/cu128/cu130 shipping a different ptxas, then triton won't need one
3. we build triton cuda wheels separately for cu126/cu128/cu130.
No.1 seems to be doable for final v2.9RC. Thoughts?
cc @seemethere @malfet @atalman @ptrblck @eqy @tinglvv @xwang233 @davidberard98
### Versions
Triton release/3.5.x
|
https://github.com/pytorch/pytorch/issues/163801
|
closed
|
[
"module: binaries",
"triaged",
"module: third_party",
"has workaround",
"dependencies"
] | 2025-09-24T22:21:24Z
| 2025-09-30T01:56:15Z
| null |
nWEIdia
|
huggingface/lerobot
| 2,034
|
dataset v2.1 and groot n1.5
|
For now, groot does not support dataset v3.0 for fine-tuning? In this case, should we continue using v2.1? And if we have already collected data with v3, how can we convert it back to v2.1?
|
https://github.com/huggingface/lerobot/issues/2034
|
open
|
[
"question",
"policies",
"dataset"
] | 2025-09-24T21:12:26Z
| 2025-12-24T00:05:45Z
| null |
zujian-y
|
pytorch/pytorch
| 163,789
|
[docs] instructions to locally build docs are underspecified
|
*Note: moving the dependency conflict discussion to #164010.*
### 📚 The doc issue
Docstring changes I made in #163120 caused the `linux-jammy-py3_10-gcc11-build` `docs_test` CI to fail. To debug this I had to build the docs locally, and ran into some rough edges:
1. There are small discrepancies between the instructions in [`CONTRIBUTING.md`](https://github.com/pytorch/pytorch/blob/main/CONTRIBUTING.md#building-documentation) and [`README.md`](https://github.com/pytorch/pytorch?tab=readme-ov-file#building-the-documentation).
2. As written, neither set of instructions will work with Python >=3.11 - `.ci/docker/requirements-docs.txt` uses `matplotlib=3.5.3` for Python <3.13, but matplotlib 3.5.3 only has wheels for/supports Python <=3.10. It also uses `matplotlib=3.6.3` for Python >=3.13, but matplotlib 3.6.3 only has wheels for/supports Python <=3.11.
3. ~Tools like `uv` don't support use of the editable flag `-e` with URLs (line 4 of `docs/requirements.txt`). This is also deprecated in `pip` and will be enforced in `pip 25.3`, which will be released in about a month!~ (Edit: no action required here - `setup.py develop` is being deprecated, not the entire editable mechanism. See [this issue](https://github.com/pypa/pip/issues/11457) and [PEP 660](https://peps.python.org/pep-0660/) for more context.)
4. I wasn't able to build the docs without installing `numpy<2`. This isn't possible, right?
Nits:
- ~The README docs instructions have a typo - `node@6.13.1` should presumably be `node@16.13.1`.~ (Edit: this was wrong.)
- The `CONTRIBUTING.md` tip about removing irrelevant `.rst` files is outdated - the example command removes `docs/source/scripts/exportdb/generate_example_rst.py`, which will cause builds to error. This can be fixed with `find . -type f -iname "*.rst" | ...`
<details>
<summary>How to build the PyTorch nightly docs on MacOS</summary>
If you come across this issue while trying to build the docs, this works as of September 2025:
1. Set up the repo and a Python 3.10 env with pip.
```bash
git clone https://github.com/pytorch/pytorch.git
cd pytorch
uv venv -p 3.10 .venv-docs
source .venv-docs/bin/activate
uv pip install -U pip
```
2. Install torch.
If you're making small changes in `docs/source`, you can install [the appropriate nightly wheel](https://pytorch.org/get-started/locally/):
```bash
python -m pip install --pre torch --index-url https://download.pytorch.org/whl/nightly/cpu
```
If you're adding new Python modules or updating Python docstrings in `torch/`, you can use `tools/nightly.py` with the prefix flag:
```bash
./tools/nightly.py checkout -b our-branch -p .venv-docs
```
If you're doing something more involved you likely have to [build from source](https://github.com/pytorch/pytorch?tab=readme-ov-file#from-source):
```bash
pip install --group dev
python -m pip install --no-build-isolation -v -e .
```
3. Install docs-specific dependencies.
```bash
brew install node
npm install -g katex@0.13.18
pip install -r docs/requirements.txt
pip install 'numpy<2'
```
Afterwards you should be able to `cd docs && make html`.
</details>
### Suggest a potential alternative/fix
Reconcile and update the instructions in `CONTRIBUTING.md` and `README.md`. In particular:
- Recommend using a separate venv for local docs builds.
- Explain when/why someone building the docs should install torch from source.
- Move niche information (e.g. building a PDF of the docs) from the README to `CONTRIBUTING.md`, and add a link.
And:
- Pin `numpy<2` and appropriate matplotlib versions in `.ci/docker/requirements-docs.txt`. If there's some reason we can't do this, let's explicitly note that only Python 3.10 is supported for now (since we can't build torch from source on 3.9).
- ~Remove the editable flag on `pytorch_sphinx_theme2`, and document a separate flow for those actually working on the theme.~
If all of this makes sense to reviewers, I can get started on a PR with these fixes.
I can't edit the wiki, but it'd be great if a maintainer could update the [Docstring Guidelines](https://github.com/pytorch/pytorch/wiki/Docstring-Guidelines) to link directly to `https://google.github.io/styleguide/pyguide.html#38-comments-and-docstrings` and link to the instructions in `CONTRIBUTING.md`.
cc @svekars @sekyondaMeta @AlannaBurke
|
https://github.com/pytorch/pytorch/issues/163789
|
open
|
[
"module: docs",
"triaged",
"actionable"
] | 2025-09-24T20:24:43Z
| 2025-09-26T22:34:16Z
| 2
|
filipviz
|
pytorch/pytorch
| 163,785
|
Revisit guarding on unbacked inputs !
|
We now generate guards on unbacked inputs; those are interesting:
- some we do not need at all because they are side effects of torch._check calls
- some are actually needed (striding properties that we did assert on); shall we make them runtime assertions?
There are some examples in the tests [here](https://github.com/pytorch/pytorch/pull/163705/files). Not sure yet what the right solution is, but here are some different examples.
### Example1
For example u0<6 here should not be a guard.
```
@torch.compile(fullgraph=True, dynamic=True, backend=cnt)
def func(a):
torch._check(a.size()[0] < 6)
return a * 10
a = torch.rand(4, 10)
```
### Example2
but what about something like
```
@torch.compile(fullgraph=True, dynamic=True, backend=cnt)
def func(a):
return a*10
# no recompile if we pass 9, 8
# recompile if we pass 11
a = torch.rand(1,2,3,4,5)
torch._dynamo.decorators.mark_unbacked(a, 0)
torch._dynamo.decorators.mark_unbacked(a, 1)
torch._dynamo.decorators.mark_unbacked(a, 2)
torch._dynamo.decorators.mark_unbacked(a, 3)
torch._dynamo.decorators.mark_unbacked(a, 4)
func(a)
```
Shall we guard or runtime assert on striding properties with unbacked symbols?
ex:
L['a'].stride()[0] == L['a'].size()[1]*L['a'].size()[2]*L['a'].size()[3]*L['a'].size()[4]
### Example3
Here is another example; is my expectation of what should happen in it right?
```
def func(a):
# this should generate a runtime assertion and no guard.
torch._check(a.size()[0] == a.size()[1])
# This should generate guard
torch._check(a.size()[0] < 10)
return a * 10
a = torch.rand(4,4)
torch._dynamo.decorators.mark_unbacked(a, 0)
torch._dynamo.mark_dynamic(a, 1)
func(a)
# should not recompile (I think)
try :
func(torch.rand(4, 7))
except:
pass
# recompile (should recompile)
try :
func(torch.rand(100, 100))
except:
pass
```
we recompile for both now.
cc @chauhang @penguinwu @ezyang @bobrenjc93
|
https://github.com/pytorch/pytorch/issues/163785
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamic shapes"
] | 2025-09-24T19:17:35Z
| 2025-10-29T22:58:35Z
| 2
|
laithsakka
|
huggingface/tokenizers
| 1,868
|
How to set the cache_dir in the Rust implementation?
|
Hey, thank you for your great work with these tokenizers.
When I use the tokenizers through the Python API via transformers, I can set a specific cache_dir like this
```
from transformers import AutoTokenizer
self.tokenizer = AutoTokenizer.from_pretrained(self.tokenizer_name,cache_dir = self.cache_dir)
```
How can I do that in Rust? How can I print the default cache dir (in Rust)?
|
https://github.com/huggingface/tokenizers/issues/1868
|
open
|
[] | 2025-09-24T18:50:38Z
| 2025-10-06T04:25:46Z
| null |
sambaPython24
|
huggingface/diffusers
| 12,386
|
Implement missing features on ModularPipeline
|
As I'm looking to take advantage of the new `ModularPipeline`, the ask is to implement some currently missing features.
My use case is converting an existing model loaded with a standard pipeline into a modular pipeline; that functionality was provided via #11915 and is now working.
The first minor obstacle is that the modular pipeline does not define the params allowed for execution.
In a standard pipeline I can inspect the `__call__` signature to see which params are allowed.
I currently work around this using
`possible = [input_param.name for input_param in model.blocks.inputs]`
Please advise if this is acceptable.
The second is that modular pipelines don't seem to implement normal callbacks at all (e.g. `callback_on_step_end_tensor_inputs`)? At the minimum we need some kind of callback functionality to capture interim latents on each step.
The third is more cosmetic - the modular pipeline does implement `set_progress_bar_config`, but it's not doing anything as it's not implemented on the actual block (tested with `StableDiffusionXLModularPipeline`).
cc @yiyixuxu @DN6 @sayakpaul
|
https://github.com/huggingface/diffusers/issues/12386
|
open
|
[
"roadmap"
] | 2025-09-24T15:49:23Z
| 2025-09-29T05:46:29Z
| 0
|
vladmandic
|
pytorch/pytorch
| 163,761
|
Does device mesh of (N,1) cause all_gather communication in HSDP of FSDP2?
|
In HSDP with FSDP2, let's say I have N GPUs: if the shape of the device mesh is (N, 1) (similar to DDP), will all_gather communication still happen in forward/backward? Or is this device mesh shape illegitimate?
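For reference, a minimal sketch of the setup I am asking about (assuming N processes launched with torchrun and a recent PyTorch where `fully_shard` is importable from `torch.distributed.fsdp`):
```python
# Sketch of the (N, 1) HSDP mesh in question; launch with torchrun --nproc_per_node=N.
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.distributed.device_mesh import init_device_mesh
from torch.distributed.fsdp import fully_shard

dist.init_process_group("nccl")
torch.cuda.set_device(dist.get_rank() % torch.cuda.device_count())
world_size = dist.get_world_size()

# dim 0 replicates across all N ranks, dim 1 "shards" across a single rank (DDP-like).
mesh = init_device_mesh("cuda", (world_size, 1), mesh_dim_names=("replicate", "shard"))

model = nn.Sequential(nn.Linear(1024, 1024), nn.GELU(), nn.Linear(1024, 1024)).cuda()
fully_shard(model, mesh=mesh)

out = model(torch.randn(8, 1024, device="cuda"))
out.sum().backward()  # does this still issue an all_gather with a shard dim of size 1?
```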
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @pragupta @ezyang @msaroufim @dcci @chauhang @penguinwu
|
https://github.com/pytorch/pytorch/issues/163761
|
open
|
[
"oncall: distributed"
] | 2025-09-24T13:51:27Z
| 2025-09-25T18:59:28Z
| 1
|
EquationWalker
|
pytorch/pytorch
| 163,753
|
Invalid __shared__ read of size 16 bytes in torch.conv_transpose3d
|
### 🐛 Describe the bug
When using `torch.nn.ConvTranspose3d` with certain parameters, a CUDA `__shared__` memory read out-of-bounds error occurs.
```python
import torch
import torch.nn as nn
import os
os.environ['CUDA_LAUNCH_BLOCKING'] = '1'
def main():
if not torch.cuda.is_available() or not torch.backends.cudnn.is_available():
print("This bug requires a CUDA-enabled GPU with cuDNN.")
return
device = torch.device("cuda")
dtype = torch.float32
try:
in_channels = 24
model = nn.ConvTranspose3d(
in_channels=in_channels,
out_channels=1,
kernel_size=(15, 3, 10),
stride=(2, 1, 1),
padding=(23, 0, 1),
dilation=(1, 3, 3),
groups=1,
bias=False
).to(device, dtype=dtype)
model.eval()
input_shape = (1, in_channels, 24, 24, 24)
input_tensor = torch.randn(input_shape, device=device, dtype=dtype)
model(input_tensor)
except Exception as e:
print(f"An unexpected error occurred: {e}")
import traceback
traceback.print_exc()
if __name__ == "__main__":
main()
```
### How to Reproduce
1. Save the code above as `repro.py`.
2. Run the script using `compute-sanitizer`. The `Invalid __shared__ read` error will be reported.
```bash
compute-sanitizer python repro.py
```
### Observed Results
```
========= COMPUTE-SANITIZER
========= Invalid __shared__ read of size 16 bytes
========= at void xmma_cudnn_infer::implicit_gemm::strided_dgrad_indexed::kernel_helper_stage_1<xmma_cudnn_infer::implicit_gemm::strided_dgrad_indexed::Params_pre_hopper>(T1)+0x2710
========= by thread (255,0,0) in block (4,0,0)
========= Address 0x400 is out of bounds
========= Saved host backtrace up to driver entry point at kernel launch time
========= Host Frame: [0x2631a7a] in libcudnn_cnn_infer.so.8
========= Host Frame: [0x268da7a] in libcudnn_cnn_infer.so.8
========= Host Frame: [0x21853e2] in libcudnn_cnn_infer.so.8
========= Host Frame: [0x1928b9b] in libcudnn_cnn_infer.so.8
========= Host Frame: cudnn::cnn::infer::InferNdSubEngine<false, (cudnnTensorFormat_t)1, (cudnnTensorFormat_t)1, (cudnnTensorFormat_t)1, (cudnnDataType_t)0, true, 80, (cudnn::cnn::infer::subtree_t)1, cask_cudnn_infer::ConvolutionDgrad, cask_cudnn_infer::ShaderList<cask_cudnn_infer::ConvDgradShader, cask_cudnn_infer::ConvolutionDgrad>, cask_cudnn_infer::ConvDgradShader>::execute_internal_fprop_impl(cudnnContext*, CUstream_st*, void const*, void const*, void const*, void const*, void const*, void const*, unsigned long, void*, void*, unsigned int) [0x1215689] in libcudnn_cnn_infer.so.8
========= Host Frame: cudnn::cnn::infer::InferNdSubEngine<false, (cudnnTensorFormat_t)1, (cudnnTensorFormat_t)1, (cudnnTensorFormat_t)1, (cudnnDataType_t)0, true, 80, (cudnn::cnn::infer::subtree_t)1, cask_cudnn_infer::ConvolutionDgrad, cask_cudnn_infer::ShaderList<cask_cudnn_infer::ConvDgradShader, cask_cudnn_infer::ConvolutionDgrad>, cask_cudnn_infer::ConvDgradShader>::execute_internal_impl(cudnn::backend::VariantPack const&, CUstream_st*) [0x1215c7a] in libcudnn_cnn_infer.so.8
========= Host Frame: cudnn::cnn::EngineInterface::execute(cudnn::backend::VariantPack const&, CUstream_st*) [0xd8eb04] in libcudnn_cnn_infer.so.8
========= Host Frame: cudnn::cnn::EngineContainer<(cudnnBackendEngineName_t)1051>::execute_internal_impl(cudnn::backend::VariantPack const&, CUstream_st*) [0xdc94cf] in libcudnn_cnn_infer.so.8
========= Host Frame: cudnn::cnn::EngineInterface::execute(cudnn::backend::VariantPack const&, CUstream_st*) [0xd8eb04] in libcudnn_cnn_infer.so.8
========= Host Frame: cudnn::cnn::AutoTransformationExecutor::execute_pipeline(cudnn::cnn::EngineInterface&, cudnn::backend::VariantPack const&, CUstream_st*) const [0xf00e7d] in libcudnn_cnn_infer.so.8
========= Host Frame: cudnn::cnn::BatchPartitionExecutor::operator()(cudnn::cnn::EngineInterface&, cudnn::cnn::EngineInterface*, cudnn::backend::VariantPack const&, CUstream_st*) const [0xf00fc6] in libcudnn_cnn_infer.so.8
========= Host Frame: cudnn::cnn::GeneralizedConvolutionEngine<cudnn::cnn::EngineContainer<(cudnnBackendEngineName_t)1051> >::execute_internal_impl(cudnn::backend::VariantPack const&, CUstream_st*) [0xf0f7aa] in libcudnn_cnn_infer.so.8
========= Host Frame: cudnn::cnn::EngineInterface::execute(cudnn::backend::VariantPack const&, CUstream_st*) [0xd8eb04] in libcudnn_cnn_infer.so.8
========= Host Frame: cudnn::backend::execute(cudnnContext*, cudnn::backend::ExecutionPlan&, cudnn::backend::VariantPack&) [0xda3498] in libcudnn_cnn_infer.so.8
========= Host Frame: cudnnBackendExecute [0xda383c] in libcudnn_cnn_infer.so.8
========= Host Frame: at::native::run_conv_plan(cudnnContext*, at::Tensor const&, at::Tensor const&, at::Tensor const&, cudnn_fronten
|
https://github.com/pytorch/pytorch/issues/163753
|
closed
|
[] | 2025-09-24T11:33:17Z
| 2025-09-26T01:22:04Z
| 4
|
supermarkli
|
pytorch/torchtitan
| 1,750
|
Inconsistent loss between different TP
|
### Bug description
I have encountered inconsistent loss between different TP degrees on both the llama3 and llama4 MoE models.
The toml configs are exactly the same except for different tensor parallel degrees.
The seed is set and deterministic is turned on.
tensorboard:
## llama4:
gradnorm:
<img width="1278" height="460" alt="Image" src="https://github.com/user-attachments/assets/f289b37a-b0cf-4de0-aa02-20c0a2af9522" />
loss:
<img width="1266" height="460" alt="Image" src="https://github.com/user-attachments/assets/2b3e1245-5474-4c36-ab26-e2a76588363e" />
## llama3:
gradnorm:
<img width="1278" height="460" alt="Image" src="https://github.com/user-attachments/assets/76a51203-d1f0-46dd-b457-778f2001cb79" />
loss:
<img width="1278" height="460" alt="Image" src="https://github.com/user-attachments/assets/0f42db79-0445-4ea7-b7b8-b361904d0eb7" />
### Versions
toml:
```
# torchtitan Config.toml
[job]
dump_folder = "./outputs"
description = "Llama 3 debug training"
print_args = false
use_for_integration_test = true
[profiling]
enable_profiling = false
save_traces_folder = "profile_trace"
profile_freq = 10
enable_memory_snapshot = false
save_memory_snapshot_folder = "memory_snapshot"
[metrics]
log_freq = 1
disable_color_printing = false
enable_tensorboard = true
save_tb_folder = "tb"
enable_wandb = false
[model]
name = "llama3"
flavor = "debugmodel"
# test folder with tokenizer.json, for debug purpose only
hf_assets_path = "./tests/assets/tokenizer"
# converters = ["float8"]
[optimizer]
name = "AdamW"
lr = 4e-3
eps = 1e-15
[lr_scheduler]
warmup_steps = 2 # lr scheduler warm up, normally 20% of the train steps
decay_ratio = 0.8 # lr scheduler decay ratio, 80% of the train steps
decay_type = "linear"
min_lr_factor = 0.1
[training]
local_batch_size = 1
global_batch_size = 64
seq_len = 2048
max_norm = 1.0 # grad norm clipping
steps = 100000
dataset_type = "hf" # mmap for megatron style
dataset = "c4_test" # supported datasets: c4_test (2K), c4 (177M)
dataset_path = "3rdparty/torchtitan/tests/assets/c4_test"
seed = 1234
deterministic = true
[parallelism]
data_parallel_replicate_degree = 1
data_parallel_shard_degree = -1
fsdp_reshard_after_forward = "default" # default / never / always
tensor_parallel_degree = {1 / 4}
enable_async_tensor_parallel = false
pipeline_parallel_degree = 1
pipeline_parallel_schedule = "1F1B"
context_parallel_degree = 1
expert_parallel_degree = 1
expert_tensor_parallel_degree = 1
[checkpoint]
enable_checkpoint = false
folder = "checkpoint"
interval = 10
last_save_model_only = false
export_dtype = "float32"
async_mode = "disabled" # ["disabled", "async", "async_with_pinned_mem"]
[activation_checkpoint]
mode = "none" # ["none", "selective", "full"]
selective_ac_option = '2' # 'int' = ac every positive int layer or 'op', ac based on ops policy
[compile]
enable=false
components = ["model", "loss"]
[float8]
enable_fsdp_float8_all_gather = false
precompute_float8_dynamic_scale_for_fsdp = false
filter_fqns = ["output", "router.gate"]
moe_fqns = ["experts"]
```
torch version: 2.9.0+main.de744ca4b19.post20250818
cuda: 12.4
|
https://github.com/pytorch/torchtitan/issues/1750
|
open
|
[
"question"
] | 2025-09-24T03:11:22Z
| 2025-10-02T00:25:43Z
| null |
weixuansun
|
huggingface/candle
| 3,096
|
[Question] Minimal documentation/example on including weights in compiled executable
|
Just what the title says: is there a minimal code example of including weights in the compiled executable using include_bytes? I'm nervous to implement this without understanding best practices and end up with a suboptimal solution.
|
https://github.com/huggingface/candle/issues/3096
|
closed
|
[] | 2025-09-24T02:47:28Z
| 2025-10-07T04:49:26Z
| 1
|
bitanath
|
pytorch/torchtitan
| 1,749
|
What is the benefit of using torchrun instead of python directly with slurm and other launchers ?
|
Is there any difference between the following two commands?
srun torchrun --nnodes 4 --nproc_per_node 8 --rdzv_endpoint "$head_node_ip:29500" -m torchtitan.train ...
MASTER_ADDR= ip-adress MASTER_PORT=port-number srun --nodes=4 --ntasks-per-node=8 python -m torchtitan.train
|
https://github.com/pytorch/torchtitan/issues/1749
|
open
|
[] | 2025-09-23T23:35:08Z
| 2025-09-26T18:05:51Z
| null |
githubsgi
|
pytorch/pytorch
| 163,699
|
Should we mark `TestExportOpInfo.test_fake_export` tests as distributed?
|
### 🐛 Describe the bug
`TestExportOpInfo.test_fake_export` calls `_test_export_helper`
https://github.com/pytorch/pytorch/blob/8c8416b021e59a5ec58aceb38eeffc63885a28bc/test/export/test_export_opinfo.py#L125-L133
which sends tensor to `cuda:1`
https://github.com/pytorch/pytorch/blob/8c8416b021e59a5ec58aceb38eeffc63885a28bc/test/export/test_export_opinfo.py#L80-L90
You can verify with this command on a machine with single GPU
```
$ python test/run_test.py -i export/test_export_opinfo --exclude-distributed-tests -- -k test_fake_export___radd___cpu_float32
Traceback (most recent call last):
File "/usr/local/lib/python3.12/dist-packages/torch/testing/_internal/common_device_type.py", line 1135, in test_wrapper
return test(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/opt/pytorch/pytorch/test/export/test_export_opinfo.py", line 133, in test_fake_export
_test_export_helper(self, dtype, op)
File "/opt/pytorch/pytorch/test/export/test_export_opinfo.py", line 116, in _test_export_helper
ep = torch.export.export(m, args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/export/__init__.py", line 311, in export
raise e
File "/usr/local/lib/python3.12/dist-packages/torch/export/__init__.py", line 277, in export
return _export(
^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/export/_trace.py", line 1177, in wrapper
raise e
File "/usr/local/lib/python3.12/dist-packages/torch/export/_trace.py", line 1143, in wrapper
ep = fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/export/exported_program.py", line 124, in wrapper
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/export/_trace.py", line 2269, in _export
ep = _export_for_training(
^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/export/_trace.py", line 1177, in wrapper
raise e
File "/usr/local/lib/python3.12/dist-packages/torch/export/_trace.py", line 1143, in wrapper
ep = fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/export/exported_program.py", line 124, in wrapper
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/export/_trace.py", line 2085, in _export_for_training
export_artifact = export_func(
^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/export/_trace.py", line 1971, in _non_strict_export
) = make_fake_inputs(
^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/_export/non_strict_utils.py", line 402, in make_fake_inputs
fake_args, fake_kwargs = tree_map_with_path(
^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/utils/_pytree.py", line 2056, in tree_map_with_path
return treespec.unflatten(func(*xs) for xs in zip(*all_keypath_leaves))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/utils/_pytree.py", line 1193, in unflatten
leaves = list(leaves)
^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/utils/_pytree.py", line 2056, in <genexpr>
return treespec.unflatten(func(*xs) for xs in zip(*all_keypath_leaves))
^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/_export/non_strict_utils.py", line 403, in <lambda>
lambda kp, val: fakify(
^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/_export/non_strict_utils.py", line 232, in fakify
fake = mode.from_tensor(t, source=source, symbolic_context=symbolic_context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/_subclasses/fake_tensor.py", line 3004, in from_tensor
return self.fake_tensor_converter.from_real_tensor(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/_subclasses/fake_tensor.py", line 404, in from_real_tensor
out = self.meta_converter(
^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/_subclasses/meta_utils.py", line 1922, in __call__
r = self.meta_tensor(
^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/_subclasses/meta_utils.py", line 1698, in meta_tensor
r = callback(
^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/_subclasses/fake_tensor.py", line 395, in mk_fake_tensor
return FakeTensor(
^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/_subclasses/fake_tensor.py", line 744, in __new__
init_gpu_context(device)
File "/usr/local/lib/python3.12/dist-packages/torch/_subclasses
|
https://github.com/pytorch/pytorch/issues/163699
|
closed
|
[
"module: tests",
"oncall: pt2",
"oncall: export"
] | 2025-09-23T22:12:42Z
| 2025-09-30T16:12:42Z
| 2
|
xwang233
|
pytorch/pytorch
| 163,690
|
Recomputed values for the following tensors have different metadata than during the forward pass.
|
### 🐛 Describe the bug
Hi, I have a model with linear layers which I wrap with LoRA layers, applied as follows:
```
(attn): Attention(
(q_proj): LoRALinear(
(original_layer): Linear(in_features=4096, out_features=4096, bias=False)
(dropout): Identity()
)
(k_proj): LoRALinear(
(original_layer): Linear(in_features=4096, out_features=4096, bias=False)
(dropout): Identity()
)
(v_proj): LoRALinear(
(original_layer): Linear(in_features=4096, out_features=4096, bias=False)
(dropout): Identity()
)
(proj): LoRALinear(
(original_layer): Linear(in_features=4096, out_features=4096, bias=True)
(dropout): Identity()
)
(proj_drop): Dropout(p=0.0, inplace=False)
)
```
```
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(
        self,
        original_layer: nn.Linear,
        rank: int,
        init_lora_weights="gaussian",
        dropout: float = 0.0,
    ):
        super().__init__()
        self.original_layer = original_layer
        # Low-rank adapters: x @ lora_A @ lora_B, shapes (in_features, rank) and (rank, out_features)
        self.lora_A = nn.Parameter(torch.zeros(original_layer.in_features, rank))
        self.lora_B = nn.Parameter(torch.zeros(rank, original_layer.out_features))
        self.dropout = nn.Dropout(dropout) if dropout > 0.0 else nn.Identity()
        self.reset_weights(init_lora_weights)
    def reset_weights(self, init_lora_weights):
        if init_lora_weights == "gaussian":
            nn.init.normal_(self.lora_A, std=0.02)
        nn.init.zeros_(self.lora_B)
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        lora_x = self.dropout(x) @ self.lora_A @ self.lora_B
        output = self.original_layer(x) + lora_x
        return output
```
I wrap the model's linear layers with LoRA layers and then wrap the model blocks with FSDP2 and AC. I see this error when I call backward on the model. Why would there be a mismatch here? How can I debug/solve this?
```
[rank0]: loss.backward()
[rank0]: File "/usr/local/lib/python3.12/site-packages/torch/_tensor.py", line 648, in backward
[rank0]: torch.autograd.backward(
[rank0]: File "/usr/local/lib/python3.12/site-packages/torch/autograd/__init__.py", line 353, in backward
[rank0]: _engine_run_backward(
[rank0]: File "/usr/local/lib/python3.12/site-packages/torch/autograd/graph.py", line 824, in _engine_run_backward
[rank0]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/usr/local/lib/python3.12/site-packages/torch/utils/checkpoint.py", line 1128, in unpack_hook
[rank0]: frame.check_recomputed_tensors_match(gid)
[rank0]: File "/usr/local/lib/python3.12/site-packages/torch/utils/checkpoint.py", line 902, in check_recomputed_tensors_match
[rank0]: raise CheckpointError(
[rank0]: torch.utils.checkpoint.CheckpointError: torch.utils.checkpoint: Recomputed values for the following tensors have different metadata than during the forward pass.
[rank0]: tensor at position 58:
[rank0]: saved metadata: {'shape': torch.Size([4096, 4096]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=0)}
[rank0]: recomputed metadata: {'shape': torch.Size([1, 4096, 4096]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=0)}
```
### Versions
[rank0]: saved metadata: {'shape': torch.Size([4096, 4096]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=0)}
[rank0]: recomputed metadata: {'shape': torch.Size([1, 4096, 4096]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=0)}
cc @soulitzer
|
https://github.com/pytorch/pytorch/issues/163690
|
closed
|
[
"needs reproduction",
"module: activation checkpointing",
"triaged"
] | 2025-09-23T21:21:49Z
| 2025-09-24T01:04:09Z
| 3
|
asahni-sc
|
pytorch/pytorch
| 163,688
|
[torch.distributed.pipelining] Gradients are None in first training step with ScheduleGPipe
|
## Bug Description
When using `torch.distributed.pipelining` with `ScheduleGPipe`, gradients are unexpectedly `None` for parameters _in the first training step only_, and appear correctly in subsequent steps. This occurs despite the forward pass completing and the losses being computed.
This leads to a significant divergence compared to non-pipeline-parallel execution, beyond what is explainable by floating-point and slicing numerical error, e.g. stalled and irrecoverable convergence.
## To Reproduce
1. Save the provided script as `repro.py`
2. Run with 4 GPUs: `torchrun --nproc_per_node=4 repro.py`
3. Observe the output showing gradients are None in step 0 but present in steps 1-2
### Expected behavior
Gradients should be computed and available for all parameters after the backward pass in every training step, including the first one. The pipeline schedule should handle gradient accumulation consistently across all steps.
### Actual behavior
Step 0: All parameters have grad=None despite successful forward/backward pass
Step 1-2: Gradients are properly computed and available (non-None)
#### Example Output
```
Example output:
Rank 3, step: 0/3, losses list: [tensor(10.3931, device='cuda:3', dtype=torch.bfloat16), ...]
Rank 0 Step 0:
{'embed_tokens.weight': None, 'layers.0.norm1.bias': None, ...}
...
Rank 0 Step 1:
{'embed_tokens.weight': '1.41e+02',
'layers.0.norm1.bias': '9.46e-01',
'layers.0.norm1.weight': '1.90e+00',
...}
```
**Minimal reproducible example:**
This example:
1. Uses a simple transformer model split into 4 stages
2. Each stage has 3 transformer layers
3. Uses standard ScheduleGPipe with 4 microbatches
4. Demonstrates the issue clearly with gradient norm printing: **why are first step grads None across all ranks?**
```python
import torch
import torch.nn as nn
import torch.distributed as dist
from torch.distributed.pipelining import PipelineStage, ScheduleGPipe
import os
import pprint
class TransformerStage(nn.Module):
def __init__(self, hidden_size=1536, num_layers=3, vocab_size=32000, stage_index=0, num_stages=4):
super().__init__()
self.stage_index = stage_index
self.num_stages = num_stages
if stage_index == 0:
self.embed_tokens = nn.Embedding(vocab_size, hidden_size)
self.layers = nn.ModuleList([
nn.TransformerEncoderLayer(
hidden_size,
nhead=12,
dim_feedforward=5440,
batch_first=True
)
for _ in range(num_layers)
])
if stage_index == num_stages - 1:
self.lm_head = nn.Linear(hidden_size, vocab_size, bias=False)
def forward(self, x):
if hasattr(self, 'embed_tokens'):
x = self.embed_tokens(x)
for layer in self.layers:
x = layer(x)
if hasattr(self, 'lm_head'):
x = self.lm_head(x)
return x
def create_pipeline_loss_fn():
def loss_fn(logits, labels):
logits = logits.float()
labels = nn.functional.pad(labels, (0, 1), value=-100)
shift_labels = labels[..., 1:].contiguous()
vocab_size = logits.size(-1)
logits = logits.view(-1, vocab_size)
shift_labels = shift_labels.view(-1)
return nn.functional.cross_entropy(logits, shift_labels, ignore_index=-100)
return loss_fn
def main():
torch.manual_seed(0)
torch.cuda.manual_seed_all(0)
global_rank = int(os.environ['RANK'])
world_size = int(os.environ['WORLD_SIZE'])
local_rank = int(os.environ['LOCAL_RANK'])
dist.init_process_group(
backend="nccl",
rank=global_rank,
world_size=world_size,
device_id=torch.device(f"cuda:{local_rank}"),
)
pp_degree = 4
assert world_size == pp_degree
device = torch.device(f"cuda:{local_rank}")
torch.cuda.set_device(device)
config = {
'batch_size': 4,
'micro_batch_size': 1,
'sequence_length': 512,
'hidden_size': 1536,
'vocab_size': 32000,
'num_layers_per_stage': 3,
}
pp_rank = global_rank
stage_model = TransformerStage(
hidden_size=config['hidden_size'],
num_layers=config['num_layers_per_stage'],
vocab_size=config['vocab_size'],
stage_index=pp_rank,
num_stages=pp_degree
).to(device)
if global_rank == 0:
print(f"Pipeline setup: {pp_degree} stages, {config['num_layers_per_stage']} layers per stage")
pipeline_stage = PipelineStage(
stage_model,
stage_index=pp_rank,
num_stages=pp_degree,
device=device,
)
n_microbatches = config['batch_size'] // config['micro_batch_size']
pipeline_schedule = ScheduleGPipe(
stage=pipeline_stage,
n_microbatches=n_microbatches,
loss_fn=create_pipeline_loss_fn(),
scale_gr
|
https://github.com/pytorch/pytorch/issues/163688
|
open
|
[
"oncall: distributed",
"has workaround",
"module: amp (automated mixed precision)",
"module: pipelining"
] | 2025-09-23T21:03:37Z
| 2025-09-26T14:36:14Z
| 2
|
tplr-y
|
pytorch/pytorch
| 163,684
|
PyTorch 2.8 + CUDA 12.8 fails to initialize on RTX 5090 (WinError 1114)
|
### 🐛 Describe the bug
Summary
Attempting to run a source-built PyTorch 2.8.0 against CUDA 12.8 with explicit sm_120 flags on RTX 5090 results in a DLL initialization failure:
Code
OSError: [WinError 1114] A dynamic link library (DLL) initialization routine failed.
Error loading "torch_cpu.dll" or one of its dependencies.
System Info
GPU: RTX 5090
CUDA: 12.8 (confirmed installed and functional)
PyTorch: 2.8.0 (source build)
Python: 3.10.11
OS: Windows 11 x64
Build flags:
TORCH_CUDA_ARCH_LIST=8.6;9.0;12.0
Verified sm_120 kernels are present in ptx and fatbin sections
What’s been tried
✅ Verified all DLLs in torch/lib using Dependencies.exe
✅ Rebuilt torch_cpu.dll and shm.dll from source
✅ Manually validated libomp140.x86_64.dll and other runtime dependencies
✅ Renamed crashing DLLs to isolate failure
✅ Confirmed failure occurs inside DllMain or static constructor
✅ Attempted fallback to nightly builds—same result
Observations
torch_cpu.dll loads cleanly in Dependencies.exe but crashes during runtime
torch_cuda.dll depends on torch_cpu.dll, so exclusion breaks CUDA backend
No missing dependencies reported—failure is internal to DLL initialization
No exports visible in torch_cpu.dll, suggesting static init or device registration failure
Request
Looking for:
Confirmation of RTX 5090 support in PyTorch 2.8+
Known workarounds or patches for sm_120 initialization
Guidance on isolating DllMain crash or bypassing CPU backend for CUDA-only workflows
### Versions
(venv) PS D:\Projects\python\pytorch-src> python tools\collect_env.py
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 11 Pro (10.0.26100 64-bit)
GCC version: Could not collect
Clang version: Could not collect
CMake version: version 4.1.0
Libc version: N/A
Python version: 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.26100-SP0
Is CUDA available: N/A
CUDA runtime version: 12.8.61
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 5090
Nvidia driver version: 581.29
cuDNN version: Could not collect
Is XPU available: N/A
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
CPU:
Name: AMD Ryzen 9 9950X 16-Core Processor
Manufacturer: AuthenticAMD
Family: 107
Architecture: 9
ProcessorType: 3
DeviceID: CPU0
CurrentClockSpeed: 4300
MaxClockSpeed: 4300
L2CacheSize: 16384
L2CacheSpeed: None
Revision: 17408
Versions of relevant libraries:
[pip3] numpy==2.2.6
[pip3] optree==0.17.0
[pip3] torch==2.10.0a0+gitunknown
[conda] Could not collect
(venv) PS D:\Projects\python\pytorch-src>
|
https://github.com/pytorch/pytorch/issues/163684
|
closed
|
[] | 2025-09-23T20:31:52Z
| 2025-09-23T22:16:59Z
| 2
|
tsondo
|
huggingface/optimum-executorch
| 149
|
Add documentation for how to run each type of exported model on ExecuTorch
|
Blocked on runner / multimodal runner work in ExecuTorch
|
https://github.com/huggingface/optimum-executorch/issues/149
|
open
|
[] | 2025-09-23T18:53:55Z
| 2025-09-23T18:54:00Z
| null |
jackzhxng
|
pytorch/pytorch
| 163,664
|
[BE] Add Linux aarch64 CUDA install and test to validation framework
|
### 🐛 Describe the bug
Currently https://github.com/pytorch/test-infra/blob/main/.github/workflows/validate-aarch64-linux-binaries.yml only validates Linux aarch64 CPU builds.
These workflows are launched via validate-binaries. Here is an example run: https://github.com/pytorch/test-infra/actions/runs/17628169416
In the past, aarch64 GPU builds were not validated since we did not have any hardware for aarch64 GPUs and these builds were prototype. At the moment we still don't have any aarch64 GPU hardware; however, these builds now need to be validated.
We also need to validate aarch64 GPU builds so that at least installation works and CPU mode works for these builds.
Installation is the same as for Linux x86 builds:
```
pip3 install --pre torch torchvision --index-url https://download.pytorch.org/whl/nightly/cu130
pip3 install --pre torch torchvision --index-url https://download.pytorch.org/whl/nightly/cu128
pip3 install --pre torch torchvision --index-url https://download.pytorch.org/whl/nightly/cu126
```
Before running smoke test set: ``MATRIX_GPU_ARCH_TYPE=cpu``
Here is smoke test for reference
https://github.com/pytorch/pytorch/blob/main/.ci/pytorch/smoke_test/smoke_test.py
### Versions
2.9.0
cc @seemethere @malfet @ptrblck @msaroufim @eqy @jerryzh168
|
https://github.com/pytorch/pytorch/issues/163664
|
closed
|
[
"module: binaries",
"module: cuda",
"triaged",
"better-engineering",
"topic: binaries"
] | 2025-09-23T17:00:27Z
| 2025-10-01T14:19:45Z
| 0
|
atalman
|
pytorch/pytorch
| 163,659
|
Allow double in native_functions.yaml as a schema type
|
### 🚀 The feature, motivation and pitch
Today, our schemas say "float" but that is a lie!! Internally we pass around doubles. I'm okay with this though.
My ask: can we allow schemas to say "double", so for user custom ops they can put "double" in the schema and double in their custom kernels and be less confused?
Today, custom ops writers have `double` in their kernels but put `float` in the schema cuz they have to.
Triggered from https://github.com/pytorch/pytorch/pull/163505/files#r2372832280
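For concreteness, a small hypothetical custom op (the `mylib::scale` name and kernel are made up for illustration) showing the mismatch: the schema annotation has to be `float`, while the scalar the kernel receives is actually carried as a double.
```python
import torch

# Hypothetical custom op, purely for illustration: the schema type must be
# spelled "float", but the scalar the kernel receives is a C double
# (Python floats are doubles).
@torch.library.custom_op("mylib::scale", mutates_args=())
def scale(x: torch.Tensor, alpha: float) -> torch.Tensor:
    return x * alpha

@scale.register_fake
def _(x, alpha):
    return torch.empty_like(x)

print(scale(torch.ones(3), 0.1))  # alpha is passed through with double precision
```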
### Alternatives
_No response_
### Additional context
_No response_
cc @malfet @zou3519 @anjali411 @chauhang @penguinwu @bdhirsh @ezyang
|
https://github.com/pytorch/pytorch/issues/163659
|
open
|
[
"module: cpp-extensions",
"triaged",
"module: dispatch",
"module: library",
"oncall: pt2",
"module: pt2-dispatcher"
] | 2025-09-23T16:27:39Z
| 2025-09-24T18:45:19Z
| 2
|
janeyx99
|
huggingface/safetensors
| 653
|
`get_slice` is slow because it uses `tensors()` method instead of `info()`
|
### Feature request
Replace
```rust
self.metadata.tensors().get(name)
```
with
```rust
self.metadata.info(name)
```
in `get_slice` method
### Motivation
I noticed that the `get_slice` method of `Open` [does](https://github.com/huggingface/safetensors/blob/0816a1ae1d6b731cefd67f061d80d1cadd0dd7bb/bindings/python/src/lib.rs#L851)
```rust
self.metadata.tensors().get(name)
```
instead of
```rust
self.metadata.info(name)
```
like `get_tensor()` [does](https://github.com/huggingface/safetensors/blob/0816a1ae1d6b731cefd67f061d80d1cadd0dd7bb/bindings/python/src/lib.rs#L638) when retrieving `TensorInfo` by name.
Because of this, `get_slice` is much slower, since the `tensors()` method [reconstructs](https://github.com/huggingface/safetensors/blob/0816a1ae1d6b731cefd67f061d80d1cadd0dd7bb/safetensors/src/tensor.rs#L633) a new `HashMap` on each call.
Is there any particular reason for this approach? Would it be possible to replace it with `self.metadata.info(name)` to improve performance?
### Your contribution
I do not mind doing a PR
|
https://github.com/huggingface/safetensors/issues/653
|
closed
|
[] | 2025-09-23T15:09:51Z
| 2025-09-28T16:42:45Z
| 1
|
PgLoLo
|
huggingface/diffusers
| 12,375
|
What kernels should we integrate in Diffusers?
|
Now that we have an [integration](https://github.com/huggingface/diffusers/pull/12236) with the `kernels` lib to use Flash Attention 3 (FA3), it'd be nice to gather community interest about which kernels we should try to incorporate in the library through the [`kernels` lib](https://github.com/huggingface/kernels/). FA3 delivers a significant speedup on Hopper GPUs.
I have done some work in the `kernelize` branch to see if replacing `GELU`, `SiLU`, and `RMSNorm` with their optimized kernels would yield any speedups on Flux. So far, it hasn't. Benchmarking script: https://gist.github.com/sayakpaul/35236dd96e15d9f7d658a7ad11918411. One can compare the changes here: https://github.com/huggingface/diffusers/compare/kernelize?expand=1.
> [!NOTE]
> The changes in the `kernelize` branch are quite hacky as we're still evaluating things.
Please use this issue to let us know which kernels we should try to support in Diffusers. Some notes to keep in mind:
* Layers where the `forward()` method is easily replaceable with the `kernelize()` [mechanism](https://github.com/huggingface/kernels/blob/main/docs/source/layers.md#kernelizing-a-model) would be prioritized. A reference is here: https://github.com/huggingface/transformers/pull/38205.
* Even if a kernel isn't directly compatible with `kernels`, we can try to make it so, like we have for https://huggingface.co/kernels-community/flash-attn3.
* Not all kernels contribute non-trivial gains in terms of speedup. So, please bear that in mind when proposing a kernel.
Cc: @MekkCyber
|
https://github.com/huggingface/diffusers/issues/12375
|
open
|
[
"performance"
] | 2025-09-23T09:03:13Z
| 2025-09-30T06:56:39Z
| 8
|
sayakpaul
|
pytorch/pytorch
| 163,624
|
[aoti] [xpu] [null-pointer-deference] potential npt issue in `sycl_runtime_wrappers.h`
|
### 🐛 Describe the bug
The code below in `sycl_runtime_wrappers.h` uses malloc to allocate memory.
https://github.com/pytorch/pytorch/blob/5d749ceb92c2c28bcfbdf918b4ab99b1a91fcb50/torch/csrc/inductor/aoti_runtime/sycl_runtime_wrappers.h#L45-L58
However, there is a potential risk that the memory allocation fails; `strLog` could then be a `nullptr`. A possible fix is to add a null-pointer-dereference (NPD) check here.
To be honest, I don't know how to draft a test to trigger this case, so I opened an issue to discuss this instead of sending a PR directly.
Feel free to correct me if I am wrong.
### Versions
none
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 @desertfire @chenyang78 @yushangdi @benjaminglass1
|
https://github.com/pytorch/pytorch/issues/163624
|
open
|
[
"triaged",
"oncall: pt2",
"oncall: export",
"module: aotinductor"
] | 2025-09-23T08:24:03Z
| 2025-09-23T16:02:18Z
| 4
|
shaoyuyoung
|
huggingface/peft
| 2,798
|
Add stricter type checking in LoraConfig for support with HfArgumentParser
|
### System Info
System Info
transformers version: 4.57.0.dev0
Platform: Linux-5.14.0-284.73.1.el9_2.x86_64-x86_64-with-glibc2.39
Python version: 3.12.3
Huggingface_hub version: 0.34.4
Safetensors version: 0.5.2
Accelerate version: 1.10.1
Accelerate config: not found
DeepSpeed version: not installed
PyTorch version (accelerator?): 2.8.0+cu128 (CUDA)
Tensorflow version (GPU?): not installed (NA)
Flax version (CPU?/GPU?/TPU?): not installed (NA)
Jax version: not installed
JaxLib version: not installed
Using distributed or parallel set-up in script?: No
Using GPU in script?: No
GPU type: NVIDIA A100-SXM4-80GB
peft version: 0.17.1
### Who can help?
@benjaminbossan @githubnemo
### Reproduction
```
from peft import LoraConfig
from transformers import HfArgumentParser
p = HfArgumentParser(dataclass_types=LoraConfig) # fails
```
### Expected behavior
I would expect LoraConfig to be supported by HfArgumentParser.
As I understand, this fails because HfArgumentParser does not support fields of type (`Optional[List[str], str]`).
I had raised this in transformers as well, please refer [here](https://github.com/huggingface/transformers/issues/40915).
Can we add stricter type checking for such fields so it can be easily integrated with other libraries and argument parsers?
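A possible interim workaround I am considering, sketched under the assumption that only a handful of LoRA fields need to be exposed on the CLI (the `LoraArgs` dataclass below is hypothetical, not part of peft): parse a simpler dataclass whose field types `HfArgumentParser` accepts, then build the `LoraConfig` from it.
```python
from dataclasses import asdict, dataclass, field
from typing import List, Optional

from peft import LoraConfig
from transformers import HfArgumentParser

# Hypothetical CLI-facing dataclass with parser-friendly field types.
@dataclass
class LoraArgs:
    r: int = 8
    lora_alpha: int = 16
    lora_dropout: float = 0.0
    target_modules: Optional[List[str]] = field(default=None)

parser = HfArgumentParser(LoraArgs)
(lora_args,) = parser.parse_args_into_dataclasses(
    args=["--r", "16", "--target_modules", "q_proj", "v_proj"]
)
lora_config = LoraConfig(**asdict(lora_args))
print(lora_config)
```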
|
https://github.com/huggingface/peft/issues/2798
|
closed
|
[] | 2025-09-23T05:19:34Z
| 2025-09-23T12:37:47Z
| 3
|
romitjain
|
pytorch/pytorch
| 163,576
|
GPU Performance in Modern Computing
|
### Release highlight for proposed Feature
Could you please review the PyTorch library and determine if performance evaluation tests would be helpful? https://github.com/pytorch/pytorch/pull/162107
GPU Performance in Modern Computing
In the realm of artificial intelligence and supercomputing, GPUs play a pivotal role as accelerators driving innovation in hyperscaled data centers. But how does the computational speed of GPUs compare to CPUs? A performance test was conducted comparing the efficiency of CPU versus GPU on an Apple Mac M4, revealing intriguing insights.
Key Insight:
A significant performance shift occurs around the ~1500x1500 matrix size. An adaptive approach to device selection could therefore be applied when building the model architecture: a hybrid of CPU for small operations and GPU for large operations could exploit the performance advantages of each device.
CPU Superiority (Smaller Matrices):
For matrix sizes up to 1000x1000, CPUs outperform GPUs by 2-4 times. This is attributed to the overhead of GPU initialization surpassing its computational benefits. For instance, in a scenario with 500x500 matrices, CPUs performed 3.5 times faster than GPUs.
GPU Dominance (Medium and Larger Matrices):
As matrix sizes exceed 2000x2000, GPUs outperform CPUs by 2-2.3 times. The parallel computational advantages of GPUs prove superior, overcoming the initial costs. As an example for 4000x4000 matrices, GPUs were 2.2 times faster than CPUs.
Implications for Business:
CPU Applications: Ideal for small-scale tasks like rapid prototyping, edge devices, and small batch inference because of the cost efficiency and immediate execution without warmup. The smaller data sets cache efficiently without memory transfers.
GPU Applications: Suited for workloads that exploit the device's performance advantages, such as training, batch processing, and production inference. The throughput advantage is 2-3x.
Add comprehensive benchmarking tools for matrix operations and neural networks
Include device detection for CUDA, MPS, and CPU
Provide proper GPU synchronization and timing
Add complete unit test suite with device-specific tests
Include automated test runner script
Add detailed documentation and contribution guide
Features:
Cross-platform device support (CUDA/MPS/CPU)
Modular, extensible design with type hints
Comprehensive error handling and reporting
Educational examples for proper GPU benchmarking
No breaking changes to existing PyTorch functionality
### Point(s) of contact
_No response_
### Release Mode (pytorch/pytorch features only)
In-tree
### Out-Of-Tree Repo
_No response_
### Description and value to the user
_No response_
### Link to design doc, GitHub issues, past submissions, etc
_No response_
### What feedback adopters have provided
_No response_
### Plan for documentations / tutorials
Tutorial exists
### Additional context for tutorials
_No response_
### Marketing/Blog Coverage
Yes
### Are you requesting other marketing assistance with this feature?
_No response_
### Release Version
_No response_
### OS / Platform / Compute Coverage
_No response_
### Testing Support (CI, test cases, etc..)
_No response_
|
https://github.com/pytorch/pytorch/issues/163576
|
closed
|
[
"triaged"
] | 2025-09-22T22:21:49Z
| 2025-09-29T17:16:02Z
| 7
|
alpha-investor
|
pytorch/torchtitan
| 1,735
|
For mixed-precision training, does FSDP2 also need `amp.grad_scaler.GradScaler` ? or is FSDP2 already handled?
|
In mixed-precision training with DDP, `amp.grad_scaler.GradScaler` is needed to dynamically scale the loss. I see that torchtitan does not use it to scale the loss with FSDP2, so my question is: does FSDP2 also need `amp.grad_scaler.GradScaler`, or does FSDP2 already handle this?
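For context, here is a minimal sketch of the kind of FSDP2 mixed precision I see used without a GradScaler (assuming torchrun launch and a recent PyTorch where `fully_shard` and `MixedPrecisionPolicy` are importable from `torch.distributed.fsdp`). My understanding is that `GradScaler` exists to counter fp16 gradient underflow, so a bf16 policy like this may not need it, but I would like confirmation:
```python
# Sketch only: FSDP2 mixed precision via MixedPrecisionPolicy, no GradScaler.
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.distributed.fsdp import MixedPrecisionPolicy, fully_shard

dist.init_process_group("nccl")
torch.cuda.set_device(dist.get_rank() % torch.cuda.device_count())

model = nn.Sequential(nn.Linear(1024, 4096), nn.GELU(), nn.Linear(4096, 1024)).cuda()
mp_policy = MixedPrecisionPolicy(
    param_dtype=torch.bfloat16,   # compute in bf16
    reduce_dtype=torch.float32,   # reduce gradients in fp32
)
fully_shard(model, mp_policy=mp_policy)

opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
x = torch.randn(8, 1024, device="cuda")
loss = model(x).float().pow(2).mean()
loss.backward()                   # no GradScaler here: bf16 has fp32-like dynamic range
opt.step()
```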
|
https://github.com/pytorch/torchtitan/issues/1735
|
closed
|
[
"question"
] | 2025-09-22T15:05:37Z
| 2025-09-24T20:12:20Z
| null |
EquationWalker
|