| repo (string, 147 distinct values) | number (int64, 1 to 172k) | title (string, 2 to 476 chars) | body (string, 0 to 5k chars) | url (string, 39 to 70 chars) | state (string, 2 distinct values) | labels (list, 0 to 9 items) | created_at (timestamp[ns, UTC], 2017-01-18 to 2026-01-06) | updated_at (timestamp[ns, UTC], 2017-01-18 to 2026-01-06) | comments (int64, 0 to 58, nullable) | user (string, 2 to 28 chars) |
|---|---|---|---|---|---|---|---|---|---|---|
huggingface/trl
| 4,376
|
Rewrite `peft_integration.md`
|
This section of the documentation is largely outdated and relies only on PPO.
Ideally, we should have clear documentation that shows how to use PEFT with at least SFT, DPO and GRPO, via the `peft_config` argument. We could have additional subsections about QLoRA and prompt tuning.
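A minimal sketch of the kind of snippet the rewritten page could show, assuming the current `peft_config` argument of `SFTTrainer`; the model and dataset names are placeholders:
```python
# Hedged sketch (not the final doc content): pass a LoraConfig via `peft_config`
# so the trainer wraps the model in PEFT adapters before training.
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("trl-lib/Capybara", split="train[:100]")

trainer = SFTTrainer(
    model="Qwen/Qwen2.5-0.5B",  # placeholder model
    args=SFTConfig(output_dir="sft-lora-demo", max_steps=10),
    train_dataset=dataset,
    peft_config=LoraConfig(r=16, lora_alpha=32, target_modules="all-linear"),
)
trainer.train()
```
The DPO and GRPO trainers accept the same `peft_config` argument, so each section could reuse a variant of this snippet.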
|
https://github.com/huggingface/trl/issues/4376
|
closed
|
[] | 2025-10-30T03:23:24Z
| 2025-11-24T10:39:27Z
| 0
|
qgallouedec
|
vllm-project/vllm
| 27,778
|
[Usage]: Is DP + PP a possible way to use vLLM?
|
### Your current environment
```text
The output of `python collect_env.py`
```
### How would you like to use vllm
Hi there, I wonder if we can adopt DP + PP in vLLM to form a heterogeneous inference pipeline. For example, if I have two V100 32G GPUs and one A100 80G GPU, can I utilize them in pipeline parallelism with vLLM? I might use the V100s as the first stage and the A100 as the second.
Given that the V100's compute throughput is lower than the A100's, this would be unbalanced and the V100 stage would become a bottleneck. Thus I would like to use the two V100s in DP at the first PP stage.
Is this possible with the currently released vLLM version?
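For reference, a minimal sketch of plain, homogeneous pipeline parallelism, assuming the installed vLLM build supports PP with the offline `LLM` API; whether a DP group can be nested inside a single PP stage is exactly the open question here, and the model name is a placeholder:
```python
# Hedged sketch: standard two-stage pipeline parallelism; per-stage data parallelism
# as described above is not shown because it is the open question of this issue.
from vllm import LLM, SamplingParams

llm = LLM(
    model="facebook/opt-1.3b",   # placeholder model
    pipeline_parallel_size=2,    # two pipeline stages
    tensor_parallel_size=1,
)
out = llm.generate(["Hello, world"], SamplingParams(max_tokens=16))
print(out[0].outputs[0].text)
```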
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/27778
|
open
|
[
"usage"
] | 2025-10-30T02:05:06Z
| 2025-10-30T02:05:06Z
| 0
|
oldcpple
|
pytorch/pytorch
| 166,580
|
torch/utils/cpp_extension.py:531] There are no /usr/bin/g++-14 version bounds defined for CUDA version 13.0
|
### 🐛 Describe the bug
Hi,
I'm getting this error message with every torch version I try >= 2.7:
```
W1029 20:55:47.576000 79341 torch/utils/cpp_extension.py:531] There are no /usr/bin/g++-14 version bounds defined for CUDA version 13.0
building 'flash_attn_3._C' extension
```
What does that mean exactly?
I noticed I'm on an NVIDIA driver for CUDA 12.4 and I have CUDA tools 12.8.
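A diagnostic sketch, assuming `g++`, `g++-14`, and `nvcc` may or may not be on `PATH`: print the host toolchain versions alongside the CUDA version PyTorch was built with, which is roughly what the `cpp_extension` warning is comparing.
```python
# Hedged sketch: compare compiler/nvcc versions with torch's CUDA build version.
import subprocess
import torch

print("torch:", torch.__version__, "| built with CUDA:", torch.version.cuda)
for cmd in (["g++", "--version"], ["g++-14", "--version"], ["nvcc", "--version"]):
    try:
        out = subprocess.run(cmd, capture_output=True, text=True).stdout.splitlines()
        print(" ".join(cmd), "->", out[0] if out else "(no output)")
    except FileNotFoundError:
        print(" ".join(cmd), "-> not found on PATH")
```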
### Versions
Collecting environment information...
PyTorch version: 2.7.1+cu128
Is debug build: False
CUDA used to build PyTorch: 12.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.3 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.39
Python version: 3.11.13 (main, Jun 4 2025, 08:57:30) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-6.1.62-nvidia-gpu-x86_64-with-glibc2.39
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA H100 PCIe
Nvidia driver version: 550.127.08
cuDNN version: Could not collect
Is XPU available: False
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 15
On-line CPU(s) list: 0-14
Vendor ID: AuthenticAMD
Model name: AMD EPYC 9354 32-Core Processor
CPU family: 25
Model: 17
Thread(s) per core: 1
Core(s) per socket: 1
Socket(s): 15
Stepping: 1
BogoMIPS: 6499.99
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw perfctr_core invpcid_single ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx512_bf16 clzero xsaveerptr wbnoinvd arat npt lbrv nrip_save tsc_scale vmcb_clean flushbyasid pausefilter pfthreshold v_vmsave_vmload vgif avx512vbmi umip pku avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid fsrm flush_l1d arch_capabilities
Virtualization: AMD-V
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 960 KiB (15 instances)
L1i cache: 960 KiB (15 instances)
L2 cache: 7.5 MiB (15 instances)
L3 cache: 240 MiB (15 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-14
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Vulnerable
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP disabled, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.2.6
[pip3] nvidia-cublas==13.0.0.19
[pip3] nvidia-cublas-cu12==12.8.3.14
[pip3] nvidia-cuda-cupti==13.0.48
[pip3] nvidia-cuda-cupti-cu12==12.8.57
[pip3] nvidia-cuda-nvrtc==13.0.48
[pip3] nvidia-cuda-nvrtc-cu12==12.8.61
[pip3] nvidia-cuda-runtime==13.0.48
[pip3] nvidia-cuda-runtime-cu12==12.8.57
[pip3] nvidia-cudnn-cu12==9.7.1.26
[pip3] nvidia-cudnn-cu13==9.13.0.50
[pip3] nvidia-cufft==12.0.0.15
[pip3] nvidia-cufft-cu12==11.3.3.41
[pip3] nvidia-curand==10.4.0.35
[pip3] nvidia-curand-cu12==10.3.9.55
[pip3] nvidia-cusolver==12.0.3.29
[pip3] nvidia-cusolver-cu12==11.7.2.55
[pip3] nvidia-cusparse==12.6.2.49
[pip3] nvidia-cusparse-cu12==12.5.7.53
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-cusparselt-cu13==0.8.0
[pip3] nvidia-nccl-cu12==2.26.2
[pip3] nvidia-nccl-cu13==2.27.7
[pip3] nvidia-nvjitlink==13.0.39
[pip3] nvidia-nvjitlink-cu12==12.8.61
[pip3]
|
https://github.com/pytorch/pytorch/issues/166580
|
closed
|
[
"module: cpp-extensions",
"triaged",
"actionable"
] | 2025-10-29T22:24:15Z
| 2025-12-29T10:58:14Z
| 1
|
christopher5106
|
pytorch/pytorch
| 166,563
|
[RFC] Modifying Getting started page for Experimental Wheel Variant Support
|
### Release highlight for proposed Feature
Related to Wheel Next Initiative: https://github.com/pytorch/pytorch/issues/159714
This proposal is for changes to the PyTorch "Getting Started" page to better promote variant-enabled wheels and increase their visibility. This is a strategic move to ensure users are more aware of these new options, which can improve adoption and usage.
The PyTorch team has been producing an experimental set of wheels for Release 2.8 and Release 2.9.
PyTorch Release 2.8 Q&A: https://www.youtube.com/watch?v=amx4zUyfl3I
#### What are Wheel Variants ?
- Wheel variants are a mechanism for publishing platform-dependent Python wheels and selecting the most suitable package variant for a given platform.
- This approach helps remove the need for the local version identifier experience in PyTorch packaging and improves the user experience of installing PyTorch
**Disclaimer:**
This is a draft proposal. We are presenting only a schematic version at this stage.
v1: https://wheelnext.github.io/pytorch_selector_revamp/v1.html
<img width="1367" height="757" alt="Image" src="https://github.com/user-attachments/assets/2c4fa0c5-a209-4734-94ee-a30404c8961f" />
v2: https://wheelnext.github.io/pytorch_selector_revamp/v2.html
<img width="1363" height="724" alt="Image" src="https://github.com/user-attachments/assets/7a7c2cec-f138-4691-9b54-46f563167906" />
cc @svekars @sekyondaMeta @AlannaBurke @malfet @seemethere @anitakat @albanD @DEKHTIARJonathan @rgommers @mgorny @emmatyping @bdice @warsaw @msarahan @vyasr @aterrel @charliermarsh @konstin @geofft @zanieb @jezdez
### Release Version
2.10
|
https://github.com/pytorch/pytorch/issues/166563
|
open
|
[
"module: docs",
"triaged",
"release-feature-request"
] | 2025-10-29T20:11:37Z
| 2025-10-31T15:22:23Z
| 3
|
atalman
|
pytorch/pytorch
| 166,555
|
[dynamo, docs] Suggest torch.compiler.set_stance("force_eager") to determine if eager code causes issues
|
We should include a note in the programming model docs telling users to try running their code in eager mode to see whether eager-mode errors are what's causing their graph breaks.
`torch.compiler.set_stance("force_eager")` is the preferred way to do this since users don't have to change their `torch.compile` decorators or `module.compile` calls.
See https://docs.pytorch.org/tutorials/recipes/torch_compiler_set_stance_tutorial.html#crashing-sooner for an existing example of `set_stance` usage for debugging.
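A minimal sketch of the suggested flow, assuming nothing beyond the public `torch.compiler.set_stance` API:
```python
# Hedged sketch: force compiled regions to run eagerly so that errors coming from the
# eager code itself surface directly, without editing any torch.compile decorators.
import torch

@torch.compile
def f(x):
    return x.sin() + x.cos()

torch.compiler.set_stance("force_eager")  # compiled functions now run in eager mode
print(f(torch.randn(4)))                  # any eager-side error reproduces here
torch.compiler.set_stance("default")      # restore normal compilation afterwards
```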
cc @svekars @sekyondaMeta @AlannaBurke @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames @Lucaskabela
|
https://github.com/pytorch/pytorch/issues/166555
|
open
|
[
"module: docs",
"triaged",
"oncall: pt2",
"module: dynamo",
"compile-docs",
"module: compile ux"
] | 2025-10-29T19:15:49Z
| 2025-12-03T00:48:27Z
| 0
|
williamwen42
|
pytorch/vision
| 9,253
|
Patch versions of the wheel available in the CPU only pypi registry
|
In the CPU only pypi registry, https://download.pytorch.org/whl/torchvision/, I can see some dev/patch versions of the wheels:
```
torchvision-0.24.0+0429d73-cp311-cp311-win_arm64.whl
torchvision-0.24.0+0429d73-cp312-cp312-win_arm64.whl
torchvision-0.24.0+0429d73-cp313-cp313-win_arm64.whl
torchvision-0.24.0+7a9db90-cp311-cp311-win_arm64.whl
torchvision-0.24.0+7a9db90-cp312-cp312-win_arm64.whl
torchvision-0.24.0+7a9db90-cp313-cp313-win_arm64.whl
torchvision-0.24.0+b919bd0-cp311-cp311-win_arm64.whl
torchvision-0.24.0+b919bd0-cp312-cp312-win_arm64.whl
torchvision-0.24.0+b919bd0-cp313-cp313-win_arm64.whl
torchvision-0.24.0+e437e35-cp311-cp311-win_arm64.whl
torchvision-0.24.0+e437e35-cp312-cp312-win_arm64.whl
torchvision-0.24.0+e437e35-cp313-cp313-win_arm64.whl
```
I don't think they should be out here in the wild, and they cause some confusion with `uv`, which tries to use them but is unable to download them.
Is there a valid reason for these wheels to be here? If not, could they be removed?
|
https://github.com/pytorch/vision/issues/9253
|
open
|
[] | 2025-10-29T16:42:44Z
| 2026-01-04T11:06:45Z
| 3
|
aandrestrumid
|
vllm-project/vllm
| 27,746
|
[Bug]: `strict` value in function definitions causes request error when using Mistral tokenizer
|
### Your current environment
Tested with latest vllm source build from main
### 🐛 Describe the bug
Start vLLM with a model that uses the mistral tokenizer:
```
vllm serve mistralai/Mistral-Small-24B-Instruct-2501 \
--enable-auto-tool-choice \
--tool-call-parser mistral \
--tokenizer-mode mistral
```
Send a simple tool call request with the `strict` parameter set to a value of `False`:
```python
from openai import OpenAI
client = OpenAI(base_url="http://localhost:8000/v1", api_key="fake")
tools = [
{
"type": "function",
"function": {
"name": "get_current_time",
"description": "Get the current time in UTC",
"parameters": {
"type": "object",
"properties": {},
"required": []
},
"strict": False,
}
},
]
model = client.models.list().data[0].id
response = client.chat.completions.create(
model=model,
messages=[{"role": "user", "content": "What is the current time?"}],
tools=tools,
)
print("Success!")
```
The request fails with a 400 error like:
`openai.BadRequestError: Error code: 400 - {'error': {'message': '1 validation error for Tool\nfunction.strict\n Extra inputs are not permitted [type=extra_forbidden, input_value=False, input_type=bool]\n For further information visit https://errors.pydantic.dev/2.12/v/extra_forbidden 1 validation error for Tool\nfunction.strict\n Extra inputs are not permitted [type=extra_forbidden, input_value=False, input_type=bool]\n For further information visit https://errors.pydantic.dev/2.12/v/extra_forbidden', 'type': 'BadRequestError', 'param': None, 'code': 400}}`
Start vLLM without the mistral tokenizer and the request succeeds.
Note that this is explicitly NOT about making `strict=True` actually enforce structured outputs. The scope of this is simply to not return a validation error when this parameter is passed with any valid value when the `mistral` tokenizer is in use. The current behavior breaks some client frameworks that always pass this value, even when it has a value of `False`.
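Until this is fixed, a hedged client-side workaround sketch, meant to be appended to the repro script above (it reuses the repro's `client`, `model`, and `tools`), is to drop the `strict` key before sending:
```python
# Hedged workaround sketch: remove "strict" from function definitions, since the
# Mistral tokenizer's Tool schema currently rejects it as an extra field.
def strip_strict(tools):
    return [
        {**t, "function": {k: v for k, v in t["function"].items() if k != "strict"}}
        if t.get("type") == "function"
        else t
        for t in tools
    ]

response = client.chat.completions.create(
    model=model,
    messages=[{"role": "user", "content": "What is the current time?"}],
    tools=strip_strict(tools),
)
```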
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/27746
|
open
|
[
"bug"
] | 2025-10-29T14:33:13Z
| 2025-10-30T19:14:50Z
| 4
|
bbrowning
|
huggingface/trl
| 4,368
|
GKD: multimodal inputs?
|
Does the Generalized Knowledge Distillation trainer (GKDTrainer) support multimodal inputs (VLMs)?
If yes, what's the expected dataset format? There is no example of this in the documentation.
Thanks!
|
https://github.com/huggingface/trl/issues/4368
|
closed
|
[
"📚 documentation",
"❓ question",
"🏋 GKD"
] | 2025-10-29T14:08:44Z
| 2025-11-07T19:26:23Z
| 2
|
e-zorzi
|
pytorch/pytorch
| 166,519
|
Long queues for ROCm runners; B200 and XPU queueing also observed
|
## Current Status
mitigated
## Error looks like
Jobs requiring following runners will be queueing:
<img width="731" height="424" alt="Image" src="https://github.com/user-attachments/assets/c83de025-fb94-4b45-a125-c65c3baa1cb7" />
Please see:
https://hud.pytorch.org/metrics
## Incident timeline (all times pacific)
Started Oct 28, 2 PM PDT with ~1 hr queueing. Notified the AMD team of the issue.
Oct 29, 5 AM: observing 7 hrs of queueing; SEV created.
Oct 29, 5 AM: also observing XPU and B200 queueing.
Oct 30, 11 AM: confirmed ROCm runners are no longer queueing.
## User impact
ROCm jobs would not start
## Root cause
*What was the root cause of this issue?*
## Mitigation
*How did we mitigate the issue?*
## Prevention/followups
*How do we prevent issues like this in the future?*
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @seemethere @malfet @pytorch/pytorch-dev-infra
|
https://github.com/pytorch/pytorch/issues/166519
|
closed
|
[
"module: rocm",
"module: ci",
"triaged"
] | 2025-10-29T12:20:19Z
| 2025-11-03T17:55:53Z
| 4
|
atalman
|
pytorch/pytorch
| 166,516
|
Performance issue of torch._higher_order_ops.scan
|
### 🐛 Describe the bug
I have a Monte Carlo code on CPU, and I want to draw one sample each from many discrete distributions pi = Ai * Bi, where A and B are n x N, with n ~ 20 and N ~ 10^6. So I generate N random numbers in 0 ~ 1 and count how many entries of the cumulative sum of pi lie below the random numbers. Ideally I want to loop over the n axis and keep only the running cumulative sum (instead of the full length-n vector). This leads to `cumsum_count_fast` below, which is fast enough but requires recompilation for different n. If I keep the whole cumsum vector, I get `cumsum_count_slow` below, which is much slower. I think this fits the `fori_loop` function, but it's not available yet, so I just use `scan` with an empty y. Unfortunately it's only slightly faster than `cumsum_count_slow`, and much slower than `cumsum_count_fast`. Is there any solution to this?
```python
import time
import functools
import torch
torch._dynamo.config.capture_scalar_outputs = True
from torch._higher_order_ops import scan
torch.set_default_dtype(torch.float64)
# C is the normalization coefficient
@functools.partial(torch.compile, dynamic=True)
def cumsum_count_fast(A, B, C):
N = len(C)
counts = torch.zeros(N, dtype=torch.int)
cumsum = torch.zeros(N)
for i in range(n):
cumsum = cumsum + torch.abs(A[i] * B[i])
counts = counts + (cumsum < C)
return counts
@functools.partial(torch.compile, dynamic=True)
def cumsum_count_slow(A, B, C):
cumsum = torch.cumsum(torch.abs(A * B), 0)
return torch.sum((cumsum < C).to(torch.int), dim=0)
def fn(carry, ab):
a, b = ab
cumsum, counts = carry
new_cumsum = cumsum - torch.abs(a * b)
new_counts = counts + (cumsum > 0).to(torch.int)
new_carry = (new_cumsum, new_counts)
y = torch.tensor(0)
return new_carry, y
@functools.partial(torch.compile, dynamic=True)
def cumsum_count_scan(A, B, C):
n, N = A.shape
cumsum = C
counts = torch.zeros((N, ), dtype=torch.int)
carry = (cumsum, counts)
(_, counts), _ = scan(fn, carry, (A, B))
return counts
N = 1000000
for func in [cumsum_count_fast, cumsum_count_slow, cumsum_count_scan]:
print(f"{func.__name__}")
for n in [20, 30]:
A = torch.rand((n, N))
B = torch.rand((n, N))
total = torch.sum(torch.abs(A * B), dim=0)
random = torch.rand((N, ))
C = total * random
for _ in range(3):
t1 = time.time()
counts = func(A, B, C)
t2 = time.time()
print(n, t2 - t1)
print()
```
Output:
```
cumsum_count_fast
20 2.491441488265991
20 0.012846231460571289
20 0.012821197509765625
30 0.6074924468994141
30 0.016744375228881836
30 0.01685929298400879
cumsum_count_slow
20 0.18932175636291504
20 0.05529618263244629
20 0.05519819259643555
30 0.07843923568725586
30 0.08022069931030273
30 0.08077597618103027
cumsum_count_scan
20 0.755068302154541
20 0.038268089294433594
20 0.03831148147583008
30 0.05788612365722656
30 0.05818939208984375
30 0.05914735794067383
```
### Versions
```
Collecting environment information...
PyTorch version: 2.9.0+cu128
Is debug build: False
CUDA used to build PyTorch: 12.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.2 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: 9.0.0 (https://github.com/conda-forge/clangdev-feedstock 284a3d5d88509307bcfba64b055653ee347371db)
CMake version: version 3.28.3
Libc version: glibc-2.39
Python version: 3.11.11 (main, Dec 11 2024, 16:28:39) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.14.0-33-generic-x86_64-with-glibc2.39
Is CUDA available: False
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
Is XPU available: False
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 8
On-line CPU(s) list: 0-7
Vendor ID: GenuineIntel
Model name: Intel(R) Core(TM) i7-9700K CPU @ 3.60GHz
CPU family: 6
Model: 158
Thread(s) per core: 1
Core(s) per socket: 8
Socket(s): 1
Stepping: 13
CPU(s) scaling MHz: 96%
CPU max MHz: 4900.0000
CPU min MHz: 800.0000
BogoMIPS: 7200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse
|
https://github.com/pytorch/pytorch/issues/166516
|
open
|
[
"module: autograd",
"triaged",
"oncall: pt2",
"module: higher order operators",
"module: pt2-dispatcher"
] | 2025-10-29T11:57:03Z
| 2025-11-04T21:32:28Z
| 2
|
SUSYUSTC
|
huggingface/lerobot
| 2,338
|
Policy gr00t not found when doing async inference with gr00t
|
### System Info
```Shell
lerobot version:
3f8c5d98 (HEAD -> main, origin/main, origin/HEAD) fix(video_key typo): fixing video_key typo in update_video_info (#2323)
```
### Information
- [ ] One of the scripts in the examples/ folder of LeRobot
- [ ] My own task or dataset (give details below)
### Reproduction
I have installed the following packages:
pip install "torch>=2.2.1,<2.8.0" "torchvision>=0.21.0,<0.23.0" # --index-url https://download.pytorch.org/whl/cu1XX
pip install ninja "packaging>=24.2,<26.0" # flash attention dependencies
pip install "flash-attn>=2.5.9,<3.0.0" --no-build-isolation
python -c "import flash_attn; print(f'Flash Attention {flash_attn.__version__} imported successfully')"
pip install lerobot[groot]
Then I ran the async inference server:
python -m lerobot.async_inference.policy_server \
--host=127.0.0.1 \
--port=8080
When the async inference client sends policy gr00t, the server complains there is no groot policy, as below:
ERROR 2025-10-29 05:30:24 /_server.py:636 Exception calling application: Policy type groot not supported. Supported policies: ['act', 'smolvla', 'diffusion', 'tdmpc', 'vqbet', 'pi0', 'pi05']
Fine-tuning a pi05 model works fine with the same code.
Any idea why this happens?
### Expected behavior
The server should not complain that the groot policy is missing.
|
https://github.com/huggingface/lerobot/issues/2338
|
closed
|
[
"bug",
"question",
"policies"
] | 2025-10-29T05:36:20Z
| 2025-11-21T15:34:21Z
| null |
jcl2023
|
huggingface/lerobot
| 2,337
|
Can I continue reinforcement learning in HIL-SERL using a pi0
|
Can I continue reinforcement learning in HIL-SERL using a pi0 model from LeRobot that has been fine-tuned via imitation learning?
|
https://github.com/huggingface/lerobot/issues/2337
|
open
|
[
"question",
"policies"
] | 2025-10-29T04:30:26Z
| 2025-11-11T03:13:23Z
| null |
pparkgyuhyeon
|
huggingface/peft
| 2,878
|
peft " target_modules='all-linear' " have different behavior between x86 and aarch ?
|
### System Info
I have tested on 2 architectures (x86, arm) and found this bug.
Both architectures have peft==0.17.1.
### Who can help?
@benjaminbossan @githubnemo
### Reproduction
Reproduction script : bug_reprod.py
```python
from transformers import AutoModelForImageTextToText
model = AutoModelForImageTextToText.from_pretrained("OpenGVLab/InternVL3_5-1B-HF", trust_remote_code=True)
lm_head = model.lm_head
model = model.language_model
model.lm_head = lm_head
from peft import get_peft_model
from peft import LoraConfig
peft_config = LoraConfig(
inference_mode=False,
r=12,
target_modules="all-linear",
)
bug_model = get_peft_model(model, peft_config)
bug_model.print_trainable_parameters()
breakpoint() # p bug_model, you will find lm_head have different results
```
Put bug_reprod.py on x86 and aarch and run it; you will find it produces different results on lm_head!
The following figures show the error:
#### x86
<img width="978" height="567" alt="Image" src="https://github.com/user-attachments/assets/b33df3f2-15bc-4855-b6cb-c1b84e7ba9d9" />
#### aarch
<img width="1067" height="911" alt="Image" src="https://github.com/user-attachments/assets/1bfcd649-9bc9-44ff-a74e-5a26a7070c49" />
### Expected behavior
`target_modules='all-linear'` should exclude lm_head in LoRA tuning. At the very least, the x86 and arm architectures should have identical behavior.
|
https://github.com/huggingface/peft/issues/2878
|
closed
|
[] | 2025-10-29T03:43:02Z
| 2025-12-07T15:03:33Z
| 4
|
HuangChiEn
|
huggingface/peft
| 2,877
|
peft config 'all-linear' includes lm_head, is there any way to remove it?
|
I'm not sure whether this is a bug or whether my modification affects PEFT.
> Some issues indicate that 'all-linear' should not include the lm_head.
```python
if 'internvl' in self.variant.lower():
if '3_5' in self.variant:
self.model = AutoModelForImageTextToText.from_pretrained(self.variant, trust_remote_code=True)
# internvl3.5, lm_head is not part of language_model !?
lm_head = self.model.lm_head
self.model = self.model.language_model
self.model.lm_head = lm_head
# then
from peft import get_peft_model
from peft import LoraConfig
print('Using PEFT model')
peft_config = LoraConfig(
inference_mode=False,
r=self.lora_r,
lora_alpha=self.lora_alpha,
lora_dropout=self.lora_dropout,
target_modules="all-linear",
)
self.model = get_peft_model(self.model, peft_config)
```
If the modification does affect the PEFT config, is there any way to exclude the lm_head by configuring LoraConfig?
peft version: 0.17.0
Can anyone kindly give me some suggestions?
Many thanks!
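For what it's worth, a hedged sketch of one possible way to do this, assuming the installed peft release exposes the `exclude_modules` argument of `LoraConfig`:
```python
# Hedged sketch: explicitly exclude lm_head from the "all-linear" selection.
# `exclude_modules` is assumed to be available in the installed peft version.
from peft import LoraConfig, get_peft_model

peft_config = LoraConfig(
    inference_mode=False,
    r=12,
    target_modules="all-linear",
    exclude_modules=["lm_head"],
)
# self.model = get_peft_model(self.model, peft_config)  # model as prepared above
```
If the installed release predates `exclude_modules`, listing the wanted `target_modules` explicitly is the fallback.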
---
Update:
Does PEFT have different behavior between x86 and aarch?
Error message while loading the pretrained weights:
<img width="1702" height="186" alt="Image" src="https://github.com/user-attachments/assets/e13b167f-4215-446a-9f7b-42ba7d690029" />
#### x86 arch, normal
<img width="978" height="567" alt="Image" src="https://github.com/user-attachments/assets/ee758c1e-46b7-4edd-9b2a-21d8f6fbfa5b" />
#### aarch, bug occurs
<img width="1067" height="911" alt="Image" src="https://github.com/user-attachments/assets/94755fc8-67da-4200-a408-7929cec0f6f4" />
|
https://github.com/huggingface/peft/issues/2877
|
closed
|
[] | 2025-10-29T02:19:21Z
| 2025-10-29T03:43:20Z
| 1
|
HuangChiEn
|
huggingface/lerobot
| 2,335
|
How to Visualize All Episodes of a LeRobot Dataset Locally?
|
Hi everyone, I have a question about LeRobot datasets. I'd like to inspect my data locally, but using the command
_lerobot-dataset-viz --repo-id=${HF_USER}/record-test --episode-index=0_
only allows me to view one episode at a time, which is quite cumbersome.
Is there a way to visualize all episodes of a dataset locally—similar to [visualize dataset online](https://huggingface.co/spaces/lerobot/visualize_dataset),
where I can easily browse through all episodes?
Thanks!
|
https://github.com/huggingface/lerobot/issues/2335
|
open
|
[
"question",
"dataset"
] | 2025-10-29T02:01:01Z
| 2025-12-29T12:18:57Z
| null |
Vacuame
|
vllm-project/vllm
| 27,692
|
It runs on RTX 5060 Ti 16 GB
|
### Your current environment
https://github.com/bokkob556644-coder/suc-vllm-rtx-5060-ti-16-gb/blob/main/suc_vllm.txt
### How would you like to use vllm
[I want to run inference of a [specific model](put link here). I don't know how to integrate it with vllm.
](https://github.com/bokkob556644-coder/suc-vllm-rtx-5060-ti-16-gb/blob/main/suc_vllm.txt)
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/27692
|
open
|
[
"usage"
] | 2025-10-28T21:43:00Z
| 2025-10-28T21:43:16Z
| 1
|
bokkob556644-coder
|
huggingface/transformers
| 41,919
|
LFM2 image_processing_lfm2_vl_fast.py Mean Std swapped?
|
### System Info
In LFM2-VL `image_processing_lfm2_vl_fast.py`, around line 212, the MEAN and STD from ImageNet are used for preprocessing.
However, it seems like they are swapped:
`image_mean = IMAGENET_STANDARD_STD`
`image_std = IMAGENET_STANDARD_MEAN`
Or is this correct?
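A hedged check sketch, assuming the constants are importable from `transformers.image_utils`: print the two values; if they happen to hold the same numbers, the swap would be harmless in practice even though the naming is confusing.
```python
# Hedged sketch: inspect the two ImageNet "standard" constants referenced above.
from transformers.image_utils import IMAGENET_STANDARD_MEAN, IMAGENET_STANDARD_STD

print("IMAGENET_STANDARD_MEAN:", IMAGENET_STANDARD_MEAN)
print("IMAGENET_STANDARD_STD: ", IMAGENET_STANDARD_STD)
```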
### Who can help?
@Cyrilvallez
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Have a look at https://github.com/huggingface/transformers/blob/main/src/transformers/models/lfm2_vl/image_processing_lfm2_vl_fast.py
### Expected behavior
Not optimized VLM Behaviour
|
https://github.com/huggingface/transformers/issues/41919
|
closed
|
[
"bug"
] | 2025-10-28T16:17:44Z
| 2025-10-31T15:02:40Z
| 4
|
florianvoss-commit
|
vllm-project/vllm
| 27,667
|
[Usage]: DeepseekOCR on CPU missing implementation for fused_topk
|
### Your current environment
Trying to test whether it is possible to run DeepseekOCR on CPU using the current git main branch.
It fails because there is no implementation of `fused_topk` for CPU.
```
INFO 10-28 15:41:18 [v1/worker/cpu_model_runner.py:77] Warming up model for the compilation...
ERROR: Traceback (most recent call last):
File "/opt/venv/lib/python3.12/site-packages/starlette/routing.py", line 677, in lifespan
async with self.lifespan_context(app) as maybe_state:
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/lib/python3.12/site-packages/starlette/routing.py", line 566, in __aenter__
await self._router.startup()
File "/opt/venv/lib/python3.12/site-packages/starlette/routing.py", line 654, in startup
await handler()
File "/app/start_server.py", line 161, in startup_event
initialize_model()
File "/app/start_server.py", line 84, in initialize_model
llm = LLM(
^^^^
File "/opt/venv/lib/python3.12/site-packages/vllm/entrypoints/llm.py", line 336, in __init__
self.llm_engine = LLMEngine.from_engine_args(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/lib/python3.12/site-packages/vllm/v1/engine/llm_engine.py", line 188, in from_engine_args
return cls(
^^^^
File "/opt/venv/lib/python3.12/site-packages/vllm/v1/engine/llm_engine.py", line 122, in __init__
self.engine_core = EngineCoreClient.make_client(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/lib/python3.12/site-packages/vllm/v1/engine/core_client.py", line 95, in make_client
return InprocClient(vllm_config, executor_class, log_stats)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/lib/python3.12/site-packages/vllm/v1/engine/core_client.py", line 264, in __init__
self.engine_core = EngineCore(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 109, in __init__
num_gpu_blocks, num_cpu_blocks, kv_cache_config = self._initialize_kv_caches(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 234, in _initialize_kv_caches
self.model_executor.initialize_from_config(kv_cache_configs)
File "/opt/venv/lib/python3.12/site-packages/vllm/v1/executor/abstract.py", line 113, in initialize_from_config
self.collective_rpc("compile_or_warm_up_model")
File "/opt/venv/lib/python3.12/site-packages/vllm/v1/executor/uniproc_executor.py", line 73, in collective_rpc
return [run_method(self.driver_worker, method, args, kwargs)]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/lib/python3.12/site-packages/vllm/v1/serial_utils.py", line 459, in run_method
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/lib/python3.12/site-packages/vllm/v1/worker/cpu_worker.py", line 105, in compile_or_warm_up_model
self.model_runner.warming_up_model()
File "/opt/venv/lib/python3.12/site-packages/vllm/v1/worker/cpu_model_runner.py", line 80, in warming_up_model
self._dummy_run(
File "/opt/venv/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 120, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/lib/python3.12/site-packages/vllm/v1/worker/gpu_model_runner.py", line 3464, in _dummy_run
outputs = self.model(
^^^^^^^^^^^
File "/opt/venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1773, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1784, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/lib/python3.12/site-packages/vllm/model_executor/models/deepseek_ocr.py", line 582, in forward
hidden_states = self.language_model(
^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1773, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1784, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/lib/python3.12/site-packages/vllm/model_executor/models/deepseek.py", line 495, in forward
hidden_states = self.model(
^^^^^^^^^^^
File "/opt/venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1773, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1784, in _call_impl
return forward_call(*args, **kwargs)
|
https://github.com/vllm-project/vllm/issues/27667
|
open
|
[
"usage"
] | 2025-10-28T16:14:40Z
| 2025-10-28T16:14:40Z
| 0
|
brainlag
|
vllm-project/vllm
| 27,661
|
[RFC]: Consolidated tool call parser implementations by type (JSON, Python, XML, Harmony)
|
### Motivation.
When someone wants to add a new tool call parser today, they typically choose an existing tool call parser that looks close to what is needed, copy it into a new file, and adjust things here and there as needed for their specific model. Sometimes tests get added, and sometimes not. Sometimes the changes to the copied parser make meaningful fixes, and sometimes the changes to the copied parser add bugs.
Generally, we have a few buckets of tool call parsers based on the format the models are trained to output: JSON, Python, XML, or Harmony-style tool calls. But we have N different implementations of streaming partial JSON parsing, N different Python parsers, and so on. Instead of multiple copies of each, ideally we'd maintain one high-quality implementation of streaming partial JSON parsing that's extensible enough to handle the needs of individual model differences.
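As a rough illustration of what one shared implementation could mean for the JSON bucket, here is a hedged sketch (not vLLM's actual parser API) of a helper that turns streamed deltas into complete tool-call objects; per-model parsers would only wrap it with their own start/stop token handling:
```python
# Hedged sketch: collect streamed text and emit each complete top-level JSON object,
# tracking brace depth outside of string literals.
import json


class StreamingJsonCollector:
    def __init__(self) -> None:
        self.buffer = ""

    def feed(self, delta: str) -> list[dict]:
        """Append a streamed chunk and return any newly completed JSON objects."""
        self.buffer += delta
        objects: list[dict] = []
        depth, start, consumed = 0, None, 0
        in_string = escape = False
        for i, ch in enumerate(self.buffer):
            if in_string:
                if escape:
                    escape = False
                elif ch == "\\":
                    escape = True
                elif ch == '"':
                    in_string = False
            elif ch == '"':
                in_string = True
            elif ch == "{":
                if depth == 0:
                    start = i
                depth += 1
            elif ch == "}":
                depth -= 1
                if depth == 0 and start is not None:
                    objects.append(json.loads(self.buffer[start : i + 1]))
                    start, consumed = None, i + 1
        self.buffer = self.buffer[consumed:]  # keep only the unfinished tail
        return objects


collector = StreamingJsonCollector()
print(collector.feed('{"name": "get_current_time", "argu'))  # -> []
print(collector.feed('ments": {}}'))                         # -> one parsed tool call
```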
### Proposed Change.
The overall change I propose is a refactoring of the existing tool call parsers, lowering the burden to add a new tool call parser, reducing the maintenance and bug permutations possible, and providing us higher test coverage of all tool call parsers so we can systematically track and fix bugs as reported in one place.
General steps proposed:
**Test coverage**
Before starting any refactor, the focus will be on building confidence in the existing state of all our tool call parsers by focusing on adding and extending their test suites.
- [ ] Add a new common tool call parser unit test suite for all tool call parsers lacking any tests
- #27599
- [ ] Reorganize existing tool call parser tests to cleanly separate unit tests that just need a tokenizer from integration tests that need actual running inference servers.
- Today we have `tests/tool_use` that is mostly integration tests, and `tests/entrypoints/openai/tool_parsers` that is mostly unit tests, but there's a mix of each in both. The plan is to move integration tests to `tests/tool_use` since that's where most of those live, and unit tests in `tests/entrypoints/openai/tool_parsers` that can all be run without an accelerator and execute quickly.
- [ ] Review the history of each tool call parser, bugs filed against that tool call parser, and special statements in the code of each tool parser to identify special case handling. Create a test for each of these special cases.
- [ ] Refactor existing tool call parser tests to use the common test suite for all tool call parsers while retaining any model-specific tests required by the previous review of parsers.
- [ ] File issues of type bug for every test in the common suite that is marked as "expected fail" for various tool call parsers. There will be a number of these, with tool call parsers that do not meet the standards of the common suite today. These represent low-hanging fruit for us to find and fix for each parser.
- Some fixes may be trivial, and can happen before consolidating implementations just to incrementally raise the quality of our parsers. Some fixes may not be trivial, and may only happen after consolidating implementations.
**Refactoring and consolidation**
After we have the expanded test suite, we'll have the confidence to undertake this refactor without introducing a lot of new bugs as each parser has some bespoke logic today that needs to be accounted for.
- [ ] Consolidate all the partial and streaming JSON parsing logic into a central place that every JSON-style tool call parser consumes. Ensure no test regressions
- [ ] Consolidate all the partial and streaming Python parsing logic into a central place that every Python-style tool call parser consumes.
**Post-consolidation bug squashing and docs**
- [ ] Remove any remaining `xfail` markers in the test suite across all tool parser test suites.
- [ ] Update contributor docs that discuss how to add a new tool call parser, how to reuse the common logic for JSON, Python, XML, etc parsing instead of writing new, and how to use the new common test suite to simplify testing of the new parser.
### Feedback Period.
This is ongoing work and feedback is accepted at any time while this issue is open. Initial stages of expanding our test coverage have already started, but there's at least a couple of weeks to provide feedback before work gets to the point of actual refactoring and consolidating of the tool call parsers.
### CC List.
_No response_
### Any Other Things.
_No response_
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/27661
|
open
|
[
"RFC"
] | 2025-10-28T14:54:10Z
| 2025-10-30T16:14:09Z
| 2
|
bbrowning
|
pytorch/torchtitan
| 1,950
|
Breaks the tests/integration_tests/run_tests.py UT
|
### Bug description
The patch https://github.com/pytorch/torchtitan/pull/1922 breaks the existing tests/integration_tests/run_tests.py.
Error:
[rank0]:[rank0]: Traceback (most recent call last):
[rank0]:[rank0]: File "/home/dvasanth/miniforge3/envs/env_pt_2_10_ww42/lib/python3.10/runpy.py", line 196, in _run_module_as_main
[rank0]:[rank0]: return _run_code(code, main_globals, None,
[rank0]:[rank0]: File "/home/dvasanth/miniforge3/envs/env_pt_2_10_ww42/lib/python3.10/runpy.py", line 86, in _run_code
[rank0]:[rank0]: exec(code, run_globals)
[rank0]:[rank0]: File "/home/dvasanth/workspace/torchtitan_repos/torchtitan/torchtitan/train.py", line 683, in <module>
[rank0]:[rank0]: trainer.train()
[rank0]:[rank0]: File "/home/dvasanth/miniforge3/envs/env_pt_2_10_ww42/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 358, in wrapper
[rank0]:[rank0]: return f(*args, **kwargs)
[rank0]:[rank0]: File "/home/dvasanth/workspace/torchtitan_repos/torchtitan/torchtitan/train.py", line 608, in train
[rank0]:[rank0]: self.train_step(data_iterator)
[rank0]:[rank0]: File "/home/dvasanth/workspace/torchtitan_repos/torchtitan/torchtitan/train.py", line 508, in train_step
[rank0]:[rank0]: loss = self.forward_backward_step(input_dict, labels)
[rank0]:[rank0]: File "/home/dvasanth/workspace/torchtitan_repos/torchtitan/torchtitan/train.py", line 453, in forward_backward_step
[rank0]:[rank0]: self.pp_schedule.step(
[rank0]:[rank0]: File "/home/dvasanth/miniforge3/envs/env_pt_2_10_ww42/lib/python3.10/site-packages/torch/distributed/pipelining/schedules.py", line 626, in step
[rank0]:[rank0]: self._step_microbatches(args_split, kwargs_split, targets_split, losses)
[rank0]:[rank0]: File "/home/dvasanth/miniforge3/envs/env_pt_2_10_ww42/lib/python3.10/site-packages/torch/distributed/pipelining/schedules.py", line 728, in _step_microbatches
[rank0]:[rank0]: self._initialize_stage(arg_mbs[0], kwarg_mbs[0])
[rank0]:[rank0]: File "/home/dvasanth/miniforge3/envs/env_pt_2_10_ww42/lib/python3.10/site-packages/torch/distributed/pipelining/schedules.py", line 585, in _initialize_stage
[rank0]:[rank0]: self._stage._prepare_forward_infra(self._n_microbatches, args, kwargs)
[rank0]:[rank0]: File "/home/dvasanth/miniforge3/envs/env_pt_2_10_ww42/lib/python3.10/site-packages/torch/distributed/pipelining/stage.py", line 1525, in _prepare_forward_infra
[rank0]:[rank0]: outputs = self._shape_inference(args, kwargs)
[rank0]:[rank0]: File "/home/dvasanth/miniforge3/envs/env_pt_2_10_ww42/lib/python3.10/site-packages/torch/distributed/pipelining/stage.py", line 1455, in _shape_inference
[rank0]:[rank0]: outputs = self.submod(*args, **kwargs)
[rank0]:[rank0]: File "/home/dvasanth/miniforge3/envs/env_pt_2_10_ww42/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1780, in _wrapped_call_impl
[rank0]:[rank0]: return self._call_impl(*args, **kwargs)
[rank0]:[rank0]: File "/home/dvasanth/miniforge3/envs/env_pt_2_10_ww42/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1886, in _call_impl
[rank0]:[rank0]: return inner()
[rank0]:[rank0]: File "/home/dvasanth/miniforge3/envs/env_pt_2_10_ww42/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1834, in inner
[rank0]:[rank0]: result = forward_call(*args, **kwargs)
[rank0]:[rank0]: TypeError: Transformer.forward() got an unexpected keyword argument 'return_outputs'
### Versions
commit 8228c0845aa8b2e6e9672c30f40fe4af9588dca2 (HEAD -> main, origin/main, origin/HEAD)
|
https://github.com/pytorch/torchtitan/issues/1950
|
closed
|
[
"question"
] | 2025-10-28T13:14:37Z
| 2025-10-29T08:55:34Z
| null |
dayanandav
|
huggingface/lerobot
| 2,329
|
Changing the SmolVLA base model (the VLM part) to another model
|
Can I change the SmolVLA base model (the VLM part) to another model?
What should I do?
Thanks!
|
https://github.com/huggingface/lerobot/issues/2329
|
closed
|
[
"question",
"policies"
] | 2025-10-28T12:28:44Z
| 2025-10-31T15:09:12Z
| null |
smartparrot
|
pytorch/tutorials
| 3,625
|
Will you release the TorchRL C++ API in the future, similar to the PyTorch C++ API?
|
Will you release the TorchRL C++ API in the future, similar to the PyTorch C++ API? We look forward to using the TorchRL C++ API in the future.
|
https://github.com/pytorch/tutorials/issues/3625
|
open
|
[
"question",
"Reinforcement Learning"
] | 2025-10-28T11:27:52Z
| 2025-10-28T15:36:30Z
| null |
hyl20012
|
vllm-project/vllm
| 27,649
|
[Usage]: Qwen3-32B on RTX PRO 6000 (55s First Token Delay and 15t/s)
|
Why does the Qwen3-32B model take 55 seconds before producing the first token, and why is the generation speed only 15t/s?
My vLLM configuration:
Device: GB202GL [RTX PRO 6000 Blackwell Server Edition]
Nvidia Driver Version:580.95.05
CUDA Version:13.0
Docker configuration:
```sh
PORT=8085
MODEL_PATH=Qwen/Qwen3-32B
SERVED_MODEL_NAME=vLLM-Qwen3-32B
docker run -d \
--runtime nvidia \
--gpus all \
-v /data/projects/docker/vllm/.cache/huggingface:/root/.cache/huggingface \
-p $PORT:8000 \
--env "HUGGING_FACE_HUB_TOKEN=$HUGGING_FACE_HUB_TOKEN" \
--name $SERVED_MODEL_NAME \
--restart unless-stopped \
--ipc=host \
vllm/vllm-openai:v0.11.0 \
--model /root/.cache/huggingface/$MODEL_PATH \
--served-model-name $SERVED_MODEL_NAME \
--dtype bfloat16 \
--gpu-memory-utilization 0.92 \
--max-model-len 32768 \
--max-num-seqs 64 \
--tensor-parallel-size 1 \
--api-key sk-vx023nmlrtTmlC
```
|
https://github.com/vllm-project/vllm/issues/27649
|
open
|
[
"usage"
] | 2025-10-28T10:49:43Z
| 2025-11-07T02:30:26Z
| 4
|
yizhitangtongxue
|
vllm-project/vllm
| 27,646
|
[Usage]: How to use vllm bench serve to benchmark remotely deployed vLLM models (can't bench when EP is enabled)
|
### Your current environment
I deployed dpskv3 in a remote server using:
```
export VLLM_USE_V1=1
export VLLM_ALL2ALL_BACKEND=deepep_low_latency
vllm serve /models/hf/models--deepseek-ai--DeepSeek-V3 --tensor-parallel-size 1 --data-parallel-size 8 --enable-expert-parallel --no-enforce-eager --load-format dummy
```
And on another server:
```
VLLM_USE_V1=1 vllm bench serve --model /models/hf/models--deepseek-ai--DeepSeek-V3/ --endpoint /v1/completions --dataset-name sharegpt --dataset-path /datasets/ShareGPT/ShareGPT_V3_unfiltered_cleaned_split.json --num-prompts 10 --ready-check-timeout-sec 0 --ip 10.102.212.22 --port 8000
```
where 10.102.212.22 is the server IP and 8000 is the default port.
And I got the error below on the server:
```
"POST /v1/completions HTTP/1.1" 404 Not Found
```
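A quick sanity-check sketch, assuming `requests` is installed and the IP/port above are reachable: if this also returns 404, the OpenAI-compatible routes are not where the benchmark expects them; if it returns 200, compare the served model name with the one passed to `vllm bench serve`.
```python
# Hedged sketch: list the models the remote server actually exposes before benchmarking.
import requests

base = "http://10.102.212.22:8000"
resp = requests.get(f"{base}/v1/models")
print(resp.status_code)
print(resp.text)  # the served model name here should match the benchmark's --model
```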
### How would you like to use vllm
I want to run inference of a deepseekv3.
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/27646
|
open
|
[
"usage"
] | 2025-10-28T09:56:37Z
| 2025-10-28T15:23:06Z
| 3
|
Valerianding
|
huggingface/transformers
| 41,910
|
Breaking change about AWQ Fused modules due to Attention Refactor
|
### System Info
transformers==5.0.0dev
autoawq==0.2.9
autoawq_kernels==0.0.9
torch==2.6.0+cu124
### Who can help?
Due to PR #35235, `past_key_values` is no longer returned by attention modules.
However, when using AWQ models with fused modules ([AWQ Fused modules docs](https://huggingface.co/docs/transformers/main/en/quantization/awq#fused-modules)), there is an error like in issue #38554:
```bash
hidden_states, _ = self.self_attn(
ValueError: too many values to unpack (expected 2)
```
So we can patch `awq.modules.fused.attn.QuantAttentionFused` to avoid returning `past_key_values`; I created a preliminary PR #41909 to do that.
However, for special `rope_type` values such as LLaMA3, the RoPE implementation in AutoAWQ will cause an error, since `awq.modules.fused.attn.RoPE` supports the default RoPE only.
Maybe we can implement and maintain `AwqRoPE` and `AwqQuantAttentionFused` in `transformers.integrations.awq`? Or we can maintain `huggingface/AutoAWQ` as `casper-hansen/AutoAWQ` is archived.
I'd like to refine my PR to help transformers fix this bug!
@SunMarc @MekkCyber
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
```python
from transformers import AwqConfig, AutoModelForCausalLM, AutoTokenizer
# model_path = "./llama-3.1-8b-instruct-awq"
model_path = "./qwen2.5-7b-instruct-awq"
# model_path = "./qwen3-8b-awq"
awq_config = AwqConfig(
bits=4,
do_fuse=True,
fuse_max_seq_len=8192
)
model = AutoModelForCausalLM.from_pretrained(model_path, quantization_config=awq_config).to("cuda:0")
print(model)
tokenizer = AutoTokenizer.from_pretrained(model_path)
max_new_tokens = 1024 if "qwen3" in model_path else 32
messages = []
prompt1 = "What is the result of 3+5?"
messages.append({"role": "user", "content": prompt1})
text1 = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs1 = tokenizer(text1, return_tensors="pt").to("cuda:0")
generated_ids1 = model.generate(**inputs1, max_new_tokens=max_new_tokens)
output_ids1 = generated_ids1[0, len(inputs1.input_ids[0]) :].tolist()
output1 = tokenizer.decode(output_ids1, skip_special_tokens=True)
messages.append({"role": "assistant", "content": output1})
print("Output 1:", output1)
prompt2 = "What about adding 10 to that result?"
messages.append({"role": "user", "content": prompt2})
text2 = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs2 = tokenizer(text2, return_tensors="pt").to("cuda:0")
generated_ids2 = model.generate(**inputs2, max_new_tokens=max_new_tokens)
output_ids2 = generated_ids2[0, len(inputs2.input_ids[0]) :].tolist()
output2 = tokenizer.decode(output_ids2, skip_special_tokens=True)
messages.append({"role": "assistant", "content": output2})
print("Output 2:", output2)
```
### Expected behavior
There is no error.
|
https://github.com/huggingface/transformers/issues/41910
|
closed
|
[
"bug"
] | 2025-10-28T08:29:03Z
| 2025-11-20T13:41:34Z
| 3
|
fanqiNO1
|
vllm-project/vllm
| 27,636
|
[Usage]: How can vLLM keep the special tokens in Qwen3-VL output?
|
### Your current environment
The grounding output format of my fine-tuned Qwen3-VL model is: <|object_ref_start|>图片<|object_ref_end|><|box_start|>(x1,y1),(x2,y2)<|box_end|>
When serving it with vllm serve, the inference output looks like 图片(460,66),(683,252). Are the special tokens simply being dropped, and is there a way to keep them?
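A hedged sketch of one thing to try, assuming the `skip_special_tokens` sampling parameter applies to this model (shown with the offline `LLM` API; the OpenAI-compatible server accepts the same field, e.g. via `extra_body` in the OpenAI client). The model path and prompt are placeholders:
```python
# Hedged sketch: keep special tokens such as <|box_start|> in the generated text by
# disabling skip_special_tokens.
from vllm import LLM, SamplingParams

llm = LLM(model="path/to/finetuned-qwen3-vl")  # placeholder path
params = SamplingParams(max_tokens=256, skip_special_tokens=False)
outputs = llm.generate(["<your grounding prompt here>"], params)
print(outputs[0].outputs[0].text)
```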
### How would you like to use vllm
I want to run inference of a [specific model](put link here). I don't know how to integrate it with vllm.
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/27636
|
open
|
[
"usage"
] | 2025-10-28T06:52:16Z
| 2025-10-28T06:52:16Z
| 0
|
qfs666
|
huggingface/diffusers
| 12,553
|
Reason to move from OpenCV to ffmpeg
|
I see that `diffusers.utils.export_to_video()` encourages ffmpeg usage instead of OpenCV. Can you share the reason? I'm looking for a way to add video decoding to my project so I'm collecting arguments.
|
https://github.com/huggingface/diffusers/issues/12553
|
open
|
[] | 2025-10-28T06:49:48Z
| 2025-11-07T13:27:03Z
| 10
|
Wovchena
|
vllm-project/vllm
| 27,634
|
[Usage]: how to use --quantization option of `vllm serve`?
|
### Your current environment
```text
==============================
System Info
==============================
OS : Ubuntu 22.04.5 LTS (x86_64)
GCC version : (Ubuntu 11.4.0-1ubuntu1~22.04.2) 11.4.0
Clang version : Could not collect
CMake version : Could not collect
Libc version : glibc-2.35
==============================
PyTorch Info
==============================
PyTorch version : 2.8.0+cu129
Is debug build : False
CUDA used to build PyTorch : 12.9
ROCM used to build PyTorch : N/A
==============================
Python Environment
==============================
Python version : 3.10.12 (main, Aug 15 2025, 14:32:43) [GCC 11.4.0] (64-bit runtime)
Python platform : Linux-5.15.0-160-generic-x86_64-with-glibc2.35
==============================
CUDA / GPU Info
==============================
Is CUDA available : True
CUDA runtime version : 11.5.119
CUDA_MODULE_LOADING set to : LAZY
GPU models and configuration : GPU 0: NVIDIA GeForce RTX 4090 D
Nvidia driver version : 570.195.03
cuDNN version : Could not collect
HIP runtime version : N/A
MIOpen runtime version : N/A
Is XNNPACK available : True
==============================
CPU Info
==============================
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Vendor ID: AuthenticAMD
Model name: AMD Ryzen 9 9950X3D 16-Core Processor
CPU family: 26
Model: 68
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 1
Stepping: 0
Frequency boost: enabled
CPU max MHz: 8839.3555
CPU min MHz: 3000.0000
BogoMIPS: 8583.32
Flags: fpu vme de pse tsc msr pae mce cx8
apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall
nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc
cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse
4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm ex
tapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tc
e topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l
3 hw_pstate ssbd mba ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase tsc_adj
ust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx
smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt
xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local avx
_vnni avx512_bf16 clzero irperf xsaveerptr rdpru wbnoinvd cppc arat npt lbr
v svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefi
lter pfthreshold v_vmsave_vmload vgif v_spec_ctrl avx512vbmi umip pku ospke
avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcnt
dq rdpid bus_lock_detect movdiri movdir64b overflow_recov succor smca fsrm avx512_vp2intersect flush_l1d
Virtualization: AMD-V
L1d cache: 768 KiB (16 instances)
L1i cache: 512 KiB (16 instances)
L2 cache: 16 MiB (16 instances)
L3 cache: 128 MiB (2 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-31
Vulnerability Gather data sampling: Not affected
Vulnerability Indirect target selection: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic I
BRS; IBPB conditional; STIBP always-on; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsa: Not
|
https://github.com/vllm-project/vllm/issues/27634
|
open
|
[
"usage"
] | 2025-10-28T06:24:38Z
| 2025-10-28T15:57:47Z
| 3
|
Septemberlemon
|
pytorch/pytorch
| 166,363
|
All Docker builds failed due to Ubuntu archive outage
|
## Current Status
Closed
## Error looks like
Docker build Error:
```
#9 82.65 W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/jammy-updates/InRelease Could not connect to archive.ubuntu.com:80 (185.125.190.82), connection timed out Could not connect to archive.ubuntu.com:80 (185.125.190.81), connection timed out Could not connect to archive.ubuntu.com:80 (91.189.91.81), connection timed out Could not connect to archive.ubuntu.com:80 (91.189.91.82), connection timed out Could not connect to archive.ubuntu.com:80 (91.189.91.83), connection timed out Could not connect to archive.ubuntu.com:80 (185.125.190.83), connection timed out [IP: 185.125.190.81 80]
```
Multiple CI/CD jobs are timing out on Calculate Image step
## Incident timeline (all times pacific)
Started - Oct 27, 2025 06:00:36 PM
Marked resolved on Oct 27, 2025 06:48:43 PM https://status.canonical.com/#/incident/KNms6QK9ewuzz-7xUsPsNylV20jEt5kyKsd8A-3ptQFMY_6s8e7AbWcGbatrjSU_aoghGrAcVK7slWXgWMkizA==
However, a failure was still observed on Oct 27, 2025 7:10 PM - https://github.com/pytorch/pytorch/actions/runs/18859957307/job/53816104822
Reverted a PR that triggered a docker rebuild (https://github.com/pytorch/pytorch/pull/165470) at Oct 27, 7:15 PM
Confirmed that the issue was resolved at Oct 28, 2025 6:00 AM
## User impact
Multiple CI/CD failures
## Root cause
Component "archive.ubuntu.com" and a few other components are Down
https://status.canonical.com/#/incident/KNms6QK9ewuzz-7xUsPsNylV20jEt5kyKsd8A-3ptQFMY_6s8e7AbWcGbatrjSU_aoghGrAcVK7slWXgWMkizA==
## Mitigation
Reverted the PR that triggered the docker rebuild: https://github.com/pytorch/pytorch/pull/165470
## Prevention/followups
*How do we prevent issues like this in the future?*
|
https://github.com/pytorch/pytorch/issues/166363
|
closed
|
[] | 2025-10-28T02:42:58Z
| 2025-10-28T13:57:51Z
| 0
|
atalman
|
huggingface/candle
| 3,151
|
Tensor conversion to_vec1() failing on 0.9.2-alpha.1 - Metal
|
Dependencies
```toml
candle-core = { git = "https://github.com/huggingface/candle", rev = "df618f8", features = ["metal"] }
candle-nn = { git = "https://github.com/huggingface/candle", rev = "df618f8", features = ["metal"] }
candle-transformers = { git = "https://github.com/huggingface/candle", rev = "df618f8", features = ["metal"] }
```
Running on Macbook M2 Pro - Metal - Tahoe 26.0.1
Since upgrading to 0.9.2-alpha.1, BERT operations on Metal have started hanging when converting a rank-1 tensor to Vec<f32>. This seems to affect any ops that attempt to synchronize or move data from the GPU to the CPU. Not sure if this is directly related to the update, but rolling back to 0.9.1 or using the CPU as the device fixes the issue.
Some examples of ops that are failing...
```rust
tensor.device().synchronize()
tensor.to_device()
tensor.to_vec1()
```
Actual code being run...
```rust
let (token_ids, token_type_ids, attention_mask) = self.encode_text(text)?;
let hidden_states = self
.forward_model(&token_ids, &token_type_ids, &attention_mask)
.await
.map_err(|e| {
log::error!("Failed to forward to model: {}", e);
e
})?;
let embeddings = self
.apply_mean_pooling(&hidden_states, &attention_mask)
.map_err(|e| {
log::error!("Failed to apply mean pooling: {}", e);
e
})?;
...
fn apply_mean_pooling(
&self,
hidden_states: &Tensor,
attention_mask: &Tensor,
) -> Result<Vec<f32>> {
log::info!("Applying mean pooling to hidden states...");
let attention_mask_for_pooling = attention_mask
.to_dtype(hidden_states.dtype())?
.unsqueeze(2)?;
let sum_mask = attention_mask_for_pooling.sum(1)?;
let pooled = (hidden_states.broadcast_mul(&attention_mask_for_pooling)?).sum(1)?;
let sum_mask_safe = sum_mask.clamp(NUMERICAL_STABILITY_EPSILON, f32::MAX)?;
let pooled = pooled.broadcast_div(&sum_mask_safe)?;
let denom = pooled
.sqr()?
.sum_keepdim(1)?
.sqrt()?
.clamp(NUMERICAL_STABILITY_EPSILON, f32::MAX)?;
let pooled = pooled.broadcast_div(&denom)?;
let pooled = pooled.squeeze(0)?;
// HANGING HERE ... no errors
// Tensor shape - Tensor[dims 1024; f32, metal:4294968337]
let embeddings = pooled.to_vec1::<f32>().map_err(|e| Error::TensorOp {
operation: format!("Failed to convert tensor to f32 vector: {}", e),
})?;
Ok(embeddings)
}
```
|
https://github.com/huggingface/candle/issues/3151
|
closed
|
[] | 2025-10-27T21:36:17Z
| 2025-11-06T22:44:14Z
| 2
|
si-harps
|
vllm-project/vllm
| 27,604
|
[Bug]: Is Flashinfer Attn backend supposed to work with FP8 KV cache on Hopper?
|
### Your current environment
<details>
<summary>The output of <code>python collect_env.py</code></summary>
```text
Collecting environment information...
==============================
System Info
==============================
OS : Amazon Linux 2023.7.20250428 (x86_64)
GCC version : (GCC) 11.5.0 20240719 (Red Hat 11.5.0-5)
Clang version : Could not collect
CMake version : version 3.26.4
Libc version : glibc-2.34
==============================
PyTorch Info
==============================
PyTorch version : 2.8.0+cu128
Is debug build : False
CUDA used to build PyTorch : 12.8
ROCM used to build PyTorch : N/A
==============================
Python Environment
==============================
Python version : 3.12.6 (main, May 6 2025, 20:22:13) [GCC 11.5.0 20240719 (Red Hat 11.5.0-5)] (64-bit runtime)
Python platform : Linux-6.1.134-150.224.amzn2023.x86_64-x86_64-with-glibc2.34
==============================
CUDA / GPU Info
==============================
Is CUDA available : True
CUDA runtime version : 12.8.93
CUDA_MODULE_LOADING set to : LAZY
GPU models and configuration :
GPU 0: NVIDIA H100 80GB HBM3
GPU 1: NVIDIA H100 80GB HBM3
GPU 2: NVIDIA H100 80GB HBM3
GPU 3: NVIDIA H100 80GB HBM3
GPU 4: NVIDIA H100 80GB HBM3
GPU 5: NVIDIA H100 80GB HBM3
GPU 6: NVIDIA H100 80GB HBM3
GPU 7: NVIDIA H100 80GB HBM3
Nvidia driver version : 570.133.20
cuDNN version : Could not collect
HIP runtime version : N/A
MIOpen runtime version : N/A
Is XNNPACK available : True
==============================
CPU Info
==============================
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 192
On-line CPU(s) list: 0-191
Vendor ID: AuthenticAMD
Model name: AMD EPYC 7R13 Processor
CPU family: 25
Model: 1
Thread(s) per core: 2
Core(s) per socket: 48
Socket(s): 2
Stepping: 1
BogoMIPS: 5299.99
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf tsc_known_freq pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy cr8_legacy abm sse4a misalignsse 3dnowprefetch topoext perfctr_core invpcid_single ssbd ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 clzero xsaveerptr rdpru wbnoinvd arat npt nrip_save vaes vpclmulqdq rdpid
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 3 MiB (96 instances)
L1i cache: 3 MiB (96 instances)
L2 cache: 48 MiB (96 instances)
L3 cache: 384 MiB (12 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-47,96-143
NUMA node1 CPU(s): 48-95,144-191
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; IBRS_FW; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
==============================
Versions of relevant libraries
==============================
[pip3] flashinfer-python==0.3.1
[pip3] numpy==2.2.6
[pip3] nvidia-cublas-cu12==12.8.4.1
[pip3] nvidia-cuda-cupti-cu12==12.8.90
[pip3] nvidia-cuda-nvrtc-cu12==12.8.93
[pip3] nvidia-cuda-runtime-cu12==12.8.90
[pip3] nvidia-cudnn-cu12==9.10.2.21
[pip3] nvidia-cudnn-frontend==1.15.0
[pip3] nvidia-cufft-cu12==11.3.3.83
[pi
|
https://github.com/vllm-project/vllm/issues/27604
|
open
|
[
"bug",
"nvidia"
] | 2025-10-27T20:22:37Z
| 2025-11-06T02:37:17Z
| 10
|
jmkuebler
|
huggingface/smolagents
| 1,834
|
Discussion: how to edit the messages sent to the underlying LLM
|
Hi! I'm working on a feature to allow a user to add callbacks to modify the content before it is sent to the LLM, inside the agent loop.
I noticed this strange behavior where the first user message must start with "New Task:", otherwise I get this cryptic and misleading error message.
""Error:\nError while parsing tool call from model output: The model output does not contain any JSON blob.\nNow let's retry: take care not to repeat previous errors! If you have retried several times, try a completely different approach.\n""
So I think I have two questions (or maybe just one):
1. Is my approach of controlling the message flow by wrapping the `generate` member function of a smolagents agent correct? Or do you recommend a better way to modify messages before sending them to the underlying LLM?
2. Is it expected that the first user message needs to start with "New Task:", or have I found a bug or a missing assertion somewhere in the code? Thanks!
https://github.com/mozilla-ai/any-agent/blob/f2475d7507c5a78e241ff5f0883b546d796d29fc/src/any_agent/callbacks/wrappers/smolagents.py#L75
I'm on smolagents==1.22.0, python 3.13.
UPDATE: I'm no longer sure that adding "New Task:" is the fix; I am still seeing intermittent errors even when I have that text added. It seems like there is some sort of race condition. I'm confused about where the "messages" content should be edited, since it seems like it may be stored or referenced in multiple places. Any help appreciated!
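For reference, a rough sketch of the wrapping approach described above, assuming the agent's model object exposes a `generate(messages, **kwargs)` method (as recent smolagents `Model` classes do); `edit_messages` is a placeholder for the user callback, not part of smolagents:
```python
from smolagents import CodeAgent, InferenceClientModel

def edit_messages(messages):
    # Placeholder callback: inspect or rewrite the message list before each LLM call.
    return messages

model = InferenceClientModel()
original_generate = model.generate

def patched_generate(messages, **kwargs):
    # Every call the agent loop makes to the model goes through the edit hook first.
    return original_generate(edit_messages(messages), **kwargs)

model.generate = patched_generate
agent = CodeAgent(tools=[], model=model)
```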
|
https://github.com/huggingface/smolagents/issues/1834
|
closed
|
[] | 2025-10-27T17:28:38Z
| 2025-10-27T19:02:39Z
| null |
njbrake
|
pytorch/vision
| 9,251
|
roi_align onnx export fails while seemingly supported in torchvision code
|
### 🐛 Describe the bug
ONNX export of a model using roi_align fails:
Code:
```
import torch
from torch import nn
from torchvision.ops import roi_align
class TestModel(nn.Module):
def forward(self, x, b):
return roi_align(x, b, output_size=(7, 7), spatial_scale=1/16.0)
x = torch.zeros((1, 128, 40, 40))
b = torch.zeros((300, 5))
model = TestModel()
onnx_model = torch.onnx.export(model, (x, b), opset_version=22, report=True, verbose=True)
```
The strange thing is that I am seeing support for ROIAlign ops in the code: https://github.com/pytorch/vision/blob/218d2ab791d437309f91e0486eb9fa7f00badc17/torchvision/ops/_register_onnx_ops.py
Just unsure how to use it or activate the support.
The ONNX conversion report is attached.
[onnx_export_2025-10-27_17-13-55-895426_conversion.md](https://github.com/user-attachments/files/23168838/onnx_export_2025-10-27_17-13-55-895426_conversion.md)
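One hedged thing to try (an assumption, not a confirmed fix): the symbolics in `_register_onnx_ops.py` are registered via `torch.onnx.register_custom_op_symbolic`, which only applies to the TorchScript-based exporter, while `report=True`/`verbose=True` go through the newer dynamo-based exporter. Exporting with `dynamo=False` and an opset the TorchScript exporter supports might pick them up (reusing `model`, `x`, `b` from the snippet above; the output filename is arbitrary):
```python
import torch
import torchvision.ops  # importing torchvision.ops should trigger the ONNX op registration

torch.onnx.export(
    model, (x, b),
    "roi_align_test.onnx",
    opset_version=16,
    dynamo=False,  # route through the TorchScript exporter where the symbolic is registered
)
```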
### Versions
Collecting environment information...
PyTorch version: 2.9.0+cu128
Is debug build: False
CUDA used to build PyTorch: 12.8
ROCM used to build PyTorch: N/A
OS: Debian GNU/Linux 13 (trixie) (x86_64)
GCC version: (Debian 14.2.0-19) 14.2.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.41
Python version: 3.13.5 (main, Jun 25 2025, 18:55:22) [GCC 14.2.0] (64-bit runtime)
Python platform: Linux-6.12.48+deb13-amd64-x86_64-with-glibc2.41
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to:
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3080 Ti
Nvidia driver version: 550.163.01
cuDNN version: Could not collect
Is XPU available: False
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Vendor ID: AuthenticAMD
Model name: AMD Ryzen 7 7700X 8-Core Processor
CPU family: 25
Model: 97
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 1
Stepping: 2
Frequency boost: enabled
CPU(s) scaling MHz: 74%
CPU max MHz: 5573.0000
CPU min MHz: 400.0000
BogoMIPS: 8983.06
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good amd_lbr_v2 nopl xtopology nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba perfmon_v2 ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local user_shstk avx512_bf16 clzero irperf xsaveerptr rdpru wbnoinvd cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic vgif x2avic v_spec_ctrl vnmi avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid overflow_recov succor smca fsrm flush_l1d
Virtualization: AMD-V
L1d cache: 256 KiB (8 instances)
L1i cache: 256 KiB (8 instances)
L2 cache: 8 MiB (8 instances)
L3 cache: 32 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-15
Vulnerability Gather data sampling: Not affected
Vulnerability Indirect target selection: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs ba
|
https://github.com/pytorch/vision/issues/9251
|
open
|
[] | 2025-10-27T16:21:07Z
| 2025-10-28T11:58:28Z
| 2
|
timstokman
|
pytorch/pytorch
| 166,303
|
Pytorch Operators on older pytorch version
|
### 📚 The doc issue
Hi team,
I've seen that PyTorch has recently been transitioning to `pip install` (https://github.com/pytorch/pytorch/issues/152276).
For projects that ship custom operators, like Kaolin, we want to support a reasonable version matrix of PyTorch. What are we supposed to do?
The documentation for custom operators is not accessible for older versions (it automatically redirects to the latest version).
### Suggest a potential alternative/fix
_No response_
cc @svekars @sekyondaMeta @AlannaBurke
|
https://github.com/pytorch/pytorch/issues/166303
|
open
|
[
"needs reproduction",
"module: docs",
"triaged"
] | 2025-10-27T14:04:02Z
| 2025-10-27T16:55:38Z
| 2
|
Caenorst
|
huggingface/peft
| 2,873
|
Can I use Lora fine-tuning twice?
|
I’m planning to work with a two-stage LoRA fine-tuning pipeline (Stage 1: SFT with code completion outputs; Stage 2: SFT with full-code outputs; RL follows). My question is:
When I continue training the same LoRA adapter in Stage 2, will I risk overwriting or degrading the knowledge learned during Stage 1 ? In other words, does continuing on the same adapter effectively preserve the Stage 1 capabilities, or should I be using a separate adapter (or merging strategy) to ensure both sets of skills remain intact?
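For illustration only, a minimal sketch of the two options being weighed; checkpoint and adapter names are placeholders, and the LoRA hyperparameters are arbitrary:
```python
from transformers import AutoModelForCausalLM
from peft import PeftModel, LoraConfig, get_peft_model

# Option A: keep training the Stage 1 adapter in Stage 2. Nothing explicitly
# preserves Stage 1 behaviour; the adapter weights can drift during Stage 2.
model_a = PeftModel.from_pretrained(
    AutoModelForCausalLM.from_pretrained("base-model"),  # placeholder
    "stage1-lora",                                       # placeholder adapter path
    is_trainable=True,
)

# Option B: freeze Stage 1 by merging it into the base weights, then train a
# fresh adapter for Stage 2 on top of the merged model.
merged = PeftModel.from_pretrained(
    AutoModelForCausalLM.from_pretrained("base-model"), "stage1-lora"
).merge_and_unload()
model_b = get_peft_model(merged, LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM"))
```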
Thank you for any guidance or best‐practice pointers!
|
https://github.com/huggingface/peft/issues/2873
|
closed
|
[] | 2025-10-27T12:51:45Z
| 2025-12-05T15:05:00Z
| 8
|
tohokulgq
|
vllm-project/vllm
| 27,572
|
[Bug]: chat/completions stream intermittently returns null as finish_reason
|
### Your current environment
```
My env:
vllm 0.10.0
```
### 🐛 Describe the bug
```
+ curl -kLsS https://127.0.0.1:7888/v1/chat/completions -H 'Content-Type: application/json' --data '{
"model": "ibm/granite-3-8b-instruct",
"stream": true,
"messages": [
{
"role": "system",
"content": "You are a helpful assistant."
},
{
"role": "user",
"content": "What is the weather like in Warsaw?"
}
],
"tools": [
{
"type": "function",
"function": {
"name": "get_current_weather",
"description": "Get the current weather in a given location",
"parameters": {
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "The city and state, e.g. San Francisco, CA"
},
"unit": {
"type": "string",
"enum": ["celsius", "fahrenheit"]
}
}
},
"required": ["location"]
}
}
],
"tool_choice": "auto"
}'
data: {"id":"chatcmpl-6ca98c2f19c13c19f39013dfb78bcece","object":"chat.completion.chunk","created":1761566772,"model":"ibm/granite-3-8b-instruct","choices":[{"index":0,"delta":{"role":"assistant","content":""},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-6ca98c2f19c13c19f39013dfb78bcece","object":"chat.completion.chunk","created":1761566772,"model":"ibm/granite-3-8b-instruct","choices":[{"index":0,"delta":{"content":"<"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-6ca98c2f19c13c19f39013dfb78bcece","object":"chat.completion.chunk","created":1761566772,"model":"ibm/granite-3-8b-instruct","choices":[{"index":0,"delta":{"content":"tool"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-6ca98c2f19c13c19f39013dfb78bcece","object":"chat.completion.chunk","created":1761566772,"model":"ibm/granite-3-8b-instruct","choices":[{"index":0,"delta":{"content":"_"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-6ca98c2f19c13c19f39013dfb78bcece","object":"chat.completion.chunk","created":1761566772,"model":"ibm/granite-3-8b-instruct","choices":[{"index":0,"delta":{"content":"call"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-6ca98c2f19c13c19f39013dfb78bcece","object":"chat.completion.chunk","created":1761566772,"model":"ibm/granite-3-8b-instruct","choices":[{"index":0,"delta":{"content":">"},"logprobs":null,"finish_reason":null}]}
data: [DONE]
```
This happens after running several requests sequentially.
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/27572
|
open
|
[
"bug"
] | 2025-10-27T12:14:03Z
| 2025-11-24T20:27:24Z
| 13
|
shuynh2017
|
pytorch/torchtitan
| 1,936
|
Is it possible to train Vision-Language Model with different parallelism plan for vision and language parts of the model?
|
can we train a Vision-Language Model using torchtitan?
And can we set different parallelism plan for different parts of the model: fsdp2+dp for vision part, and fsdp2+dp+sp+ep+pp for the llm part? If it is possible, how to do it?
Thanks very much.
|
https://github.com/pytorch/torchtitan/issues/1936
|
open
|
[] | 2025-10-27T06:47:47Z
| 2025-10-27T14:16:04Z
| 2
|
airlsyn
|
huggingface/chat-ui
| 1,957
|
Fail to use proxy
|
How to make this web app go through local proxy?
I tried a few methods, all of which don't work.
|
https://github.com/huggingface/chat-ui/issues/1957
|
open
|
[
"support"
] | 2025-10-27T06:31:51Z
| 2025-10-30T03:31:24Z
| 2
|
geek0011
|
pytorch/pytorch
| 166,282
|
Why does my PR still show "Missing CLA Authorization" even though I have already signed the CLA document?
|
### 🚀 The feature, motivation and pitch
Why does my PR still show "Missing CLA Authorization" even though I have already signed the CLA document?
### Alternatives
_No response_
### Additional context
_No response_
|
https://github.com/pytorch/pytorch/issues/166282
|
closed
|
[] | 2025-10-27T01:19:21Z
| 2025-10-27T16:45:23Z
| 1
|
wenlinchong17-web
|
huggingface/diffusers
| 12,547
|
Fine tuning Dreambooth Flux Kontext I2I Error: the following arguments are required: --instance_prompt
|
### Describe the bug
Hello HF team, @sayakpaul @bghira
I'm encountering a persistent issue when trying to fine-tune the black-forest-labs/FLUX.1-Kontext-dev model using the train_dreambooth_lora_flux_kontext.py script.
I am following the [official README instructions](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/README_flux.md#training-kontext) for Image-to-Image (I2I) finetuning. My goal is to train a transformation on my own dataset, which is structured for I2I (condition image, target image, and text instruction).
### The Problem
Every time I run the script with the correct arguments for I2I finetuning, I get: `the following arguments are required: --instance_prompt`
When I run this [Reproduction], I receive the error: `the following arguments are required: --instance_prompt.`
To isolate the issue from my personal dataset, I also tested the exact example command provided in the documentation (the one using `kontext-community/relighting`). I found that this command also fails with the identical `the following arguments are required: --instance_prompt` error.
Given that both my custom command and the official example command are failing in the same way, I am trying to understand the origin of this error. It seems the `--instance_prompt` argument is being required even when all I2I-specific arguments are provided.
### Environment
**Script**: `examples/dreambooth/train_dreambooth_lora_flux_kontext.py`
**Diffusers Version**: I am using the specific commit `05e7a854d0a5661f5b433f6dd5954c224b104f0b` (installed via `pip install -e .` from a clone), as recommended in the README.
Could you please help me understand why this might be happening? Is this expected behavior, or am I perhaps missing a configuration step?
Thank you for your time!
### Reproduction
### How to Reproduce
I am running the following command, which provides all the necessary arguments for I2I finetuning (`dataset_name`, `image_column`, `cond_image_column`, and `caption_column`) using my public dataset:
```
accelerate launch /local-git-path/train_dreambooth_lora_flux_kontext.py \
--pretrained_model_name_or_path="black-forest-labs/FLUX.1-Kontext-dev" \
--output_dir="/local-path/kontext-finetuning-v1" \
--dataset_name="MichaelMelgarejoTotto/mi-dataset-kontext" \
--image_column="output" \
--cond_image_column="file_name" \
--caption_column="instruccion" \
--mixed_precision="bf16" \
--resolution=1024 \
--train_batch_size=1 \
--guidance_scale=1 \
--gradient_accumulation_steps=4 \
--gradient_checkpointing \
--optimizer="adamw" \
--use_8bit_adam \
--cache_latents \
--learning_rate=1e-4 \
--lr_scheduler="constant" \
--lr_warmup_steps=200 \
--max_train_steps=1000 \
--rank=16 \
--seed="0"
```
### Logs
```shell
train_dreambooth_lora_flux_kontext.py: error: the following arguments are required: --instance_prompt
```
### System Info
- 🤗 Diffusers version: 0.35.0.dev0
- Platform: Linux-4.18.0-305.3.1.el8.x86_64-x86_64-with-glibc2.28
- Running on Google Colab?: No
- Python version: 3.10.19
- PyTorch version (GPU?): 2.7.1+cu118 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Huggingface_hub version: 0.36.0
- Transformers version: 4.57.1
- Accelerate version: 1.11.0
- PEFT version: 0.17.1
- Bitsandbytes version: 0.48.1
- Safetensors version: 0.6.2
- xFormers version: not installed
- Accelerator: NA
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
<img width="639" height="289" alt="Image" src="https://github.com/user-attachments/assets/52a5168d-0089-4aab-834e-fa39cab0034d" />
### Who can help?
_No response_
|
https://github.com/huggingface/diffusers/issues/12547
|
closed
|
[
"bug"
] | 2025-10-27T00:21:34Z
| 2025-10-28T02:31:42Z
| 7
|
MichaelMelgarejoFlorez
|
huggingface/transformers
| 41,876
|
LlamaAttention num_heads
|
### System Info
In older version of transformers, LlamaAttention init attribute num_heads.
class LlamaAttention(nn.Module):
def __init__(self, config):
self.num_heads = config.num_attention_heads
self.head_dim = config.hidden_size // config.num_attention_heads
However, in recent versions this attribute has been removed, which causes mismatches when running previous code. It seems num_key_value_heads is also deprecated. This issue could be addressed by adding:
self.num_heads = config.num_attention_heads # shanhx
self.num_key_value_heads = config.num_key_value_heads
Are there any reasons why these attributes were removed? Is this intended or a bug?
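As a workaround sketch (an assumption to verify against your installed version): recent `LlamaAttention` layers keep a reference to the model config, so external code can derive the head counts from the config instead of the removed attributes. The checkpoint name below is a placeholder.
```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("some-llama-checkpoint")  # placeholder
attn = model.model.layers[0].self_attn

num_heads = attn.config.num_attention_heads
num_kv_heads = attn.config.num_key_value_heads
head_dim = getattr(attn, "head_dim", attn.config.hidden_size // num_heads)
print(num_heads, num_kv_heads, head_dim)
```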
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
At least the num_heads stil remained at 4.44. But missed in 4.54.
### Expected behavior
Missing many attributes in LlamaAttention.
|
https://github.com/huggingface/transformers/issues/41876
|
closed
|
[
"bug"
] | 2025-10-27T00:07:31Z
| 2025-10-31T00:13:31Z
| 2
|
shanhx2000
|
huggingface/transformers
| 41,874
|
Distributed training of SigCLIP
|
https://github.com/huggingface/transformers/blob/v4.57.1/src/transformers/models/siglip/modeling_siglip.py#L983 defines how the SigLIP loss is computed. In SigLIP, different TPUs exchange data with each other. I want to know how to train a model in this way.
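For context, a simplified sketch of multi-device SigLIP training with PyTorch collectives (not the chunked neighbour-exchange used on TPUs in the paper, and not the transformers implementation): gather text features from all ranks and score them against the local image features. Note that `all_gather` does not backpropagate to other ranks, so the local slice is re-inserted to keep local gradients.
```python
import torch
import torch.distributed as dist
import torch.nn.functional as F

def gathered_siglip_loss(img_emb, txt_emb, logit_scale, logit_bias):
    world, rank = dist.get_world_size(), dist.get_rank()
    gathered = [torch.zeros_like(txt_emb) for _ in range(world)]
    dist.all_gather(gathered, txt_emb)
    gathered[rank] = txt_emb                          # keep local gradients flowing
    all_txt = torch.cat(gathered, dim=0)              # (world * B, D)

    logits = logit_scale * img_emb @ all_txt.t() + logit_bias   # (B, world * B)
    labels = -torch.ones_like(logits)                 # every pair is a negative ...
    b = img_emb.size(0)
    labels[torch.arange(b), rank * b + torch.arange(b)] = 1.0   # ... except the local diagonal
    return -F.logsigmoid(labels * logits).sum() / b
```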
|
https://github.com/huggingface/transformers/issues/41874
|
closed
|
[] | 2025-10-26T14:43:51Z
| 2025-12-04T08:02:55Z
| 1
|
zyk1559676097-dot
|
huggingface/transformers
| 41,861
|
transformers.Adafactor is almost 2x slower on Windows than Linux - even WSL is slow what can be reason?
|
I am training the Qwen Image model with the Kohya Musubi tuner: https://github.com/kohya-ss/musubi-tuner
The exact same setup on the same machine is almost 2x faster on Linux:
9.5 s/it vs 5.8 s/it
On Windows it can't fully utilize the GPU; it draws about 250 W out of 575 W.
What could be the culprit?
transformers==4.54.1
torch 2.8
CUDA 12.9
tested on RTX 5090
This is what Codex says, but I don't know if it is true; it doesn't make sense to me.
<img width="1637" height="736" alt="Image" src="https://github.com/user-attachments/assets/81b687c7-801e-4265-a2fd-6d1eae065637" />
### Who can help?
trainer: @SunMarc
kernels: @MekkCyber @drbh
|
https://github.com/huggingface/transformers/issues/41861
|
closed
|
[
"bug"
] | 2025-10-25T15:49:47Z
| 2025-12-03T08:02:55Z
| null |
FurkanGozukara
|
pytorch/pytorch
| 166,238
|
[Dynamo][BUG] Regression about `collections.defaultdict` creation
|
### 🐛 Describe the bug
See CI error log: https://github.com/pytorch/pytorch/actions/runs/18803810990/job/53655896530#step:27:2137
### Error logs
```pytb
----------------------------- Captured stdout call -----------------------------
inline_call [("Unsupported function call
Explanation: Dynamo does not know how to trace the function `<class 'collections.defaultdict'>`
Hint: Avoid calling `<class 'collections.defaultdict'>` in your code.
Hint: Please report an issue to PyTorch.
Developer debug context:
call_function UserDefinedClassVariable(<class 'collections.defaultdict'>) [GetAttrVariable(DefaultDictVariable(), default_factory), ConstDictVariable()] {}
For more details about this graph break, please visit: https://meta-pytorch.github.io/compile-graph-break-site/gb/gb0147.html", 1)]
- generated xml file: /var/lib/jenkins/workspace/test/test-reports/python-pytest/dynamo.test_misc/dynamo.test_misc-271de5e392c25fc0.xml -
=========================== short test summary info ============================
FAILED [0.2732s] dynamo/test_misc.py::MiscTestsPyTree::test_pytree_tree_map_dict_order_cxx - torch._dynamo.exc.Unsupported: Unsupported function call
Explanation: Dynamo does not know how to trace the function `<class 'collections.defaultdict'>`
Hint: Avoid calling `<class 'collections.defaultdict'>` in your code.
Hint: Please report an issue to PyTorch.
Developer debug context: call_function UserDefinedClassVariable(<class 'collections.defaultdict'>) [GetAttrVariable(DefaultDictVariable(), default_factory), ConstDictVariable()] {}
```
### Versions
main
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames @Lucaskabela
|
https://github.com/pytorch/pytorch/issues/166238
|
closed
|
[
"triaged",
"oncall: pt2",
"module: dynamo",
"dynamo-must-fix",
"dynamo-variable-tracker"
] | 2025-10-25T15:26:06Z
| 2025-11-05T06:09:41Z
| 4
|
XuehaiPan
|
huggingface/transformers
| 41,859
|
Human Verification not working?
|
### System Info
Hello! I need your help because I can't verify my identity via email: I receive a link, open it, but get a blank page and nothing else(((
I've tried several times.
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Navigate to the Hugging Face website.
2. Register or log in to your account.
3. Go to the identity verification section.
4. Submit a request for the identity verification link.
5. Wait for the confirmation email to arrive.
6. Follow the confirmation link in the email.
7. Get a blank page on the site, for example https://huggingface.co/email_confirmation/zKFZszGtcabRsYOURYmCQkXdfzIY
### Expected behavior
The identity verification link should work
|
https://github.com/huggingface/transformers/issues/41859
|
closed
|
[
"bug"
] | 2025-10-25T10:48:52Z
| 2025-10-26T12:29:10Z
| 4
|
thefued
|
pytorch/pytorch
| 166,233
|
license: Is it possible to stop using Conda in the Dockerfile? Due to Conda’s licensing issues, many companies have already received legal warning letters.
|
### 🚀 The feature, motivation and pitch
Starting this year, many companies have received legal letters from Conda’s lawyers, explicitly stating that using Conda requires a paid license. Although I have checked Conda’s official website, it does not clearly specify this. I also noticed that the current PyTorch Dockerfile still uses Conda, which makes me very concerned. Therefore, I strongly recommend removing Conda and using **uv** or building Python from source as the base environment instead.
### Alternatives
_No response_
### Additional context
_No response_
cc @seemethere @malfet @atalman
|
https://github.com/pytorch/pytorch/issues/166233
|
open
|
[
"module: binaries",
"triaged",
"module: docker",
"better-engineering"
] | 2025-10-25T08:50:40Z
| 2025-10-28T03:42:37Z
| 2
|
WangxuP
|
huggingface/lerobot
| 2,311
|
Question: How I can train only online without dataset?
|
How can I train only online, without needing a dataset? Can I do it without a Hugging Face repo id, only locally?
I tried it like this without success:
```
cat > "train_cfg.json" <<'JSON'
{
"job_name": "hilserl_fetch_pick_v4_cpu",
"seed": 0,
"env": {
"type": "gymnasium-robotics",
"task": "FetchPickAndPlace-v4",
"episode_length": 200,
"features_map": {
"action": "action",
"agent_pos": "observation.state",
"top": "observation.image",
"pixels/top": "observation.image"
},
"features": {
"action": {
"type": "ACTION",
"shape": [
4
]
},
"agent_pos": {
"type": "STATE",
"shape": [
4
]
},
"pixels/top": {
"type": "VISUAL",
"shape": [
480,
480,
3
]
}
}
},
"policy": {
"type": "sac",
"device": "cpu",
"concurrency": {
"actor": "threads",
"learner": "threads"
},
"repo_id": "None",
"push_to_hub": false
},
"dataset": {
"repo_id": "online-buffer",
"root": "${{ github.workspace }}/dataset",
"use_imagenet_stats": true
}
}
JSON
mkdir -p dataset/online-buffer
export HF_HUB_OFFLINE=1
export HF_HUB_DISABLE_TELEMETRY=1
export HF_DATASETS_OFFLINE=1
export WANDB_MODE=disabled
# Launch learner and actor (one shell)
python -m lerobot.rl.learner --config_path "train_cfg.json"
python -m lerobot.rl.actor --config_path "train_cfg.json"
```
|
https://github.com/huggingface/lerobot/issues/2311
|
open
|
[
"question",
"dataset"
] | 2025-10-25T05:07:48Z
| 2025-10-27T08:50:11Z
| null |
talregev
|
vllm-project/vllm
| 27,505
|
[Bug]: Value error, Found conflicts between 'rope_type=default' (modern field) and 'type=mrope'
|
### Your current environment
<details>
<summary>The output of <code>python collect_env.py</code></summary>
```text
Your output of `python collect_env.py` here
```
</details>
### 🐛 Describe the bug
vllm 0.11.0
transformers 5.0.0.dev0
torch 2.8.0+cu129
Base model: Qwen2.5-VL-7B-Instruct. How can I solve this problem?
<img width="1250" height="602" alt="Image" src="https://github.com/user-attachments/assets/c6b13dff-1d6a-4872-a959-f8076fff43e6" />
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/27505
|
open
|
[
"bug"
] | 2025-10-25T04:39:53Z
| 2025-10-26T07:33:27Z
| 1
|
asirgogogo
|
vllm-project/vllm
| 27,504
|
[Usage]: `add_vision_id` ignored for Qwen 2.5-VL-32B-Instruct
|
### Your current environment
```text
==============================
System Info
==============================
OS : Ubuntu 24.04.3 LTS (x86_64)
GCC version : (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version : Could not collect
CMake version : Could not collect
Libc version : glibc-2.39
==============================
PyTorch Info
==============================
PyTorch version : 2.8.0+cu128
Is debug build : False
CUDA used to build PyTorch : 12.8
ROCM used to build PyTorch : N/A
==============================
Python Environment
==============================
Python version : 3.12.3 (main, Aug 14 2025, 17:47:21) [GCC 13.3.0] (64-bit runtime)
Python platform : Linux-6.8.0-85-generic-x86_64-with-glibc2.39
==============================
CUDA / GPU Info
==============================
Is CUDA available : True
CUDA runtime version : 12.8.93
CUDA_MODULE_LOADING set to : LAZY
GPU models and configuration :
GPU 0: NVIDIA RTX A6000
GPU 1: NVIDIA RTX A6000
GPU 2: NVIDIA RTX A6000
GPU 3: NVIDIA RTX A6000
Nvidia driver version : 570.124.06
cuDNN version : Probably one of the following:
/usr/local/cuda-12.0/targets/x86_64-linux/lib/libcudnn.so.9.8.0
/usr/local/cuda-12.0/targets/x86_64-linux/lib/libcudnn_adv.so.9.8.0
/usr/local/cuda-12.0/targets/x86_64-linux/lib/libcudnn_cnn.so.9.8.0
/usr/local/cuda-12.0/targets/x86_64-linux/lib/libcudnn_engines_precompiled.so.9.8.0
/usr/local/cuda-12.0/targets/x86_64-linux/lib/libcudnn_engines_runtime_compiled.so.9.8.0
/usr/local/cuda-12.0/targets/x86_64-linux/lib/libcudnn_graph.so.9.8.0
/usr/local/cuda-12.0/targets/x86_64-linux/lib/libcudnn_heuristic.so.9.8.0
/usr/local/cuda-12.0/targets/x86_64-linux/lib/libcudnn_ops.so.9.8.0
/usr/local/cuda-12.8/targets/x86_64-linux/lib/libcudnn.so.9.8.0
/usr/local/cuda-12.8/targets/x86_64-linux/lib/libcudnn_adv.so.9.8.0
/usr/local/cuda-12.8/targets/x86_64-linux/lib/libcudnn_cnn.so.9.8.0
/usr/local/cuda-12.8/targets/x86_64-linux/lib/libcudnn_engines_precompiled.so.9.8.0
/usr/local/cuda-12.8/targets/x86_64-linux/lib/libcudnn_engines_runtime_compiled.so.9.8.0
/usr/local/cuda-12.8/targets/x86_64-linux/lib/libcudnn_graph.so.9.8.0
/usr/local/cuda-12.8/targets/x86_64-linux/lib/libcudnn_heuristic.so.9.8.0
/usr/local/cuda-12.8/targets/x86_64-linux/lib/libcudnn_ops.so.9.8.0
HIP runtime version : N/A
MIOpen runtime version : N/A
Is XNNPACK available : True
==============================
CPU Info
==============================
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 6346 CPU @ 3.10GHz
CPU family: 6
Model: 106
Thread(s) per core: 1
Core(s) per socket: 16
Socket(s): 2
Stepping: 6
CPU(s) scaling MHz: 23%
CPU max MHz: 3600.0000
CPU min MHz: 800.0000
BogoMIPS: 6200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect wbnoinvd dtherm ida arat pln pts vnmi avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid fsrm md_clear pconfig flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 1.5 MiB (32 instances)
L1i cache: 1 MiB (32 instances)
L2 cache: 40 MiB (32 instances)
L3 cache: 72 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-15
NUMA node1 CPU(s): 16-3
|
https://github.com/vllm-project/vllm/issues/27504
|
open
|
[
"usage"
] | 2025-10-25T03:42:44Z
| 2025-10-26T07:32:49Z
| 1
|
justachetan
|
pytorch/pytorch
| 166,219
|
Why are there so many warnings when building the C++ libtorch project? How to resolve it?
|
### 🐛 Describe the bug
When I compile the C++ libtorch project, there are many warnings. How can I resolve them? My configuration: Win11, MSVC, libtorch 2.8.0. My C++ code is as follows:
```cpp
#include <torch/torch.h>
#include <iostream>
int main() {
torch::Tensor tensor_zeros = torch::zeros({3, 3});
std::cout << "Zeros Tensor:\n" << tensor_zeros << "\n\n";
torch::Tensor tensor_ones;
if (torch::cuda::is_available()) {
tensor_ones = torch::ones({2, 2}, torch::kFloat).to(torch::kCUDA);
std::cout << "Ones Tensor on CUDA:\n" << tensor_ones << "\n\n";
} else {
tensor_ones = torch::ones({2, 2}, torch::kFloat);
std::cout << "CUDA not available. Ones Tensor on CPU:\n" << tensor_ones << "\n\n";
}
std::vector<float> data = {1.0, 2.0, 3.0, 4.0};
torch::Tensor tensor_from_vector = torch::from_blob(data.data(), {2, 2});
std::cout << "Tensor from vector:\n" << tensor_from_vector << "\n";
if (torch::cuda::is_available()) {
auto cpu_tensor = torch::rand({5, 5});
auto gpu_tensor = cpu_tensor.to(torch::kCUDA);
std::cout << "Tensor on GPU:\n" << gpu_tensor << "\n";
}
return 0;
}
```
The warnings during compilation are as follows:
```
[1/5] Copying DLL files to build directory
[2/5] Scanning E:\coding\cppcode\libtorchtest\test.cpp for CXX dependencies
[3/5] Generating CXX dyndep file CMakeFiles\test.dir\CXX.dd
[4/5] Building CXX object CMakeFiles\test.dir\test.cpp.obj
H:\software\Visual_Studio_022\VC\Tools\MSVC\14.43.34808\include\optional(82): warning C4267: 'initializing': conversion from 'size_t' to 'int', possible loss of data
H:\software\Visual_Studio_022\VC\Tools\MSVC\14.43.34808\include\optional(82): note: the template instantiation context (the oldest one) is
G:\software\libtorch280_cu126Release\include\ATen/core/function_schema.h(438): note: see reference to function template instantiation 'std::optional<int>::optional<const I,0>(_Ty2 &&) noexcept' being compiled
with
[
I=size_t,
_Ty2=size_t
]
G:\software\libtorch280_cu126Release\include\ATen/core/function_schema.h(438): note: see the first reference to 'std::optional<int>::optional' in 'c10::FunctionSchema::argumentIndexWithName'
H:\software\Visual_Studio_022\VC\Tools\MSVC\14.43.34808\include\optional(258): note: see reference to function template instantiation 'std::_Optional_construct_base<_Ty>::_Optional_construct_base<const unsigned __int64>(std::in_place_t,const unsigned __int64 &&)' being compiled
with
[
_Ty=int
]
E:\coding\cppcode\libtorchtest\test.cpp(32): note: see reference to function template instantiation 'std::_Optional_destruct_base<_Ty,true>::_Optional_destruct_base<const unsigned __int64>(std::in_place_t,const unsigned __int64 &&) noexcept' being compiled
with
[
_Ty=int
]
H:\software\Visual_Studio_022\VC\Tools\MSVC\14.43.34808\include\xutility(492): warning C4267: 'initializing': conversion from 'size_t' to '_Ty', possible loss of data
with
[
_Ty=unsigned int
]
H:\software\Visual_Studio_022\VC\Tools\MSVC\14.43.34808\include\xutility(492): note: the template instantiation context (the oldest one) is
G:\software\libtorch280_cu126Release\include\torch/csrc/dynamo/compiled_autograd.h(236): note: see reference to function template instantiation 'unsigned int &std::vector<std::_Vbase,std::allocator<std::_Vbase>>::emplace_back<const _Ty&>(const _Ty &)' being compiled
with
[
_Ty=size_t
]
G:\software\libtorch280_cu126Release\include\torch/csrc/dynamo/compiled_autograd.h(236): note: see the first reference to 'std::vector<std::_Vbase,std::allocator<std::_Vbase>>::emplace_back' in 'torch::dynamo::autograd::TensorArgs::lookup'
H:\software\Visual_Studio_022\VC\Tools\MSVC\14.43.34808\include\vector(909): note: see reference to function template instantiation '_Ty &std::vector<_Ty,std::allocator<_Ty>>::_Emplace_one_at_back<const unsigned __int64&>(const unsigned __int64 &)' being compiled
with
[
_Ty=std::_Vbase
]
H:\software\Visual_Studio_022\VC\Tools\MSVC\14.43.34808\include\vector(830): note: see reference to function template instantiation '_Ty &std::vector<_Ty,std::allocator<_Ty>>::_Emplace_back_with_unused_capacity<const unsigned __int64&>(const unsigned __int64 &)' being compiled
with
[
_Ty=std::_Vbase
]
H:\software\Visual_Studio_022\VC\Tools\MSVC\14.43.34808\include\vector(845): note: see reference to function template instantiation 'void std::_Construct_in_place<unsigned int,const _Ty&>(unsigned int &,const _Ty &) noexcept' being compiled
with
[
_Ty=size_t
]
H:\software\Visual_Studio_022\VC\Tools\MSVC\14.43.34808\include\xutility(502): note: see reference to function template instantiation '_Ty *std::construct_at<_Ty,const unsigned __int64&>(_Ty *const ,const unsigned __int64 &) noexcept(<expr>)' being compiled
with
[
_Ty=unsigned int
]
H:\software\Visual_Studio_022\VC\Tools\MSVC\14.43.34808\include\xutility(506): warning C4267: 'initializing': conversion from 'size_t' to 'unsigned int', possible loss of data
H:\software\Visual_Studio_022\VC\Tools\MSVC\14.43.34808\include\xutility(492): warning C4267: 'initializing': conversion from 'size_t' to '_Ty', possible loss of data
with
[
_Ty=int
]
H:\software\Visual_Studio_022\VC\Tools\MSVC\14.43.34808\include\xutility(492): note: the template instantiation context (the oldest
|
https://github.com/pytorch/pytorch/issues/166219
|
open
|
[
"module: windows",
"module: cpp-extensions",
"triaged"
] | 2025-10-25T03:09:34Z
| 2025-10-25T15:39:20Z
| null |
hyl20012
|
huggingface/lighteval
| 1,028
|
How to evaluate MMLU-Pro
|
Hi,
Thank you for the wonderful work!
I just want to ask how to perform the evaluation on MMLU-Pro, as I don't see any related code besides the README.
|
https://github.com/huggingface/lighteval/issues/1028
|
open
|
[] | 2025-10-24T20:03:10Z
| 2025-11-04T10:40:46Z
| null |
qhz991029
|
pytorch/pytorch
| 166,180
|
AOTI _register_aoti_cleanup line 47
|
### 🐛 Describe the bug
Hi,
Trying to run [this code](https://huggingface.co/spaces/zerogpu-aoti/wan2-2-fp8da-aoti-faster/tree/main) on Modal, I got an error message I absolutely don't know how to interpret.
### Error logs
```
File "<ta-01K8BA92H6RT7D4R3V6CBA2Q9T>:/usr/local/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 120, in decorate_context
File "<ta-01K8BA92H6RT7D4R3V6CBA2Q9T>:/usr/local/lib/python3.12/site-packages/diffusers/pipelines/wan/pipeline_wan_i2v.py", line 756, in __call__
File "<ta-01K8BA92H6RT7D4R3V6CBA2Q9T>:/usr/local/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1775, in _wrapped_call_impl
File "<ta-01K8BA92H6RT7D4R3V6CBA2Q9T>:/usr/local/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1786, in _call_impl
File "<ta-01K8BA92H6RT7D4R3V6CBA2Q9T>:/usr/local/lib/python3.12/site-packages/diffusers/models/transformers/transformer_wan.py", line 663, in forward
File "<ta-01K8BA92H6RT7D4R3V6CBA2Q9T>:/usr/local/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1775, in _wrapped_call_impl
File "<ta-01K8BA92H6RT7D4R3V6CBA2Q9T>:/usr/local/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1786, in _call_impl
File "<ta-01K8BA92H6RT7D4R3V6CBA2Q9T>:/usr/local/lib/python3.12/site-packages/spaces/zero/torch/aoti.py", line 77, in __call__
File "<ta-01K8BA92H6RT7D4R3V6CBA2Q9T>:/usr/local/lib/python3.12/contextlib.py", line 137, in __enter__
return next(self.gen)
^^^^^^^^^^^^^^^
File "<ta-01K8BA92H6RT7D4R3V6CBA2Q9T>:/usr/local/lib/python3.12/site-packages/spaces/zero/torch/aoti.py", line 47, in _register_aoti_cleanup
File "<ta-01K8BA92H6RT7D4R3V6CBA2Q9T>:/usr/local/lib/python3.12/pathlib.py", line 1056, in iterdir
for name in os.listdir(self):
^^^^^^^^^^^^^^^^^
FileNotFoundError: [Errno 2] No such file or directory: '/proc/2/map_files'
```
### Versions
PyTorch version: 2.9.0+cu128
Is debug build: False
CUDA used to build PyTorch: 12.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04.2) 11.4.0
Clang version: Could not collect
CMake version: version 3.27.6
Libc version: glibc-2.35
Python version: 3.12.1 | packaged by Anaconda, Inc. | (main, Jan 19 2024, 15:51:05) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.6.87.2-microsoft-standard-WSL2-x86_64-with-glibc2.35
Is CUDA available: True
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.8.4.1
[pip3] nvidia-cuda-cupti-cu12==12.8.90
[pip3] nvidia-cuda-nvrtc-cu12==12.8.93
[pip3] nvidia-cuda-runtime-cu12==12.8.90
[pip3] nvidia-cudnn-cu12==9.10.2.21
[pip3] nvidia-cufft-cu12==11.3.3.83
[pip3] nvidia-curand-cu12==10.3.9.90
[pip3] nvidia-cusolver-cu12==11.7.3.90
[pip3] nvidia-cusparse-cu12==12.5.8.93
[pip3] nvidia-cusparselt-cu12==0.7.1
[pip3] nvidia-nccl-cu12==2.27.5
[pip3] nvidia-nvjitlink-cu12==12.8.93
[pip3] nvidia-nvtx-cu12==12.8.90
[pip3] torch==2.9.0
[pip3] torchao==0.14.1
[pip3] torchaudio==2.9.0
[pip3] torchvision==0.24.0
[pip3] triton==3.5.0
[conda] numpy 1.26.4 pypi_0 pypi
cc @chauhang @penguinwu
|
https://github.com/pytorch/pytorch/issues/166180
|
closed
|
[
"oncall: pt2"
] | 2025-10-24T18:58:27Z
| 2025-10-28T09:20:17Z
| 2
|
christopher5106
|
huggingface/tokenizers
| 1,879
|
rust tokenizer
|
Hello.
Is there a Rust tokenizer, please? ChatGPT told me there used to be.
Best regards!
|
https://github.com/huggingface/tokenizers/issues/1879
|
open
|
[] | 2025-10-24T17:03:04Z
| 2025-10-24T22:03:31Z
| 2
|
gogo2464
|
pytorch/ao
| 3,243
|
TorchAO Missing 3.13T (free-threading) Wheels
|
The latest `0.14.1` CUDA builds do not produce wheels for `3.13t`, which is the `nogil` (free-threading) build of Python.
On Ubuntu 24.04 x86_64
```py
# pip install torchao==0.14.1 --index-url https://download.pytorch.org/whl/cu130 -U
Looking in indexes: https://download.pytorch.org/whl/cu130
ERROR: Could not find a version that satisfies the requirement torchao==0.14.1 (from versions: none)
ERROR: No matching distribution found for torchao==0.14.1
# python --version
Python 3.13.8
# python --version
Python 3.13.8
# pip show torch
Name: torch
Version: 2.9.0+cu130
Summary: Tensors and Dynamic neural networks in Python with strong GPU acceleration
Home-page: https://pytorch.org
Author:
Author-email: PyTorch Team <packages@pytorch.org>
License: BSD-3-Clause
Location: /root/vm313t/lib/python3.13t/site-packages
Requires: filelock, fsspec, jinja2, networkx, nvidia-cublas, nvidia-cuda-cupti, nvidia-cuda-nvrtc, nvidia-cuda-runtime, nvidia-cudnn-cu13, nvidia-cufft, nvidia-cufile, nvidia-curand, nvidia-cusolver, nvidia-cusparse, nvidia-cusparselt-cu13, nvidia-nccl-cu13, nvidia-nvjitlink, nvidia-nvshmem-cu13, nvidia-nvtx, setuptools, sympy, triton, typing-extensions
Required-by: accelerate, bitblas, causal_conv1d, flash_attn, GPTQModel, lm_eval, MemLord, peft, torchvision
```
Reported here
https://github.com/pytorch/ao/issues/2919#issuecomment-3443814140
And reproduced by another user here:
https://github.com/pytorch/ao/issues/2919#issuecomment-3444060877
|
https://github.com/pytorch/ao/issues/3243
|
open
|
[] | 2025-10-24T16:53:03Z
| 2025-10-30T19:30:57Z
| 1
|
Qubitium
|
vllm-project/vllm
| 27,482
|
[Bug]: `return_token_ids` missing tokens when using tool calls
|
### Your current environment
Testing with latest vLLM builds from main, as of Fri Oct 24th 2025 (when this bug was opened).
### 🐛 Describe the bug
The `return_token_ids` parameter that is supposed to return all generated token ids back to the client is missing quite a few tokens for Chat Completion streaming requests that result in tool calls being generated. Exactly how many and where they are missing in the request will depend on the tool call parser in use as well as the exact request format.
Here's a minimal reproducer.
First, run vLLM with a tool call parser and model. I use a Granite model for testing here, but it should be roughly the same for any model with a tool call parser.
```
vllm serve ibm-granite/granite-3.3-8b-instruct \
--enable-auto-tool-choice \
--tool-call-parser granite
```
Then, send a streaming tool call request to the server and check the response for missing tokens:
```python
from openai import OpenAI
client = OpenAI(base_url="http://localhost:8000/v1", api_key="fake")
tools = [
{
"type": "function",
"function": {
"name": "get_current_weather",
"description": "Get the current weather in a given location",
"parameters": {
"type": "object",
"properties": {
"location": { "type": "string", "description": "The city, e.g. San Francisco, CA" },
"unit": { "type": "string", "enum": ["celsius", "fahrenheit"] }
},
"required": ["location"]
}
}
},
]
response = client.chat.completions.create(
model="ibm-granite/granite-3.3-8b-instruct",
messages=[{"role": "user", "content": "What is the weather in Sydney in celsius?"}],
tools=tools,
tool_choice="auto",
stream=True,
stream_options={
"include_usage": True,
"continuous_usage_stats": True,
},
extra_body={"return_token_ids": True},
)
returned_token_ids = []
last_completion_tokens = 0
for event in response:
if not getattr(event, "choices", None):
continue
choice = event.choices[0]
usage = event.usage
if hasattr(choice, "token_ids"):
returned_token_ids.extend(choice.token_ids)
num_token_ids = len(choice.token_ids)
else:
num_token_ids = 0
elapsed_completion_tokens = usage.completion_tokens - last_completion_tokens
if elapsed_completion_tokens != num_token_ids:
raise ValueError(
"Model generated more tokens than returned by return_token_ids!\n"
f"All tokens returned so far: {returned_token_ids}"
)
last_completion_tokens = usage.completion_tokens
```
Running that, I get the following output:
```
python return_token_ids_test.py
Traceback (most recent call last):
File "/Volumes/SourceCode/vllm/return_token_ids_test.py", line 49, in <module>
raise ValueError(
ValueError: Model generated more tokens than returned by return_token_ids!
All tokens returned so far: [49154, 48685]
```
If I add a bit of debug logging into vLLM server side and run it again, I can see the list of tokens that should have been returned:
`current_token_ids: [49154, 7739, 8299, 563, 3447, 2645, 563, 313, 16716, 6161, 910, 392, 313, 2243, 563, 313, 3308, 101, 3263, 3918, 313, 426, 563, 313, 371, 81, 1700, 81, 15859, 48685]`
All of the tokens between the first and last in that list were missed by `return_token_ids`.
This code is not executed for every generated token when tool call parser (or reasoning parsers, most likely) are in use: https://github.com/vllm-project/vllm/blob/61089465a6101790635ed96c26df3e9a57d8d2c9/vllm/entrypoints/openai/serving_chat.py#L1090
The reason is because we return early at: https://github.com/vllm-project/vllm/blob/61089465a6101790635ed96c26df3e9a57d8d2c9/vllm/entrypoints/openai/serving_chat.py#L1063
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/27482
|
closed
|
[
"bug"
] | 2025-10-24T16:10:31Z
| 2025-12-04T19:09:41Z
| 2
|
bbrowning
|
vllm-project/vllm
| 27,479
|
[Bug]: Low GPU utilization with Embedding Model
|
### Your current environment
<details>
<summary>The output of <code>python collect_env.py</code></summary>
```text
Your output of `python collect_env.py` here
```
</details>
### 🐛 Describe the bug
Initializing LLM(model="Qwen/Qwen3-Embedding-0.6B", task="embed") on a single B200 (180 GB) immediately reserves ~80% GPU memory (likely PagedAttention KV block pre-allocation). During embedding, GPU-Util stays <40%, whereas a naive Transformers inference with batch_size=512 reaches >80% utilization and memory use on the same box.
Is heavy KV Cache pre-allocation expected for task="embed" (prefill-only)? And is there any method to improve the GPU-Util?
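For what it's worth, a minimal sketch of the knobs that usually matter here (values are illustrative, not tuned recommendations): the big up-front reservation comes from `gpu_memory_utilization`, which defaults to ~0.9 of GPU memory regardless of task, and prefill-only throughput is mostly bounded by how many sequences and tokens are batched per engine step.
```python
from vllm import LLM

llm = LLM(
    model="Qwen/Qwen3-Embedding-0.6B",
    task="embed",
    gpu_memory_utilization=0.3,    # reserve less memory up front
    max_num_seqs=512,              # allow larger in-flight batches
    max_num_batched_tokens=32768,  # more prefill tokens per engine step
)
outputs = llm.embed(["an example sentence to embed"])
```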
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/27479
|
open
|
[
"bug"
] | 2025-10-24T15:18:05Z
| 2025-10-24T15:25:38Z
| 1
|
JhaceLam
|
vllm-project/vllm
| 27,477
|
[Bug]: First prompt token missing when requested with "echo"
|
### Your current environment
vllm installed from main:
`vllm 0.11.1rc3.dev23+g61089465a.precompiled`
### 🐛 Describe the bug
Is it expected behavior that echo isn't returning the first token of the prompt?
I am trying to collect the exact prompt_token_ids that went into the model served with `vllm serve`, so I am doing this:
```bash
VLLM_LOGGING_LEVEL=DEBUG vllm serve openai/gpt-oss-20b -tp 1 --enforce-eager --return-tokens-as-token-ids --enable-log-requests --enable-prompt-tokens-details
```
and with this snippet:
```python
from openai import OpenAI
client = OpenAI(
api_key="EMPTY",
base_url="http://localhost:8000/v1"
)
messages = [
{"role": "user", "content": "Continue: The quick brown fox"},
]
response = client.chat.completions.create(
model="openai/gpt-oss-20b",
messages=messages,
temperature=0.0,
max_tokens=1024,
logprobs=True,
extra_body={
"echo": True,
}
)
print(response.model_extra['prompt_logprobs'])
```
I am seeing `[None, 17360, 200008, ...]`, whereas the vllm server logs are printing this: `[200006, 17360, 200008, ...]`, which is correct, as the first token is and should be `200006` == `<|start|>`. Not sure why it is `None` in the ChatCompletion object.
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/27477
|
closed
|
[
"bug"
] | 2025-10-24T14:43:50Z
| 2025-10-24T15:04:01Z
| 2
|
eldarkurtic
|
huggingface/text-generation-inference
| 3,336
|
Get inference endpoint model settings via client
|
### Feature request
Enable commands via clients such as `OpenAI` that would get model settings from an inference endpoint.
Does this exist and I just can't find it?
### Motivation
There is currently no clear way to get inference model settings directly from an endpoint. Individual base models have their original settings, but this does not necessarily translate to an endpoint. As an example, [Microsoft's Phi-3 model](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) supports 128k context length as input, but if instantiated as an endpoint on a 24GB gpu the allowed input context length is less (48k).
The only way I have found to access the information regarding an individual endpoint is via `huggingface_hub`, specifically:
```
from huggingface_hub import get_inference_endpoint
endpoint = get_inference_endpoint(ENDPOINT_NAME, namespace=USERNAME, token=api_key)
```
To get the general settings, you can then access the `raw` dict of the endpoint's image. For example, if I want to get the context length of a specific model at an endpoint, I can do it this way:
```
# the settings/specs of the endpoint in a 'llamacpp' image
settings = endpoint.raw['model']['image']['llamacpp']
# this allows me to get info like context length (via the ['ctxSize']) key
>>> print(settings['ctxSize'])
48000
```
This is problematic when sending prompts to an endpoint - if it were easier to query model properties programmatically, then I could write code to adjust queries on the fly appropriately depending on the target model. As it is, the sender needs to know the properties of a particular endpoint beforehand. IMO what is needed is to be able to get this info directly from a client.
In the OpenAI client in the Huggingface Inference API there seems to be some functionality for this, i.e. I can instantiate a client:
```
client = OpenAI(
base_url=endpoint, # AWS/server URL
api_key=api_key, # huggingface token
)
```
Then I can get a list of models at that url:
```
print(client.models.list())
```
But this only prints out basic information, which doesn't include such things as context length. Is there a way to get this info from the client that I'm just missing? I have noticed when there are errors related to input length, the client returns an error with the key `n_ctx`. For example, if a model I'm working with has a 12k context window and I send 13k tokens, the error is:
```
openai.BadRequestError: Error code: 400 - {'error': {'code': 400, 'message': 'the request exceeds the available context size, try increasing it', 'type': 'exceed_context_size_error', 'n_prompt_tokens': 13954, 'n_ctx': 12032}}
```
This tells me that the client has access to the overall settings, but it's not clear to me how to get them.
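Lacking a first-class API, one stopgap sketch (an assumption, not an endorsed pattern) is to read `n_ctx`/`n_prompt_tokens` back out of the 400 error body shown above and adapt the request; the proportional trim below is a placeholder for real token-aware truncation.
```python
import openai

def complete_fitting_context(client, model, prompt):
    messages = [{"role": "user", "content": prompt}]
    try:
        return client.chat.completions.create(model=model, messages=messages)
    except openai.BadRequestError as e:
        err = e.response.json().get("error", {})
        n_ctx, n_prompt = err.get("n_ctx"), err.get("n_prompt_tokens")
        if not n_ctx or not n_prompt:
            raise
        # Placeholder heuristic: shrink the prompt proportionally and retry once.
        shorter = prompt[: int(len(prompt) * n_ctx / n_prompt)]
        return client.chat.completions.create(
            model=model, messages=[{"role": "user", "content": shorter}]
        )
```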
### Your contribution
Happy to work on this if someone can point me where to look for relevant code that would pass inference endpoint settings info to the client, perhaps via the `client.models.list()` method.
|
https://github.com/huggingface/text-generation-inference/issues/3336
|
closed
|
[] | 2025-10-24T13:07:15Z
| 2025-10-30T14:10:46Z
| 1
|
lingdoc
|
huggingface/datasets
| 7,829
|
Memory leak / Large memory usage with num_workers = 0 and numerous dataset within DatasetDict
|
### Describe the bug
Hi team, first off, I love the datasets library! 🥰
I'm encountering a potential memory leak / increasing memory usage when training a model on a very large DatasetDict.
Setup: I have a DatasetDict containing 362 distinct datasets, which sum up to ~2.8 billion rows.
Training Task: I'm performing contrastive learning with SentenceTransformer and Accelerate on a single node with 4 H100, which requires me to sample from only one dataset at a time.
Training Loop: At each training step, I sample ~16,000 examples from a single dataset, and then switch to a different dataset for the next step. I iterate through all 362 datasets this way.
Problem: The process's memory usage continuously increases over time, eventually causing a stalled state where the GPUs stop working. It seems memory from previously sampled datasets isn't being released. I've set num_workers=0 for all experiments.
Chart 1: Standard DatasetDict. The memory usage grows steadily until it makes the training stall (RSS memory). <img width="773" height="719" alt="Image" src="https://github.com/user-attachments/assets/6606bef5-1153-4f2d-bf08-82da249d6e8d" />
Chart 2: IterableDatasetDict. I also tried using IterableDatasetDict and IterableDataset. The memory curve is "smoother," but the result is the same: it grows indefinitely and the training stalls. <img width="339" height="705" alt="Image" src="https://github.com/user-attachments/assets/ee90c1a1-6c3b-4135-9edc-90955cb1695a" />
Any feedback or guidance on how to manage this memory would be greatly appreciated!
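Until the full reproducer is ready, a minimal sketch of the access pattern described above (the dataset name and sizes are placeholders, not the actual data):
```python
import random
from datasets import load_dataset

dsets = load_dataset("placeholder/multi-config-corpus")   # a DatasetDict of many splits

names = list(dsets.keys())
for step in range(1000):
    ds = dsets[names[step % len(names)]]                  # one dataset per step
    idx = random.sample(range(len(ds)), k=min(16_000, len(ds)))
    batch = ds.select(idx)                                # rows used for this step only
    # ... contrastive forward/backward on `batch` ...
    # memory held for `batch` should be reclaimable before the next step
```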
### Steps to reproduce the bug
WIP: I'll add some code that manages to reproduce this error, but it's not straightforward.
### Expected behavior
The memory usage should remain relatively constant or plateau after a few steps. Memory used for sampling one dataset should be released before or during the sampling of the next dataset.
### Environment info
Python: 3.12
Datasets: 4.3.0
SentenceTransformers: 5.1.1
|
https://github.com/huggingface/datasets/issues/7829
|
open
|
[] | 2025-10-24T09:51:38Z
| 2025-11-06T13:31:26Z
| 4
|
raphaelsty
|
huggingface/transformers
| 41,842
|
Incorrect usage of `num_items_in_batch`?
|
It seems that `num_items_in_batch` is computed for all items in the batch [here](https://github.com/huggingface/transformers/blob/9c20660138830ca362533551ca978c27b48283a1/src/transformers/trainer.py#L2430).
However, when loss is computed in the `training_step`, it is computed for each input in the batch one by one. Does it make sense to pass `num_items_in_batch` (for the whole batch) or should that number be for that particular input only?
Right now, the entire batch's `num_items_in_batch` is used [here](https://github.com/huggingface/transformers/blob/9c20660138830ca362533551ca978c27b48283a1/src/transformers/trainer.py#L2486).
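For concreteness, a tiny numeric sketch (made-up numbers) of why passing the whole-batch token count to each per-input loss call still reproduces the full-batch mean once the pieces are summed:
```python
import torch

per_token_losses = [torch.tensor([0.5, 1.0, 1.5]),   # input 1: 3 target tokens
                    torch.tensor([2.0, 4.0])]        # input 2: 2 target tokens
num_items_in_batch = sum(t.numel() for t in per_token_losses)   # 5

full_batch_mean = torch.cat(per_token_losses).mean()            # 1.8
accumulated = sum(t.sum() / num_items_in_batch for t in per_token_losses)   # also 1.8

assert torch.isclose(full_batch_mean, accumulated)
```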
|
https://github.com/huggingface/transformers/issues/41842
|
closed
|
[] | 2025-10-24T07:36:00Z
| 2025-12-01T08:02:48Z
| 2
|
gohar94
|
vllm-project/vllm
| 27,463
|
[Usage]: How to request DeepSeek-OCR with http request
|
### Your current environment
```text
The output of `python collect_env.py`
```
### How would you like to use vllm
I want to send requests to DeepSeek-OCR over HTTP. Is there any example for this?
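For reference, a minimal sketch of what such a request might look like, assuming the model is served with `vllm serve` and exposed through the OpenAI-compatible `/v1/chat/completions` endpoint (model name, port, prompt, and image file below are placeholders/assumptions):
```python
import base64
import requests

# Assumption: `vllm serve deepseek-ai/DeepSeek-OCR` is running on localhost:8000.
with open("page.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

payload = {
    "model": "deepseek-ai/DeepSeek-OCR",
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
                {"type": "text", "text": "Extract all text from this image."},
            ],
        }
    ],
}
resp = requests.post("http://localhost:8000/v1/chat/completions", json=payload)
print(resp.json()["choices"][0]["message"]["content"])
```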
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/27463
|
closed
|
[
"usage"
] | 2025-10-24T07:07:29Z
| 2025-10-29T17:26:49Z
| 8
|
YosanHo
|
huggingface/lerobot
| 2,306
|
how to use groot without flash attention
|
My system is Ubuntu 20.04 with glibc 2.31, which is not supported by flash attention. Can I modify the config of GR00T to use it with normal attention?
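If the underlying backbone is loaded through `transformers`, the usual way to avoid flash attention is to request a different attention implementation; whether the GR00T policy config exposes this directly is an assumption to verify:
```python
from transformers import AutoModel

# Hypothetical illustration: "eager" (or "sdpa") avoids the flash-attention kernels.
model = AutoModel.from_pretrained("some/backbone", attn_implementation="eager")
```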
|
https://github.com/huggingface/lerobot/issues/2306
|
open
|
[
"question",
"policies",
"dependencies"
] | 2025-10-24T06:35:18Z
| 2025-11-04T01:28:38Z
| null |
shs822
|
huggingface/lerobot
| 2,305
|
Dependency error with the `transformers` library
|
### System Info
```Shell
- lerobot version: 0.4.0
- Platform: Linux-6.14.0-29-generic-x86_64-with-glibc2.39
- Python version: 3.12.12
- Huggingface Hub version: 0.35.3
- Datasets version: 4.1.1
- Numpy version: 2.2.6
- PyTorch version: 2.7.0+cu128
- Is PyTorch built with CUDA support?: True
- Cuda version: 12.8
- GPU model: NVIDIA RTX PRO 6000 Blackwell Workstation Edition
- Using GPU in script?: <fill in>
```
### Information
- [ ] One of the scripts in the examples/ folder of LeRobot
- [ ] My own task or dataset (give details below)
### Reproduction
# Environment
I used the `uv` tools to auto-solve the environment. The `pyproject.toml` is shown as following.
```
[project]
name = "openpi-pytorch-env2"
version = "0.1.0"
description = "Add your description here"
requires-python = "==3.12.12"
dependencies = [
# PyTorch dependencies
"torch==2.7.0",
"torchvision==0.22.0",
"torchaudio==2.7.0",
"pytorch_lightning",
# lerobot-libero
"libero @ git+https://github.com/huggingface/lerobot-libero.git#egg=libero",
# lerobot
"lerobot[all] @ git+https://github.com/huggingface/lerobot.git@v0.4.0",
]
[tool.uv.sources]
torch = { index = "pytorch-cu128" }
torchvision = { index = "pytorch-cu128" }
torchaudio = { index = "pytorch-cu128" }
[[tool.uv.index]]
name = "pytorch-cu128"
url = "https://download.pytorch.org/whl/cu128"
explicit = true
```
# BUG Report
When I was running the `pi0` code
```
import os
import torch
from lerobot.policies.pi0.modeling_pi0 import PI0Policy
from transformers import AutoTokenizer
MODEL_PATH = os.path.expanduser("~/Models/pi0_base")
policy = PI0Policy.from_pretrained(MODEL_PATH)
```
There are errors like:
```
An incorrect transformer version is used, please create an issue on https://github.com/huggingface/lerobot/issues
ImportError: cannot import name 'check' from 'transformers.models.siglip' (/opt/miniforge3/envs/pi0_torch2/lib/python3.12/site-packages/transformers/models/siglip/__init__.py)
During handling of the above exception, another exception occurred:
File "/home/robot/pi0/openpi_pytorch2/test_simple.py", line 22, in <module>
policy = PI0Policy.from_pretrained(MODEL_PATH)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ValueError: An incorrect transformer version is used, please create an issue on https://github.com/huggingface/lerobot/issues
```
The transformers lib version is auto-resolved from the `pyproject.toml` of the `lerobot` lib. Can you help resolve this error? Thanks
### Expected behavior
Loading the weights successfully.
|
https://github.com/huggingface/lerobot/issues/2305
|
open
|
[
"question",
"policies",
"dependencies"
] | 2025-10-24T05:59:32Z
| 2025-11-14T16:01:49Z
| null |
sunshineharry
|
vllm-project/vllm
| 27,454
|
[Usage]: How do I set the expert IDs on each EP myself after enabling EP in DeepSeek (i.e., how to reorder experts)?
|
### Your current environment
```text
vllm 0.8.5
```
### How would you like to use vllm
I want to run inference of a [specific model](put link here). I don't know how to integrate it with vllm.
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/27454
|
open
|
[
"usage"
] | 2025-10-24T03:15:16Z
| 2025-10-24T07:27:50Z
| 2
|
HameWu
|
vllm-project/vllm
| 27,448
|
[Usage]: How to pass multi-turn multimodal messages to vLLM?
|
### Your current environment
```text
The output of `python collect_env.py`
```
### How would you like to use vllm
I want to run inference of a [specific model](put link here). I don't know how to integrate it with vllm.
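A minimal sketch of how a multi-turn, multimodal conversation is typically sent to an OpenAI-compatible vLLM server (the model name and image URLs below are placeholders):
```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

messages = [
    {"role": "user", "content": [
        {"type": "image_url", "image_url": {"url": "https://example.com/cat.jpg"}},
        {"type": "text", "text": "What animal is in this picture?"},
    ]},
    {"role": "assistant", "content": "It looks like a cat."},
    {"role": "user", "content": [
        {"type": "image_url", "image_url": {"url": "https://example.com/dog.jpg"}},
        {"type": "text", "text": "And this one? Compare it with the first image."},
    ]},
]

resp = client.chat.completions.create(
    model="Qwen/Qwen2.5-VL-7B-Instruct",  # placeholder multimodal model
    messages=messages,
)
print(resp.choices[0].message.content)
```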
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/27448
|
open
|
[
"usage"
] | 2025-10-24T02:41:45Z
| 2025-10-24T03:33:13Z
| 1
|
cqray1990
|
huggingface/lerobot
| 2,304
|
How to load a local model?
|
For example, I'm trying to fine-tune pi0, so I downloaded pi0_base locally and saved it in position A (like lerobot/models/pi0_base), which has 5 files in total, including model.safetensors.
Then how do I load it in code? I used to just set model.path=[position A], but following the tutorial, it uses pretrained_path_or_name as the keyword argument.
However, my code raised an error here:
```python
print(f"Loading model from: {pretrained_name_or_path}")
try:
from transformers.utils import cached_file
# Try safetensors first
resolved_file = cached_file(
pretrained_name_or_path,
"model.safetensors",
cache_dir=kwargs.get("cache_dir"),
force_download=kwargs.get("force_download", False),
resume_download=kwargs.get("resume_download"),
proxies=kwargs.get("proxies"),
use_auth_token=kwargs.get("use_auth_token"),
revision=kwargs.get("revision"),
# local_files_only=kwargs.get("local_files_only", False),
local_files_only=True # I set this for experiment but failed too
)
from safetensors.torch import load_file
original_state_dict = load_file(resolved_file)
print("✓ Loaded state dict from model.safetensors")
except Exception as e:
print(f"Could not load state dict from remote files: {e}")
print("Returning model without loading pretrained weights")
return model
```
Its output:
Loading model from: /home/user/working_folder/lerobot/local/model/pi0_base (I use this absolute path)
Could not load state dict from remote files: /home/user/working_folder/lerobot/local/model/pi0_base does not appear to have a file named model.safetensors. Checkout 'https://huggingface.co//home/user/working_folder/lerobot/local/model/pi0_base/tree/main' for available files.
It seems that the program sees my pretrained_name_or_path as a repo_id :/
How can I pass a local pretrained path instead?
* OK, I realize now that my files were incorrect. It's my mistake, not the code's.
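For reference, a minimal sketch of loading from a local directory (assuming the directory contains the full snapshot: config.json, model.safetensors, etc.; the path below is a placeholder):
```python
import os
from lerobot.policies.pi0.modeling_pi0 import PI0Policy

local_path = os.path.expanduser("~/working_folder/lerobot/local/model/pi0_base")
policy = PI0Policy.from_pretrained(local_path)
```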
|
https://github.com/huggingface/lerobot/issues/2304
|
closed
|
[] | 2025-10-24T01:59:26Z
| 2025-10-24T02:33:25Z
| null |
milong26
|
vllm-project/vllm
| 27,441
|
[Bug]: vllm/v1/core/sched/scheduler.py: Unintended reordering of requests during scheduling
|
### Your current environment
<details>
This error is independent of the environment.
</details>
### 🐛 Describe the bug
### Description
The function `schedule()` in [vllm/v1/core/sched/scheduler.py](https://github.com/vllm-project/vllm/blob/main/vllm/v1/core/sched/scheduler.py) is responsible for scheduling inference requests.
In certain cases — such as when a request is waiting for KV blocks from a remote prefill worker or when the token budget is exhausted — the request must be reinserted into the waiting queue `self.waiting`.
Currently, the implementation pops such requests, prepends them to skipped_waiting_requests, and then prepends skipped_waiting_requests back to self.waiting.
However, this behavior can shuffle the request order, potentially impacting the tail latency of request serving.
### How to Fix
Replace all calls to `skipped_waiting_requests.prepend_request(request)` with `skipped_waiting_requests.add_request(request)`
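A tiny standalone illustration (not vLLM code) of why appending skipped requests, rather than prepending them, preserves the original submission order:
```python
from collections import deque

waiting = deque(["req-0", "req-1", "req-2", "req-3"])
skipped = deque()

# req-0 and req-1 cannot be scheduled yet (e.g. waiting on remote KV blocks).
for _ in range(2):
    skipped.append(waiting.popleft())  # append keeps their relative order

# Put the skipped requests back at the front of the waiting queue.
waiting.extendleft(reversed(skipped))
print(list(waiting))  # ['req-0', 'req-1', 'req-2', 'req-3'] -- FCFS order preserved
```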
### Result
<img width="1445" height="801" alt="Image" src="https://github.com/user-attachments/assets/4e81a662-c527-4b15-a5d1-8e78150961e8" />
The figure compares the request-serving timelines of the original (left) and fixed (right) versions.
* X-axis: Time
* Y-axis: Request ID (submission order)
* Green: Duration while the request is in `self.waiting`
* Black: Time between GPU memory allocation and completion of the request’s prefill computation
* Red: Time between the end of prefill computation and GPU memory release (while waiting for the remote decoder to read KV blocks)
The scheduling policy used is FCFS.
In the original version, requests are shuffled under resource pressure. After applying the fix, the request serving order remains consistent, as expected.
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/27441
|
open
|
[
"bug"
] | 2025-10-23T22:35:50Z
| 2025-11-22T04:20:35Z
| 1
|
dongha-yoon
|
pytorch/ao
| 3,232
|
nvfp4: why do we need to call weight.contiguous for Qwen3 during lm-eval?
|
TODO @andrewor14 add repro
|
https://github.com/pytorch/ao/issues/3232
|
open
|
[] | 2025-10-23T21:20:54Z
| 2025-10-28T22:36:03Z
| 1
|
vkuzo
|
huggingface/lerobot
| 2,303
|
Question: Does the follower arm have an API for scripting movement?
|
Hi, apologies if this has been answered before or if it's not the right place to ask. I've been using the SO-101 arms for imitation learning, but recently I've wanted to try and test out the follower arm for embodied reasoning models such as Gemini ER 1.5. To do this, I figure I would need to have some way to map outputs from the ER model (coordinates or general, high-level movements) to movements for the SO-101. Does the SO-101 have an API for this type of low-level movement control, e.g. if I just wanted to move it along a pre-scripted path using coordinates or motor commands? What would the code for this type of low-level movement look like?
Thank you so much for any and all help!
|
https://github.com/huggingface/lerobot/issues/2303
|
open
|
[
"question",
"robots",
"python"
] | 2025-10-23T20:40:56Z
| 2025-10-23T22:29:28Z
| null |
Buttmunky1
|
huggingface/lerobot
| 2,294
|
Question about the HuggingFaceVLA/smolvla_libero Model Configuration
|
Hello,
Lerobot has officially ported [LIBERO](https://github.com/huggingface/lerobot/issues/1369#issuecomment-3323183721), and we can use the checkpoint at [HuggingFaceVLA/smolvla_libero](https://huggingface.co/HuggingFaceVLA/smolvla_libero) to evaluate the LIBERO benchmark.
However, the model configuration of [HuggingFaceVLA/smolvla_libero](https://huggingface.co/HuggingFaceVLA/smolvla_libero) appears to differ from the [original model](https://huggingface.co/lerobot/smolvla_base). For example:
[lerobot/smolvla_base](https://huggingface.co/lerobot/smolvla_base/blob/main/config.json)
```json
{
"vlm_model_name": "HuggingFaceTB/SmolVLM2-500M-Video-Instruct",
"load_vlm_weights": true,
"add_image_special_tokens": false,
"attention_mode": "cross_attn",
"prefix_length": 0,
"pad_language_to": "max_length",
"num_expert_layers": 0,
"num_vlm_layers": 16,
"self_attn_every_n_layers": 2,
"expert_width_multiplier": 0.75
}
```
[HuggingFaceVLA/smolvla_libero](https://huggingface.co/HuggingFaceVLA/smolvla_libero/blob/main/config.json)
```json
{
"vlm_model_name": "HuggingFaceTB/SmolVLM2-500M-Instruct",
"load_vlm_weights": true,
"add_image_special_tokens": false,
"attention_mode": "cross_attn",
"prefix_length": 0,
"pad_language_to": "longest",
"num_expert_layers": -1,
"num_vlm_layers": 0, <- it becomes 32 when model is initialized
"self_attn_every_n_layers": 2,
"expert_width_multiplier": 0.5,
}
```
In particular, `num_vlm_layers` is set to 32 across all layers, which is not consistent with the [paper](https://arxiv.org/pdf/2506.01844) where they use half of them (16 layers).
Could you provide the original model checkpoint and the training recipe so we can reproduce the LIBERO benchmark performance?
|
https://github.com/huggingface/lerobot/issues/2294
|
open
|
[
"question",
"policies"
] | 2025-10-23T13:37:48Z
| 2025-10-30T07:49:17Z
| null |
Hesh0629
|
vllm-project/vllm
| 27,413
|
[Usage]: How to request a Qwen2.5-VL-7B classification model served by vLLM using the OpenAI SDK?
|
### Your current environment
```text
The output of `python collect_env.py`
```
### How would you like to use vllm
I launch a server with the following command to serve a Qwen2.5-VL-7B model fine-tuned for sequence classification (this model replaces the lm_head with a 2-class score head).
The launch command is:
```
vllm serve --model=//video_classification/qwenvl_7b_video_cls/v5-20251011-121851/2340_vllm_format --served_model_name Qwen2.5-7B-shenhe --task=classify --port=8080 --tensor-parallel-size=2
```
I don't know how to query the server with the OpenAI SDK.
I use the code snippet shown below, which works well with pure text, but I get a 400 Bad Request when I put the video URL into the prompt.
This works well:
```
# SPDX-License-Identifier: Apache-2.0
# SPDX-FileCopyrightText: Copyright contributors to the vLLM project
"""Example Python client for classification API using vLLM API server
NOTE:
start a supported classification model server with `vllm serve`, e.g.
vllm serve jason9693/Qwen2.5-1.5B-apeach
"""
import argparse
import pprint
import requests
def post_http_request(payload: dict, api_url: str) -> requests.Response:
headers = {"User-Agent": "Test Client"}
response = requests.post(api_url, headers=headers, json=payload)
return response
def parse_args():
parse = argparse.ArgumentParser()
parse.add_argument("--host", type=str, default="localhost")
parse.add_argument("--port", type=int, default=8000)
parse.add_argument("--model", type=str, default="jason9693/Qwen2.5-1.5B-apeach")
return parse.parse_args()
def main(args):
host = args.host
port = args.port
model_name = args.model
api_url = f"http://{host}:{port}/classify"
prompts = [
"Hello, my name is",
"The president of the United States is",
"The capital of France is",
"The future of AI is",
]
payload = {
"model": model_name,
"input": prompts,
}
classify_response = post_http_request(payload=payload, api_url=api_url)
pprint.pprint(classify_response.json())
if __name__ == "__main__":
args = parse_args()
main(args)
```
But if I replace the prompts with multimodal data, the server doesn't work:
```
video_url = "https://js-ad.a.yximgs.com/bs2/ad_nieuwland-material/t2i2v/videos/3525031242883943515-140276939618048_24597237897733_v0_1759927515165406_3.mp4"
prompts = [
{"role": "user", "content": [
{"type": "text", "text": "你是一个专业的视频质量分析师,请你仔细判断下方提供的视频是否存在质量问题\n质量问题包括但不限于:\n1.画面质量差,画面模糊,亮度闪烁\n2.画面中文字存在模糊问题\n3.视频画面不符合真实物理逻辑,例如凭空产生的人物肢体、头像、手指手臂数量不对,腿部不自然等问题\n4.画面运动不符合物理规律,例如凭空产生的物体,画面卡顿、晃动、抖动、跳动等\n\n如果视频存在问题请返回0,如果视频不存在问题请返回1。\n## 视频内容如下\n"},
{"type": "video", "video": f"{video_url}"},
]
}
]
```
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/27413
|
open
|
[
"good first issue",
"usage"
] | 2025-10-23T12:32:25Z
| 2025-10-25T00:18:54Z
| 12
|
muziyongshixin
|
huggingface/transformers.js
| 1,447
|
How to use half precision ONNX models?
|
### Question
Hi,
I just exported a detection model with fp16 using optimum.
`--dtype fp16 `
This is my pipeline:
```javascript
import { AutoModel, AutoProcessor, RawImage } from "@huggingface/transformers"; // assumed import
import fs from "node:fs/promises"; // assumed import

const model = await AutoModel.from_pretrained(
  "./onnx_llama",
  { dtype: "fp16", device: "cpu" },
);
const processor = await AutoProcessor.from_pretrained("./onnx_llama");

const buffer = await fs.readFile("image3.jpg");
const blob = new Blob([buffer]);
const image = await RawImage.fromBlob(blob);
const { pixel_values, reshaped_input_sizes } = await processor(image);

// pixel_values is float32 here, which triggers the dtype error below
const { output0 } = await model({ pixel_values });
```
Using this results in:
An error occurred during model execution: "Error: Unexpected input data type. Actual: (tensor(float)) , expected: (tensor(float16))".
Which makes sense. However, when I try to convert to fp16 "manually":
```javascript
const fp16data = Float16Array.from(pixel_values.data); //float32ArrayToUint16Array(pixel_values.data);
const tensor = new Tensor("float16", fp16data, pixel_values.dims);
const { output0 } = await model({ pixel_values:tensor });
```
I get:
`Tensor.data must be a typed array (4) for float16 tensors, but got typed array (0).`
What's going on here? I tried converting `pixel_values.data` to a Uint16Array manually, but that has no effect as it gets converted to a Float16Array in the tensor constructor anyway.
Help is much appreciated!
Thanks
|
https://github.com/huggingface/transformers.js/issues/1447
|
open
|
[
"question"
] | 2025-10-23T09:18:26Z
| 2025-10-23T09:18:26Z
| null |
richarddd
|
huggingface/transformers
| 41,810
|
How do you use t5gemma decoder with a different encoder?
|
I am trying to combine the t5gemma decoder with a pretrained deberta encoder that I have trained from scratch using `EncoderDecoderModel`.
Here is the code:
```
model_1 = "WikiQuality/pre_filtered.am"
model_2 = "google/t5gemma-2b-2b-ul2"
encoder = AutoModel.from_pretrained(model_1)
decoder = AutoModel.from_pretrained(model_2, dtype=torch.bfloat16)
model = EncoderDecoderModel(encoder=encoder, decoder=decoder)
```
The above code raises the error:
```
AttributeError: 'T5GemmaConfig' object has no attribute 'hidden_size'
```
From this I understand that `hidden_size` is accessible from `decoder.config.decoder.hidden_size` and not `decoder.config.hidden_size`, which is where EncoderDecoderModel is looking. So I changed my code for loading the encoder-decoder model to this:
```
model = EncoderDecoderModel(encoder=encoder, decoder=decoder.decoder)
```
This gives me the following error:
```
ValueError: Unrecognized model identifier: t5_gemma_module. Should contain one of aimv2, aimv2_vision_model, albert, align, altclip, apertus, arcee, aria, aria_text, audio-spectrogram-transformer, autoformer, aya_vision, bamba, bark, bart, beit, bert, bert-generation, big_bird, bigbird_pegasus, biogpt, bit, bitnet, blenderbot, blenderbot-small, blip, blip-2, blip_2_qformer, bloom, blt, bridgetower, bros, camembert, canine, chameleon, chinese_clip, chinese_clip_vision_model, clap, clip, clip_text_model, clip_vision_model, clipseg, clvp, code_llama, codegen, cohere, cohere2, cohere2_vision, colpali, colqwen2, conditional_detr, convbert, convnext, convnextv2, cpmant, csm, ctrl, cvt, d_fine, dab-detr, dac, data2vec-audio, data2vec-text, data2vec-vision, dbrx, deberta, deberta-v2, decision_transformer, deepseek_v2, deepseek_v3, deepseek_vl, deepseek_vl_hybrid, deformable_detr, deit, depth_anything, depth_pro, deta, detr, dia, diffllama, dinat, dinov2, dinov2_with_registers, dinov3_convnext, dinov3_vit, distilbert, doge, donut-swin, dots1, dpr, dpt, edgetam, edgetam_video, edgetam_vision_model, efficientformer, efficientloftr, efficientnet, electra, emu3, encodec, encoder-decoder, eomt, ernie, ernie4_5, ernie4_5_moe, ernie_m, esm, evolla, exaone4, falcon, falcon_h1, falcon_mamba, fastspeech2_conformer, fastspeech2_conformer_with_hifigan, flaubert, flava, flex_olmo, florence2, fnet, focalnet, fsmt, funnel, fuyu, gemma, gemma2, gemma3, gemma3_text, gemma3n, gemma3n_audio, gemma3n_text, gemma3n_vision, git, glm, glm4, glm4_moe, glm4v, glm4v_moe, glm4v_moe_text, glm4v_text, glpn, got_ocr2, gpt-sw3, gpt2, gpt_bigcode, gpt_neo, gpt_neox, gpt_neox_japanese, gpt_oss, gptj, gptsan-japanese, granite, granite_speech, granitemoe, granitemoehybrid, granitemoeshared, granitevision, graphormer, grounding-dino, groupvit, helium, hgnet_v2, hiera, hubert, hunyuan_v1_dense, hunyuan_v1_moe, ibert, idefics, idefics2, idefics3, idefics3_vision, ijepa, imagegpt, informer, instructblip, instructblipvideo, internvl, internvl_vision, jamba, janus, jetmoe, jukebox, kosmos-2, kosmos-2.5, kyutai_speech_to_text, layoutlm, layoutlmv2, layoutlmv3, led, levit, lfm2, lfm2_vl, lightglue, lilt, llama, llama4, llama4_text, llava, llava_next, llava_next_video, llava_onevision, longcat_flash, longformer, longt5, luke, lxmert, m2m_100, mamba, mamba2, marian, markuplm, mask2former, maskformer, maskformer-swin, mbart, mctct, mega, megatron-bert, metaclip_2, mgp-str, mimi, minimax, ministral, mistral, mistral3, mixtral, mlcd, mllama, mm-grounding-dino, mobilebert, mobilenet_v1, mobilenet_v2, mobilevit, mobilevitv2, modernbert, modernbert-decoder, moonshine, moshi, mpnet, mpt, mra, mt5, musicgen, musicgen_melody, mvp, nat, nemotron, nezha, nllb-moe, nougat, nystromformer, olmo, olmo2, olmo3, olmoe, omdet-turbo, oneformer, open-llama, openai-gpt, opt, ovis2, owlv2, owlvit, paligemma, parakeet, parakeet_ctc, parakeet_encoder, patchtsmixer, patchtst, pegasus, pegasus_x, perceiver, perception_encoder, perception_lm, persimmon, phi, phi3, phi4_multimodal, phimoe, pix2struct, pixtral, plbart, poolformer, pop2piano, prompt_depth_anything, prophetnet, pvt, pvt_v2, qdqbert, qwen2, qwen2_5_omni, qwen2_5_vl, qwen2_5_vl_text, qwen2_audio, qwen2_audio_encoder, qwen2_moe, qwen2_vl, qwen2_vl_text, qwen3, qwen3_moe, qwen3_next, qwen3_omni_moe, qwen3_vl, qwen3_vl_moe, qwen3_vl_moe_text, qwen3_vl_text, rag, realm, recurrent_gemma, reformer, regnet, rembert, resnet, retribert, roberta, roberta-prelayernorm, roc_bert, roformer, rt_detr, rt_detr_resnet, 
rt_detr_v2, rwkv, sam, sam2, sam2_hiera_det_model, sam2_video, sam2_vision_model, sam_hq, sam_hq_vision_model, sam_vision_model, seamless_m4t, seamless_m4t_v2, seed_oss, segformer, seggpt, sew, sew-d, shieldgemma2, siglip, siglip2, siglip2_vision_model, siglip_vision_model, smollm3, smolvlm, smolvlm_vision, speech-encoder-decoder, speech_to_text, speech_to_text_2, speecht5, splinter, squeezebert, stablelm, starcoder2, superglue, superpoint, swiftformer, swin, swin2sr, swinv2, switch_transformers, t5, t5gemma, table-transformer, tapas, textnet, tim
|
https://github.com/huggingface/transformers/issues/41810
|
closed
|
[] | 2025-10-23T08:48:19Z
| 2025-12-01T08:02:53Z
| 1
|
kushaltatariya
|
pytorch/pytorch
| 166,116
|
[CCA] CUDACachingAllocator always releases the physical memory handle when the expandable segment unmaps.
|
This may not be a bug. I'm just confused about the CUDACachingAllocator behavior.
When enable expandable segments, CCA uses the CUDA virtual memory API.([cuMemCreate](https://docs.nvidia.com/cuda/cuda-driver-api/group__CUDA__VA.html#group__CUDA__VA_1g899d69a862bba36449789c64b430dc7c)/[cuMemRelease](https://docs.nvidia.com/cuda/cuda-driver-api/group__CUDA__VA.html#group__CUDA__VA_1g3014f0759f43a8d82db951b8e4b91d68), etc.).
I've noticed that CCA will call `cuMemRelease` any time it unmaps a physical memory handle as shown here: https://github.com/pytorch/pytorch/blob/bf5aa9e42eb4049aad56264dacefd638233924b5/c10/cuda/CUDACachingAllocator.cpp#L704
And when map virtual address to physical memory, it will re-create the physical memory handle : https://github.com/pytorch/pytorch/blob/bf5aa9e42eb4049aad56264dacefd638233924b5/c10/cuda/CUDACachingAllocator.cpp#L441
My question is: does the physical memory handle really need to be released every time we unmap? Can we reuse the handle for the next mapping? I think there might be some performance gain from reusing these handles.
|
https://github.com/pytorch/pytorch/issues/166116
|
open
|
[
"triaged",
"module: CUDACachingAllocator"
] | 2025-10-23T07:30:24Z
| 2025-10-29T02:57:00Z
| 3
|
PHLens
|
huggingface/accelerate
| 3,818
|
Duplicate W&B initialization in offline mode
|
### System Info
```Shell
- `Accelerate` version: 1.10.1
```
### Information
- [x] The official example scripts
- [x] My own modified scripts
### Tasks
- [x] One of the scripts in the examples/ folder of Accelerate or an officially supported `no_trainer` script in the `examples` folder of the `transformers` repo (such as `run_no_trainer_glue.py`)
- [ ] My own task or dataset (give details below)
### Reproduction
When using Accelerate with `wandb` in **offline mode**, two separate W&B runs are created for a single training process.
This happens because both the `start` and the `store_init_configuration` methods of `WandBTracker` call `wandb.init()`, which leads to redundant initialization.
https://github.com/huggingface/accelerate/blob/a12beee389f6bd37cfae0aba233db03f375f7f80/src/accelerate/tracking.py#L318-L325
https://github.com/huggingface/accelerate/blob/a12beee389f6bd37cfae0aba233db03f375f7f80/src/accelerate/tracking.py#L343-L350
Is there any plan to fix this duplication?
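As a point of comparison, one way a tracker could avoid the second initialization is to guard on an existing run (a hypothetical sketch using the public wandb API, not the accelerate implementation):
```python
import wandb

# Hypothetical guard: only create a run if none is active yet.
if wandb.run is None:
    wandb.init(project="accelerate-offline", mode="offline")

# Store the configuration on the existing run instead of calling wandb.init() again.
wandb.config.update({"learning_rate": 1e-4}, allow_val_change=True)
```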
### Expected behavior
The wandb run should be initialized only once.
|
https://github.com/huggingface/accelerate/issues/3818
|
closed
|
[
"good first issue"
] | 2025-10-23T02:19:38Z
| 2025-12-16T13:10:48Z
| 3
|
ShuyUSTC
|
pytorch/pytorch
| 166,106
|
[Feature][BUG] need support for DispatchKey.AutocastXPU
|
### 🚀 The feature, motivation and pitch
Detailed information is in this [issue](https://github.com/intel/intel-xpu-backend-for-triton/issues/5366#issuecomment-3433362148).
I get an error when I use torch.compile + autocast + triton:
```
File "D:\miniconda3\envs\compile\Lib\site-packages\torch\_ops.py", line 493, in dispatch
raise NotImplementedError(
torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised:
NotImplementedError: could not find kernel for HigherOrderOperator triton_kernel_wrapper_mutation at dispatch key DispatchKey.AutocastXPU (resolved from DispatchKey.AutocastXPU)
```
i found DispatchKey.AutocastCPU and DispatchKey.AutocastCUDA in [https://github.com/pytorch/pytorch/blame/c746feb86a1459db5f6294730d1d72ed15f16dd3/torch/_higher_order_ops/triton_kernel_wrap.py#L1364](https://github.com/pytorch/pytorch/blame/c746feb86a1459db5f6294730d1d72ed15f16dd3/torch/_higher_order_ops/triton_kernel_wrap.py#L1364)
but no DispatchKey.AutocastXPU.
So I think it's not a bug; rather, PyTorch needs to support this feature. Does PyTorch have a plan for this?
### Alternatives
_No response_
### Additional context
_No response_
cc @gujinghui @EikanWang @fengyuan14 @guangyey
|
https://github.com/pytorch/pytorch/issues/166106
|
open
|
[
"triaged",
"module: xpu"
] | 2025-10-23T01:58:34Z
| 2025-10-23T14:47:11Z
| 1
|
xiaohoua
|
pytorch/vision
| 9,249
|
Non-local versions of torch are only available for linux(/mac) aarch64
|
When checking https://download.pytorch.org/whl/torchvision/ for e.g. 0.24.0 on Python 3.12, the following list of wheels is available for non-local (no `+`) versions:
```
torchvision-0.24.0-cp312-cp312-macosx_11_0_arm64.whl
torchvision-0.24.0-cp312-cp312-manylinux_2_28_aarch64.whl
torchvision-0.24.0-cp312-cp312-manylinux_2_28_aarch64.whl
torchvision-0.24.0-cp312-cp312-manylinux_2_28_aarch64.whl
torchvision-0.24.0-cp312-cp312-manylinux_2_28_aarch64.whl
torchvision-0.24.0-cp312-cp312-manylinux_2_28_aarch64.whl
```
This caused resolution problems for uv users on x86_64 linux (https://github.com/astral-sh/uv/issues/16386), I'm not sure if that's intentional? It also seems that the `manylinux_2_28_aarch64` wheels are duplicated.
Another user reported a different problem with https://download.pytorch.org/whl/nightly/cu128 in the uv discord (https://discord.com/channels/1039017663004942429/1039017663512449056/1430596302764249100):
```
Resolved 176 packages in 22ms
error: Distribution `torchvision==0.25.0.dev20251012 @ registry+https://download.pytorch.org/whl/nightly/cu128` can't be installed because it doesn't have a source distribution or wheel for the current platform
hint: You're on Linux (`manylinux_2_35_x86_64`), but `torchvision` (v0.25.0.dev20251012) only has wheels for the following platform: `manylinux_2_28_aarch64`; consider adding your platform to `tool.uv.required-environments` to ensure uv resolves to a version with compatible wheels
```
I'm not sure if this is a bug or intentional, I wanted to discuss how we can improve the user experience either on the torch side or on the uv side.
|
https://github.com/pytorch/vision/issues/9249
|
closed
|
[] | 2025-10-22T17:04:55Z
| 2025-12-15T19:09:29Z
| 3
|
konstin
|
vllm-project/vllm
| 27,347
|
[Usage]: vllm: error: unrecognized arguments: --all2all-backend deepep_low_latency
|
### Your current environment
```text
Collecting environment information...
==============================
System Info
==============================
OS : Ubuntu 24.04.2 LTS (x86_64)
GCC version : (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version : Could not collect
CMake version : version 3.31.6
Libc version : glibc-2.39
==============================
PyTorch Info
==============================
PyTorch version : 2.8.0+cu128
Is debug build : False
CUDA used to build PyTorch : 12.8
ROCM used to build PyTorch : N/A
==============================
Python Environment
==============================
Python version : 3.10.0 (default, Mar 3 2022, 09:58:08) [GCC 7.5.0] (64-bit runtime)
Python platform : Linux-5.15.0-89-generic-x86_64-with-glibc2.39
==============================
CUDA / GPU Info
==============================
Is CUDA available : True
CUDA runtime version : 12.8.93
CUDA_MODULE_LOADING set to : LAZY
GPU models and configuration :
GPU 0: NVIDIA H200
GPU 1: NVIDIA H200
GPU 2: NVIDIA H200
GPU 3: NVIDIA H200
GPU 4: NVIDIA H200
GPU 5: NVIDIA H200
GPU 6: NVIDIA H200
GPU 7: NVIDIA H200
Nvidia driver version : 570.133.20
cuDNN version : Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.8.0
HIP runtime version : N/A
MIOpen runtime version : N/A
Is XNNPACK available : True
==============================
CPU Info
==============================
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 192
On-line CPU(s) list: 0
Off-line CPU(s) list: 1-191
Vendor ID: GenuineIntel
Model name: INTEL(R) XEON(R) PLATINUM 8558
CPU family: 6
Model: 207
Thread(s) per core: 2
Core(s) per socket: 48
Socket(s): 2
Stepping: 2
CPU(s) scaling MHz: 76%
CPU max MHz: 4000.0000
CPU min MHz: 800.0000
BogoMIPS: 4200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 4.5 MiB (96 instances)
L1i cache: 3 MiB (96 instances)
L2 cache: 192 MiB (96 instances)
L3 cache: 520 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-47,96-143
NUMA node1 CPU(s): 48-95,144-191
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mit
|
https://github.com/vllm-project/vllm/issues/27347
|
closed
|
[
"usage"
] | 2025-10-22T14:36:18Z
| 2025-10-22T15:07:13Z
| 1
|
Valerianding
|
vllm-project/vllm
| 27,343
|
[Usage]: Can't get result from /pooling api when using Qwen2.5-Math-PRM-7B online
|
### Your current environment
```
The output of `python collect_env.py`
Collecting environment information... [140/1781]
==============================
System Info
==============================
OS : Ubuntu 22.04.5 LTS (x86_64)
GCC version : (Ubuntu 11.4.0-1ubuntu1~22.04.2) 11.4.0
Clang version : Could not collect
CMake version : version 3.22.1
Libc version : glibc-2.35
==============================
PyTorch Info
==============================
PyTorch version : 2.8.0+cu128
Is debug build : False
CUDA used to build PyTorch : 12.8
ROCM used to build PyTorch : N/A
==============================
Python Environment
==============================
Python version : 3.12.12 | packaged by Anaconda, Inc. | (main, Oct 21 2025, 20:16:04) [GCC 11.2.0] (64-bit runti
me)
Python platform : Linux-5.15.0-153-generic-x86_64-with-glibc2.35
==============================
CUDA / GPU Info
==============================
Is CUDA available : True
CUDA runtime version : 12.4.99
CUDA_MODULE_LOADING set to : LAZY
GPU models and configuration :
GPU 0: NVIDIA A100 80GB PCIe
GPU 1: NVIDIA A100 80GB PCIe
GPU 2: NVIDIA A800 80GB PCIe
GPU 3: NVIDIA A800 80GB PCIe
GPU 4: NVIDIA A100 80GB PCIe
GPU 5: NVIDIA A100 80GB PCIe
GPU 6: NVIDIA A800 80GB PCIe
GPU 7: NVIDIA A800 80GB PCIe
Nvidia driver version : 550.54.15
cuDNN version : Could not collect
HIP runtime version : N/A
MIOpen runtime version : N/A
Is XNNPACK available : True
==============================
CPU Info
==============================
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virt
|
https://github.com/vllm-project/vllm/issues/27343
|
closed
|
[
"usage"
] | 2025-10-22T13:36:51Z
| 2025-10-23T03:39:13Z
| 3
|
zgc6668
|
pytorch/ao
| 3,226
|
Question about blockwise-quantized fp8 training
|
Hi, the [blockwise_fp8_training](https://github.com/pytorch/ao/tree/7e68d5ee6fe6749a667edd2510d5fd2b599a27e2/torchao/prototype/blockwise_fp8_training) prototype has been there for a while. Is there any reason we don't merge it into the [float8](https://github.com/pytorch/ao/tree/main/torchao/float8) folder?
Also, current MoE training only supports `FP8_ROWWISE` and `MXFP8`. Will `FP8_BlockWise` be considered for addition to `torchao` in the near future? (mainly for H100 users)
thanks!
|
https://github.com/pytorch/ao/issues/3226
|
open
|
[
"float8",
"moe"
] | 2025-10-22T13:18:40Z
| 2025-10-24T04:00:47Z
| 3
|
rakkit
|
huggingface/transformers.js
| 1,446
|
Zhare-AI/sd-1-5-webgpu on HuggingFace.co lists itself as Transformers.js supported?
|
### Question
[Zhare-AI/sd-1-5-webgpu](https://huggingface.co/Zhare-AI/sd-1-5-webgpu) is a `text-to-image` model and is marked as Transformers.js compatible, and even shows demo code using Transformers.js on its `huggingface.co` page. Their example code fails with an error saying `text-to-image` is not supported in Transformers.js.
The problem is `text-to-image` is not supported in 3.7.6 and does not appear to even be supported in the v4 branch. I asked them on their `huggingface.co` discussions what version of Transformers.js their model is compatible with but no reply yet. Apparently someone else asked them the same thing 18 days ago and never got a reply.
I am very interested in adding a Transformers.js demo for `text-to-image` to my Blazor WASM library [SpawnDev.BlazorJS.TransformersJS](https://github.com/LostBeard/SpawnDev.BlazorJS.TransformersJS), but I'm not sure what I am missing.
|
https://github.com/huggingface/transformers.js/issues/1446
|
closed
|
[
"question"
] | 2025-10-22T12:20:16Z
| 2025-10-24T14:33:17Z
| null |
LostBeard
|
vllm-project/vllm
| 27,336
|
[Feature]: Make prompt_token_ids optional in streaming responses (disabled by default)
|
### 🚀 The feature, motivation and pitch
Starting with v0.10.2, the first server-sent event (SSE) in streaming responses now includes the full list of `prompt_token_ids`.
While this can be useful for debugging or detailed inspection, it introduces several practical issues in production environments:
1. Large payload size:
For long prompts, this significantly increases the size of the first streaming event. This can increase latency, cause network throttling, and reduce streaming responsiveness.
2. Parser and infrastructure limitations:
Some clients and intermediate parsers have message size limits. The larger first event may cause them to fail or disconnect, requiring changes across multiple components in existing systems that previously handled smaller initial events.
3. Breaking change in behavior:
Previously, streaming responses did not include prompt token IDs, so this change affects compatibility with existing clients expecting smaller events.
### Suggested Fix
Make the inclusion of prompt_token_ids optional per request and disabled by default (same as `return_token_ids`), restoring the previous behavior.
### Alternatives
Alternatively, provide an API flag or configuration option to exclude `prompt_token_ids` globally for the entire server, so that no streaming response include this field.
### Additional context
For example, the first streaming response for a prompt of ~130k tokens can now exceed 600KB, while some parsers and scanners have default buffer sizes of 64KB (which was previously sufficient).
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/27336
|
closed
|
[
"feature request"
] | 2025-10-22T11:42:41Z
| 2025-10-27T11:06:45Z
| 1
|
Gruner-atero
|
huggingface/transformers
| 41,775
|
Hugging Face website and models not reachable
|
### System Info
```
$ pip show transformers
Name: transformers
Version: 4.57.1
Summary: State-of-the-art Machine Learning for JAX, PyTorch and TensorFlow
Home-page: https://github.com/huggingface/transformers
Author: The Hugging Face team (past and future) with the help of all our contributors (https://github.com/huggingface/transformers/graphs/contributors)
Author-email: transformers@huggingface.co
```
```
$ python --version
Python 3.12.3
```
### Who can help?
_No response_
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. `python -c 'from transformers import pipeline; pipeline = pipeline(task="text-generation", model="Qwen/Qwen2.5-1.5B")'`
I am getting connection issues:
```
OSError: We couldn't connect to 'https://huggingface.co' to load the files, and couldn't find them in the cached files.
Check your internet connection or see how to run the library in offline mode at 'https://huggingface.co/docs/transformers/installation#offline-mode'.
```
It's rather funny that it recommends checking https://huggingface.co/docs/transformers/installation#offline-mode when https://huggingface.co is not reachable :-) Maybe this information, e.g. about mirrors, could be hosted somewhere else?
### Expected behavior
The examples should work as documented.
|
https://github.com/huggingface/transformers/issues/41775
|
closed
|
[
"bug"
] | 2025-10-22T07:40:32Z
| 2025-11-21T08:10:00Z
| 8
|
christian-rauch
|
vllm-project/vllm
| 27,319
|
[Usage]: Quantized FusedMoE crashed in graph compiled stage
|
### Your current environment
```text
==============================
System Info
==============================
OS : Ubuntu 24.04.2 LTS (x86_64)
GCC version : (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version : 19.0.0git (https://github.com/RadeonOpenCompute/llvm-project roc-6.4.3 25224 d366fa84f3fdcbd4b10847ebd5db572ae12a34fb)
CMake version : version 3.31.6
Libc version : glibc-2.39
==============================
PyTorch Info
==============================
PyTorch version : 2.8.0+rocm6.4
Is debug build : False
CUDA used to build PyTorch : N/A
ROCM used to build PyTorch : 6.4.43482-0f2d60242
==============================
Python Environment
==============================
Python version : 3.12.11 | packaged by conda-forge | (main, Jun 4 2025, 14:45:31) [GCC 13.3.0] (64-bit runtime)
Python platform : Linux-6.8.0-79-generic-x86_64-with-glibc2.39
==============================
CUDA / GPU Info
==============================
Is CUDA available : True
CUDA runtime version : Could not collect
CUDA_MODULE_LOADING set to : LAZY
GPU models and configuration : AMD Radeon PRO W7900 Dual Slot (gfx1100)
Nvidia driver version : Could not collect
cuDNN version : Could not collect
HIP runtime version : 6.4.43482
MIOpen runtime version : 3.4.0
Is XNNPACK available : True
==============================
CPU Info
==============================
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 256
On-line CPU(s) list: 0-255
Vendor ID: AuthenticAMD
BIOS Vendor ID: Advanced Micro Devices, Inc.
Model name: AMD EPYC 9554 64-Core Processor
BIOS Model name: AMD EPYC 9554 64-Core Processor Unknown CPU @ 3.1GHz
BIOS CPU family: 107
CPU family: 25
Model: 17
Thread(s) per core: 2
Core(s) per socket: 64
Socket(s): 2
Stepping: 1
Frequency boost: enabled
CPU(s) scaling MHz: 51%
CPU max MHz: 3100.0000
CPU min MHz: 1500.0000
BogoMIPS: 6199.71
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good amd_lbr_v2 nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba perfmon_v2 ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local user_shstk avx512_bf16 clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin cppc amd_ibpb_ret arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif x2avic v_spec_ctrl vnmi avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq la57 rdpid overflow_recov succor smca fsrm flush_l1d debug_swap
Virtualization: AMD-V
L1d cache: 4 MiB (128 instances)
L1i cache: 4 MiB (128 instances)
L2 cache: 128 MiB (128 instances)
L3 cache: 512 MiB (16 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-63,128-191
NUMA node1 CPU(s): 64-127,192-255
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1:
|
https://github.com/vllm-project/vllm/issues/27319
|
closed
|
[
"rocm",
"usage"
] | 2025-10-22T06:29:32Z
| 2025-10-24T02:19:55Z
| 1
|
Rus-P
|
vllm-project/vllm
| 27,298
|
[Doc]: Update metrics documentation to remove V0 references and add v1 changes.
|
## Problem
The metrics documentation in `docs/design/metrics.md` still contains references to the V0 metrics implementation, but V0 metrics have been removed after @njhill's PR https://github.com/vllm-project/vllm/pull/27215 was merged. To avoid confusion, I think we should remove these references and update the doc with the new set of V1 metrics.
I was curious whether we want to keep the V0 reference and add the V1 details on top of it.
### Suggest a potential alternative/fix
1. Remove all V0 references from the metrics documentation.
2. Update the introduction to focus on V1 metrics only.
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/27298
|
closed
|
[
"documentation"
] | 2025-10-21T22:08:48Z
| 2025-10-22T13:29:17Z
| 1
|
atalhens
|
pytorch/pytorch
| 166,020
|
[doc] Clarify that torch.mean doesn't support integer dtypes like torch.long
|
### 📚 The doc issue
[doc] Clarify that torch.mean doesn't support integer dtypes like torch.long
**Page:** `torch.mean` documentation
**Problem:** The documentation for `torch.mean` doesn't explicitly mention that integer dtypes (like `torch.long`) are not supported and will raise a runtime error.
**Current behavior:** When users try:
```python
torch.mean(torch.tensor([1, 2, 3], dtype=torch.long))
```
They get the error: `RuntimeError: mean not implemented for 'Long'`
However, this limitation isn't mentioned in the current documentation, leading to confusion about whether this is a bug or intended behavior.
**Expected:** The documentation should clearly state that `torch.mean` requires floating-point input types and explain why integer types are not supported.
**Location:** This affects the `torch.mean` documentation page at https://pytorch.org/docs/stable/generated/torch.mean.html
### Suggest a potential alternative/fix
Add a note in the "Notes" section of `torch.mean` documentation:
"Note: `torch.mean` requires floating-point dtypes for input tensors. Integer dtypes (like `torch.long`, `torch.int`) are not supported because the mean operation typically results in floating-point values. If you need integer division, consider using `torch.div` with the `rounding_mode` parameter instead."
cc @svekars @sekyondaMeta @AlannaBurke
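For illustration, a short example of the current behavior and the usual workaround that could accompany the suggested note:
```python
import torch

x = torch.tensor([1, 2, 3], dtype=torch.long)

# torch.mean(x)              # RuntimeError: mean not implemented for 'Long'
print(x.float().mean())      # tensor(2.) -- cast to a floating dtype first
print(x.sum() // x.numel())  # tensor(2)  -- explicit integer division instead
```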
|
https://github.com/pytorch/pytorch/issues/166020
|
closed
|
[
"triaged"
] | 2025-10-21T19:27:50Z
| 2025-10-21T22:13:29Z
| 1
|
har5hdeep5harma
|
pytorch/pytorch
| 166,014
|
Make Inductor Fallback Nodes Less Reliant on Invariants from Functionalization / AOT Autograd
|
### 🐛 Describe the bug
Inductor has generic support for invoking operators as they would have been [in eager execution](https://github.com/pytorch/pytorch/blob/3dfd0c75847aad61a24e63d91bb330083db11857/torch/_inductor/graph.py#L1626-L1630). This path is hardened and works well both for custom ops and for bisecting a bad inductor lowering. However, it relies on invariants provided by AOT Autograd. If we want to compile without functionalization and decomposition, we may need to make it less reliant.
#### Problem 1: Aliasing Relationships
The aliasing relationships of the graph must be statically known and correct. Likely the easiest and best path forward is to make sure that we have runtime checking of the aliasing relationships of custom ops. See https://github.com/pytorch/pytorch/issues/165349. When things are incorrect, there are two failure modes:
**Incorrectly marked as aliasing**
An operator signature or meta may statically indicate that an input and output are aliasing when at execution time a new tensor will be returned. In this case, we will delay deleting the input until the output's final use, which can increase peak memory.
See: https://github.com/pytorch/pytorch/pull/163182#discussion_r2380201053 There's no reason why we can't delete the input eagerly here, since the view should keep the tensor alive.
**Incorrectly marked as non-aliasing**
The failure mode here is that we may reuse the buffer with `config.inplace_buffers = True`. See this discussion on an operator which was incorrectly marked: https://github.com/pytorch/pytorch/issues/165349
Both of these failure modes interact with the scheduler in a) [DCE (Dead Code Elimination)](https://github.com/pytorch/pytorch/blob/c40048472cc4e28f44e8e5835cae319add231bf5/torch/_inductor/scheduler.py#L2860) and b) [Weak dependency mutation ordering](https://github.com/pytorch/pytorch/blob/c40048472cc4e28f44e8e5835cae319add231bf5/torch/_inductor/scheduler.py#L1102-L1104)
#### Problem 2: Limited Mutation Support
Mutation has a limited form, mostly on inputs, and aliasing is limited with fallback nodes.
See related issue: https://github.com/pytorch/pytorch/issues/166009
### Versions
main
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov @coconutruben
|
https://github.com/pytorch/pytorch/issues/166014
|
open
|
[
"triaged",
"oncall: pt2",
"module: inductor"
] | 2025-10-21T18:59:09Z
| 2025-10-21T18:59:31Z
| 0
|
eellison
|
vllm-project/vllm
| 27,268
|
[Usage]: failed to infer device type on GCP COS despite nvidia container toolkit installed
|
### Your current environment
I failed to run this script on GCP COS.
### How would you like to use vllm
I was trying to use VLLM on a Google Cloud (GCP) Container-Optimized OS (COS) instance via Docker.
I followed GCP's [documentation](https://cloud.google.com/container-optimized-os/docs/how-to/run-gpus) to install the nvidia driver, including mapping nvidia driver-related dirs to the Docker container. All tests worked fine.
However, when trying to start a VLLM server via Docker, I got the error that `libcuda.so.1` cannot be found and VLLM failed to infer device info. I tried to change the target dirs in the mapping to like `/usr/local/lib`, `/usr/local/cuda/lib`, etc. But no luck.
I also tried adding the flags `--runtime nvidia --gpus all` per [this instruction](https://docs.vllm.ai/en/v0.8.4/deployment/docker.html) but got the error that `Error response from daemon: unknown or invalid runtime name: nvidia.`
If someone can shed light on where the official vLLM Docker image looks for CUDA libraries, it would be greatly appreciated. Thanks in advance.
The complete command and error:
```
$ docker run -v ~/.cache/huggingface:/root/.cache/huggingface --env "HUGGING_FACE_HUB_TOKEN=<secret>" -p 8010:8000 --ipc=host vllm/vllm-openai:latest --model mistralai/Mistral-7B-v0.1
INFO 10-21 08:13:18 [__init__.py:220] No platform detected, vLLM is running on UnspecifiedPlatform
WARNING 10-21 08:13:23 [_custom_ops.py:20] Failed to import from vllm._C with ImportError('libcuda.so.1: cannot open shared object file: No such file or directory')
Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 1949, in <module>
parser = make_arg_parser(parser)
^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/cli_args.py", line 263, in make_arg_parser
parser = AsyncEngineArgs.add_cli_args(parser)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/vllm/engine/arg_utils.py", line 1714, in add_cli_args
parser = EngineArgs.add_cli_args(parser)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/vllm/engine/arg_utils.py", line 919, in add_cli_args
vllm_kwargs = get_kwargs(VllmConfig)
^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/vllm/engine/arg_utils.py", line 281, in get_kwargs
return copy.deepcopy(_compute_kwargs(cls))
^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/vllm/engine/arg_utils.py", line 182, in _compute_kwargs
default = field.default_factory()
^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/pydantic/_internal/_dataclasses.py", line 123, in __init__
s.__pydantic_validator__.validate_python(ArgsKwargs(args, kwargs), self_instance=s)
File "/usr/local/lib/python3.12/dist-packages/vllm/config/device.py", line 58, in __post_init__
raise RuntimeError(
RuntimeError: Failed to infer device type, please set the environment variable `VLLM_LOGGING_LEVEL=DEBUG` to turn on verbose logging to help debug the issue.
```
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/27268
|
open
|
[
"usage"
] | 2025-10-21T15:24:21Z
| 2025-10-21T15:24:21Z
| 0
|
forrestbao
|
vllm-project/vllm
| 27,265
|
[Usage]: Cannot register custom model (Out-of-Tree Model Integration)
|
```
### Your current environment
==============================
Versions of relevant libraries
==============================
[pip3] flake8==7.1.1
[pip3] flashinfer==0.1.6+cu124torch2.4
[pip3] flashinfer-python==0.2.5
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-ml-py==12.560.30
[pip3] nvidia-modelopt==0.31.0
[pip3] nvidia-modelopt-core==0.31.0
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] pynvml==12.0.0
[pip3] pyzmq==26.2.0
[pip3] sentence-transformers==3.3.1
[pip3] torch==2.6.0
[pip3] torch_memory_saver==0.0.6
[pip3] torchao==0.9.0
[pip3] torchaudio==2.6.0
[pip3] torchdata==0.11.0
[pip3] torchprofile==0.0.4
[pip3] torchtext==0.18.0
[pip3] torchvision==0.21.0
[pip3] transformer_engine_torch==2.3.0
[pip3] transformers==4.51.1
[pip3] triton==3.2.0
[conda] flashinfer 0.1.6+cu124torch2.4 pypi_0 pypi
[conda] flashinfer-python 0.2.5 pypi_0 pypi
[conda] numpy 1.26.4 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-ml-py 12.560.30 pypi_0 pypi
[conda] nvidia-modelopt 0.31.0 pypi_0 pypi
[conda] nvidia-modelopt-core 0.31.0 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] pynvml 12.0.0 pypi_0 pypi
[conda] pyzmq 26.2.0 pypi_0 pypi
[conda] sentence-transformers 3.3.1 pypi_0 pypi
[conda] torch 2.6.0 pypi_0 pypi
[conda] torch-memory-saver 0.0.6 pypi_0 pypi
[conda] torchao 0.9.0 pypi_0 pypi
[conda] torchaudio 2.6.0 pypi_0 pypi
[conda] torchdata 0.11.0 pypi_0 pypi
[conda] torchprofile 0.0.4 pypi_0 pypi
[conda] torchtext 0.18.0 pypi_0 pypi
[conda] torchvision 0.21.0 pypi_0 pypi
[conda] transformer-engine-torch 2.3.0 pypi_0 pypi
[conda] transformers 4.51.1 pypi_0 pypi
[conda] triton 3.2.0 pypi_0 pypi
==============================
vLLM Info
==============================
ROCM Version : Could not collect
vLLM Version : 0.8.5.post1
```
# How would you like to use vllm
Hi, I'm trying to integrate a custom multi-modal model (Qwen2_5_VLForConditionalGeneration_Vilavt) using the out-of-tree plugin system, following the official documentation and the vllm_add_dummy_model example.
### The Issue:
The model loading behavior is inconsistent between single-GPU and multi-GPU (tensor parallel) modes:
- Single-GPU (CUDA_VISIBLE_DEVICES=0): Everything works perfectly. The engine initializes, and I can run inference.
- Multi-GPU (CUDA_VISIBLE_DEVICES=0,1,2,3): The engine fails to start. Although the logs from VllmWorker processes show that my custom model is successfully registered, the main EngineCore process throws a ValueError, complaining that the model cannot be found.
I've successfully created a package `vllm_vilavt`, installed it with `pip install -e .` , and my `setup.py` correctly points to a register() function in the entry_points.
My `setup.py`:
```
from setuptools import setup, find_packages
setup(
name="vllm_vilavt",
version="0.1",
packages=find_packages(),
entry_points={
"vllm.general_plugins":
["register_vilavt_model = vllm_
|
https://github.com/vllm-project/vllm/issues/27265
|
closed
|
[
"usage"
] | 2025-10-21T14:17:17Z
| 2025-10-25T13:19:40Z
| 1
|
Hyperwjf
|
vllm-project/vllm
| 27,263
|
[Responses API] Support tool calling and output token streaming
|
Splitting off from #14721
> FYI a start has been made here https://github.com/vllm-project/vllm/pull/20504
>
> That PR (which was merged to `main` on [7/9/2025](https://github.com/vllm-project/vllm/pull/20504#event-18495144925)) explicitly has unchecked boxes for
>
> * [ ] Tool/functional calling support
> * [ ] Output token streaming
>
> Any plans to implement those features? I think that is what is needed to support agentic coding tools like codex. See:
>
> * https://docs.vllm.ai/projects/recipes/en/latest/OpenAI/GPT-OSS.html#harmony-format-support
_Originally posted by @bartlettroscoe in [#14721](https://github.com/vllm-project/vllm/issues/14721#issuecomment-3321963360)_
|
https://github.com/vllm-project/vllm/issues/27263
|
open
|
[] | 2025-10-21T12:36:44Z
| 2025-12-07T01:06:46Z
| 4
|
markmc
|
pytorch/pytorch
| 165,985
|
Can I submit a Chinese version of the README file?
|
### 📚 The doc issue
Can I submit a Chinese version of the README file?
### Suggest a potential alternative/fix
_No response_
|
https://github.com/pytorch/pytorch/issues/165985
|
closed
|
[] | 2025-10-21T11:32:28Z
| 2025-10-27T23:04:37Z
| 1
|
wenlinchong17-web
|
vllm-project/vllm
| 27,252
|
[Usage]: Does the `@app.post("/generate")` API support qwen2_vl or not?
|
### Your current environment
I want to know whether the `@app.post("/generate")` API supports qwen2_vl or not.
### How would you like to use vllm
I want to run inference of a [specific model](put link here). I don't know how to integrate it with vllm.
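For context, a hedged sketch of how a multimodal model such as Qwen2-VL is more commonly queried, via the OpenAI-compatible `/v1/chat/completions` route rather than the bare `/generate` demo endpoint (the server URL, model name, and image link below are assumptions):
```python
# Hedged sketch: query a vLLM OpenAI-compatible server with an image for a
# Qwen2-VL style model. URL, model name, and image link are placeholders.
import requests

payload = {
    "model": "Qwen/Qwen2-VL-7B-Instruct",
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image."},
                {"type": "image_url", "image_url": {"url": "https://example.com/cat.jpg"}},
            ],
        }
    ],
    "max_tokens": 128,
}
resp = requests.post("http://localhost:8000/v1/chat/completions", json=payload, timeout=60)
print(resp.json()["choices"][0]["message"]["content"])
```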
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/27252
|
open
|
[
"usage"
] | 2025-10-21T07:30:11Z
| 2025-10-21T07:30:11Z
| 0
|
wwkww
|
huggingface/lerobot
| 2,269
|
how to configure pi0_base to train with single camera dataset
|
Hi,
I'm trying to train pi0_base with the "lerobot/aloha_sim_transfer_cube_human" dataset, which has only one camera input, "observation.images.top". However, pi0 seems to expect three camera inputs:
"observation.images.base_0_rgb",
"observation.images.left_wrist_0_rgb",
"observation.images.right_wrist_0_rgb"
"ValueError: All image features are missing from the batch. At least one expected. (batch: dict_keys(['action', 'next.reward', 'next.done', 'next.truncated', 'info', 'action_is_pad', 'task', 'index', 'task_index', 'observation.images.top', 'observation.state', 'observation.language.tokens', 'observation.language.attention_mask'])) (image_features: {'observation.images.base_0_rgb': PolicyFeature(type=<FeatureType.VISUAL: 'VISUAL'>, shape=(3, 224, 224)), 'observation.images.left_wrist_0_rgb': PolicyFeature(type=<FeatureType.VISUAL: 'VISUAL'>, shape=(3, 224, 224)), 'observation.images.right_wrist_0_rgb': PolicyFeature(type=<FeatureType.VISUAL: 'VISUAL'>, shape=(3, 224, 224))}) Exception in thread Thread-2 (_pin_memory_loop): Traceback (most recent call last): File "/root/.local/share/mamba/envs/lerobot/lib/python3.10/threading.py", line 1016, in _bootstrap_inner"
Is there a command-line argument I can use to set the single camera input to train with the pi0_base model?
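For what it's worth, one workaround (a hedged sketch, not a lerobot CLI flag) is to remap the single camera key onto one of the keys the pretrained pi0 config expects and pad the missing ones before the batch reaches the policy; where to hook this in depends on your lerobot version:
```python
# Hedged sketch, not a lerobot API: remap the dataset's single camera key onto
# one of the camera keys the pi0_base config expects, zero-padding the rest.
import torch

EXPECTED = [
    "observation.images.base_0_rgb",
    "observation.images.left_wrist_0_rgb",
    "observation.images.right_wrist_0_rgb",
]

def remap_single_camera(batch: dict, src_key: str = "observation.images.top") -> dict:
    img = batch[src_key]
    batch = dict(batch)
    batch[EXPECTED[0]] = img                   # treat the top camera as the base camera
    for key in EXPECTED[1:]:
        batch[key] = torch.zeros_like(img)     # pad the missing wrist cameras
    return batch
```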
|
https://github.com/huggingface/lerobot/issues/2269
|
open
|
[
"question",
"policies",
"dataset"
] | 2025-10-21T01:32:50Z
| 2025-10-21T17:36:17Z
| null |
dalishi
|
vllm-project/vllm
| 27,233
|
GGUF runs fine
|
### Your current environment
from vllm import LLM, SamplingParams
gguf_path = "/home/m/Desktop/vllm/vllm/examples/offline_inference/basic/Qwen3-1.7B-GGUF/Qwen3-1.7B-Q6_K.gguf"
llm = LLM(
gguf_path,
tokenizer="Qwen/Qwen3-1.7B"
)
params = SamplingParams(
temperature=0.8,
top_p=0.9,
top_k=40,
max_tokens=200,
)
outputs = llm.generate(["Who is Napoleon Bonaparte?"], params)
print(outputs[0].outputs[0].text)
### How would you like to use vllm
I want to run inference.
(venv) m@m-HP-Z440-Workstation:~/Desktop/vllm/vllm/examples/offline_inference/basic$
(venv) m@m-HP-Z440-Workstation:~/Desktop/vllm/vllm/examples/offline_inference/basic$ python3
Python 3.12.3 (main, Aug 14 2025, 17:47:21) [GCC 13.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from vllm import LLM, SamplingParams
INFO 10-21 03:05:39 [__init__.py:216] Automatically detected platform cuda.
>>>
>>> gguf_path = "/home/m/Desktop/vllm/vllm/examples/offline_inference/basic/Qwen3-1.7B-GGUF/Qwen3-1.7B-Q6_K.gguf"
>>>
>>> llm = LLM(
... gguf_path,
... tokenizer="Qwen/Qwen3-1.7B"
... )
INFO 10-21 03:05:41 [utils.py:233] non-default args: {'tokenizer': 'Qwen/Qwen3-1.7B', 'disable_log_stats': True, 'model': '/home/m/Desktop/vllm/vllm/examples/offline_inference/basic/Qwen3-1.7B-GGUF/Qwen3-1.7B-Q6_K.gguf'}
INFO 10-21 03:06:14 [model.py:547] Resolved architecture: Qwen3ForCausalLM
`torch_dtype` is deprecated! Use `dtype` instead!
ERROR 10-21 03:06:14 [config.py:278] Error retrieving safetensors: Repo id must be in the form 'repo_name' or 'namespace/repo_name': '/home/m/Desktop/vllm/vllm/examples/offline_inference/basic/Qwen3-1.7B-GGUF/Qwen3-1.7B-Q6_K.gguf'. Use `repo_type` argument if needed., retrying 1 of 2
ERROR 10-21 03:06:16 [config.py:276] Error retrieving safetensors: Repo id must be in the form 'repo_name' or 'namespace/repo_name': '/home/m/Desktop/vllm/vllm/examples/offline_inference/basic/Qwen3-1.7B-GGUF/Qwen3-1.7B-Q6_K.gguf'. Use `repo_type` argument if needed.
INFO 10-21 03:06:16 [model.py:1730] Downcasting torch.float32 to torch.bfloat16.
INFO 10-21 03:06:16 [model.py:1510] Using max model len 32768
INFO 10-21 03:06:16 [scheduler.py:205] Chunked prefill is enabled with max_num_batched_tokens=8192.
(EngineCore_DP0 pid=67528) INFO 10-21 03:06:41 [core.py:644] Waiting for init message from front-end.
(EngineCore_DP0 pid=67528) INFO 10-21 03:06:41 [core.py:77] Initializing a V1 LLM engine (v0.11.0) with config: model='/home/m/Desktop/vllm/vllm/examples/offline_inference/basic/Qwen3-1.7B-GGUF/Qwen3-1.7B-Q6_K.gguf', speculative_config=None, tokenizer='Qwen/Qwen3-1.7B', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=32768, download_dir=None, load_format=gguf, tensor_parallel_size=1, pipeline_parallel_size=1, data_parallel_size=1, disable_custom_all_reduce=False, quantization=gguf, enforce_eager=False, kv_cache_dtype=auto, device_config=cuda, structured_outputs_config=StructuredOutputsConfig(backend='auto', disable_fallback=False, disable_any_whitespace=False, disable_additional_properties=False, reasoning_parser=''), observability_config=ObservabilityConfig(show_hidden_metrics_for_version=None, otlp_traces_endpoint=None, collect_detailed_traces=None), seed=0, served_model_name=/home/m/Desktop/vllm/vllm/examples/offline_inference/basic/Qwen3-1.7B-GGUF/Qwen3-1.7B-Q6_K.gguf, enable_prefix_caching=True, chunked_prefill_enabled=True, pooler_config=None, compilation_config={"level":3,"debug_dump_path":"","cache_dir":"","backend":"","custom_ops":[],"splitting_ops":["vllm.unified_attention","vllm.unified_attention_with_output","vllm.mamba_mixer2","vllm.mamba_mixer","vllm.short_conv","vllm.linear_attention","vllm.plamo2_mamba_mixer","vllm.gdn_attention","vllm.sparse_attn_indexer"],"use_inductor":true,"compile_sizes":[],"inductor_compile_config":{"enable_auto_functionalized_v2":false},"inductor_passes":{},"cudagraph_mode":[2,1],"use_cudagraph":true,"cudagraph_num_of_warmups":1,"cudagraph_capture_sizes":[512,504,496,488,480,472,464,456,448,440,432,424,416,408,400,392,384,376,368,360,352,344,336,328,320,312,304,296,288,280,272,264,256,248,240,232,224,216,208,200,192,184,176,168,160,152,144,136,128,120,112,104,96,88,80,72,64,56,48,40,32,24,16,8,4,2,1],"cudagraph_copy_inputs":false,"full_cuda_graph":false,"use_inductor_graph_partition":false,"pass_config":{},"max_capture_size":512,"local_cache_dir":null}
[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
[Gloo] Rank 0 is connected to 0 peer ranks. Ex
|
https://github.com/vllm-project/vllm/issues/27233
|
open
|
[
"usage"
] | 2025-10-21T00:11:26Z
| 2025-10-22T00:44:10Z
| 12
|
kmnnmk212-source
|
pytorch/xla
| 9,684
|
RFC: Evolving PyTorch/XLA for a more native experience on TPU
|
### Motivation
For many years, `torch_xla` has been the primary way for the community to run PyTorch programs on Cloud TPUs. It has successfully enabled the training of massive models by bringing the power of the XLA compiler to the PyTorch ecosystem.
The current implementation, while powerful, presents a developer experience that can sometimes feel distinct from "native" PyTorch. The reliance on a lazy tensor model and explicit graph tracing (`xm.mark_step`) creates a separation from PyTorch's eager-first philosophy. This can introduce challenges in debugging, complicates integration with the broader PyTorch ecosystem, and requires users to learn a `torch_xla`-specific set of APIs and concepts.
We believe we can deliver a more seamless and native experience for PyTorch users on TPUs. The goal is to provide the best of both worlds: the interactive, flexible development experience of PyTorch's eager mode and the world-class performance of the XLA compiler for scaled-out workloads.
---
### Proposal: A Native TPU Backend
We propose a TPU backend for PyTorch that is designed to align with modern PyTorch architecture and eager-first design. The goal is to make a "native" device in PyTorch, where `tensor.to('tpu')` feels just as natural and intuitive as `tensor.to('cuda')`. This new direction aims to fully embrace PyTorch's eager mode while still leveraging the powerful XLA compiler for performance-critical code paths.
The core principles of this new stack are:
1. **XLA**: Similarly to `torch_xla`, our proposal assumes that we can continue to rely on XLA as the underlying compiler infrastructure. However, we would call it in a profoundly different way which enables new techniques and a better user experience. Note that on TPU, compilation is required for the best performance — but it should be possible to hide the compile times.
1. **Eager Mode with Deferred Execution**: Similar to standard PyTorch eager mode, ops are being dispatched. However, the new stack can then choose to compile and execute individual ops, shorter or longer sequences of ops, or potential candidates for fusion clusters—all the way up to a full compile of a forward or backward pass. <br />
Compilation would happen asynchronously, which means compilation of graphs and their execution could overlap, and compilation results would be cached. We would work with the XLA team to further reduce overall compile time overhead with techniques such as persistent deduping and by limiting inlining and unrolling. As a result, the compile time overhead would be drastically minimized even for larger incrementally compiled graphs.
1. **JIT**: This approach would enable a true just-in-time compilation engine with recompilation, feedback-directed optimizations, autotuning, and active memory management to avoid OOMs. With this, users would get the eager experience but with compiled performance after just a few inferences or training steps.
With these principles in mind, we could deliver on the following features:
1. **Eager Execution by Default**: As described above, operations will appear as being eagerly executed, just as they do on CPU or GPU, even though they are being compiled in the background with minimal, and mostly hidden, compile time overhead. This would provide a familiar, intuitive, and much easier-to-debug workflow where users can inspect tensors and use standard Python tooling.
1. **Integration with `torch.compile`**: For maximizing performance, TPU would integrate as a first-class backend for `torch.compile`. This would allow users to get the performance benefits of XLA compilation and TPUs at scale on their performance-critical code with a simple `@torch.compile` decorator.
1. **Distributed Training via DTensor**: The new backend would natively support PyTorch's distributed APIs. This would allow users to leverage advanced, large-scale distributed training strategies like Fully Sharded Data Parallel (FSDP) and other model parallelism techniques out of the box, making it much simpler to scale up models.
1. **A More "PyTorch Native" Feel**: The end goal is to abstract away the complexities of the underlying compiler. Developing for a TPU should not require a fundamentally different programming model. This would mean moving away from `torch_xla`-specific APIs and toward the standard PyTorch API surface. This approach would provide the best of both worlds: the interactive, flexible development experience of PyTorch's eager mode and the world-class performance of the XLA compiler for scaled-out workloads.
---
### We Want Your Feedback!
We're excited for this direction, and to bring together PyTorch's eager mode and the XLA compiler in a way that helps the community achieve new levels of performance and scale. This is a significant undertaking, and we want to build it with the community. We're open to feedback on this direction.
- Does this proposal address the pain points you've experienced with `torch_xla?`
- Are there specific work
|
https://github.com/pytorch/xla/issues/9684
|
open
|
[
"RFC"
] | 2025-10-20T22:12:20Z
| 2025-12-19T04:58:36Z
| 18
|
qcc4cp
|
vllm-project/vllm
| 27,228
|
[Installation]: Compatibility with PyTorch 2.9.0?
|
### Your current environment
```text
The output of `python collect_env.py`
```
### How you are installing vllm
Is there a version of vllm that is compatible with the latest PyTorch release 2.9.0?
```
pip install vllm==0.11.0
pip install torch==2.9.0
```
```
$ vllm bench latency --input-len 256 --output-len 256 --model Qwen3/Qwen3-8B --batch-size 1
terminate called after throwing an instance of 'std::bad_alloc'
what(): std::bad_alloc
Aborted (core dumped)
```
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/27228
|
closed
|
[
"installation"
] | 2025-10-20T21:10:24Z
| 2025-10-21T22:40:15Z
| 3
|
andrewor14
|
pytorch/pytorch
| 165,933
|
[Distributed] fully_shard: support no_shard (ddp) strategy?
|
### 🚀 The feature, motivation and pitch
It looks like the `fully_shard` API is recommended these days over `torch.distributed.FSDP`. The latter allows a `ShardingStrategy` argument to control the degree of sharding (i.e. zero1/2/3) - this is useful in some cases where we don't want to shard the params, only grads, or not shard anything at all, and just use FSDP for its CPU offload / mixed precision features.
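For concreteness, a minimal sketch of the legacy-wrapper pattern being described (assumes `torch.distributed` is already initialized; purely illustrative):
```python
# Sketch of the legacy FSDP wrapper with NO_SHARD: parameters stay replicated
# (DDP-like) while mixed precision / CPU offload remain available. Assumes an
# initialized process group and a CUDA device.
import torch
import torch.nn as nn
from torch.distributed.fsdp import (
    FullyShardedDataParallel as FSDP,
    ShardingStrategy,
    MixedPrecision,
    CPUOffload,
)

model = nn.Linear(1024, 1024).cuda()
wrapped = FSDP(
    model,
    sharding_strategy=ShardingStrategy.NO_SHARD,                 # no parameter sharding
    mixed_precision=MixedPrecision(param_dtype=torch.bfloat16),
    cpu_offload=CPUOffload(offload_params=False),
)
```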
Checking the `fully_shard` docs: https://docs.pytorch.org/docs/stable/distributed.fsdp.fully_shard.html, it appears to support zero-2/HSDP but not `NO_SHARD`. A couple of questions:
1) Are there any plans to add no_shard (DDP) support?
2) If not for (1), would `torch.distributed.FSDP` be supported and recommended for these use cases?
### Alternatives
_No response_
### Additional context
_No response_
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @pragupta @msaroufim @dcci
|
https://github.com/pytorch/pytorch/issues/165933
|
open
|
[
"oncall: distributed"
] | 2025-10-20T20:48:14Z
| 2025-10-22T14:44:13Z
| 0
|
rohan-varma
|
vllm-project/vllm
| 27,208
|
[Feature]: Upgrade CUDA version to 12.9.1 in docker images
|
### 🚀 The feature, motivation and pitch
The current builds display warning logs like these
```
Warning: please use at least NVCC 12.9 for the best DeepGEMM performance
```
Can we bump this version easily?
### Alternatives
_No response_
### Additional context
_No response_
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/27208
|
closed
|
[
"feature request"
] | 2025-10-20T16:08:49Z
| 2025-10-21T21:20:19Z
| 1
|
jhuntbach-bc
|
pytorch/pytorch
| 165,909
|
AWS was down, GHA infrastructure affected / recovering
|
> NOTE: Remember to label this issue with "`ci: sev`"
> If you want autorevert to be disabled, keep the ci: disable-autorevert label
<!-- Add the `merge blocking` label to this PR to prevent PRs from being merged while this issue is open -->
## Current Status
Mitigated, queues are recovering.
AWS experienced a big outage (https://health.aws.amazon.com/health/status) this morning resulting in most of our GHA infra going down with them.
We are still in the process of recovering and will update as soon as our services are able to recover.
## Error looks like
*Provide some way users can tell that this SEV is causing their issue.*
## Incident timeline (all times pacific)
*Include when the incident began, when it was detected, mitigated, root caused, and finally closed.*
## User impact
*How does this affect users of PyTorch CI?*
## Root cause
*What was the root cause of this issue?*
## Mitigation
*How did we mitigate the issue?*
## Prevention/followups
*How do we prevent issues like this in the future?*
|
https://github.com/pytorch/pytorch/issues/165909
|
closed
|
[
"ci: sev",
"ci: sev-mitigated"
] | 2025-10-20T15:28:48Z
| 2025-10-21T16:41:19Z
| 0
|
seemethere
|
pytorch/pytorch
| 165,907
|
Feedback on profiler key_averages documentation
|
### 📚 The doc issue
It would be great to have more documentation on how to use key_averages beyond the `table()` method. Right now there is no documentation for the `EventList` and `FunctionEventAvg` data types.
### Suggest a potential alternative/fix
Adding pages for EventList and FunctionEventAvg classes would be a good start, and it would be nice to have an easy way to create a dataframe from the results.
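In the meantime, a hedged sketch of one way to get a DataFrame out of `key_averages()` (attribute names are taken from `FunctionEventAvg` in recent releases and may vary by version):
```python
# Build a pandas DataFrame from profiler averages. Times are in microseconds.
import pandas as pd
import torch
from torch.profiler import profile, ProfilerActivity

with profile(activities=[ProfilerActivity.CPU]) as prof:
    torch.randn(1024, 1024) @ torch.randn(1024, 1024)

rows = []
for evt in prof.key_averages():          # EventList of FunctionEventAvg
    rows.append(
        {
            "name": evt.key,
            "count": evt.count,
            "cpu_time_total_us": evt.cpu_time_total,
            "self_cpu_time_total_us": evt.self_cpu_time_total,
        }
    )
df = pd.DataFrame(rows).sort_values("self_cpu_time_total_us", ascending=False)
print(df.head())
```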
cc @svekars @sekyondaMeta @AlannaBurke @robieta @chaekit @guotuofeng @guyang3532 @dzhulgakov @davidberard98 @briancoutinho @sraikund16 @sanrise
|
https://github.com/pytorch/pytorch/issues/165907
|
closed
|
[
"module: docs",
"actionable",
"oncall: profiler"
] | 2025-10-20T14:56:48Z
| 2025-11-14T02:03:22Z
| 0
|
alexracape
|