| repo | number | title | body | url | state | labels | created_at | updated_at | comments | user |
|---|---|---|---|---|---|---|---|---|---|---|
vllm-project/vllm
| 31,091
|
[Usage]: Image Embedding Models (CLIP, Siglip, etc)
|
### Your current environment
```text
root@3904bdeddb91:/vllm-workspace# python3 collect_env.py
Collecting environment information...
==============================
System Info
==============================
OS : Ubuntu 22.04.5 LTS (x86_64)
GCC version : (Ubuntu 11.4.0-1ubuntu1~22.04.2) 11.4.0
Clang version : Could not collect
CMake version : Could not collect
Libc version : glibc-2.35
==============================
PyTorch Info
==============================
PyTorch version : 2.9.0+cu129
Is debug build : False
CUDA used to build PyTorch : 12.9
ROCM used to build PyTorch : N/A
==============================
Python Environment
==============================
Python version : 3.12.12 (main, Oct 10 2025, 08:52:57) [GCC 11.4.0] (64-bit runtime)
Python platform : Linux-6.8.0-87-generic-x86_64-with-glibc2.35
==============================
CUDA / GPU Info
==============================
Is CUDA available : True
CUDA runtime version : 12.9.86
CUDA_MODULE_LOADING set to :
GPU models and configuration :
GPU 0: NVIDIA RTX PRO 6000 Blackwell Workstation Edition
GPU 1: NVIDIA RTX PRO 6000 Blackwell Max-Q Workstation Edition
Nvidia driver version : 580.65.06
cuDNN version : Could not collect
HIP runtime version : N/A
MIOpen runtime version : N/A
Is XNNPACK available : True
==============================
CPU Info
==============================
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 0-63
Vendor ID: AuthenticAMD
Model name: AMD EPYC 7502 32-Core Processor
CPU family: 23
Model: 49
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 1
Stepping: 0
Frequency boost: enabled
CPU max MHz: 2500.0000
CPU min MHz: 1500.0000
BogoMIPS: 4999.95
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sev sev_es ibpb_exit_to_user
Virtualization: AMD-V
L1d cache: 1 MiB (32 instances)
L1i cache: 1 MiB (32 instances)
L2 cache: 16 MiB (32 instances)
L3 cache: 128 MiB (8 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-63
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Vulnerability Vmscape: Mitigation; IBPB before exit to userspace
==============================
Versions of relevant libraries
==============================
[pip3] flashinfer-python==0.5.3
|
https://github.com/vllm-project/vllm/issues/31091
|
closed
|
[
"usage"
] | 2025-12-21T04:10:10Z
| 2025-12-23T03:26:40Z
| 2
|
JamesDConley
|
huggingface/lerobot
| 2,690
|
[Bug] Pi0 Inference RuntimeError: Dimension mismatch in Gemma eager_attention_forward (Causal Mask vs Attn Weights)
|
https://github.com/huggingface/lerobot/issues/2690
|
closed
|
[
"bug",
"question",
"policies",
"dataset",
"CI",
"performance",
"robots",
"examples",
"training"
] | 2025-12-20T16:08:36Z
| 2025-12-22T09:34:57Z
| null |
SMWTDDY
|
|
huggingface/lerobot
| 2,689
|
Problem updating the aloha sim dataset from version v2.1 to v3.0
|
### Ticket Type
Bug Report (Something isn't working)
### Environment & System Info
```Shell
lerobot version 3.0, h100 gpu, openpi repository, training aloha simulation with pi0.5
```
### Description
While training the aloha simulation, I converted the lerobot aloha sim insertion dataset from the v2.1-compatible format to v3.0. Since then, the training results show the aloha joints behaving strangely (sudden spikes in the joint actions).
The dataset conversion followed the command given in the error below.
```
lerobot.datasets.backward_compatibility.BackwardCompatibilityError:
The dataset you requested (lerobot/aloha_sim_insertion_scripted) is in 2.1 format.
We introduced a new format since v3.0 which is not backward compatible with v2.1.
Please, update your dataset to the new format using this command:
python -m lerobot.datasets.v30.convert_dataset_v21_to_v30 --repo-id=lerobot/aloha_sim_insertion_scripted
```
### Context & Reproduction
_No response_
### Relevant logs or stack trace
```Shell
```
### Checklist
- [ ] I have searched existing tickets to ensure this isn't a duplicate.
- [ ] I am using the latest version of the `main` branch.
- [ ] I have verified this is not an environment-specific problem.
### Additional Info / Workarounds
_No response_
|
https://github.com/huggingface/lerobot/issues/2689
|
open
|
[
"bug",
"question",
"dataset",
"simulation",
"CI",
"robots",
"training"
] | 2025-12-20T13:42:39Z
| 2025-12-24T00:06:09Z
| null |
conscious-choi
|
sgl-project/sglang
| 15,524
|
[Bug] Deepseek R1 multi-turn tool calling not working
|
### Checklist
- [x] I searched related issues but found no solution.
- [x] The bug persists in the latest version.
- [ ] Issues without environment info and a minimal reproducible demo are hard to resolve and may receive no feedback.
- [x] If this is not a bug report but a general question, please start a discussion at https://github.com/sgl-project/sglang/discussions. Otherwise, it will be closed.
- [x] Please use English. Otherwise, it will be closed.
### Describe the bug
The multi-turn tool calling failed with error: `{"object":"error","message":"'dict object' has no attribute 'name'","type":"BadRequest","param":null,"code":400}`
Here is the example query:
```
curl http://127.0.0.1:7080/v1/chat/completions -H "Content-Type: application/json" -d '{
"model": "deepseek-ai/DeepSeek-R1",
"stream": false,
"messages": [
{
"role": "user",
"content": "What is the weather like in San Francisco?"
},
{
"role": "assistant",
"content": "I will check the weather for San Francisco. Please hold on.",
"tool_calls": [
{
"id": "call_ab97cb439a5e41cfbdd8960c",
"type": "function",
"function": {
"name": "get_weather",
"arguments": "{\"location\": \"San Francisco, CA\"}"
}
}
]
},
{
"role": "tool",
"tool_call_id": "call_ab97cb439a5e41cfbdd8960c",
"content": "70 degrees and foggy"
}
],
"tools": [
{
"type": "function",
"function": {
"name": "get_weather",
"description": "Get the current weather",
"parameters": {
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "The city and state (both required), e.g. San Francisco, CA."
}
},
"required": [
"location"
]
}
}
}
]
}'
```
However, the same query worked with the image from back in August.
### Reproduction
*) Start server on B200
```
python3 -m sglang.launch_server \
--model-path nvidia/DeepSeek-R1-0528-NVFP4 \
--port 7080 \
--host 0.0.0.0 \
--tp-size=8 \
--ep-size=8 \
--moe-runner-backend=flashinfer_trtllm \
--enable-flashinfer-allreduce-fusion \
--tool-call-parser=deepseekv3 \
--chat-template=/sgl-workspace/sglang/examples/chat_template/tool_chat_template_deepseekr1.jinja \
--speculative-num-steps=3 \
--speculative-eagle-topk=1 \
--speculative-num-draft-tokens=4 \
--speculative-algorithm=EAGLE \
--trust-remote-code
```
*) send query
```
curl http://127.0.0.1:7080/v1/chat/completions -H "Content-Type: application/json" -d '{
"model": "deepseek-ai/DeepSeek-R1",
"stream": false,
"messages": [
{
"role": "user",
"content": "What is the weather like in San Francisco?"
},
{
"role": "assistant",
"content": "I will check the weather for San Francisco. Please hold on.",
"tool_calls": [
{
"id": "call_ab97cb439a5e41cfbdd8960c",
"type": "function",
"function": {
"name": "get_weather",
"arguments": "{\"location\": \"San Francisco, CA\"}"
}
}
]
},
{
"role": "tool",
"tool_call_id": "call_ab97cb439a5e41cfbdd8960c",
"content": "70 degrees and foggy"
}
],
"tools": [
{
"type": "function",
"function": {
"name": "get_weather",
"description": "Get the current weather",
"parameters": {
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "The city and state (both required), e.g. San Francisco, CA."
}
},
"required": [
"location"
]
}
}
}
]
}'
```
### Environment
```
Python: 3.12.3 (main, Nov 6 2025, 13:44:16) [GCC 13.3.0]
CUDA available: True
GPU 0,1,2,3,4,5,6,7: NVIDIA B200
GPU 0,1,2,3,4,5,6,7 Compute Capability: 10.0
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 12.9, V12.9.86
CUDA Driver Version: 580.95.05
PyTorch: 2.9.1+cu129
sglang: 0.5.6.post2
sgl_kernel: 0.3.19
flashinfer_python: 0.5.3
flashinfer_cubin: 0.5.3
flashinfer_jit_cache: Module Not Found
triton: 3.5.1
transformers: 4.57.1
torchao: 0.9.0
numpy: 2.3.5
aiohttp: 3.13.2
fastapi: 0.124.2
hf_transfer: 0.1.9
huggingface_hub: 0.36.0
interegular: 0.3.3
modelscope: 1.33.0
orjson: 3.11.5
outlines: 0.1.11
packaging: 25.0
psutil: 7.1.3
pydantic: 2.12.5
python-multipart: 0.0.20
pyzmq: 27.1.0
uvicorn: 0.38.0
uvloop: 0.22.1
vllm: Module Not Found
xgrammar: 0.1.27
openai: 2.6.1
tiktoken: 0.12.0
anthropic: 0.75.0
litellm: Module Not Found
decord2: 2.0.0
NVIDIA Topology:
GPU0 GPU1 GPU2 GPU3 GPU4 GPU5 GPU6 GPU7 CPU Affinity NUMA Affinity GPU NUMA ID
GPU0 X NV18 NV18 NV18 NV18 NV18 NV18 NV18 0-55,112-167 0 N/A
GPU1 NV18 X NV18 NV18 NV18 NV18 NV18 NV1
|
https://github.com/sgl-project/sglang/issues/15524
|
closed
|
[] | 2025-12-20T10:31:36Z
| 2025-12-21T01:29:43Z
| 2
|
ynwang007
|
vllm-project/vllm
| 31,066
|
[Doc]: Formatting issue in markdown file
|
### The doc issue
in [paged_attention.md](https://github.com/vllm-project/vllm/blob/ff2168bca3a195b835c64a5c9012d7b6a9f34e61/docs/design/paged_attention.md#query), there is an issue where pictures aren't rendered correctly and only the raw attribute markup is shown.
For example, in the Query subsection, we can see:
`{ align="center" alt="q_vecs" width="70%" }`
The asset isn't loaded correctly.
There are a total of **7 such issues**; specifically:
- Query subsection - 2 instances.
- Key subsection - 2 instances.
- Value subsection - 3 instances
### Suggest a potential alternative/fix
Perhaps the references for the images can be checked; they must be broken somewhere.
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/31066
|
closed
|
[
"documentation"
] | 2025-12-20T06:23:44Z
| 2025-12-22T01:38:56Z
| 1
|
ssaketh-ch
|
pytorch/pytorch
| 170,926
|
Could we have a unified method on c10::Stream to access the underlying pointer that the c10::Stream wraps?
|
As title.
As I understand it, the device-generic c10::Stream object is intended to wrap an underlying pointer to the stream object for the accelerator (e.g. `cudaStream_t` for CUDA, `hipStream_t` for ROCm, `sycl::queue&` for XPU, etc.). I see that there are methods like the following on `CUDAStream`/`XPUStream` that allow users to access the underlying pointer to the respective underlying object.
https://github.com/pytorch/pytorch/blob/e782dc0a4e7e8a048de520bd45f1bfa969ed7e3a/c10/cuda/CUDAStream.h#L143-L144
https://github.com/pytorch/pytorch/blob/e782dc0a4e7e8a048de520bd45f1bfa969ed7e3a/c10/xpu/XPUStream.h#L116-L117
Would it make sense to have a unified method on the base c10::Stream that returns this to the user? e.g. perhaps `void** native_ptr()` that the user then casts to the type that they expect?
## Use Case
I've added a [device-generic ABI stable wrapper for c10::Stream](https://github.com/pytorch/pytorch/blob/main/torch/csrc/stable/accelerator.h#L45-L64) to torch/csrc/stable/accelerator.h. It is returned when the user uses the ABI stable variant of `getCurrentStream` that wraps `at::accelerator::getCurrentStream` https://github.com/pytorch/pytorch/blob/e782dc0a4e7e8a048de520bd45f1bfa969ed7e3a/torch/csrc/stable/accelerator.h#L66-L70
https://github.com/pytorch/pytorch/blob/e782dc0a4e7e8a048de520bd45f1bfa969ed7e3a/torch/csrc/inductor/aoti_torch/shim_common.cpp#L1510-L1518
I'm looking for a unified way to access the underlying pointer (e.g. so a user can pass it to a raw CUDA/HIP/XPU API), but it seems like there is no unified method to access this. The only method that seems close is [`id()`](https://github.com/pytorch/pytorch/blob/e782dc0a4e7e8a048de520bd45f1bfa969ed7e3a/c10/xpu/XPUStream.h#L89-L93), which returns a StreamId that is not directly interpretable by the user (for example, on CUDA some part of it might be an index into the internal pool of streams used by pytorch).
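For illustration, here is a minimal Python-side sketch of what is possible today only through backend-specific accessors (the CUDA-only `cuda_stream` attribute); the ask is an analogous device-generic accessor on the C++ side:
```python
# Illustration only: the raw handle is currently exposed per backend,
# e.g. for CUDA via the Stream.cuda_stream attribute (an int holding the
# underlying cudaStream_t). A generic c10::Stream accessor would let
# ABI-stable extensions do the equivalent without backend-specific code.
import torch

if torch.cuda.is_available():
    s = torch.cuda.current_stream()
    raw = s.cuda_stream  # integer value of the underlying cudaStream_t
    print(f"cudaStream_t handle: {hex(raw)}")
```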
cc @NmomoN @mengpenghui @fwenguang @cdzhan @1274085042 @PHLens @albanD @guangyey @EikanWang
|
https://github.com/pytorch/pytorch/issues/170926
|
open
|
[
"triaged",
"module: PrivateUse1",
"module: accelerator"
] | 2025-12-20T00:35:56Z
| 2025-12-31T02:31:09Z
| 3
|
mikaylagawarecki
|
pytorch/torchtitan
| 2,168
|
Wrong commands in compiler_toolkit .md?
|
### Bug description
The commands in the readme page of https://github.com/pytorch/torchtitan/tree/main/torchtitan/experiments/compiler_toolkit are wrong?
Only the first flex_attention command has `--model.flavor=debugmodel_flex_attn`; the other three don't, and I don't see flex_attention ops in the graph modules if I don't specify `model.flavor`.
### Versions
main 1bd2548b14da014b1ec560830f8bdefb6ca568f4
|
https://github.com/pytorch/torchtitan/issues/2168
|
open
|
[] | 2025-12-19T23:25:19Z
| 2025-12-19T23:31:37Z
| 2
|
yushangdi
|
vllm-project/vllm
| 31,044
|
[CI Failure]: Blackwell Fusion Tests
|
### Name of failing test
FAILED tests/compile/test_fusion_attn.py::test_attention_quant_pattern[AttentionBackendEnum.TRITON_ATTN-nvidia/Llama-4-Scout-17B-16E-Instruct-FP8-TestAttentionFp8StaticQuantPatternModel--quant_fp8-dtype1-533-128-40-8] - AssertionError: Tensor-likes are not close!
### Basic information
- [x] Flaky test
- [ ] Can reproduce locally
- [ ] Caused by external libraries (e.g. bug in `transformers`)
### Describe the failing test
On B200:
FAILED tests/compile/test_fusion_attn.py::test_attention_quant_pattern[AttentionBackendEnum.TRITON_ATTN-nvidia/Llama-4-Scout-17B-16E-Instruct-FP8-TestAttentionFp8StaticQuantPatternModel--quant_fp8-dtype1-533-128-40-8] - AssertionError: Tensor-likes are not close!
```bash
pytest -v -x tests/compile/test_fusion_attn.py::test_attention_quant_pattern
```
### History of failing test
x
### CC List.
x
|
https://github.com/vllm-project/vllm/issues/31044
|
open
|
[
"help wanted",
"torch.compile",
"ci-failure"
] | 2025-12-19T18:49:59Z
| 2025-12-26T21:58:25Z
| 3
|
robertgshaw2-redhat
|
vllm-project/vllm
| 31,043
|
[BugFix]: move torch.Size across graphs in split_graph
|
### The feature, motivation and pitch
When fixing a MoE x cudagraph issue (see #30914), we found that `split_graph` may generate a submodule that returns a torch.Size and a later submodule that takes that torch.Size as an input. This errors out since pt2 does not support `torch.Size` as a graph output yet.
One fix is to manually reorder some lines in the model code so that the split does not happen between getting the `torch.Size` and using it. But this is too intrusive and requires manual effort on many models.
A more automated approach is to have a graph pass in `split_graph` that moves the torch.Size access to avoid patterns like the following (a sketch of such a pass is given after the snippets):
```
# Old:
size = tensor_a.shape
some_cg_unsafe_op
tensor_b = tensor_b.view(size)
```
---->
```
# New:
some_cg_unsafe_op
size = tensor_a.shape
tensor_b = tensor_b.view(size)
```
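As a rough illustration of the automated approach, here is a minimal sketch of such a sinking pass over a torch.fx graph. This is not vLLM's actual `split_graph` code, and it assumes the shape access shows up as a `getattr(tensor, "shape")` node (as in symbolically traced graphs; Dynamo-produced graphs may instead use `aten.sym_size`-style ops):
```python
# Hypothetical helper: sink single-use `x.shape` nodes so they sit right
# before their user, keeping the torch.Size inside one submodule after a split.
import torch.fx as fx

def sink_shape_nodes(gm: fx.GraphModule) -> fx.GraphModule:
    for node in list(gm.graph.nodes):
        is_shape_access = (
            node.op == "call_function"
            and node.target is getattr
            and len(node.args) == 2
            and node.args[1] == "shape"
        )
        if is_shape_access and len(node.users) == 1:
            (user,) = node.users
            # Re-create the shape access immediately before its only user.
            with gm.graph.inserting_before(user):
                moved = gm.graph.node_copy(node, lambda n: n)
            node.replace_all_uses_with(moved)
            gm.graph.erase_node(node)
    gm.graph.lint()
    gm.recompile()
    return gm
```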
### Alternatives
_No response_
### Additional context
_No response_
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/31043
|
open
|
[
"help wanted",
"feature request",
"torch.compile"
] | 2025-12-19T18:24:58Z
| 2025-12-22T21:23:04Z
| 1
|
BoyuanFeng
|
vllm-project/vllm
| 31,039
|
[Feature]: Integrate Sonic MoE
|
### The feature, motivation and pitch
https://x.com/wentaoguo7/status/2001773245318541324?s=46&t=jLcDgQXDbYe6HgFmTNYgpg
https://github.com/Dao-AILab/sonic-moe
Curious to see benchmarks!
### Alternatives
_No response_
### Additional context
_No response_
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/31039
|
open
|
[
"help wanted",
"good first issue",
"feature request"
] | 2025-12-19T17:29:59Z
| 2026-01-04T14:10:21Z
| 4
|
robertgshaw2-redhat
|
sgl-project/sglang
| 15,481
|
[Bug] Seeded Deterministic/Batch Invariant Inference Not Working on v1/completions endpoint
|
### Checklist
- [x] I searched related issues but found no solution.
- [x] The bug persists in the latest version.
- [x] Issues without environment info and a minimal reproducible demo are hard to resolve and may receive no feedback.
- [x] If this is not a bug report but a general question, please start a discussion at https://github.com/sgl-project/sglang/discussions. Otherwise, it will be closed.
- [x] Please use English. Otherwise, it will be closed.
### Describe the bug
I'm trying to enable batch-invariant (deterministic) inference while serving SGLang behind an OpenAI API-compatible interface.
Deterministic inference docs: https://docs.sglang.io/advanced_features/deterministic_inference.html
## What works
The native /generate endpoint correctly varies output by seed and is repeatable per seed.
Example request:
POST {base}/generate
```json
{
"text": "generate a uuid. UUID:",
"sampling_params": {
"temperature": 1,
"max_new_tokens": 32,
"sampling_seed": 0
}
}
```
Behavior: changing sampling_seed changes the output; repeating with the same sampling_seed reproduces it.
## What doesn't work
On the OpenAI-compatible endpoint POST {base}/v1/completions, seed appears to have no effect (even with temperature=1 and top_p=1).
Example:
POST {base}/v1/completions
```json
{
"model": "Qwen/Qwen3-30B-A3B",
"prompt": "generate a uuid. UUID: ",
"max_tokens": 32,
"temperature": 1,
"top_p": 1,
"n": 1,
"seed": 0
}
```
Behavior: response is the same regardless of seed value.
## Expected behavior
With --enable-deterministic-inference, I expected the OpenAI-compatible endpoints to:
* honor seed as the sampling seed (analogous to sampling_seed), and
* remain deterministic/repeatable for the same (prompt, params, seed).
### Reproduction
Server launch:
```bash
exec python3 -m sglang.launch_server \
--model-path "Qwen/Qwen3-30B-A3B" \
--host 0.0.0.0 \
--port 8000 \
--tp "1" \
--attention-backend "triton" \
--context-length "32000" \
--trust-remote-code \
--enable-deterministic-inference
```
POST {base}/v1/completions
```json
{
"model": "Qwen/Qwen3-30B-A3B",
"prompt": "generate a uuid. UUID: ",
"max_tokens": 32,
"temperature": 1,
"top_p": 1,
"n": 1,
"seed": 0
}
```
Varying the seed results in the same output.
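For reference, a small client-side check along these lines (hypothetical script, using the endpoint, model, and parameters from this report) makes the two expectations concrete:
```python
# With --enable-deterministic-inference, the expectation is that the same seed
# reproduces a completion and that different seeds produce different completions.
import requests

BASE = "http://localhost:8000"

def complete(seed: int) -> str:
    payload = {
        "model": "Qwen/Qwen3-30B-A3B",
        "prompt": "generate a uuid. UUID: ",
        "max_tokens": 32,
        "temperature": 1,
        "top_p": 1,
        "n": 1,
        "seed": seed,
    }
    r = requests.post(f"{BASE}/v1/completions", json=payload, timeout=60)
    r.raise_for_status()
    return r.json()["choices"][0]["text"]

print(complete(0) == complete(0))  # expected True  (repeatable for the same seed)
print(complete(0) == complete(1))  # expected False (observed True: seed is ignored)
```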
### Environment
==========
== CUDA ==
==========
CUDA Version 12.9.1
Container image Copyright (c) 2016-2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
This container image and its contents are governed by the NVIDIA Deep Learning Container License.
By pulling and using the container, you accept the terms and conditions of this license:
https://developer.nvidia.com/ngc/nvidia-deep-learning-container-license
A copy of this license is made available in this container at /NGC-DL-CONTAINER-LICENSE for your convenience.
Auto-detected 1 GPU(s)
Python: 3.12.3 (main, Nov 6 2025, 13:44:16) [GCC 13.3.0]
CUDA available: True
GPU 0: NVIDIA RTX PRO 6000 Blackwell Server Edition
GPU 0 Compute Capability: 12.0
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 12.9, V12.9.86
CUDA Driver Version: 580.105.08
PyTorch: 2.9.1+cu129
sglang: 0.5.6.post2
sgl_kernel: 0.3.19
flashinfer_python: 0.5.3
flashinfer_cubin: 0.5.3
flashinfer_jit_cache: Module Not Found
triton: 3.5.1
transformers: 4.57.1
torchao: 0.9.0
numpy: 2.3.5
aiohttp: 3.13.2
fastapi: 0.124.2
hf_transfer: 0.1.9
huggingface_hub: 0.36.0
interegular: 0.3.3
modelscope: 1.33.0
orjson: 3.11.5
outlines: 0.1.11
packaging: 25.0
psutil: 7.1.3
pydantic: 2.12.5
python-multipart: 0.0.20
pyzmq: 27.1.0
uvicorn: 0.38.0
uvloop: 0.22.1
vllm: Module Not Found
xgrammar: 0.1.27
openai: 2.6.1
tiktoken: 0.12.0
anthropic: 0.75.0
litellm: Module Not Found
decord2: 2.0.0
NVIDIA Topology:
GPU0 NIC0 CPU Affinity NUMA Affinity GPU NUMA ID
GPU0 X SYS 0-63,128-191 0 N/A
NIC0 SYS X
Legend:
X = Self
SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
PIX = Connection traversing at most a single PCIe bridge
NV# = Connection traversing a bonded set of # NVLinks
NIC Legend:
NIC0: mlx5_bond_0
ulimit soft: 1024
|
https://github.com/sgl-project/sglang/issues/15481
|
closed
|
[
"bug",
"high priority"
] | 2025-12-19T15:04:26Z
| 2025-12-20T04:32:15Z
| 8
|
jamesheavey
|
huggingface/lerobot
| 2,684
|
How to manually push a dataset
|
Say you `lerobot-record` a dataset with the flag `--dataset.push_to_hub=False`, or you encounter any problem at uploading time.
Is using `hf upload` enough, or do `lerobot` datasets need additional steps?
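A minimal sketch of the plain-upload route, assuming the dataset folder was fully written locally by `lerobot-record` (the local path and repo id below are placeholders); whether lerobot needs anything beyond this is exactly the question:
```python
# Upload the locally recorded dataset folder as-is to a dataset repo on the Hub.
from huggingface_hub import HfApi

api = HfApi()
api.create_repo("your-username/your-dataset", repo_type="dataset", exist_ok=True)
api.upload_folder(
    repo_id="your-username/your-dataset",
    repo_type="dataset",
    folder_path="/path/to/local/lerobot/your-username/your-dataset",
)
```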
|
https://github.com/huggingface/lerobot/issues/2684
|
open
|
[
"documentation",
"question",
"dataset"
] | 2025-12-19T13:00:20Z
| 2025-12-19T15:41:42Z
| null |
mcres
|
vllm-project/vllm
| 31,023
|
[Doc]: FP8 KV Cache: Does softmax output multiply with FP8 V directly or after dequantization?
|
### The doc issue
https://docs.vllm.ai/en/v0.8.5.post1/features/quantization/quantized_kvcache.html
Question:
In the FP8 KV Cache implementation, after computing the attention scores and softmax at higher precision (FP16/BF16), is the resulting attention weight matrix:
1. quantized to FP8 and multiplied directly with the FP8 V cache, or
2. multiplied with the V cache after dequantizing V to higher precision?
The documentation mentions "no fused dequantization and attention operations yet" but doesn't specify the precision of this final multiplication. Clarifying this detail would help understand the accuracy-performance tradeoff.
Thanks!
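To make the accuracy side of the tradeoff concrete, here is a small standalone sketch (not vLLM's kernel; it just simulates FP8 with a cast round-trip) comparing the two options numerically:
```python
# Compare the P @ V error when (1) both P and V are FP8 vs (2) only V is FP8.
import torch

torch.manual_seed(0)
p = torch.softmax(torch.randn(16, 64), dim=-1)  # attention weights (fp32 here)
v = torch.randn(64, 128)                        # values
v_fp8 = v.to(torch.float8_e4m3fn)               # simulated FP8 KV-cache storage

ref = p @ v                                     # full-precision reference

# Option 1: also quantize P to FP8, then multiply (simulated by a round-trip cast)
out_p_and_v_fp8 = p.to(torch.float8_e4m3fn).to(torch.float32) @ v_fp8.to(torch.float32)

# Option 2: dequantize V back to higher precision, multiply at that precision
out_v_fp8_only = p @ v_fp8.to(torch.float32)

print("max err, P and V in FP8:", (out_p_and_v_fp8 - ref).abs().max().item())
print("max err, only V in FP8 :", (out_v_fp8_only - ref).abs().max().item())
```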
### Suggest a potential alternative/fix
_No response_
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/31023
|
closed
|
[
"documentation"
] | 2025-12-19T10:33:22Z
| 2025-12-22T00:41:38Z
| 0
|
jorjiang
|
pytorch/pytorch
| 170,867
|
Operator benchmark: option to measure GPU execution time only (less CPU noise)
|
### The feature, motivation and pitch
Hello,
[Operator benchmark](https://github.com/pytorch/pytorch/tree/main/benchmarks/operator_benchmark) currently measures time in a way that [could be prone to CPU noise](https://github.com/pytorch/pytorch/blob/eba9265a580c6dc3e928ef341c23cab96ccf8b07/benchmarks/operator_benchmark/benchmark_core.py#L350). Is it possible to only measure GPU execution time using `torch.cuda.Event`?
If this change is made, this benchmark can be used more robustly for detecting possible regressions across updates, since it would produce more repeatable results.
### Alternatives
* Make the time-measuring code leaner, primarily measuring time spent on the GPU using `torch.cuda.Event` (with appropriate synchronization); a minimal timing sketch is included after the snippet below.
* Currently, each operator has a separate file and its settings are [hardcoded in separate files](https://github.com/pytorch/pytorch/blob/999d94b5ede5f4ec111ba7dd144129e2c2725b03/benchmarks/operator_benchmark/pt/as_strided_test.py#L10). We could instead define all operators in a single file, similar to something like:
```
op_defs = {
"add": {
"init": lambda input_dict: {
"input1": torch.rand(input_dict["shape"], dtype=getattr(torch, input_dict["dtype"]), device="cuda"),
"input2": torch.rand(input_dict["shape"], dtype=getattr(torch, input_dict["dtype"]), device="cuda"),
},
"func": lambda input_dict: input_dict["input1"] + input_dict["input2"],
},
# ... more operator entries
}
```
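For the first alternative, a minimal sketch of GPU-only timing with `torch.cuda.Event` (illustrative, not the benchmark harness's actual code) could look like this:
```python
# Time only the GPU execution of `fn` by recording CUDA events on the stream,
# which excludes most CPU-side noise from the measurement.
import torch

def gpu_time_ms(fn, warmup: int = 10, iters: int = 100) -> float:
    for _ in range(warmup):
        fn()
    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    torch.cuda.synchronize()
    start.record()
    for _ in range(iters):
        fn()
    end.record()
    torch.cuda.synchronize()
    return start.elapsed_time(end) / iters  # milliseconds per iteration

a = torch.rand(4096, 4096, device="cuda")
b = torch.rand(4096, 4096, device="cuda")
print(gpu_time_ms(lambda: a + b))
```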
### Additional context
_No response_
cc @robieta @chaekit @guotuofeng @guyang3532 @dzhulgakov @davidberard98 @briancoutinho @sraikund16 @sanrise @mwootton
|
https://github.com/pytorch/pytorch/issues/170867
|
open
|
[
"oncall: profiler"
] | 2025-12-19T10:31:38Z
| 2025-12-20T22:52:15Z
| 0
|
apakbin
|
vllm-project/vllm
| 31,019
|
[Bug]: Qwen3-VL 2:4 sparsity llm-compressor RuntimeError: shape mismatch (0.12, 0.13rc2)
|
### Your current environment
<details>
<summary>The output of <code>python collect_env.py</code></summary>
```text
Collecting environment information...
==============================
System Info
==============================
OS : Ubuntu 24.04.3 LTS (x86_64)
GCC version : (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version : Could not collect
CMake version : Could not collect
Libc version : glibc-2.39
==============================
PyTorch Info
==============================
PyTorch version : 2.9.0+cu128
Is debug build : False
CUDA used to build PyTorch : 12.8
ROCM used to build PyTorch : N/A
==============================
Python Environment
==============================
Python version : 3.12.3 (main, Nov 6 2025, 13:44:16) [GCC 13.3.0] (64-bit runtime)
Python platform : Linux-6.14.0-1017-azure-x86_64-with-glibc2.39
==============================
CUDA / GPU Info
==============================
Is CUDA available : True
CUDA runtime version : 12.8.93
CUDA_MODULE_LOADING set to :
GPU models and configuration : GPU 0: NVIDIA H100 NVL
Nvidia driver version : 580.95.05
cuDNN version : Could not collect
HIP runtime version : N/A
MIOpen runtime version : N/A
Is XNNPACK available : True
==============================
CPU Info
==============================
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 40
On-line CPU(s) list: 0-39
Vendor ID: AuthenticAMD
Model name: AMD EPYC 9V84 96-Core Processor
CPU family: 25
Model: 17
Thread(s) per core: 1
Core(s) per socket: 40
Socket(s): 1
Stepping: 1
BogoMIPS: 4800.09
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology tsc_reliable nonstop_tsc cpuid extd_apicid aperfmperf tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves user_shstk avx512_bf16 clzero xsaveerptr rdpru arat avx512vbmi umip avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid fsrm
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 1.3 MiB (40 instances)
L1i cache: 1.3 MiB (40 instances)
L2 cache: 40 MiB (40 instances)
L3 cache: 160 MiB (5 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-39
Vulnerability Gather data sampling: Not affected
Vulnerability Ghostwrite: Not affected
Vulnerability Indirect target selection: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Vulnerable: Safe RET, no microcode
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsa: Vulnerable: Clear CPU buffers attempted, no microcode
Vulnerability Tsx async abort: Not affected
Vulnerability Vmscape: Not affected
==============================
Versions of relevant libraries
==============================
[pip3] flashinfer-python==0.5.3
[pip3] numpy==2.2.6
[pip3] nvidia-cublas-cu12==12.8.4.1
[pip3] nvidia-cuda-cupti-cu12==12.8.90
[pip3] nvidia-cuda-nvrtc-cu12==12.8.93
[pip3] nvidia-cuda-runtime
|
https://github.com/vllm-project/vllm/issues/31019
|
open
|
[
"bug",
"help wanted",
"good first issue"
] | 2025-12-19T09:18:00Z
| 2025-12-24T12:16:01Z
| 4
|
SorenDreano
|
vllm-project/vllm
| 31,016
|
[Bug]: FlashInfer Incompatible with Sleep Mode
|
### Your current environment
<details>
<summary>The output of <code>python collect_env.py</code></summary>
```text
Your output of `python collect_env.py` here
```
</details>
### Describe the bug
Here is a script to reproduce the bug:
I use vllm=v0.10.1 and flashinfer-python=v0.5.3.
```
from vllm import LLM, SamplingParams
if __name__ == "__main__":
model_pth = "xxx/Qwen3-1.7B"
tp_size = 1
llm = LLM(
model=model_pth,
enable_sleep_mode=True,
tensor_parallel_size=tp_size,
gpu_memory_utilization=0.7,
)
llm.sleep(level=1)
llm.wake_up()
prompts = [
"What is AI?",
"Where is the Machu Picchu located?",
"What is the capital of France?",
"Who painted the Mona Lisa?",
]
sampling_params = SamplingParams(
temperature=0.7,
top_p=0.9,
max_tokens=64,
)
outputs = llm.generate(prompts, sampling_params)
for i, out in enumerate(outputs):
prompt = prompts[i]
generated = out.outputs[0].text
print(f"Prompt {i}: {prompt!r}")
print(f"Generation: {generated}\n")
```
### Root Cause
The bug occurs because the FlashInfer backendโs `attn_metadata` is stateful. It holds a `block_table_arange` tensor that is initialized once and then reused across subsequent calls to `build`:
```python
self.block_table_arange = torch.arange(
max_num_pages_per_req,
dtype=torch.int32,
device=self.device,
)
```
This `block_table_arange` tensor is allocated in the mempool with the `"kv_cache"` tag. It gets discarded after calling `llm.sleep`, but is not recreated when the engine wakes up, which leads to incorrect values and thus wrong outputs.
Specifically, this will cause bad rollout outputs in VERL using vllm + flashinfer.
### Temporary Fix
Here is a patch as a temporary workaround. It's not an ideal solution, but it works:
```python
from vllm.v1.attention.backends.flashinfer import FlashInferMetadataBuilder
import torch
def patch_flashinfer_build():
old_build = FlashInferMetadataBuilder.build
def new_build(*args, **kwargs):
self = args[0]
max_num_pages_per_req = self.block_table_arange.numel()
self.block_table_arange.copy_(
torch.arange(
max_num_pages_per_req,
device=self.block_table_arange.device,
dtype=self.block_table_arange.dtype,
)
)
return old_build(*args, **kwargs)
FlashInferMetadataBuilder.build = new_build
patch_flashinfer_build()
```
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/31016
|
open
|
[
"bug",
"help wanted"
] | 2025-12-19T08:04:19Z
| 2025-12-19T23:17:47Z
| 1
|
xiaoxiaosuaxuan
|
huggingface/transformers.js
| 1,490
|
Example models for each pipeline
|
### Question
Right now, I sorta use the docs and some searches to find good default models for https://workglow.dev/ for each pipeline that transformers.js has to offer. But they are not really the best, either in size or performance.
It would be great to have, for each pipeline, a list of recommended models: a fast and effective one, a best-of-breed one, and a workhorse in between. Like a good, better, best.
|
https://github.com/huggingface/transformers.js/issues/1490
|
open
|
[
"question"
] | 2025-12-19T07:37:16Z
| 2025-12-19T17:41:01Z
| null |
sroussey
|
vllm-project/vllm
| 31,004
|
[New Model]: T5Gemma 2
|
### The model to consider.
https://huggingface.co/collections/google/t5gemma-2
### The closest model vllm already supports.
_No response_
### What's your difficulty of supporting the model you want?
I know vLLM dropped encoder-decoder support, but can we bring it back?
https://huggingface.co/docs/transformers/model_doc/t5gemma2
https://blog.google/technology/developers/t5gemma-2/
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/31004
|
open
|
[
"new-model"
] | 2025-12-19T03:55:00Z
| 2025-12-20T21:37:34Z
| 1
|
ducviet00-h2
|
sgl-project/sglang
| 15,443
|
SGLang Diffusion Cookbook Proposal
|
# [Community Contribution] Create SGLang Diffusion Models Cookbook
## Goal
Create a comprehensive cookbook for diffusion models in SGLang, demonstrating SGLang's performance advantages for image and video generation workloads.
## Scope
### Models to Cover
**Image Generation:**
- Flux-1 Dev
- Flux-2
- SDXL-Turbo
- Qwen Image Edit
**Video Generation:**
- Wan 2.1
- Wan 2.2
### Content Structure
Each model section includes:
1. **Model Introduction**
- Capabilities and use cases
- Resolution/quality specifications
- Style examples and output samples
- Links to official resources
2. **SGLang Deployment**
- One-command server launch
- Client usage example
- Model-specific optimization tips
3. **Performance Benchmarks**
- Throughput (images/sec or videos/min)
- Latency and memory usage
- Comparison: SGLang vs Diffusers vs ComfyUI
- Bar charts and scaling analysis
- Reproducible benchmark scripts
## Deliverables
```
cookbook/diffusion/
├── README.md                  # Main cookbook
├── examples/                  # Usage scripts per model
│   ├── flux1_basic.py
│   ├── sdxl_turbo.py
│   ├── wan21_video.py
│   └── ...
├── benchmarks/
│   ├── bench_image.py
│   ├── bench_video.py
│   ├── compare_backends.py
│   └── run_all.sh
└── assets/
    └── output_examples/       # Curated generation examples
```
## Timeline
**Phase 1 (Weeks 1-2):** MVP with Flux-1 + SDXL-Turbo
**Phase 2 (Weeks 3-4):** Add remaining image models
**Phase 3 (Weeks 5-6):** Video models + comprehensive benchmarks
## How to Contribute
We need help with:
### Required Contributors (2-3 people)
- [ ] **Benchmark Engineer**: Run performance tests on H100/A100
- Time commitment: ~10 hours/week for 4 weeks
- Requirements: GPU access, Python proficiency
- [ ] **Documentation Writer**: Create usage examples and guides
- Time commitment: ~8 hours/week for 4 weeks
- Requirements: Technical writing, SGLang familiarity
- [ ] **Visual Designer** (optional): Curate output examples
- Time commitment: ~5 hours/week for 2 weeks
- Requirements: Eye for quality, prompt engineering
### Hardware Requirements
- H100 (80GB) - primary testing platform
- A100 (40GB) - secondary platform (optional)
- Access via cloud providers acceptable (AWS/Lambda/RunPod)
## Contribution Process
1. **Comment below** if interested (mention which role)
2. **Join discussion** on implementation details
3. **Fork repo** and work on assigned section
4. **Submit PR** following SGLang cookbook standards
5. **Iterate** based on review feedback
## References
- [SGLang Cookbook Template](https://cookbook.sglang.io/)
- [DeepSeek-V3 Example](https://cookbook.sglang.io/docs/DeepSeek/DeepSeek-V3_2)
- [Wan 2.1 GitHub](https://github.com/Wan-Video/Wan2.1)
- [SGLang Documentation](https://docs.sglang.ai/)
## Questions?
**Q: I only have consumer GPUs (4090/3090), can I help?**
A: Yes! You can help with documentation, examples, or testing the 1.3B Wan model. You can reach out to @Richardczl98 to request additional GPUs.
**Q: Which video model should we prioritize first?**
A: Wan 2.1 - it's the most mature open-source option.
**Q: Do I need to know SGLang internals?**
A: No, just familiarity with diffusion models and Python.
---
**Ready to contribute?** Drop a comment below!
cc @mickqian @Qiaolin-Yu @yhyang201
|
https://github.com/sgl-project/sglang/issues/15443
|
open
|
[] | 2025-12-19T03:44:33Z
| 2025-12-23T13:09:31Z
| 1
|
Richardczl98
|
vllm-project/vllm
| 30,969
|
[Bug]: SmolLM3-3B FP8 Fails to Load [`compressed-tensors` and `transformers-impl` compatibility issue]
|
### Your current environment
<details>
<summary>The output of <code>python collect_env.py</code></summary>
Running in official Docker image: vllm/vllm-openai:v0.11.1
GPU: NVIDIA L4 (GCP g2-standard-8)
`| NVIDIA-SMI 570.195.03 Driver Version: 570.195.03 CUDA Version: 12.9 |`
vLLM version: 0.11.1
```text
0.11.1
```
</details>
### Describe the bug
vLLM v0.11.1 fails to load SmolLM3-3B FP8 models quantized with llm-compressor using compressed-tensors.
Same models work on v0.11.0.
Tested with:
- [huggingface.co/RedHatAI/SmolLM3-3B-FP8-dynamic](https://huggingface.co/RedHatAI/SmolLM3-3B-FP8-dynamic)
- Manually quantized fine-tuned [SmolLM3-3B](https://huggingface.co/HuggingFaceTB/SmolLM3-3B) using llmcompressor==0.7 (compressed-tensors==0.12.2) in FP8-dynamic
- Manually quantized fine-tuned [SmolLM3-3B](https://huggingface.co/HuggingFaceTB/SmolLM3-3B) using llmcompressor==0.8.1 (compressed-tensors==0.12.2) in FP8-dynamic
All fail on v0.11.1.
All work on v0.11.0.
Error occurs during model loading in find_matched_target function.
The error is: "Unable to find matching target for model.layers.0.self_attn.q_proj in the compressed-tensors config"
Complete error:
```
+ exec python3 -m vllm.entrypoints.openai.api_server --model RedHatAI/SmolLM3-3B-FP8-dynamic --port 8000 --trust-remote-code --max-model-len 5000
[APIServer pid=1] INFO 12-12 05:05:29 [api_server.py:1772] vLLM API server version 0.11.1
[APIServer pid=1] INFO 12-12 05:05:29 [utils.py:253] non-default args: {'model': 'RedHatAI/SmolLM3-3B-FP8-dynamic', 'trust_remote_code': True, 'max_model_len': 5000}
[APIServer pid=1] The argument `trust_remote_code` is to be used with Auto classes. It has no effect here and is ignored.
[APIServer pid=1] INFO 12-12 05:05:40 [model.py:637] Resolved architecture: SmolLM3ForCausalLM
[APIServer pid=1] INFO 12-12 05:05:40 [model.py:1750] Using max model len 5000
[APIServer pid=1] INFO 12-12 05:05:42 [scheduler.py:228] Chunked prefill is enabled with max_num_batched_tokens=2048.
[EngineCore_DP0 pid=37] INFO 12-12 05:05:54 [core.py:93] Initializing a V1 LLM engine (v0.11.1) with config: model='RedHatAI/SmolLM3-3B-FP8-dynamic', quantization=compressed-tensors
[EngineCore_DP0 pid=37] INFO 12-12 05:05:55 [parallel_state.py:1200] world_size=1 rank=0 local_rank=0 distributed_init_method=tcp://10.111.66.205:48123 backend=nccl
[EngineCore_DP0 pid=37] INFO 12-12 05:05:55 [parallel_state.py:1408] rank 0 in world size 1 is assigned as DP rank 0, PP rank 0, PCP rank 0, TP rank 0, EP rank 0
[EngineCore_DP0 pid=37] INFO 12-12 05:05:55 [gpu_model_runner.py:3467] Starting to load model RedHatAI/SmolLM3-3B-FP8-dynamic...
[EngineCore_DP0 pid=37] INFO 12-12 05:05:56 [base.py:121] Using Transformers modeling backend.
[EngineCore_DP0 pid=37] ERROR 12-12 05:05:56 [core.py:843] EngineCore failed to start.
[EngineCore_DP0 pid=37] ERROR 12-12 05:05:56 [core.py:843] Traceback (most recent call last):
[EngineCore_DP0 pid=37] ERROR 12-12 05:05:56 [core.py:843] File "/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/core.py", line 834, in run_engine_core
[EngineCore_DP0 pid=37] ERROR 12-12 05:05:56 [core.py:843] engine_core = EngineCoreProc(*args, **kwargs)
[EngineCore_DP0 pid=37] ERROR 12-12 05:05:56 [core.py:843] File "/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/core.py", line 610, in __init__
[EngineCore_DP0 pid=37] ERROR 12-12 05:05:56 [core.py:843] File "/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/core.py", line 102, in __init__
[EngineCore_DP0 pid=37] ERROR 12-12 05:05:56 [core.py:843] super().__init__(
[EngineCore_DP0 pid=37] ERROR 12-12 05:05:56 [core.py:843] File "/usr/local/lib/python3.12/dist-packages/vllm/v1/executor/abstract.py", line 101, in __init__
[EngineCore_DP0 pid=37] ERROR 12-12 05:05:56 [core.py:843] self.model_executor = executor_class(vllm_config)
[EngineCore_DP0 pid=37] ERROR 12-12 05:05:56 [core.py:843] File "/usr/local/lib/python3.12/dist-packages/vllm/v1/executor/uniproc_executor.py", line 48, in _init_executor
[EngineCore_DP0 pid=37] ERROR 12-12 05:05:56 [core.py:843] self._init_executor()
[EngineCore_DP0 pid=37] ERROR 12-12 05:05:56 [core.py:843] File "/usr/local/lib/python3.12/dist-packages/vllm/v1/worker/gpu_worker.py", line 273, in load_model
[EngineCore_DP0 pid=37] ERROR 12-12 05:05:56 [core.py:843] self.driver_worker.load_model()
[EngineCore_DP0 pid=37] ERROR 12-12 05:05:56 [core.py:843] File "/usr/local/lib/python3.12/dist-packages/vllm/v1/worker/gpu_model_runner.py", line 3484, in load_model
[EngineCore_DP0 pid=37] ERROR 12-12 05:05:56 [core.py:843] self.model_runner.load_model(eep_scale_up=eep_scale_up)
[EngineCore_DP0 pid=37] ERROR 12-12 05:05:56 [core.py:843] File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/model_loader/base_loader.py", line 49, in load_model
[EngineCore_DP0 pid=37] ERROR 12-12 05:05:56 [core.py:843] self.model = model_loader.load_model(
[EngineCore_DP0 pid=37] ERROR 12-12 05:05:56 [cor
|
https://github.com/vllm-project/vllm/issues/30969
|
closed
|
[
"bug",
"help wanted",
"good first issue"
] | 2025-12-18T14:36:30Z
| 2025-12-20T21:54:47Z
| 3
|
GauthierRoy
|
huggingface/lerobot
| 2,680
|
Invalid frame index when training on merged datasets [RuntimeError]
|
### Ticket Type
Bug Report (Something isn't working)
### Environment & System Info
```Shell
- LeRobot version: 0.4.3
- Platform: Linux-5.4.0-165-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface Hub version: 0.35.3
- Datasets version: 4.1.1
- Numpy version: 2.2.6
- FFmpeg version: 4.4.2-0ubuntu0.22.04.1
- PyTorch version: 2.7.1+cu126
- Is PyTorch built with CUDA support?: True
- Cuda version: 12.6
- GPU model: Quadro RTX 6000
- Using GPU in script?: <fill in>
- lerobot scripts: ['lerobot-calibrate', 'lerobot-dataset-viz', 'lerobot-edit-dataset', 'lerobot-eval', 'lerobot-find-cameras', 'lerobot-find-joint-limits', 'lerobot-find-port', 'lerobot-imgtransform-viz', 'lerobot-info', 'lerobot-record', 'lerobot-replay', 'lerobot-setup-motors', 'lerobot-teleoperate', 'lerobot-train']
```
### Description
I'm having a problem when training a VLA with `lerobot-train` on a merged dataset.
I'm aware of the issue #2627 as well as PR #2550 that is supposed to fix the bug.
However, the problem is still occurring on the latest commit (4a151a9) of lerobot 0.4.3.
The dataset has been merged with the following script:
```
lerobot-edit-dataset \
  --repo_id whosricky/so101-megamix-v1 \
  --operation.type merge \
  --operation.repo_ids "['whosricky/so101_pick_red_cube_3cams', 'whosricky/so101_pick_blue_cube_3cams', 'whosricky/so101_pick_yellow_cube_3cams', 'whosricky/so101_pick_cube_reasoning_3cams', 'whosricky/so101_stacking_3cams', 'whosricky/so101_pickplace_red_cube_3cams', 'whosricky/so101_pickplace_all_red_cubes_3cams', 'whosricky/so101_sorting_cubes_3cams', 'whosricky/so101_pickplace_red_cubes_random_bowl_3cams']" \
  --push_to_hub true
```
Training on the single datasets works flawlessly. Training on the merged dataset results in an error.
The problematic sample seems to be #51 of "whosricky/so101_pick_blue_cube_3cams" due to the timestamp exceeding the default tolerance_s.
However, the problem occurs only on the merged dataset and not on the single one.
### Context & Reproduction
```
lerobot-train \
--dataset.repo_id=whosricky/so101-megamix-v1 \
--output_dir=outputs_xvla_megamix_v1/train/my_xvla \
--job_name=xvla_training_megamix_v1 \
--policy.path=lerobot/xvla-base \
--policy.repo_id=whosricky/xvla-so101-megamix-v1 \
--policy.private=true \
--policy.dtype=bfloat16 \
--num_workers=8 \
--batch_size=8 \
--steps=30000 \
--eval_freq=5000 \
--log_freq=100 \
--save_freq=5000 \
--policy.device=cuda \
--policy.freeze_vision_encoder=false \
--policy.freeze_language_encoder=false \
--policy.train_policy_transformer=true \
--policy.train_soft_prompts=true \
--policy.action_mode=auto \
--policy.num_image_views=3 \
--policy.empty_cameras=0 \
--rename_map='{"observation.images.top": "observation.images.image", "observation.images.gripper": "observation.images.image2", "observation.images.front": "observation.images.empty_camera_0"}' \
--wandb.enable=true
```
### Relevant logs or stack trace
```Shell
WARNING:accelerate.utils.other:Detected kernel version 5.4.0, which is below the recommended minimum of 5.5.0; this can cause the process to hang. It is recommended to upgrade the kernel to the minimum version or higher.
INFO 2025-12-18 12:38:22 ot_train.py:164 {'batch_size': 8,
'checkpoint_path': None,
'dataset': {'episodes': None,
'image_transforms': {'enable': False,
'max_num_transforms': 3,
'random_order': False,
'tfs': {'affine': {'kwargs': {'degrees': [-5.0,
5.0],
'translate': [0.05,
0.05]},
'type': 'RandomAffine',
'weight': 1.0},
'brightness': {'kwargs': {'brightness': [0.8,
1.2]},
'type': 'ColorJitter',
'weight': 1.0},
'contrast': {'kwargs': {'contrast': [0.8,
1.2]},
'type': 'ColorJitter',
'weight': 1.0},
'hue': {'kwargs': {'hue': [-0.05,
0.05]},
'type': 'ColorJitter',
'weight': 1.0},
'saturation': {'kwargs': {'satur
|
https://github.com/huggingface/lerobot/issues/2680
|
open
|
[
"bug",
"question",
"dataset",
"visualization",
"examples",
"training"
] | 2025-12-18T13:29:50Z
| 2025-12-26T06:26:37Z
| null |
RiccardoIzzo
|
huggingface/trl
| 4,719
|
Loss calculation of `GKDTrainer` may be inaccurate when performing gradient accumulation?
|
It seems that `GKDTrainer` averages the loss over the tokens of each micro-batch up front?
https://github.com/huggingface/trl/blob/8918c9836a3e0b43a6851c08d01b69072f56ca52/trl/experimental/gkd/gkd_trainer.py#L284
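A tiny numeric illustration of the concern (not trl code): averaging per micro-batch and then averaging those means across accumulation steps is not the same as averaging over all tokens of the effective batch, unless every micro-batch contains the same number of tokens.
```python
# Two micro-batches with different token counts: 4 tokens vs 1 token.
token_losses = [[1.0, 1.0, 1.0, 1.0], [3.0]]

per_microbatch_mean = sum(sum(mb) / len(mb) for mb in token_losses) / len(token_losses)
global_token_mean = sum(sum(mb) for mb in token_losses) / sum(len(mb) for mb in token_losses)

print(per_microbatch_mean)  # 2.0  (what per-micro-batch averaging yields)
print(global_token_mean)    # 1.4  (average over all tokens)
```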
|
https://github.com/huggingface/trl/issues/4719
|
open
|
[
"๐ bug",
"๐ GKD"
] | 2025-12-18T12:50:05Z
| 2025-12-18T12:50:49Z
| 0
|
jue-jue-zi
|
huggingface/lerobot
| 2,679
|
Merging datasets removes fps from scalar features
|
### Ticket Type
Bug Report (Something isn't working)
### Environment & System Info
```Shell
- LeRobot version: 0.4.3
- Platform: Linux-6.17.9-arch1-1-x86_64-with-glibc2.42
- Python version: 3.12.11
- Huggingface Hub version: 0.34.4
- Datasets version: 4.1.1
- Numpy version: 2.3.5
- FFmpeg version: n8.0.1
- PyTorch version: 2.7.1+cu128
- Is PyTorch built with CUDA support?: True
- Cuda version: 12.8
- GPU model: NVIDIA GeForce RTX 5090 Laptop GPU
- Using GPU in script?: <fill in>
- lerobot scripts: ['lerobot-calibrate', 'lerobot-dataset-viz', 'lerobot-edit-dataset', 'lerobot-eval', 'lerobot-find-cameras', 'lerobot-find-joint-limits', 'lerobot-find-port', 'lerobot-imgtransform-viz', 'lerobot-info', 'lerobot-record', 'lerobot-replay', 'lerobot-setup-motors', 'lerobot-teleoperate', 'lerobot-train']
```
### Description
When using the `merge_datasets` function, the fps attribute is removed from the scalar features in the dataset. Below are the scalar features from dataset.meta.features of a dataset before and after merging
Before:
```
'timestamp': {'dtype': 'float32', 'shape': (1,), 'names': None, 'fps': 10},
'frame_index': {'dtype': 'int64', 'shape': (1,), 'names': None, 'fps': 10},
'episode_index': {'dtype': 'int64', 'shape': (1,), 'names': None, 'fps': 10},
'index': {'dtype': 'int64', 'shape': (1,), 'names': None, 'fps': 10},
'task_index': {'dtype': 'int64', 'shape': (1,), 'names': None, 'fps': 10}}
```
After:
```
'timestamp': {'dtype': 'float32', 'shape': (1,), 'names': None},
'frame_index': {'dtype': 'int64', 'shape': (1,), 'names': None},
'episode_index': {'dtype': 'int64', 'shape': (1,), 'names': None},
'index': {'dtype': 'int64', 'shape': (1,), 'names': None},
'task_index': {'dtype': 'int64', 'shape': (1,), 'names': None}
```
This creates subsequent problems when trying to add an additional dataset to a merged output, as the feature mismatch will cause an error to be thrown.
### Context & Reproduction
Running the script below shows the features change before and after the merge
```
from lerobot.datasets.dataset_tools import split_dataset, merge_datasets
from lerobot.datasets.lerobot_dataset import LeRobotDataset
from pprint import pprint
dataset = LeRobotDataset("lerobot/pusht")
feat_1 = dataset.meta.features
splits = split_dataset(dataset, splits={"train": 0.8, "val": 0.2})
merged = merge_datasets([splits["train"], splits["val"]], output_repo_id="lerobot/pusht_merged")
feat_2 = merged.meta.features
print("Features of original dataset:")
pprint(feat_1)
print("Features of merged dataset:")
pprint(feat_2)
```
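As a possible stop-gap (untested sketch, not an official fix; it assumes `meta.features` is a plain mutable dict and says nothing about persisting the change to disk), the missing fps entries could be copied back from the source dataset after merging:
```python
# Restore per-feature fps on the merged metadata from the original dataset.
for name, feat in dataset.meta.features.items():
    if "fps" in feat and "fps" not in merged.meta.features.get(name, {}):
        merged.meta.features[name]["fps"] = feat["fps"]
```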
### Relevant logs or stack trace
```Shell
Features of original dataset:
{'action': {'dtype': 'float32',
'fps': 10.0,
'names': {'motors': ['motor_0', 'motor_1']},
'shape': (2,)},
'episode_index': {'dtype': 'int64', 'fps': 10.0, 'names': None, 'shape': (1,)},
'frame_index': {'dtype': 'int64', 'fps': 10.0, 'names': None, 'shape': (1,)},
'index': {'dtype': 'int64', 'fps': 10.0, 'names': None, 'shape': (1,)},
'next.done': {'dtype': 'bool', 'fps': 10.0, 'names': None, 'shape': (1,)},
'next.reward': {'dtype': 'float32', 'fps': 10.0, 'names': None, 'shape': (1,)},
'next.success': {'dtype': 'bool', 'fps': 10.0, 'names': None, 'shape': (1,)},
'observation.image': {'dtype': 'video',
'names': ['height', 'width', 'channel'],
'shape': (96, 96, 3),
'video_info': {'has_audio': False,
'video.codec': 'av1',
'video.fps': 10.0,
'video.is_depth_map': False,
'video.pix_fmt': 'yuv420p'}},
'observation.state': {'dtype': 'float32',
'fps': 10.0,
'names': {'motors': ['motor_0', 'motor_1']},
'shape': (2,)},
'task_index': {'dtype': 'int64', 'fps': 10.0, 'names': None, 'shape': (1,)},
'timestamp': {'dtype': 'float32', 'fps': 10.0, 'names': None, 'shape': (1,)}}
Features of merged dataset:
{'action': {'dtype': 'float32',
'fps': 10.0,
'names': {'motors': ['motor_0', 'motor_1']},
'shape': (2,)},
'episode_index': {'dtype': 'int64', 'names': None, 'shape': (1,)},
'frame_index': {'dtype': 'int64', 'names': None, 'shape': (1,)},
'index': {'dtype': 'int64', 'names': None, 'shape': (1,)},
'next.done': {'dtype': 'bool', 'fps': 10.0, 'names': None, 'shape': (1,)},
'next.reward': {'dtype': 'float32', 'fps': 10.0, 'names': None, 'shape': (1,)},
'next.success': {'dtype': 'bool', 'fps': 10.0, 'names': None, 'shape': (1,)},
'observation.image': {'dtype': 'video',
'names': ['height', 'width', 'channel'],
'shape': (96, 96, 3),
'video_info': {'has_audio': False,
'video.codec': 'av1',
'video.fps': 10.0,
|
https://github.com/huggingface/lerobot/issues/2679
|
open
|
[
"bug",
"enhancement",
"question",
"dataset",
"performance",
"examples"
] | 2025-12-18T12:47:14Z
| 2025-12-18T15:25:12Z
| null |
reeceomahoney
|
vllm-project/vllm
| 30,956
|
[Feature]: Could vllm serve output logs using a user-provided logger?
|
### The feature, motivation and pitch
Hi,
I have defined a logger in a Python script, e.g. logger_utils.py.
Could I run the command from the shell and have it use that logger, for example:
`vllm serve qwen3-embedding-0.6b --logger_file logger_utils.py`
Thanks, I really need your help.
### Alternatives
_No response_
### Additional context
_No response_
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/30956
|
open
|
[
"feature request"
] | 2025-12-18T09:35:22Z
| 2025-12-19T01:52:41Z
| 5
|
ucas010
|
huggingface/lerobot
| 2,678
|
Bug: lerobot-dataset-viz IndexError when visualizing specific episodes
|
# Bug Report: `lerobot-dataset-viz` IndexError when visualizing specific episodes
## Description
The `lerobot-dataset-viz` command fails with an `IndexError` when trying to visualize a specific episode using the `--episode-index` parameter. The issue is caused by `EpisodeSampler` using global dataset indices while the dataset has been filtered to contain only the specified episode.
## Error Message
```
IndexError: Invalid key: 180 is out of bounds for size 180
```
Full traceback:
```
Traceback (most recent call last):
File "/path/to/lerobot/scripts/lerobot_dataset_viz.py", line 289, in main
visualize_dataset(dataset, **vars(args))
File "/path/to/lerobot/scripts/lerobot_dataset_viz.py", line 148, in visualize_dataset
for batch in tqdm.tqdm(dataloader, total=len(dataloader)):
...
File "/path/to/lerobot/datasets/lerobot_dataset.py", line 1028, in __getitem__
item = self.hf_dataset[idx]
...
IndexError: Invalid key: 180 is out of bounds for size 180
```
## Steps to Reproduce
1. Create a LeRobot dataset with multiple episodes (e.g., 20 episodes, 180 frames each)
2. Try to visualize episode 1:
```bash
lerobot-dataset-viz \
--repo-id lerobot/test \
--root ./lerobot_dataset \
--mode local \
--episode-index 1 \
--batch-size 2
```
3. Error occurs when trying to load the data
## Root Cause Analysis
The bug is in the `EpisodeSampler` class (line 81-91 of `lerobot_dataset_viz.py`):
```python
class EpisodeSampler(torch.utils.data.Sampler):
def __init__(self, dataset: LeRobotDataset, episode_index: int):
from_idx = dataset.meta.episodes["dataset_from_index"][episode_index] # 180
to_idx = dataset.meta.episodes["dataset_to_index"][episode_index] # 360
self.frame_ids = range(from_idx, to_idx) # range(180, 360)
```
**The problem:**
1. At line 287, the dataset is filtered: `dataset = LeRobotDataset(repo_id, episodes=[args.episode_index], ...)`
2. The filtered dataset only contains 180 frames with **local indices 0-179**
3. But `EpisodeSampler` uses indices from `dataset.meta.episodes` which are **global indices 180-359** (position in the full dataset)
4. When DataLoader tries to access `dataset[180]`, it fails because the filtered dataset only has indices 0-179
**Example:**
```
Full dataset (3600 frames):
โโโโโโโโโโโโฌโโโโโโโโโโโฌโโโโโโโโโโโฌโโโโโโฌโโโโโโโโโโโ
โ Episode 0โ Episode 1โ Episode 2โ ... โ Episode 19โ
โ 0-179 โ 180-359 โ 360-539 โ ... โ 3420-3599โ
โโโโโโโโโโโโดโโโโโโโโโโโดโโโโโโโโโโโดโโโโโโดโโโโโโโโโโโ
โ
Global indices
Filtered dataset (180 frames, episode 1 only):
โโโโโโโโโโโโ
โ Episode 1โ โ Only these 180 frames exist
โ 0-179 โ โ Local indices in filtered dataset
โโโโโโโโโโโโ
EpisodeSampler tries to use: range(180, 360) โ Out of bounds!
```
## Proposed Fix
Modify `EpisodeSampler` to handle filtered datasets:
```python
class EpisodeSampler(torch.utils.data.Sampler):
def __init__(self, dataset: LeRobotDataset, episode_index: int):
# Check if dataset is already filtered to a single episode
if dataset.episodes is not None and len(dataset.episodes) == 1:
# Dataset is filtered, use all available frames (local indices)
self.frame_ids = range(len(dataset))
else:
# Dataset is not filtered, use global indices from metadata
from_idx = dataset.meta.episodes["dataset_from_index"][episode_index]
to_idx = dataset.meta.episodes["dataset_to_index"][episode_index]
self.frame_ids = range(from_idx, to_idx)
def __iter__(self) -> Iterator:
return iter(self.frame_ids)
def __len__(self) -> int:
return len(self.frame_ids)
```
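As a quick sanity check of the intended behavior, here is a minimal sketch using a dummy dataset in place of a filtered `LeRobotDataset` (the `FakeFilteredDataset` class below is a stand-in for illustration, not part of lerobot):
```python
import torch

class FakeFilteredDataset(torch.utils.data.Dataset):
    """Stand-in for a LeRobotDataset constructed with episodes=[1]."""
    def __init__(self, num_frames: int):
        self.episodes = [1]            # mimics a dataset filtered to one episode
        self.num_frames = num_frames

    def __len__(self) -> int:
        return self.num_frames         # local indices 0..num_frames-1

    def __getitem__(self, idx: int):
        return {"index": idx}

dataset = FakeFilteredDataset(num_frames=180)
frame_ids = range(len(dataset))        # what the fixed sampler should yield
assert max(frame_ids) < len(dataset)   # no out-of-bounds access into the filtered dataset
```
With the original sampler, `frame_ids` would instead be `range(180, 360)` and the first DataLoader access would fail exactly as in the traceback above.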
## Workaround
Until this is fixed, users can visualize a specific episode by:
1. Loading the full dataset without filtering
2. Using `torch.utils.data.Subset` to select the episode
```python
import rerun as rr
from lerobot.datasets.lerobot_dataset import LeRobotDataset
from torch.utils.data import DataLoader, Subset
# Load full dataset (no filtering)
dataset = LeRobotDataset(
repo_id="lerobot/test",
root="./lerobot_dataset"
)
# Manually select episode frames
episode_index = 1
from_idx = dataset.meta.episodes[episode_index]["dataset_from_index"]
to_idx = dataset.meta.episodes[episode_index]["dataset_to_index"]
episode_dataset = Subset(dataset, range(from_idx, to_idx))
# Create dataloader
dataloader = DataLoader(episode_dataset, batch_size=2, shuffle=False)
# Visualize...
```
## Environment
- **LeRobot Version:** 0.4.2
- **Python Version:** 3.12.11
- **PyTorch Version:** 2.7.1+cu126
- **Datasets Version:** 4.1.1
- **OS:** Linux
## Additional Context
This issue affects any dataset where users want to visualize a specific episode that is not episode 0. The bug makes the `--episode-index` parameter effectively unusable for episodes other than the first one when the dataset has already been filtered.
## Impact
- **Severity:** Medium (cor
|
https://github.com/huggingface/lerobot/issues/2678
|
open
|
[
"bug",
"question",
"dataset",
"visualization",
"python",
"examples"
] | 2025-12-18T08:45:05Z
| 2025-12-24T08:31:00Z
| null |
apeSh1t
|
vllm-project/vllm
| 30,941
|
[Performance]: Why Does Latency Remain Unchanged in vLLM 0.11.0 When Input Token Count Decreases for qwen3-vl-30b-a3b?
|
### Proposal to improve performance
_No response_
### Report of performance regression
_No response_
### Misc discussion on performance
I am running the qwen3-vl-30b-a3b model with vLLM 0.11.0. The stress test results show that although the number of input tokens decreases, the latency does not change.
The model is deployed on a single A800 GPU. The startup command is:
vllm serve
--dtype bfloat16
--max-model-len 128000
--gpu-memory-utilization 0.95
--limit-mm-per-prompt.video 0
I performed a stress test using one image and a set of text prompts, with QPS set to 10.
I resized the image to 0.25x and 0.7x of the original size while keeping everything else unchanged.
The conclusions are as follows:
qwen3-30b-a3b (single image *0.25) latency 3s
qwen3-30b-a3b (single image *0.7) latency 5s
qwen3-30b-a3b (single image) latency 5s
Prior conditions:
Input token scale / Output token scale
Single image + text prompts: about 4200 / about 70
Single image *0.6 + text prompts: about 1900 / about 70
Single image *0.3 + text prompts: about 860 / about 70
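For reference, here is a minimal per-request latency measurement sketch for a setup like the one above; the endpoint URL, served model name, and image path are assumptions, not taken from this report:
```python
# Time a single multimodal chat request against an OpenAI-compatible vLLM endpoint.
# The URL, model name, and image file are hypothetical placeholders.
import base64
import time

from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

with open("example.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

start = time.perf_counter()
resp = client.chat.completions.create(
    model="qwen3-vl-30b-a3b",  # hypothetical served model name
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this image."},
            {"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
        ],
    }],
)
latency = time.perf_counter() - start
print(f"latency: {latency:.2f}s, prompt tokens: {resp.usage.prompt_tokens}")
```
Comparing this per-request latency across the three image sizes, together with the prompt token counts reported in `resp.usage`, should make it clearer whether prefill time actually scales with the input token count.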
### Your current environment (if you think it is necessary)
```text
The output of `python collect_env.py`
```
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/30941
|
open
|
[
"performance"
] | 2025-12-18T07:40:35Z
| 2025-12-18T07:40:35Z
| 0
|
Hormoney
|
pytorch/pytorch
| 170,750
|
CUDA: Tensor.index_select out-of-bounds index triggers device-side assert (Indexing.cu:1237) instead of a regular error
|
### Describe the bug
### Bug description
On CUDA, calling `Tensor.index_select` with an out-of-bounds index triggers a device-side assert in `../aten/src/ATen/native/cuda/Indexing.cu:1237` (`indexSelectSmallIndex`), and then raises `RuntimeError: CUDA error: device-side assert triggered`.
On CPU, similar out-of-bounds indexing typically raises a regular Python exception (e.g. `IndexError` / "index out of range") without poisoning the CUDA context. On CUDA, the device-side assert is harsh and can cause subsequent CUDA ops to fail as well.
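For comparison, a CPU-only sketch (not part of the original report) showing that the same out-of-bounds access surfaces as an ordinary, catchable exception:
```python
import torch

# CPU comparison: the out-of-bounds index_select fails with a regular Python
# exception and does not poison any device context.
x_cpu = torch.empty((0,))
idx_cpu = torch.tensor([1])  # out of bounds
try:
    x_cpu.index_select(0, idx_cpu)
except (IndexError, RuntimeError) as e:  # the exact exception type may vary by version
    print(f"CPU raised a recoverable error: {e}")
```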
### Minimal repro
```python
import torch
# out-of-bounds index on an empty tensor
x = torch.empty((0,), device="cuda")
idx = torch.tensor([1], device="cuda") # OOB
x.index_select(0, idx)
# Force sync so the error is reported at the correct line
torch.cuda.synchronize()
```
### How to run
CUDA_LAUNCH_BLOCKING=1 TORCH_SHOW_CPP_STACKTRACES=1 python mini_repro.py
If symbolization hangs: TORCH_DISABLE_ADDR2LINE=1 CUDA_LAUNCH_BLOCKING=1 TORCH_SHOW_CPP_STACKTRACES=1 python
### Expected behavior
Raise a normal, non-fatal Python exception for out-of-bounds indices (similar to CPU behavior).
Avoid a device-side assert that poisons the CUDA context.
### Actual behavior
../aten/src/ATen/native/cuda/Indexing.cu:1237: indexSelectSmallIndex: block: [0,0,0], thread: [0,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
Traceback (most recent call last):
File "/home/lhj/callChainBuild/output/targeted_mutation/validation/minimal_code/torch_Tensor_index_select_repro.py", line 5, in <module>
x.index_select(0, idx)
**RuntimeError: CUDA error: device-side assert triggered**
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
### Traceback
```
../aten/src/ATen/native/cuda/Indexing.cu:1237: indexSelectSmallIndex: block: [0,0,0], thread: [0,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
[W Module.cpp:156] symbolizing C++ stack trace for exception; if this hangs, rerun with TORCH_DISABLE_ADDR2LINE=1...
Traceback (most recent call last):
File "/home/lhj/callChainBuild/output/targeted_mutation/validation/minimal_code/torch_Tensor_index_select_repro.py", line 5, in <module>
x.index_select(0, idx)
RuntimeError: CUDA error: device-side assert triggered
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
Exception raised from c10_cuda_check_implementation at ../c10/cuda/CUDAException.cpp:44 (most recent call first):
C++ CapturedTraceback:
#4 c10::Error::Error(c10::SourceLocation, std::string) from ??:0
#5 c10::detail::torchCheckFail(char const*, char const*, unsigned int, std::string const&) from ??:0
#6 c10::cuda::c10_cuda_check_implementation(int, char const*, char const*, int, bool) from ??:0
#7 at::native::(anonymous namespace)::index_select_out_cuda_impl<float>(at::Tensor&, at::Tensor const&, long, at::Tensor const&)::{lambda()#1}::operator()() const::{lambda()#2}::operator()() const from ??:0
#8 void at::native::(anonymous namespace)::index_select_out_cuda_impl<float>(at::Tensor&, at::Tensor const&, long, at::Tensor const&) from ??:0
#9 at::native::index_select_out_cuda(at::Tensor const&, long, at::Tensor const&, at::Tensor&) from ??:0
#10 at::native::index_select_cuda(at::Tensor const&, long, at::Tensor const&) from ??:0
#11 at::(anonymous namespace)::(anonymous namespace)::wrapper_CUDA__index_select(at::Tensor const&, long, at::Tensor const&) from RegisterCUDA.cpp:0
#12 c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor (at::Tensor const&, long, at::Tensor const&), &at::(anonymous namespace)::(anonymous namespace)::wrappe
r_CUDA__index_select>, at::Tensor, c10::guts::typelist::typelist<at::Tensor const&, long, at::Tensor const&> >, at::Tensor (at::Tensor const&, long, at::Tensor const&)>::call(c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, long, at::Tensor const&) from RegisterCUDA.cpp:0 #13 at::_ops::index_select::redispatch(c10::DispatchKeySet, at::Tensor const&, long, at::Tensor const&) from ??:0
#14 torch::autograd::VariableType::(anonymous namespace)::index_select(c10::DispatchKeySet, at::Tensor const&, long, at::Tensor const&) from VariableType_0.cpp:0
#15 c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor (c10::DispatchKeySet, at::Tensor const&, long, at::Tensor const&), &torch::autograd::VariableType::(ano
nymous namespace)::index_select>, at::Tensor, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor const&, long, at::Tensor const&> >, at::Tensor (c10::DispatchKeySet, at::Tensor const&, long, at::Tensor const&)>::call(c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, long, at::Tensor const&) from VariableType_0.cpp:0
|
https://github.com/pytorch/pytorch/issues/170750
|
open
|
[
"module: cuda",
"triaged"
] | 2025-12-18T06:48:01Z
| 2025-12-20T23:31:57Z
| 0
|
DeLightor
|
vllm-project/vllm
| 30,933
|
[Usage]: What is the latest instruction to run DeepSeek V3.2?
|
### Your current environment
vLLM 0.12.0
### How would you like to use vllm
I am following the guidelines here https://docs.vllm.ai/projects/recipes/en/latest/DeepSeek/DeepSeek-V3_2.html for running DeepSeek v3.2. By following the instructions I installed vLLM 0.12.0 on my H200 node. However, when I try to run it with `vllm serve deepseek-ai/DeepSeek-V3.2 --tensor-parallel-size 8 --tokenizer-mode deepseek_v32` it gives an error
```
(APIServer pid=816209) ValueError: No tokenizer registered for tokenizer_mode='deepseek_v32'.
```
If I do not include the `--tokenizer-mode` then the server spins up with no errors, but when I try to send a request, I get another error below
```
(APIServer pid=753941) ERROR 12-18 06:04:47 [serving_chat.py:263] ValueError: As of transformers v4.44, default chat template is no longer allowed, so you must provide a chat template if the tokenizer does not define one.
```
I am wondering if there is an update on the instructions to run DeepSeek V3.2 on vLLM.
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/30933
|
open
|
[
"usage"
] | 2025-12-18T06:18:29Z
| 2025-12-18T15:50:29Z
| 1
|
IKACE
|
vllm-project/vllm
| 30,923
|
[Bug]: Deploying DeepSeek-OCR with the official vLLM online method gives very bad results, but the offline method gives normal results. Why?
|
### Your current environment
<details>
<summary>The output of <code>python collect_env.py</code></summary>
```text
Your output of `python collect_env.py` here
```
</details>
### Describe the bug
I use https://github.com/vllm-project/recipes/blob/main/DeepSeek/DeepSeek-OCR.md
Both the offline and online methods work and run OK,
but for the same picture the offline result is better than the online one. I can't find the reason for what is happening. Can someone help me?
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/30923
|
closed
|
[
"bug"
] | 2025-12-18T04:14:33Z
| 2025-12-18T04:25:20Z
| 0
|
git-liweichao
|
vllm-project/vllm
| 30,922
|
[Bug]: Deploying DeepSeek-OCR with the official vLLM online method gives very bad results, but the offline method gives normal results. Why?
|
### Your current environment
<details>
<summary>The output of <code>python collect_env.py</code></summary>
```text
Your output of `python collect_env.py` here
```
</details>
### Describe the bug
I use https://github.com/vllm-project/recipes/blob/main/DeepSeek/DeepSeek-OCR.md
Both the offline and online methods work and run OK,
but for the same picture the offline result is better than the online one. I can't find the reason for what is happening. Can someone help me?
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/30922
|
open
|
[
"bug"
] | 2025-12-18T04:08:46Z
| 2025-12-18T04:25:36Z
| 1
|
git-liweichao
|
sgl-project/sglang
| 15,359
|
[Bug] The handling logic for tool_choice = 'auto' in the DeepseekV3.2 model may be incorrect.
|
### Checklist
- [ ] I searched related issues but found no solution.
- [ ] The bug persists in the latest version.
- [ ] Issues without environment info and a minimal reproducible demo are hard to resolve and may receive no feedback.
- [ ] If this is not a bug report but a general question, please start a discussion at https://github.com/sgl-project/sglang/discussions. Otherwise, it will be closed.
- [ ] Please use English. Otherwise, it will be closed.
### Describe the bug
When using SGLang (sglang:v0.5.6.post2) with DeepseekV3.2, I noticed that the responses of some requests involving tool calls are not correct,
for example with the following request:
```sh
curl -X POST http://{host}:{port}/v1/chat/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer sk-1234" \
-d '{
"model": "DeepseekV3.2",
"messages": [
{
"role": "user",
"content": "What is the weather in Beijing?"
}
],
"tools": [
{
"type": "function",
"function": {
"name": "get_current_weather",
"description": "Get the current weather in a given location",
"strict": true,
"parameters": {
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "The city and state, e.g. San Francisco, CA"
},
"unit": {
"type": "string",
"enum": ["celsius", "fahrenheit"]
}
},
"required": ["location"]
}
}
}
],
"tool_choice": "auto",
"stream": false
}'
```
It might respond with something like
```sh
{"id":"88c2a168ad43446f9116aeed715cd835","object":"chat.completion","created":1766024807,"model":"DeepseekV3.2","choices":[{"index":0,"message":{"role":"assistant","content":"tool_call_name=current_weather","reasoning_content":null,"tool_calls":null},"logprobs":null,"finish_reason":"stop","matched_stop":1}],"usage":{"prompt_tokens":198,"total_tokens":206,"completion_tokens":8,"prompt_tokens_details":null,"reasoning_tokens":0},"metadata":{"weight_version":"default"}}
```
or
```sh
{"id":"0223b02af05b4c9b99e8b9e4b2abab12","object":"chat.completion","created":1766026261,"model":"DeepseekV3.2","choices":[{"index":0,"message":{"role":"assistant","content":"tool_call_name: get_current_weather\ntool_call_arguments: {\n \"location\": \"Beijing, China\",\n \"unit\": \"celsius\"\n}","reasoning_content":null,"tool_calls":null},"logprobs":null,"finish_reason":"stop","matched_stop":1}],"usage":{"prompt_tokens":198,"total_tokens":232,"completion_tokens":34,"prompt_tokens_details":null,"reasoning_tokens":0},"metadata":{"weight_version":"default"}}
```
As you can see from the response, the content value contains `tool_call_name`, but `tool_calls` is set to `null`.
If tool_choice is changed to 'required', the response looks like
```sh
{"id":"550109a7f6854af3ba47fdad4f38f9d5","object":"chat.completion","created":1766025639,"model":"DeepseekV3.2","choices":[{"index":0,"message":{"role":"assistant","content":null,"reasoning_content":null,"tool_calls":[{"id":"call_f171fbf82d7d41dab0eaf258","index":0,"type":"function","function":{"name":"get_current_weather","arguments":"{\"location\": \"Beijing, China\", \"unit\": \"celsius\"}"}}]},"logprobs":null,"finish_reason":"tool_calls","matched_stop":null}],"usage":{"prompt_tokens":198,"total_tokens":229,"completion_tokens":31,"prompt_tokens_details":null,"reasoning_tokens":0},"metadata":{"weight_version":"default"}}
```
When checking the source code, I found it might be related to the following code:
https://github.com/sgl-project/sglang/blob/9e7656be80578fe981a723bd115373371a9d0d90/python/sglang/srt/entrypoints/openai/serving_chat.py#L248-L260
https://github.com/sgl-project/sglang/blob/9e7656be80578fe981a723bd115373371a9d0d90/python/sglang/srt/function_call/function_call_parser.py#L189-L201
### Reproduction
start SGLang with the following command
```sh
python3 -m sglang.launch_server --model /root/.cache/huggingface/DeepSeek-V3.2 --served-model-name VILLM-N2 --tp 8 --ep 8 --dp 8 --enable-dp-attention --trust-remote-code --port 30000 --host 0.0.0.0 --enable-metrics --mem-fraction-static 0.75 --cuda-graph-max-bs 128 --torch-compile-max-bs 8 --speculative-algorithm EAGLE --speculative-num-steps 3 --speculative-eagle-topk 1 --speculative-num-draft-tokens 4 --nsa-prefill-backend flashmla_sparse --nsa-decode-backend fa3 --grammar-backend xgrammar --reasoning-parser deepseek-v3 --tool-call-parser deepseekv32 --chat-template ./examples/chat_template/tool_chat_template_deepseekv32.jinja
```
send request
```sh
curl -X POST http://{host}:{port}/v1/chat/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer sk-1234" \
-d '{
"model": "DeepseekV3.2",
"messages": [
{
"role": "user",
"content": "What is the weather in Beijing?"
}
],
"tools": [
|
https://github.com/sgl-project/sglang/issues/15359
|
closed
|
[] | 2025-12-18T02:47:26Z
| 2025-12-18T03:36:38Z
| 4
|
JerryKwan
|
huggingface/lerobot
| 2,673
|
Dataset v2 not working anymore
|
### Ticket Type
Feature
### Environment & System Info
```Shell
- LeRobot version: 0.4.3
- Platform: macOS-26.2-arm64-arm-64bit
- Python version: 3.10.19
- Huggingface Hub version: 0.35.3
- Datasets version: 4.1.1
- Numpy version: 2.2.6
- FFmpeg version: 7.1.1
- PyTorch version: 2.7.1
- Is PyTorch built with CUDA support?: False
- Cuda version: N/A
- GPU model: N/A
- Using GPU in script?: <fill in>
- lerobot scripts: ['lerobot-calibrate', 'lerobot-dataset-viz', 'lerobot-edit-dataset', 'lerobot-eval', 'lerobot-find-cameras', 'lerobot-find-joint-limits', 'lerobot-find-port', 'lerobot-imgtransform-viz', 'lerobot-info', 'lerobot-record', 'lerobot-replay', 'lerobot-setup-motors', 'lerobot-teleoperate', 'lerobot-train']
```
### Description
I did a git pull and my dataset v2 doesn't work anymore. My model raises the error shown in the logs below.
### Context & Reproduction
1. `lerobot-train --help`
2. Check outputs
### Relevant logs or stack trace
```Shell
File "/admin/home/michel_aratingi/miniconda3/envs/groot/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 733, in __next__
data = self._next_data()
File "/admin/home/michel_aratingi/miniconda3/envs/groot/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1488, in _next_data
return self._process_data(data, worker_id)
File "/admin/home/michel_aratingi/miniconda3/envs/groot/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1550, in _process_data
data.reraise()
File "/admin/home/michel_aratingi/miniconda3/envs/groot/lib/python3.10/site-packages/torch/_utils.py", line 750, in reraise
raise exception
IndexError: Caught IndexError in DataLoader worker process 1.
Original Traceback (most recent call last):
File "/admin/home/michel_aratingi/miniconda3/envs/groot/lib/python3.10/site-packages/torch/utils/data/_utils/worker.py", line 349, in _worker_loop
data = fetcher.fetch(index) # type: ignore[possibly-undefined]
File "/admin/home/michel_aratingi/miniconda3/envs/groot/lib/python3.10/site-packages/torch/utils/data/_utils/fetch.py", line 52, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/admin/home/michel_aratingi/miniconda3/envs/groot/lib/python3.10/site-packages/torch/utils/data/_utils/fetch.py", line 52, in <listcomp>
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/admin/home/michel_aratingi/code/collab-lerobot/src/lerobot/datasets/lerobot_dataset.py", line 975, in __getitem__
item = self.hf_dataset[idx]
File "/admin/home/michel_aratingi/miniconda3/envs/groot/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 2859, in __getitem__
return self._getitem(key)
File "/admin/home/michel_aratingi/miniconda3/envs/groot/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 2840, in _getitem
pa_subtable = query_table(self._data, key, indices=self._indices)
File "/admin/home/michel_aratingi/miniconda3/envs/groot/lib/python3.10/site-packages/datasets/formatting/formatting.py", line 612, in query_table
_check_valid_index_key(key, size)
File "/admin/home/michel_aratingi/miniconda3/envs/groot/lib/python3.10/site-packages/datasets/formatting/formatting.py", line 552, in _check_valid_index_key
raise IndexError(f"Invalid key: {key} is out of bounds for size {size}")
IndexError: Invalid key: 46969 is out of bounds for size 46963
```
### Checklist
- [x] I have searched existing tickets to ensure this isn't a duplicate.
- [x] I am using the latest version of the `main` branch.
- [x] I have verified this is not an environment-specific problem.
### Additional Info / Workarounds
Maybe I should try updating my transformers dependency?
I will edit this ticket.
|
https://github.com/huggingface/lerobot/issues/2673
|
closed
|
[
"enhancement",
"question",
"dataset",
"dependencies",
"training"
] | 2025-12-17T21:35:31Z
| 2025-12-17T23:26:54Z
| null |
imstevenpmwork
|
huggingface/lerobot
| 2,670
|
Async inference for simulation (libero benchmark)
|
### Issue Type
{"label" => "โ Technical Question"}
### Environment & System Info
```Shell
```
### Description
Is there any way we can support async inference for simulators (e.g., libero)? This would make it possible to test RTC with simulators.
### Context & Reproduction
A question re a feature.
### Expected Behavior / Desired Outcome
_No response_
### Relevant logs or stack trace
```Shell
```
### Checklist
- [ ] I have searched existing issues to ensure this isn't a duplicate.
- [ ] I am using the latest version of the `main` branch.
- [ ] (For bugs) I have verified this is not an environment-specific issue.
### Additional Info / Workarounds
_No response_
|
https://github.com/huggingface/lerobot/issues/2670
|
open
|
[
"question",
"simulation",
"performance",
"evaluation"
] | 2025-12-17T18:57:07Z
| 2026-01-02T05:40:18Z
| null |
dywsjtu
|
huggingface/transformers
| 42,930
|
Inconsistent handling of video_metadata in Qwen3VLVideoProcessor usage example
|
### System Info
transformers==4.57.3
### Who can help?
@zucchini-nlp @yonigozlan @molbap
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
I'm working with the `Qwen3VLVideoProcessor` and noticed a potential inconsistency between the processor's output and its expected usage.
According to the current implementation of `Qwen3VLVideoProcessor._preprocess()`, the returned `BatchFeature` only contains the keys:
- `"pixel_values_videos"`
- `"video_grid_thw"`
However, in some calling code, I see logic like:
```python
videos_inputs = self.video_processor(videos=videos, **kwargs)
if "return_metadata" not in kwargs:
video_metadata = videos_inputs.pop("video_metadata")
```
How does it work? Thank you very much.
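For what it's worth, a minimal defensive pattern (an illustrative sketch, not the library's actual fix) would be to pop the key with a default so that processor outputs lacking `video_metadata` do not raise:
```python
# Illustrative sketch only: pop video_metadata with a default instead of
# assuming the processor output always contains it.
from typing import Any

def maybe_pop_video_metadata(videos_inputs: dict[str, Any]) -> Any | None:
    return videos_inputs.pop("video_metadata", None)

# Stand-in for the BatchFeature described above, which only has these two keys:
videos_inputs = {"pixel_values_videos": ..., "video_grid_thw": ...}
assert maybe_pop_video_metadata(videos_inputs) is None
assert "pixel_values_videos" in videos_inputs
```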
### Expected behavior
I want to switch from Qwen2.5-VL to Qwen3-VL, but I can't set a fixed number of frames (nframes).
|
https://github.com/huggingface/transformers/issues/42930
|
closed
|
[
"bug"
] | 2025-12-17T17:21:00Z
| 2025-12-18T10:32:23Z
| 3
|
wagoriginal
|
vllm-project/vllm
| 30,882
|
[Bug]: Marlin Fp8 Block Quant Failure
|
### Your current environment
<details>
<summary>The output of <code>python collect_env.py</code></summary>
```text
Your output of `python collect_env.py` here
```
</details>
### Describe the bug
```bash
MODEL := "Qwen/Qwen3-Coder-30B-A3B-Instruct-FP8"
#MODEL := "RedHatAI/Mixtral-8x7B-Instruct-v0.1-FP8"
launch_marlin:
VLLM_TEST_FORCE_FP8_MARLIN=1 VLLM_USE_DEEPGEMM=0 chg run --gpus 1 -- vllm serve {{MODEL}} --enforce-eager --max-model-len 8192
eval:
lm_eval \
--model local-completions \
--tasks gsm8k \
--model_args "model={{MODEL}},base_url=http://localhost:8000/v1/completions,num_concurrent=1000,tokenized_requests=False"
```
Result:
```bash
(vllm) [robertgshaw2-redhat@nm-automation-h100-standalone-1-preserve vllm]$ just launch_marlin
VLLM_TEST_FORCE_FP8_MARLIN=1 VLLM_USE_DEEPGEMM=0 chg run --gpus 1 -- vllm serve Qwen/Qwen3-Coder-30B-A3B-Instruct-FP8 --enforce-eager --max-model-len 8192
Reserved 1 GPU(s): [1] for command execution
(APIServer pid=3634068) INFO 12-17 15:54:23 [api_server.py:1259] vLLM API server version 0.13.0rc2.dev185+g00a8d7628
(APIServer pid=3634068) INFO 12-17 15:54:23 [utils.py:253] non-default args: {'model_tag': 'Qwen/Qwen3-Coder-30B-A3B-Instruct-FP8', 'model': 'Qwen/Qwen3-Coder-30B-A3B-Instruct-FP8', 'max_model_len': 8192, 'enforce_eager': True}
(APIServer pid=3634068) INFO 12-17 15:54:23 [model.py:514] Resolved architecture: Qwen3MoeForCausalLM
(APIServer pid=3634068) INFO 12-17 15:54:23 [model.py:1661] Using max model len 8192
(APIServer pid=3634068) INFO 12-17 15:54:24 [scheduler.py:230] Chunked prefill is enabled with max_num_batched_tokens=8192.
(APIServer pid=3634068) WARNING 12-17 15:54:24 [vllm.py:622] Enforce eager set, overriding optimization level to -O0
(APIServer pid=3634068) INFO 12-17 15:54:24 [vllm.py:722] Cudagraph is disabled under eager mode
(EngineCore_DP0 pid=3634329) INFO 12-17 15:54:31 [core.py:93] Initializing a V1 LLM engine (v0.13.0rc2.dev185+g00a8d7628) with config: model='Qwen/Qwen3-Coder-30B-A3B-Instruct-FP8', speculative_config=None, tokenizer='Qwen/Qwen3-Coder-30B-A3B-Instruct-FP8', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=8192, download_dir=None, load_format=auto, tensor_parallel_size=1, pipeline_parallel_size=1, data_parallel_size=1, disable_custom_all_reduce=False, quantization=fp8, enforce_eager=True, kv_cache_dtype=auto, device_config=cuda, structured_outputs_config=StructuredOutputsConfig(backend='auto', disable_fallback=False, disable_any_whitespace=False, disable_additional_properties=False, reasoning_parser='', reasoning_parser_plugin='', enable_in_reasoning=False), observability_config=ObservabilityConfig(show_hidden_metrics_for_version=None, otlp_traces_endpoint=None, collect_detailed_traces=None, kv_cache_metrics=False, kv_cache_metrics_sample=0.01, cudagraph_metrics=False, enable_layerwise_nvtx_tracing=False), seed=0, served_model_name=Qwen/Qwen3-Coder-30B-A3B-Instruct-FP8, enable_prefix_caching=True, enable_chunked_prefill=True, pooler_config=None, compilation_config={'level': None, 'mode': <CompilationMode.NONE: 0>, 'debug_dump_path': None, 'cache_dir': '', 'compile_cache_save_format': 'binary', 'backend': 'inductor', 'custom_ops': ['+quant_fp8', 'all', '+quant_fp8'], 'splitting_ops': [], 'compile_mm_encoder': False, 'compile_sizes': [], 'compile_ranges_split_points': [8192], 'inductor_compile_config': {'enable_auto_functionalized_v2': False, 'combo_kernels': True, 'benchmark_combo_kernel': True}, 'inductor_passes': {}, 'cudagraph_mode': <CUDAGraphMode.NONE: 0>, 'cudagraph_num_of_warmups': 0, 'cudagraph_capture_sizes': [], 'cudagraph_copy_inputs': False, 'cudagraph_specialize_lora': True, 'use_inductor_graph_partition': False, 'pass_config': {'fuse_norm_quant': False, 'fuse_act_quant': False, 'fuse_attn_quant': False, 'eliminate_noops': False, 'enable_sp': False, 'fuse_gemm_comms': False, 'fuse_allreduce_rms': False}, 'max_cudagraph_capture_size': 0, 'dynamic_shapes_config': {'type': <DynamicShapesType.BACKED: 'backed'>, 'evaluate_guards': False}, 'local_cache_dir': None}
(EngineCore_DP0 pid=3634329) INFO 12-17 15:54:32 [parallel_state.py:1210] world_size=1 rank=0 local_rank=0 distributed_init_method=tcp://10.243.64.5:43323 backend=nccl
(EngineCore_DP0 pid=3634329) INFO 12-17 15:54:32 [parallel_state.py:1418] rank 0 in world size 1 is assigned as DP rank 0, PP rank 0, PCP rank 0, TP rank 0, EP rank 0
(EngineCore_DP0 pid=3634329) INFO 12-17 15:54:33 [gpu_model_runner.py:3620] Starting to load model Qwen/Qwen3-Coder-30B-A3B-Instruct-FP8...
(EngineCore_DP0 pid=3634329) INFO 12-17 15:54:33 [deep_gemm.py:76] DeepGEMM E8M0 enabled on current platform.
(EngineCore_DP0 pid=3634329) INFO 12-17 15:54:33 [cuda.py:351] Using FLASH_ATTN attention backend out of potential backends: ('FLASH_ATTN', 'FLASHINFER', 'TRITON_ATTN', 'FLEX_ATTENTION')
(EngineCore_DP0 pid=3634329) INFO 12-17 15:54:33 [layer.py:373] Enabled separate cuda str
|
https://github.com/vllm-project/vllm/issues/30882
|
closed
|
[
"bug",
"help wanted",
"good first issue"
] | 2025-12-17T15:55:18Z
| 2025-12-17T16:02:54Z
| 2
|
robertgshaw2-redhat
|
vllm-project/vllm
| 30,879
|
[Doc]: Add some documentation about encoder compilation
|
### The doc issue
I want something like a design doc for encoder compilation. For example:
- It uses support_torch_compile and set_model_tag to avoid cache collisions
- whether it supports the features that VllmBackend does (cudagraphs, compile_ranges), and a high-level explanation of how these are turned on or off
- it inherits from compilation_config (or maybe it doesn't)
- here's how to turn it on/off
I'm having a difficult time thinking through the edge cases in https://github.com/vllm-project/vllm/pull/30822 and https://github.com/vllm-project/vllm/pull/30489
cc @Lucaskabela
### Suggest a potential alternative/fix
_No response_
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/30879
|
open
|
[
"documentation",
"torch.compile"
] | 2025-12-17T15:44:50Z
| 2025-12-17T16:27:38Z
| 1
|
zou3519
|
vllm-project/vllm
| 30,865
|
[Usage]: Tools with GLM-4.6V and vLLM
|
### Your current environment
Hello,
I am running tests on this model, which I find excellent. However, I am encountering a few issues and would like to know whether it is possible to fix them or if I am simply asking for the impossible.
First of all, here is my vLLM configuration:
`docker run -d \ --name vllm-llm \ --gpus '"device=4,5,6,7"' \ -e NVIDIA_DRIVER_CAPABILITIES=compute,utility \ -e VLLM_OBJECT_STORAGE_SHM_BUFFER_NAME="${SHM_NAME}" \ -v /raid/workspace/qladane/vllm/hf-cache:/root/.cache/huggingface \ --env "HF_TOKEN=${HF_TOKEN:-}" \ -p 8003:8000 \ --ipc=host \ --restart unless-stopped \ vllm-openai:glm46v \ zai-org/GLM-4.6V-FP8 \ --tensor-parallel-size 4 \ --enforce-eager \ --served-model-name ImagineAI \ --allowed-local-media-path / \ --limit-mm-per-prompt '{"image": 1, "video": 0}' \ --max-model-len 131072 \ --dtype auto \ --kv-cache-dtype fp8 \ --gpu-memory-utilization 0.85 \ --reasoning-parser glm45 \ --tool-call-parser glm45 \ --enable-auto-tool-choice \ --enable-expert-parallel \ --mm-encoder-tp-mode data \ --mm-processor-cache-type shm`
Next, here is my OpenWebUI configuration:
<img width="1080" height="568" alt="Image" src="https://github.com/user-attachments/assets/af5ff9c0-9cdc-407f-8b0b-8e76a42746af" />
<img width="1080" height="394" alt="Image" src="https://github.com/user-attachments/assets/60fa32f9-2f54-4a75-8dc1-0ed00c69c4e5" />
<img width="1080" height="416" alt="Image" src="https://github.com/user-attachments/assets/783ba2e7-08e9-426a-a8be-9a2a561b2fe0" />
<img width="1080" height="357" alt="Image" src="https://github.com/user-attachments/assets/a7c30850-a680-401a-b149-5787000e7344" />
I would like to know whether, with GLM-4.6V and OpenWebUI, it is possible to make the model choose and execute tools autonomously when it considers them relevant.
At the moment:
If it is an internet search, I have to manually activate the button, even though access is already available.
If it is Python code, I have to click "execute"; it does not run it by itself, even though it clearly has access to Jupyter, etc.
Has anyone already encountered this issue?
Thank you very much in advance for your help.
Kind regards
### How would you like to use vllm
I want to run inference of a [specific model](put link here). I don't know how to integrate it with vllm.
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/30865
|
open
|
[
"usage"
] | 2025-12-17T10:51:34Z
| 2025-12-18T08:33:44Z
| 1
|
qBrabus
|
sgl-project/sglang
| 15,321
|
[Feature][VLM] Support ViT Piecewise CUDA Graph for VLMs
|
### Checklist
- [ ] If this is not a feature request but a general question, please start a discussion at https://github.com/sgl-project/sglang/discussions. Otherwise, it will be closed.
- [ ] Please use English. Otherwise, it will be closed.
### Motivation
Supporting ViT Piecewise CUDA Graph for VLMs can improve prefill performance.
- [x] Support ViT PCG Framework https://github.com/sgl-project/sglang/pull/14422
- [x] Support Qwen2.5-VL https://github.com/sgl-project/sglang/pull/14422
- [x] Support Qwen3-VL https://github.com/sgl-project/sglang/pull/15320
- [ ] Support InternVL
- [ ] Support GLM-4.1V
### Related resources
_No response_
|
https://github.com/sgl-project/sglang/issues/15321
|
open
|
[
"performance",
"Multi-modal",
"vlm"
] | 2025-12-17T09:17:18Z
| 2026-01-04T02:09:13Z
| 0
|
yuan-luo
|
vllm-project/vllm
| 30,859
|
[Bug]: set_current_vllm_config() is only done during the initialization stage but not the runtime stage
|
### Your current environment
Any env
### Describe the bug
# Issue Statement
Currently, `set_current_vllm_config()` is only done during the initialization stage but not the runtime stage. If the code tries to call `get_current_vllm_config()`, vLLM prints a warning "Current vLLM config is not set." and returns a default config.
However, this approach is problematic because:
1. When contributors change the code, many of us did not realize the fact that `get_current_vllm_config()` should only be called during init stage and should not be called during runtime stage.
2. It's just a warning instead of a hard failure, so contributors may not notice this when they run local tests.
3. Such warnings could be annoying to users because they may be printed for every single decoding step. Plus, the warning doesn't carry any useful info about how to fix/bypass the issue.
4. The default config may be completely incorrect for the caller function.
5. Warning prints on every step might impact performance, because printing isn't a fast operation. (thanks to @vadiklyutiy )
# Requirements
We should change the behavior such that:
- `get_current_vllm_config()` either returns the real config set by the user or raises an error if the config does not exist.
# Related Issues
This issue has appeared many times in the past. Although the fix is usually not difficult, it is an annoying recurring issue that we should avoid in the future to prevent wasted engineering effort.
- https://github.com/vllm-project/vllm/issues/13207
- https://github.com/vllm-project/vllm/pull/29999
- https://github.com/vllm-project/vllm/issues/30185
- https://github.com/vllm-project/vllm/issues/30240
- https://github.com/vllm-project/vllm/issues/30571
# Possible Solutions
## Solution A: `set_current_vllm_config()` for runtime stage as well
Such that `get_current_vllm_config()` is always available, regardless of init stage or runtime stage.
## Solution B: Convert the warning in `get_current_vllm_config()` to a hard failure
But this means we may need to fix lots of CI failures.
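For illustration, here is a generic sketch of the Solution B behavior (hard failure when unset) using a context variable; all names are hypothetical and this is not vLLM's actual implementation:
```python
# Generic sketch: a config context that raises instead of silently returning a
# default when nothing has been set.
from contextlib import contextmanager
from contextvars import ContextVar

_current_config: ContextVar[object | None] = ContextVar("current_config", default=None)

@contextmanager
def set_current_config(config: object):
    token = _current_config.set(config)
    try:
        yield
    finally:
        _current_config.reset(token)

def get_current_config() -> object:
    config = _current_config.get()
    if config is None:
        raise RuntimeError(
            "Current config is not set; wrap this call in set_current_config(...)."
        )
    return config
```
Solution A would instead amount to keeping such a context active for the runtime stage as well, so the getter never hits the unset branch.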
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/30859
|
open
|
[
"bug"
] | 2025-12-17T08:59:49Z
| 2025-12-22T18:09:55Z
| 7
|
nvpohanh
|
sgl-project/sglang
| 15,319
|
[Feature] RFC: AutoSpec, Automatic Runtime Speculative Inference Parameter Tuning
|
### Checklist
- [x] If this is not a feature request but a general question, please start a discussion at https://github.com/sgl-project/sglang/discussions. Otherwise, it will be closed.
- [x] Please use English. Otherwise, it will be closed.
### Motivation
## Summary
This proposal introduces automatic runtime tuning for speculative inference parameters in SGLang. Instead of requiring users to manually set speculative_num_steps, speculative_topk, and speculative_num_draft_tokens, the system dynamically adjusts them using a feedback-driven controller. This maximizes throughput while respecting hardware limits and draft model capabilities, without any manual configuration.
## Problem & Motivation
Currently, users of speculative inference in SGLang must manually tune several parameters:
- speculative_num_steps
- speculative_topk
- speculative_num_draft_tokens
<img width="377" height="249" alt="Image" src="https://github.com/user-attachments/assets/d13c9e61-e5be-4d9c-a245-d9b95cb299f6" />
From the graph, we see that throughput varies with speculative_num_steps and batchsize, and it also suggests that a well-tuned parameter configuration of speculative inference can increase throughput by 5%~50%. These findings suggest three current issues:
1. Trial-and-error overhead: finding optimal values per model/hardware/workload is tedious and often results in suboptimal performance.
2. Model capability mismatch: different draft models have different effective limits, but static parameters cannot adapt.
3. Batch-size sensitivity: the optimal number of speculative steps decreases as batch size grows, due to compute constraints.
A single fixed configuration cannot perform well across varying models, hardware, and batch sizes.
## Proposed Design
We propose a lightweight feedback controller that adjusts speculative_num_steps in real time based on runtime metrics. For simplicity and stability, we keep speculative_topk=1 and speculative_num_draft_tokens=speculative_num_steps+1 (following observed best practices).
### Core Architecture
The system monitors two metrics after each batch:
- Acceptance rate: ratio of accepted draft tokens.
- Acceptance length growth: how much accepted length changes when steps increase.
Using these, it applies the following simple rules:
1. Increase steps if:
- Acceptance rate is high (configurable, e.g., >= 0.6)
- Acceptance length grows sufficiently (exceeding a model-aware threshold)
- Hardware limits for the current batch size are not exceeded
2. Decrease steps if:
- Acceptance rate is low (e.g., <0.5)
3. Otherwise, keep steps unchanged.
This forms a stable negative-feedback loop that converges to a near-optimal step count for the current workload.
### Detailed Designs
#### Initialization Phase
During system startup, the following initialization sequence occurs:
1. **Computational Threshold Calculation**: For each possible batch size (1, 2, 4, 8, 16, 32, 64), compute the maximum allowable speculative steps given hardware constraints(thres_batchsize);
2. **Draft Model Ability Analysis**: (Optional) Assess draft model capabilities and establish maximum effective step boundaries. (This step is optional, parameters can be dynamically adjusted and saved during runtime.)
3. **Theoretical Threshold Establishment**: Calculate lower bound of theoretical accept length growth thresholds for different speculative step values.
#### Runtime Parameter Adjustment Logic
The adjustment algorithm implements a conservative approach to prevent oscillation:
<img width="1384" height="1484" alt="Image" src="https://github.com/user-attachments/assets/f31880cc-7374-4818-9fa7-6aa6f4d1ed91" />
```
For each batch run:
1. Collect metrics: acceptance_rate, acceptance_length_growth_rate
2. speculative_num_steps += 1 if (acceptance_length_growth_rate > thres_accept_length_growth_rate AND accept_rate >= thres_positive_accept_rate AND speculative_num_step+1<thres_batchsize_num_steps)
3. speculative_num_steps -= 1 if (acceptance_length_growth_rate <= thres_accept_length_growth_rate OR accept_rate < thres_negative_accept_rate)
4. speculative_num_steps remains unchanged otherwise
5. Update parameters and record accept length of current loop
```
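For illustration, a minimal, self-contained sketch of the feedback rule above; the parameter and threshold names (thres_positive, thres_negative, thres_growth, max_steps_for_batch) are hypothetical, not actual SGLang options:
```python
# Feedback controller sketch: adjust speculative_num_steps by +/-1 per batch
# based on acceptance metrics, clamped to the batch-size-dependent limit.
def adjust_num_steps(
    num_steps: int,
    accept_rate: float,
    accept_length_growth_rate: float,
    max_steps_for_batch: int,
    thres_positive: float = 0.6,
    thres_negative: float = 0.5,
    thres_growth: float = 0.05,
    min_steps: int = 1,
) -> int:
    if (
        accept_length_growth_rate > thres_growth
        and accept_rate >= thres_positive
        and num_steps + 1 <= max_steps_for_batch
    ):
        return num_steps + 1                    # rule 2: increase
    if accept_length_growth_rate <= thres_growth or accept_rate < thres_negative:
        return max(min_steps, num_steps - 1)    # rule 3: decrease
    return num_steps                            # rule 4: keep unchanged
```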
### Key Benefits
- Zero configuration: users no longer need to guess parameters.
- Adaptive: automatically adjusts to model pairs, hardware, and batch sizes.
- Performance-aware: maximizes throughput while avoiding overload.
- Backward compatible: manual configuration remains available.
## Command Line Arguments
| Argument | Description |
|----------|-------------|
| `--speculative-auto-tune` | Enable automatic tuning of speculative_num_steps (default: false) |
| `--speculative-min-steps` | Minimum speculative steps for dynamic adjustment (default: 1) |
| `--speculative-max-steps` | Maximum speculative steps for dynamic adjustment (default: 10) |
| `--speculative-positive-threshold` | Acceptance rate threshold for increasin
|
https://github.com/sgl-project/sglang/issues/15319
|
open
|
[] | 2025-12-17T08:53:57Z
| 2025-12-22T03:37:45Z
| 3
|
maodoudou168
|
vllm-project/vllm
| 30,855
|
[Usage]: Qwen3-30B-A3B-NVFP4 fails on Dell Pro Max GB10 with "no kernel image is available for execution on the device"
|
### Your current environment
```
Hardware: Dell Pro Max GB10
OS: Ubuntu 24
CUDA: cuda_13.0.r13.0
Cuda compilation tools, release 13.0, V13.0.88;
vllm: V0.12.0
torch_version: 2.9.0+cu128
model: RedHatAI/Qwen3-30B-A3B-NVFP4 or nvidia/Qwen3-30B-A3B-NVFP4 or nvidia/Qwen3-30B-A3B-FP4
```
### How would you like to use vllm
### I'm trying to run the quantized model RedHatAI/Qwen3-30B-A3B-NVFP4 using vLLM v0.12.0 on a Dell Pro Max GB10. However, I get the following error during model loading: torch.AcceleratorError: CUDA error: no kernel image is available for execution on the device
vllm serve RedHatAI/Qwen3-30B-A3B-NVFP4 --port 8002 --gpu-memory-utilization 0.7
(APIServer pid=731925) INFO 12-17 16:03:13 [api_server.py:1772] vLLM API server version 0.12.0
(APIServer pid=731925) INFO 12-17 16:03:13 [utils.py:253] non-default args: {'model_tag': 'RedHatAI/Qwen3-30B-A3B-NVFP4', 'port': 8002, 'model': 'RedHatAI/Qwen3-30B-A3B-NVFP4', 'gpu_memory_utilization': 0.7}
(APIServer pid=731925) Downloading Model from https://www.modelscope.cn to directory: /home/smc01/.cache/modelscope/hub/models/RedHatAI/Qwen3-30B-A3B-NVFP4
(APIServer pid=731925) Downloading Model from https://www.modelscope.cn to directory: /home/smc01/.cache/modelscope/hub/models/RedHatAI/Qwen3-30B-A3B-NVFP4
(APIServer pid=731925) Downloading Model from https://www.modelscope.cn to directory: /home/smc01/.cache/modelscope/hub/models/RedHatAI/Qwen3-30B-A3B-NVFP4
(APIServer pid=731925) INFO 12-17 16:03:17 [model.py:637] Resolved architecture: Qwen3MoeForCausalLM
(APIServer pid=731925) INFO 12-17 16:03:17 [model.py:1750] Using max model len 40960
(APIServer pid=731925) INFO 12-17 16:03:17 [scheduler.py:228] Chunked prefill is enabled with max_num_batched_tokens=2048.
(APIServer pid=731925) Downloading Model from https://www.modelscope.cn to directory: /home/smc01/.cache/modelscope/hub/models/RedHatAI/Qwen3-30B-A3B-NVFP4
(APIServer pid=731925) Downloading Model from https://www.modelscope.cn to directory: /home/smc01/.cache/modelscope/hub/models/RedHatAI/Qwen3-30B-A3B-NVFP4
(EngineCore_DP0 pid=732093) INFO 12-17 16:03:22 [core.py:93] Initializing a V1 LLM engine (v0.12.0) with config: model='RedHatAI/Qwen3-30B-A3B-NVFP4', speculative_config=None, tokenizer='RedHatAI/Qwen3-30B-A3B-NVFP4', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=40960, download_dir=None, load_format=auto, tensor_parallel_size=1, pipeline_parallel_size=1, data_parallel_size=1, disable_custom_all_reduce=False, quantization=compressed-tensors, enforce_eager=False, kv_cache_dtype=auto, device_config=cuda, structured_outputs_config=StructuredOutputsConfig(backend='auto', disable_fallback=False, disable_any_whitespace=False, disable_additional_properties=False, reasoning_parser='', reasoning_parser_plugin='', enable_in_reasoning=False), observability_config=ObservabilityConfig(show_hidden_metrics_for_version=None, otlp_traces_endpoint=None, collect_detailed_traces=None, kv_cache_metrics=False, kv_cache_metrics_sample=0.01), seed=0, served_model_name=RedHatAI/Qwen3-30B-A3B-NVFP4, enable_prefix_caching=True, enable_chunked_prefill=True, pooler_config=None, compilation_config={'level': None, 'mode': <CompilationMode.VLLM_COMPILE: 3>, 'debug_dump_path': None, 'cache_dir': '', 'compile_cache_save_format': 'binary', 'backend': 'inductor', 'custom_ops': ['none'], 'splitting_ops': ['vllm::unified_attention', 'vllm::unified_attention_with_output', 'vllm::unified_mla_attention', 'vllm::unified_mla_attention_with_output', 'vllm::mamba_mixer2', 'vllm::mamba_mixer', 'vllm::short_conv', 'vllm::linear_attention', 'vllm::plamo2_mamba_mixer', 'vllm::gdn_attention_core', 'vllm::kda_attention', 'vllm::sparse_attn_indexer'], 'compile_mm_encoder': False, 'compile_sizes': [], 'inductor_compile_config': {'enable_auto_functionalized_v2': False, 'combo_kernels': True, 'benchmark_combo_kernel': True}, 'inductor_passes': {}, 'cudagraph_mode': <CUDAGraphMode.FULL_AND_PIECEWISE: (2, 1)>, 'cudagraph_num_of_warmups': 1, 'cudagraph_capture_sizes': [1, 2, 4, 8, 16, 24, 32, 40, 48, 56, 64, 72, 80, 88, 96, 104, 112, 120, 128, 136, 144, 152, 160, 168, 176, 184, 192, 200, 208, 216, 224, 232, 240, 248, 256, 272, 288, 304, 320, 336, 352, 368, 384, 400, 416, 432, 448, 464, 480, 496, 512], 'cudagraph_copy_inputs': False, 'cudagraph_specialize_lora': True, 'use_inductor_graph_partition': False, 'pass_config': {'fuse_norm_quant': False, 'fuse_act_quant': False, 'fuse_attn_quant': False, 'eliminate_noops': True, 'enable_sp': False, 'fuse_gemm_comms': False, 'fuse_allreduce_rms': False}, 'max_cudagraph_capture_size': 512, 'dynamic_shapes_config': {'type': <DynamicShapesType.BACKED: 'backed'>}, 'local_cache_dir': None}
(EngineCore_DP0 pid=732093) /home/smc01/miniconda3/envs/vLLM_12/lib/python3.10/site-packages/torch/cuda/__init__.py:283: UserWarning:
(EngineCore_DP0 pid=732093) Found GPU0 NVIDIA GB10 which is of cuda capabilit
|
https://github.com/vllm-project/vllm/issues/30855
|
open
|
[
"usage"
] | 2025-12-17T08:44:11Z
| 2025-12-17T08:44:11Z
| 0
|
nanbogong
|
vllm-project/vllm
| 30,847
|
[Bug]: Qwen3-VL with Efficient Video Sampling (EVS): the number of tokens after each timestamp in the prompt is not aligned with the actual number of tokens after pruning
|
### Your current environment
<details>
vllm serve Qwen3-VL-8B --video-pruning-rate=0.75
messages=[
{
"role": "user",
"content": [
# {"type": "text", "text": "What's in this video?"},
{"type": "text", "text": "่ฟไธช่ง้ขๅๅพ็ๅๅซๆ่ฟฐ็ๆฏไปไนๅ
ๅฎน?"},
{
"type": "video_url",
"video_url": {
"url": "file:///codes/data/video/Tom_Jerry.mp4",
"fps": 1,
},
}
],
}
],
<summary>The output of <code>python collect_env.py</code></summary>
```text
The get_video_replacement_qwen3vl method in the qwen3-vl.py file:
First, it calculates the number of tokens per frame.
Second, it adds the specific timestamp <{cur_time:.1f} s> to the prompt and appends the calculated number of tokens after the timestamp.
At this point, the number of tokens per frame is derived from the pruning rate, so except for the first frame, the number of tokens after each frame stays the same (EVS is not used to compute the actual tokens here).
The EVS algorithm, however, reserves a different number of tokens for each frame, which causes the number of tokens after each timestamp to be inconsistent with the actual number of tokens after pruning.
```
</details>
### ๐ Describe the bug
1. get_video_replacement_qwen3vl
frames_idx_token=[165, 33, 33, 33, 33, 33, 33, 33, 33, 33, 33, 33, 33, 33, 33, 33]
<img width="989" height="359" alt="Image" src="https://github.com/user-attachments/assets/aeb59c9e-b970-4a2c-851f-a001c8610f0a" />
2. compute_retention_mask
<img width="1058" height="160" alt="Image" src="https://github.com/user-attachments/assets/d96f2ce9-1ed2-4d5d-b5ac-28c6394d8ee0" />
<img width="1042" height="194" alt="Image" src="https://github.com/user-attachments/assets/2bc35d99-bbbb-4437-84fb-755567e97b40" />
3. embed_input_ids
<img width="988" height="421" alt="Image" src="https://github.com/user-attachments/assets/2f8fcd5b-9fef-49d5-b331-dd4f1f1d7335" />
input_ids:
<img width="886" height="624" alt="Image" src="https://github.com/user-attachments/assets/a62fcfe8-f29e-4e1d-a8fb-dec91c5e75e7" />
From items 1 and 3 above, it can be seen that the data in frames_idx_token is the same as in embed_input_ids:
the first frame contains 165 tokens, while the rest contain 33 tokens each.
151656 is the ID of the video token, so the count of 151656 gives the number of video tokens; the sum of the video token counts over all frames matches the sum of frames_idx_token.
Regarding item 2: in compute_retention_mask (the EVS-pruned mask), the first frame has 165 tokens, while the other frames have differing token counts.
Based on 1, 2, and 3 above, it can be concluded that the current implementation of the EVS pruning algorithm has a problem:
the number of tokens after each timestamp in the prompt does not match the actual number of tokens that should be retained after EVS pruning.
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/30847
|
open
|
[
"bug"
] | 2025-12-17T06:46:15Z
| 2026-01-04T07:39:17Z
| 5
|
xshqhua
|
vllm-project/vllm
| 30,832
|
[Performance]: DeepSeek-V3.2 on 8xH20 30 decode tokens/sec
|
### Proposal to improve performance
**My Env:**
vllm 0.13.0rc2.dev178+g676db55ee
deep_gemm 2.1.1+c9f8b34
CUDA 12.9
Python 3.10.18
**Command** is the same as:
vllm serve mypath/DeepSeek-V3.2 \
--tensor-parallel-size 8 \
--tokenizer-mode deepseek_v32 \
--tool-call-parser deepseek_v32 \
--enable-auto-tool-choice \
--reasoning-parser deepseek_v3
**My Question:**
The output throughput is 30 tokens/s per request, which is slower than expected based on https://docs.vllm.ai/projects/recipes/en/latest/DeepSeek/DeepSeek-V3_2.html#benchmarking:
Is there anything wrong with this?
------------------------------------------------
[Benchmarking](https://docs.vllm.ai/projects/recipes/en/latest/DeepSeek/DeepSeek-V3_2.html#benchmarking)
We used the following script to benchmark deepseek-ai/DeepSeek-V3.2 on 8xH20.
vllm bench serve \
--model deepseek-ai/DeepSeek-V3.2 \
--dataset-name random \
--random-input 2048 \
--random-output 1024 \
--request-rate 10 \
--num-prompt 100 \
--trust-remote-code
[TP8 Benchmark Output](https://docs.vllm.ai/projects/recipes/en/latest/DeepSeek/DeepSeek-V3_2.html#tp8-benchmark-output)
============ Serving Benchmark Result ============
Successful requests: 100
Failed requests: 0
Request rate configured (RPS): 10.00
Benchmark duration (s): 129.34
Total input tokens: 204800
Total generated tokens: 102400
Request throughput (req/s): 0.77
Output token throughput (tok/s): 791.73
Peak output token throughput (tok/s): 1300.00
Peak concurrent requests: 100.00
Total Token throughput (tok/s): 2375.18
---------------Time to First Token----------------
Mean TTFT (ms): 21147.20
Median TTFT (ms): 21197.97
P99 TTFT (ms): 41133.00
-----Time per Output Token (excl. 1st token)------
Mean TPOT (ms): 99.71
Median TPOT (ms): 99.25
P99 TPOT (ms): 124.28
---------------Inter-token Latency----------------
Mean ITL (ms): 99.71
Median ITL (ms): 76.89
P99 ITL (ms): 2032.37
==================================================
### Report of performance regression
_No response_
### Misc discussion on performance
_No response_
### Your current environment (if you think it is necessary)
```text
The output of `python collect_env.py`
```
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/30832
|
open
|
[
"performance"
] | 2025-12-17T03:08:52Z
| 2025-12-18T08:01:30Z
| 1
|
lisp2025
|
pytorch/pytorch
| 170,635
|
Use cvt.rp.satfinite.ue8m0x2.f32 PTX instruction in Inductor codegen for mxfp8 quantization
|
## Summary
For MXFP8 quantization, NVIDIA recommends using the "RCEIL" rounding mode to convert a fp32 scale factor to the e8m0 format for MXFP8. On Blackwell/sm100, they support a PTX instruction to convert fp32 scales to the e8m0 format for MXFP8 using a single instruction, rather than several operations: `cvt.rp.satfinite.ue8m0x2.f32`
In torchao, for RCEIL rounding mode in MXFP8 quantization, we use this with inline PTX. Examples:
- https://github.com/pytorch/ao/pull/3498
- https://github.com/pytorch/ao/blob/85557135c93d3429320a4a360c0ee9cb49f84a00/torchao/csrc/cuda/mx_kernels/mxfp8_quantize.cuh#L211
However, our [torch native to_mx() function does not yet support this](https://github.com/pytorch/ao/blob/b9e5780b56088daaf01d4fa3d4828efc4868cbed/torchao/prototype/mx_formats/mx_tensor.py#L106).
Would it be possible for Inductor codegen to pattern match this and codegen using the PTX instruction above? Or is there an alternate approach we should consider? Thanks!
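For reference, a rough PyTorch sketch of what RCEIL-to-e8m0 means numerically (the smallest power of two >= the fp32 scale, saturated and biased); this is illustrative only, not the proposed Inductor codegen or the torchao kernel:
```python
import torch

def fp32_scale_to_e8m0_rceil(scale: torch.Tensor) -> torch.Tensor:
    """Illustrative reference: convert positive fp32 scales to biased e8m0
    exponents using ceiling rounding, saturated to the finite range."""
    exp = torch.ceil(torch.log2(scale))        # RCEIL: round exponent up
    exp = torch.clamp(exp, min=-127, max=127)  # satfinite: stay in e8m0 range
    return (exp + 127).to(torch.uint8)         # biased e8m0 encoding

print(fp32_scale_to_e8m0_rceil(torch.tensor([0.75, 1.0, 3.0])))
# tensor([127, 127, 129], dtype=torch.uint8)
```
The PTX instruction performs this conversion for a pair of fp32 values in a single step, which is what a pattern match over the sequence of ops above would replace.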
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @kadeng @muchulee8 @amjames @aakhundov @coconutruben @jataylo
|
https://github.com/pytorch/pytorch/issues/170635
|
open
|
[
"triaged",
"oncall: pt2",
"module: inductor",
"module: floatx (formerly float8)"
] | 2025-12-17T02:03:40Z
| 2025-12-19T09:36:51Z
| 0
|
danielvegamyhre
|
pytorch/pytorch
| 170,604
|
CUDAGraph capturing of iterating the same function/module (outside and inside fullgraph)
|
### ๐ Describe the bug
The example from https://docs.pytorch.org/docs/stable/torch.compiler_cudagraph_trees.html#limitations throws an error as warned in the docs:
```
RuntimeError: Error: accessing tensor output of CUDAGraphs that has been overwritten by a subsequent run. Stack trace: File ".../bug.py", line 7, in my_model
y = torch.matmul(x, x). To prevent overwriting, clone the tensor outside of torch.compile() or call torch.compiler.cudagraph_mark_step_begin() before each model invocation.
```
```python
import torch
@torch.compile(mode="reduce-overhead")
def my_model(x):
y = torch.matmul(x, x)
return y
x = torch.randn(10, 10, device="cuda")
y1 = my_model(x)
y2 = my_model(x)
print(y1)
```
The docs suggest that `torch.compiler.cudagraph_mark_step_begin()` can be used, but
```python
torch.compiler.cudagraph_mark_step_begin()
y1 = my_model(x)
torch.compiler.cudagraph_mark_step_begin()
y2 = my_model(x)
```
produces anyway:
```
RuntimeError: Error: accessing tensor output of CUDAGraphs that has been overwritten by a subsequent run. Stack trace: File ".../bug.py", line 7, in my_model
y = torch.matmul(x, x). To prevent overwriting, clone the tensor outside of torch.compile() or call torch.compiler.cudagraph_mark_step_begin() before each model invocation.
```
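For reference, the clone-based workaround that the error message itself suggests should avoid the error in this example (a minimal sketch):
```python
import torch

@torch.compile(mode="reduce-overhead")
def my_model(x):
    return torch.matmul(x, x)

x = torch.randn(10, 10, device="cuda")
# Clone the first output so it is no longer backed by the CUDA graph's static
# output buffer before the next replay overwrites it.
y1 = my_model(x).clone()
y2 = my_model(x)
print(y1)
```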
---
And more importantly, how can one do several invocations of the same model inside a fullgraph capture and make it work with CUDAGraph/reduce-overhead? I've tried placing the call `torch.compiler.cudagraph_mark_step_begin()` inside the fullgraph'd region, but it throws with a forced graph break:
```
torch._dynamo.exc.Unsupported: Attempted to call function marked as skipped
Explanation: Dynamo developers have intentionally marked that the function `cudagraph_mark_step_begin` in file `.../.venv/lib/python3.12/site-packages/torch/compiler/__init__.py` should not be traced.
Hint: Avoid calling the function `cudagraph_mark_step_begin`.
Hint: Apply `@torch._dynamo.dont_skip_tracing` to the function `cudagraph_mark_step_begin` to force tracing into the function. More graph breaks may occur as a result of attempting to trace into the function.
Hint: Please file an issue to PyTorch.
Developer debug context: module: torch.compiler, qualname: cudagraph_mark_step_begin, skip reason: <missing reason>
For more details about this graph break, please visit: https://meta-pytorch.github.io/compile-graph-break-site/gb/gb0007.html
```
### Versions
2.9.1
cc @ptrblck @msaroufim @eqy @jerryzh168 @tinglvv @nWEIdia @mcarilli @ezyang @eellison @penguinwu @BoyuanFeng
|
https://github.com/pytorch/pytorch/issues/170604
|
open
|
[
"module: cuda",
"triaged",
"module: cuda graphs"
] | 2025-12-16T22:07:19Z
| 2025-12-17T05:01:56Z
| 0
|
vadimkantorov
|
huggingface/candle
| 3,247
|
Parakeet V3 support?
|
Any plans to support Parakeet V3 by any chance? Thank you!
|
https://github.com/huggingface/candle/issues/3247
|
open
|
[] | 2025-12-16T19:05:33Z
| 2025-12-16T19:05:33Z
| 0
|
mobicham
|
vllm-project/vllm
| 30,798
|
[Usage]: vllm offline server lora model
|
### Your current environment
Hi team,
I have a question about deploying LoRA models with a vLLM offline server.
Currently, we have a base model **A**. After LoRA training, we obtain adapter parameters **P**. When we serve model A with vLLM (offline server) and enable LoRA, we can select either the **base model A** or **A + P** (LoRA adapter) from the `/v1/models` list for inference.
Based on this, suppose we **merge A and P** into a new merged model **B = A + P**, and then continue LoRA training on top of **B** to obtain another LoRA adapter **Q**.
Is there a way to deploy on a single vLLM server such that the models list allows choosing among these three options for inference?
1. **A**
2. **A + P**
3. **A + P + Q**
If vLLM cannot directly stack LoRA adapters (P then Q) at runtime, is there a recommended approach to **combine P and Q** into a new equivalent adapter (e.g., a single LoRA adapter **R**) that is functionally equivalent to **A + P + Q**, ideally in a way that is **equivalent to training a LoRA adapter directly on base A**?
Thanks a lot for your help!
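For reference, the kind of single-server deployment I have in mind, using the offline API (a rough sketch; `adapter_R` is a hypothetical merged adapter standing in for P + Q, and whether such a merge can be made exactly equivalent is the question above):
```python
# Sketch: one base model A served with multiple LoRA adapters selectable per request.
from vllm import LLM, SamplingParams
from vllm.lora.request import LoRARequest

llm = LLM(model="path/to/base_model_A", enable_lora=True, max_loras=2)
params = SamplingParams(max_tokens=64)

out_a = llm.generate(["Hello"], params)  # base model A
out_ap = llm.generate(
    ["Hello"], params, lora_request=LoRARequest("P", 1, "path/to/adapter_P")
)  # A + P
out_apq = llm.generate(
    ["Hello"], params, lora_request=LoRARequest("R", 2, "path/to/adapter_R")
)  # intended to behave like A + P + Q
```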
---
### How would you like to use vllm
I want to run inference of a [specific model](put link here). I don't know how to integrate it with vllm.
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/30798
|
open
|
[
"usage"
] | 2025-12-16T16:38:49Z
| 2025-12-18T11:52:39Z
| 4
|
zapqqqwe
|
sgl-project/sglang
| 15,266
|
Multi-Adapter Support for Embed Qwen3 8B Embedding Model
|
### Checklist
- [x] If this is not a feature request but a general question, please start a discussion at https://github.com/sgl-project/sglang/discussions. Otherwise, it will be closed.
- [x] Please use English. Otherwise, it will be closed.
### Motivation
Hi Team, do we currently have multi-adapter (LoRA) support for embedding models, specifically the Qwen3 8B Embedding model? If not, when can we expect it? Thanks :)
### Related resources
I'm training the model for three different tasks using separate LoRA adapters and need to deploy it with one base model and the three adapters.
This is similar to how [Jina v4](https://huggingface.co/jinaai/jina-embeddings-v4) Embedding model has task specific adapters.
My adapter config looks like this -
```
{
"alpha_pattern": {},
"auto_mapping": null,
"base_model_name_or_path": "/temp/local-ssd/models/Qwen3-Embedding-8B",
"bias": "none",
"corda_config": null,
"eva_config": null,
"exclude_modules": null,
"fan_in_fan_out": false,
"inference_mode": true,
"init_lora_weights": true,
"layer_replication": null,
"layers_pattern": null,
"layers_to_transform": null,
"loftq_config": {},
"lora_alpha": 128,
"lora_bias": false,
"lora_dropout": 0.1,
"megatron_config": null,
"megatron_core": "megatron.core",
"modules_to_save": [
"classifier",
"score",
"classifier",
"score"
],
"peft_type": "LORA",
"r": 32,
"rank_pattern": {},
"revision": null,
"target_modules": [
"gate_proj",
"k_proj",
"up_proj",
"q_proj",
"down_proj",
"v_proj",
"o_proj"
],
"task_type": "SEQ_CLS",
"trainable_token_indices": null,
"use_dora": false,
"use_rslora": false
}
```
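For reference, the kind of launch I'm hoping for would look something like the sketch below (paths are placeholders, and I'm not sure the embedding path supports `--lora-paths` at all, which is exactly my question):
```bash
# Hypothetical: one embedding base model with three task-specific adapters.
python -m sglang.launch_server \
  --model-path /temp/local-ssd/models/Qwen3-Embedding-8B \
  --is-embedding \
  --lora-paths task_a=/path/to/adapter_a task_b=/path/to/adapter_b task_c=/path/to/adapter_c
```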
|
https://github.com/sgl-project/sglang/issues/15266
|
open
|
[] | 2025-12-16T14:14:16Z
| 2025-12-16T14:14:22Z
| 0
|
dawnik17
|
vllm-project/vllm
| 30,776
|
[Usage]: Qwen3-omni's offline usage
|
### Your current environment
I used the code below with vllm==0.12.0, but it failed.
```
import os
import torch
from vllm import LLM, SamplingParams
from transformers import Qwen3OmniMoeProcessor
from qwen_omni_utils import process_mm_info
def build_input(processor, messages, use_audio_in_video):
text = processor.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
)
# print(text[0])
# print(len(text[0]))
audios, images, videos = process_mm_info(messages, use_audio_in_video=use_audio_in_video)
inputs = {
'prompt': text,
'multi_modal_data': {},
"mm_processor_kwargs": {
"use_audio_in_video": use_audio_in_video,
},
}
if images is not None:
inputs['multi_modal_data']['image'] = images
if videos is not None:
inputs['multi_modal_data']['video'] = videos
if audios is not None:
inputs['multi_modal_data']['audio'] = audios
return inputs
if __name__ == '__main__':
# vLLM engine v1 not supported yet
os.environ['VLLM_USE_V1'] = '1'
os.environ['CUDA_DEVICES'] = '0,1,2,3,4,5,6,7'
MODEL_PATH = "Qwen3-Omni-30B-A3B-Instruct"
llm = LLM(
model=MODEL_PATH, trust_remote_code=True, gpu_memory_utilization=0.95,
tensor_parallel_size=1,
limit_mm_per_prompt={'image': 3, 'video': 3, 'audio': 3},
max_num_seqs=8,
max_model_len=32768,
seed=17114,
)
sampling_params = SamplingParams(
temperature=0.6,
top_p=0.95,
top_k=20,
max_tokens=16384,
)
processor = Qwen3OmniMoeProcessor.from_pretrained(MODEL_PATH)
conversation1 = [
{
"role": "user",
"content": [
{
"type": "video",
"video": "1.mp4",
"fps": 6,
}
],
}
]
USE_AUDIO_IN_VIDEO = True
# Combine messages for batch processing
conversations = [conversation1]
inputs = [build_input(processor, messages, USE_AUDIO_IN_VIDEO) for messages in conversations]
# print(inputs[0])
outputs = llm.generate(inputs, sampling_params=sampling_params)
for i in range(len(outputs)):
print("\n\n==========\n")
print(outputs[i])
```
The error
```
Traceback (most recent call last):
File "/sft-qwen3-omni/vllm_inference.py", line 44, in <module>
llm = LLM(
^^^^
File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/llm.py", line 334, in __init__
self.llm_engine = LLMEngine.from_engine_args(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/llm_engine.py", line 183, in from_engine_args
return cls(
^^^^
File "/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/llm_engine.py", line 109, in __init__
self.engine_core = EngineCoreClient.make_client(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/core_client.py", line 93, in make_client
return SyncMPClient(vllm_config, executor_class, log_stats)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/core_client.py", line 642, in __init__
super().__init__(
File "/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/core_client.py", line 471, in __init__
with launch_core_engines(vllm_config, executor_class, log_stats) as (
File "/usr/lib/python3.12/contextlib.py", line 144, in __exit__
next(self.gen)
File "/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/utils.py", line 903, in launch_core_engines
wait_for_engine_startup(
File "/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/utils.py", line 960, in wait_for_engine_startup
raise RuntimeError(
RuntimeError: Engine core initialization failed. See root cause above. Failed core proc(s): {'EngineCore_DP0': 1}
[root:]$ python sft-qwen3-omni/vllm_inference.py
[2025-12-16 12:25:00] INFO vision_process.py:42: set VIDEO_TOTAL_PIXELS: 90316800
INFO 12-16 12:25:00 [utils.py:253] non-default args: {'trust_remote_code': True, 'seed': 17114, 'max_model_len': 32768, 'gpu_memory_utilization': 0.95, 'max_num_seqs': 8, 'disable_log_stats': True, 'limit_mm_per_prompt': {'image': 3, 'video': 3, 'audio': 3}, 'model': 'Qwen3-Omni-30B-A3B-Instruct'}
The argument `trust_remote_code` is to be used with Auto classes. It has no effect here and is ignored.
Unrecognized keys in `rope_scaling` for 'rope_type'='default': {'mrope_interleaved', 'interleaved', 'mrope_section'}
Unrecognized keys in `rope_scaling` for 'rope_type'='default': {'interleaved', 'mrope_section'}
INFO 12-16 12:25:00 [model.py:637] Resolved architecture: Qwen3OmniMoeForConditionalGeneration
INFO 12-16 12:25:00 [model.py:1750] Using max model len 32768
INFO 12-16 12:25:00 [scheduler.py:228] Chun
|
https://github.com/vllm-project/vllm/issues/30776
|
open
|
[
"bug",
"usage"
] | 2025-12-16T12:30:18Z
| 2025-12-17T17:03:34Z
| 50
|
Auraithm
|
sgl-project/sglang
| 15,260
|
SGLang installs newer PyTorch automatically: is there an official SGLang ↔ PyTorch compatibility guide?
|
Hi SGLang team, thank you for the great project!
I have a question regarding **PyTorch version compatibility and installation**.
Currently, the recommended installation command from the website is:
```bash
uv pip install "sglang" --prerelease=allow
```
However, when using this command, `pip/uv` automatically upgrades PyTorch to the latest version (e.g., torch 2.9.1).
In my environment, I am intentionally pinned to **torch 2.8.x** and would prefer not to upgrade.
At the moment, it's not clear:
* Which **SGLang versions are compatible with which PyTorch versions**
* Whether older SGLang releases are expected to work with torch 2.8
* What the recommended installation approach is for users who need to keep a specific torch version
### **Questions**
1. Is there an **official or recommended SGLang ↔ PyTorch compatibility matrix**?
2. For users pinned to torch 2.8.x, which SGLang version is recommended?
3. Is it safe to install SGLang with `--no-deps` or a constraints file to prevent torch upgrades? (Concrete example below.)
4. Would it be possible to document supported torch versions in the release notes or README?
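To make question 3 concrete, this is the kind of setup I mean (unverified whether SGLang's dependency metadata will actually resolve this way):
```bash
# Pin the packages the resolver must not move.
echo "torch==2.8.0" > constraints.txt

# Install SGLang while honoring the pins via uv's constraint mechanism.
uv pip install "sglang" --prerelease=allow --constraint constraints.txt
```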
### **Why this matters**
Many users run SGLang in **production or CUDA-pinned environments**, where upgrading PyTorch is non-trivial. Clear guidance would help avoid dependency conflicts and accidental upgrades.
Thanks again for your work; any guidance would be greatly appreciated!
|
https://github.com/sgl-project/sglang/issues/15260
|
open
|
[] | 2025-12-16T12:27:59Z
| 2025-12-16T12:27:59Z
| 0
|
David-19940718
|
vllm-project/vllm
| 30,757
|
[Performance]: Async sched: Why return AsyncGPUModelRunnerOutput only after sample_tokens
|
### Proposal to improve performance
Why is AsyncGPUModelRunnerOutput returned only after sample_tokens, not immediately after execute_model?
https://github.com/vllm-project/vllm/blob/0d0c929f2360cde5bae6817ad0f555641329e79d/vllm/v1/engine/core.py#L420-L422
If we defer returning AsyncGPUModelRunnerOutput until after sampling, there's a high chance that the async future completes immediately, because `AsyncGPUModelRunnerOutput.get_output` is a really light workload. As a result, the batch_queue size may effectively remain at 1, preventing overlap between the model forward pass and scheduling of the next batch.
https://github.com/vllm-project/vllm/blob/0d0c929f2360cde5bae6817ad0f555641329e79d/vllm/v1/engine/core.py#L430-L438
### Report of performance regression
_No response_
### Misc discussion on performance
_No response_
### Your current environment (if you think it is necessary)
```text
The output of `python collect_env.py`
```
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/30757
|
open
|
[
"performance"
] | 2025-12-16T08:26:08Z
| 2025-12-16T08:26:49Z
| 0
|
iwzbi
|
pytorch/executorch
| 16,271
|
Android: load model from assets
|
### 🚀 The feature, motivation and pitch
It is simple: there is no way to read a model directly from assets. Assets are files bundled inside Android apps.
The assets are not handled the same way as regular files -- they can be accessed only through [assets manager](https://developer.android.com/reference/kotlin/android/content/res/AssetManager.html).
### Alternatives
As a workaround, the model can be loaded from assets and stored into a regular file, which is then read by ExecuTorch:
```kotlin
import java.io.File
import java.io.FileOutputStream
import org.pytorch.executorch.Module

// Copy the bundled asset into app-private storage so it has a real file path.
val file = File(context.filesDir, modelName)
context.assets.open(modelAssetsPath).use { inputStream ->
    FileOutputStream(file).use { outputStream ->
        inputStream.copyTo(outputStream)
    }
}
// Here we can initialize the model from the copied file.
val model = Module.load(file.absolutePath)
```
### Additional context
There are generally two ways this could look:
- initialization from a `ByteArray` (Kotlin) / `byte[]` (Java), like in ONNX Runtime
- directly from the assets, using the asset path and the assets manager as parameters, as done in LiteRT/TFLite.
### RFC (Optional)
_No response_
|
https://github.com/pytorch/executorch/issues/16271
|
open
|
[] | 2025-12-16T03:40:03Z
| 2025-12-17T21:10:12Z
| 2
|
Bludator
|
vllm-project/vllm
| 30,736
|
[Bug] DCP/DBO: 'NoneType' error building attention_metadata during DeepSeek-V3.1 deployment dummy run
|
### Your current environment
```bash
==============================
System Info
==============================
OS : Ubuntu 22.04.5 LTS (x86_64)
GCC version : (Ubuntu 11.4.0-1ubuntu1~22.04.2) 11.4.0
Clang version : Could not collect
CMake version : Could not collect
Libc version : glibc-2.35
==============================
PyTorch Info
==============================
PyTorch version : 2.10.0a0+git9166f61
Is debug build : False
CUDA used to build PyTorch : 12.9
ROCM used to build PyTorch : N/A
==============================
Python Environment
==============================
Python version : 3.10.19 | packaged by conda-forge | (main, Oct 22 2025, 22:29:10) [GCC 14.3.0] (64-bit runtime)
Python platform : Linux-5.15.0-124-generic-x86_64-with-glibc2.35
==============================
CUDA / GPU Info
==============================
Is CUDA available : True
CUDA runtime version : 12.9.86
CUDA_MODULE_LOADING set to :
GPU models and configuration :
GPU 0 : NVIDIA H200
GPU 1 : NVIDIA H200
GPU 2 : NVIDIA H200
GPU 3 : NVIDIA H200
GPU 4 : NVIDIA H200
GPU 5 : NVIDIA H200
GPU 6 : NVIDIA H200
GPU 7 : NVIDIA H200
Nvidia driver version : 570.124.06
cuDNN version : Could not collect
HIP runtime version : N/A
MIOpen runtime version : N/A
Is XNNPACK available : True
==============================
vLLM Info
==============================
ROCM Version : Could not collect
vLLM Version : 0.11.1rc4.dev1340+gd08981aba.d20251215 (git sha: d08981aba, date: 20251215)
vLLM Build Flags:
CUDA Archs : 9.0
ROCm : Disabled
```
### 🐛 Describe the bug
When starting vllm serve with the command below, it fails during the final dummy run step and does not start successfully.
Startup Command:
```bash
vllm serve deepseek-ai/DeepSeek-V3.1-Terminus \
--enable-dbo \
--stream-interval 10 \
--api-server-count 2 \
--max-num-batched-tokens 32768 \
--max-num-seqs 256 \
--long-prefill-token-threshold 16384 \
--scheduling-policy fcfs \
--data-parallel-size 2 \
--data-parallel-size-local 2 \
--tensor-parallel-size 4 \
--decode-context-parallel-size 4 \
--data-parallel-backend mp \
--distributed-executor-backend mp \
--enable-expert-parallel \
--all2all-backend deepep_low_latency \
--max-model-len 131072 \
--gpu-memory-utilization 0.8 \
--quantization "fp8" \
--trust-remote-code \
--enable-auto-tool-choice \
--tool-call-parser "deepseek_v31" \
--chat-template dpsk-v3.1-tool-parser-vllm.jinja \
--host ${HOST} \
--port ${PORT} \
```
Error Output:
```bash
(Worker_DP1_TP0_DCP0_EP4 pid=479) ERROR 12-15 10:54:08 [multiproc_executor.py:822] WorkerProc hit an exception.
(Worker_DP1_TP0_DCP0_EP4 pid=479) ERROR 12-15 10:54:08 [multiproc_executor.py:822] Traceback (most recent call last):
(Worker_DP1_TP0_DCP0_EP4 pid=479) ERROR 12-15 10:54:08 [multiproc_executor.py:822] File "/home/jovyan/rl/.pixi/envs/infer/lib/python3.12/site-packages/vllm/v1/executor/multiproc_executor.py", line 817, in worker_busy_loop
(Worker_DP1_TP0_DCP0_EP4 pid=479) ERROR 12-15 10:54:08 [multiproc_executor.py:822] output = func(*args, **kwargs)
(Worker_DP1_TP0_DCP0_EP4 pid=479) ERROR 12-15 10:54:08 [multiproc_executor.py:822] ^^^^^^^^^^^^^^^^^^^^^
(Worker_DP1_TP0_DCP0_EP4 pid=479) ERROR 12-15 10:54:08 [multiproc_executor.py:822] File "/home/jovyan/rl/.pixi/envs/infer/lib/python3.12/site-packages/vllm/v1/worker/gpu_worker.py", line 448, in compile_or_warm_up_model
(Worker_DP1_TP0_DCP0_EP4 pid=479) ERROR 12-15 10:54:08 [multiproc_executor.py:822] cuda_graph_memory_bytes = self.model_runner.capture_model()
(Worker_DP1_TP0_DCP0_EP4 pid=479) ERROR 12-15 10:54:08 [multiproc_executor.py:822] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(Worker_DP1_TP0_DCP0_EP4 pid=479) ERROR 12-15 10:54:08 [multiproc_executor.py:822] File "/home/jovyan/rl/.pixi/envs/infer/lib/python3.12/site-packages/vllm/v1/worker/gpu_model_runner.py", line 4541, in capture_model
(Worker_DP1_TP0_DCP0_EP4 pid=479) ERROR 12-15 10:54:08 [multiproc_executor.py:822] self._capture_cudagraphs(
(Worker_DP1_TP0_DCP0_EP4 pid=479) ERROR 12-15 10:54:08 [multiproc_executor.py:822] File "/home/jovyan/rl/.pixi/envs/infer/lib/python3.12/site-packages/vllm/v1/worker/gpu_model_runner.py", line 4615, in _capture_cudagraphs
(Worker_DP1_TP0_DCP0_EP4 pid=479) ERROR 12-15 10:54:08 [multiproc_executor.py:822] self._dummy_run(
(Worker_DP1_TP0_DCP0_EP4 pid=479)
|
https://github.com/vllm-project/vllm/issues/30736
|
open
|
[
"bug",
"help wanted"
] | 2025-12-16T03:07:59Z
| 2025-12-22T17:11:48Z
| 3
|
Butterfingrz
|
huggingface/transformers.js
| 1,487
|
License clarification for some of the converted models
|
### Question
Hello!
I want to use [Xenova/whisper-small](https://huggingface.co/Xenova/whisper-small) and [Xenova/UAE-Large-V1](https://huggingface.co/Xenova/UAE-Large-V1) in a project, but I noticed that these model cards on Hugging Face do not have a license specified in their metadata or README.
Since the original weights from OpenAI and WhereIsAI are licensed, I assume these converted ONNX versions are intended to follow the same or similar open-source licenses. Could you please clarify:
- Are these models safe to use for commercial/personal projects?
- Is it possible to update the model cards to explicitly include the license tag?
Thanks again!
|
https://github.com/huggingface/transformers.js/issues/1487
|
closed
|
[
"question"
] | 2025-12-16T00:27:16Z
| 2025-12-16T19:13:09Z
| null |
rmahdav
|
vllm-project/vllm
| 30,722
|
[Bug]: llama4_pythonic tool parser fails with SyntaxError on nested list parameters
|
### Your current environment
I don't have direct access to the cluster the model is running in. But it's running on 8x H100 GPUs using TP 8, expert parallel.
This is the fp8 model from Huggingface.
These are the vllm serve args I'm using:
VLLM Version: 0.11.0
```
--port 8002
--model /config/models/maverick
--device cuda
--tensor-parallel-size 8
--disable-log-requests
--max-num-batched-tokens 16000
--served-model-name 'llama-4-maverick-17b-128e-instruct'
--limit-mm-per-prompt image=50
--kv-cache-dtype fp8
--trust-remote-code
--enable-auto-tool-choice
--enable-chunked-prefill true
--enable-prefix-caching
--tool-call-parser llama4_pythonic
--enable-expert-parallel
--chat-template examples/tool_chat_template_llama4_pythonic.jinja
--override-generation-config '{\"attn_temperature_tuning\": true}'
--max-model-len 1000000
```
### 🐛 Describe the bug
### Description
The `llama4_pythonic` tool parser intermittently fails to parse valid tool calls, resulting in:
1. `SyntaxError` from `ast.parse()` when model output is malformed (missing closing `]`)
2. Valid pythonic syntax returned as `content` instead of being parsed into `tool_calls`
### Reproduction
**Minimal curl (run 10+ times to observe intermittent failure):**
```bash
curl -X POST https://your-vllm-endpoint/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "llama-4-maverick-17b-128e-instruct",
"messages": [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "how do I enroll in benefits?"}
],
"tools": [{
"type": "function",
"function": {
"name": "enterprise_search",
"description": "Search enterprise knowledge base",
"parameters": {
"type": "object",
"properties": {
"query": {"type": "string"},
"rephrased_queries": {
"type": "array",
"items": {"type": "string"},
"description": "List of 2 rephrased queries"
}
},
"required": ["query", "rephrased_queries"]
}
}
}],
"tool_choice": "auto",
"max_tokens": 500,
"temperature": 0,
"top_p": 0.95
}'
```
**Observed results (10 identical requests):**
- 7/10: ✅ `finish_reason: "tool_calls"`, properly parsed
- 3/10: ❌ `finish_reason: "stop"`, pythonic syntax in `content` field, empty `tool_calls`
### Failure Modes Observed
**Mode 1: Valid pythonic not parsed**
```json
{
"finish_reason": "stop",
"message": {
"content": "[enterprise_search(query=\"Benefits enrollment\", rephrased_queries=[\"...\", \"...\"])]",
"tool_calls": []
}
}
```
Parser fails to detect valid syntax → returned as content.
**Mode 2: Model generates text after tool call**
```json
{
"content": "[enterprise_search(...)]\n\nI was unable to execute this task..."
}
```
Model mixes tool call + text, which violates parser assumption.
**Mode 3: Malformed output (missing bracket)**
```
[enterprise_search(query='...', rephrased_queries=['...', '...'])
```
Model hits `stop_reason: 200007` before completing → `ast.parse()` throws SyntaxError.
### Suspected Root Cause
***The below is suggested by Claude Opus 4.5 so take with a grain of salt.***
1. **Parser detection inconsistency** - Valid pythonic output intermittently not recognized as tool call
2. **No text-after-tool-call handling** - Parser fails when model appends text after `]`
3. **Stop token interference** - Model sometimes hits stop token (200007) mid-generation before completing brackets
4. **Nested bracket complexity** - Array parameters (`rephrased_queries`) create `[...[...]...]` nesting that may confuse detection
### Error Logs
[err.txt](https://github.com/user-attachments/files/24175232/err.txt)
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/30722
|
open
|
[
"bug"
] | 2025-12-15T21:26:24Z
| 2025-12-15T21:26:24Z
| 0
|
mphilippnv
|
pytorch/executorch
| 16,265
|
viable/strict is advancing even if docker build failed
|
### 🐛 Describe the bug
Can we block viable/strict advancement when the docker build fails?
### Versions
CI only
|
https://github.com/pytorch/executorch/issues/16265
|
closed
|
[] | 2025-12-15T20:42:25Z
| 2025-12-17T22:56:47Z
| 0
|
kirklandsign
|
pytorch/executorch
| 16,263
|
Android Documentation - Improve Llama example
|
### 📚 The doc issue
Feedback from UnSloth on how to run Android llama example : https://docs.google.com/document/d/1GB3edTlBQfc4Ar0yiBTELKynhwa1hstwKhJxpq3ATVE/edit?tab=t.0
### Suggest a potential alternative/fix
_No response_
|
https://github.com/pytorch/executorch/issues/16263
|
open
|
[
"android_ux"
] | 2025-12-15T19:32:41Z
| 2025-12-15T19:32:41Z
| 0
|
psiddh
|
pytorch/executorch
| 16,260
|
Android UX: Prebuilt APKs for Android apps
|
Helps the overall E2E experience for devs: with the least friction, Android devs can install and test a prebuilt APK without having to set up the more cumbersome path of building from source.
- Llama Demo apk
- dl3 demo apk
|
https://github.com/pytorch/executorch/issues/16260
|
open
|
[
"android_ux"
] | 2025-12-15T19:23:14Z
| 2025-12-15T19:38:33Z
| 0
|
psiddh
|
huggingface/tokenizers
| 1,913
|
Wrong and unsuppressable print when instantiating BPE
|
I am running Python code that is of the form
```python
from transformers import PreTrainedTokenizerFast
from tokenizers import Tokenizer
from tokenizers.models import BPE
vocab = {"a": 5, "b": 6, "ab": 7}
merges = [("a","b")]
backend_of_backend_of_backend = BPE(vocab=vocab, merges=merges, dropout=None)
backend_of_backend = Tokenizer(model=backend_of_backend_of_backend)
backend = PreTrainedTokenizerFast(tokenizer_object=backend_of_backend)
```
The line `BPE(vocab=vocab, merges=merges, dropout=None)` has nothing to do with serialisation. Yet, when I run it, an unwanted print
```
The OrderedVocab you are attempting to save contains holes for indices [0, 1, 2, 3, 4], your vocabulary could be corrupted!
```
appears in my console, which seems to come from
https://github.com/huggingface/tokenizers/blob/f7db48f532b3d4e3c65732cf745fe62863cbe5fa/tokenizers/src/models/mod.rs#L53-L56
Not only is the print wrong (I am not trying to **save** anything), but also, it cannot be suppressed by redirecting `stdout` and `stderr` in Python.
`println!` does not belong in low-level code, so at the very least, we need a way to disable it. But besides, what is this print even for, given that it says something about **saving** when we are **loading** a tokenizer?
|
https://github.com/huggingface/tokenizers/issues/1913
|
closed
|
[] | 2025-12-15T16:30:46Z
| 2026-01-05T13:02:45Z
| 4
|
bauwenst
|
pytorch/torchtitan
| 2,153
|
[Question] composable activation checkpoint
|
I'm looking for a way to apply activation checkpointing without using a module wrapper, and I found this: https://github.com/pytorch/pytorch/pull/87664/files.
Does this method work fine, or is it just demo code?
|
https://github.com/pytorch/torchtitan/issues/2153
|
open
|
[
"question"
] | 2025-12-15T13:54:08Z
| 2025-12-16T22:25:59Z
| null |
Irvingwangjr
|
vllm-project/vllm
| 30,694
|
[Feature]: CompressedTensors: NVFP4A16 not supported for MoE models
|
### 🚀 The feature, motivation and pitch
NVFP4A16 (W4A16 FP4) quantization via compressed_tensors works for dense models but fails on MoE models like Qwen3-30B-A3B.
Looking at `compressed_tensors_moe.py`, `_is_fp4a16_nvfp4` is checked for Linear layers but not in `get_moe_method()` for FusedMoE. Only W4A4 has a MoE method (`CompressedTensorsW4A4Nvfp4MoEMethod`).
Since the Marlin kernel already supports FP4 weights + FP16 activations, is there a plan to add W4A16 MoE support for compressed_tensors?
### Alternatives
_No response_
### Additional context
_No response_
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/30694
|
open
|
[
"feature request"
] | 2025-12-15T13:29:09Z
| 2025-12-21T09:27:38Z
| 2
|
zhangyimi
|
pytorch/pytorch
| 170,426
|
argmax over multiple axis
|
### 🚀 The feature, motivation and pitch
Is there any chance we could get `argmax` to also work over multiple axes?
I feel that the usage of [unravel_index](https://docs.pytorch.org/docs/stable/generated/torch.unravel_index.html) is so error prone that it would make sense to just have this as part of the library... and to be fair it does not seem that hard to implement.
This is probably a duplicate of `torch.max().indices`... but that also does not support multiple axes.
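For reference, the workaround I mean today, as a minimal sketch for the simple case where the reduced axes are the trailing, contiguous ones:
```python
import torch

x = torch.randn(4, 5, 6)

# Joint argmax over dims (1, 2): flatten them, argmax, then unravel back.
flat_idx = x.flatten(start_dim=1).argmax(dim=1)        # shape (4,)
row, col = torch.unravel_index(flat_idx, x.shape[1:])  # each of shape (4,)
```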
cc @albanD
|
https://github.com/pytorch/pytorch/issues/170426
|
open
|
[
"triaged",
"module: python frontend"
] | 2025-12-15T11:03:36Z
| 2025-12-18T15:37:31Z
| 0
|
AlbertoSinigaglia
|
vllm-project/vllm
| 30,685
|
[Feature]: fp8 kv cache for finer-grained scaling factors (e.g., per channel).
|
### 🚀 The feature, motivation and pitch
Currently, the FP8 KV cache feature (in the FlashMLA interface) only supports per-tensor (scalar) scaling factors. Are you developing support for finer-grained scaling factors (e.g., per-channel)? If so, when can we expect the FP8 KV cache with such finer-grained scaling factors to be completed?
### Alternatives
_No response_
### Additional context
_No response_
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/30685
|
open
|
[
"feature request"
] | 2025-12-15T09:32:48Z
| 2025-12-15T09:32:48Z
| 0
|
zx-ai
|
huggingface/transformers
| 42,868
|
sdpa_paged: How does it handle paged cache without padding?
|
Hi @ArthurZucker ,
I was analyzing the [sdpa_paged](https://github.com/huggingface/transformers/blob/main/src/transformers/integrations/sdpa_paged.py#L18) implementation and found the approach quite fascinating. I have a question regarding how the input shapes are handled.
If I have a batch of 4 sequences with lengths **32, 32, 64, and 128**, a standard SDPA call usually expects a shape of `[4, 128]` (Batch Size, Max Seq Len), where the shorter sequences are padded to 128.
However, in this implementation, it appears that the input to SDPA is a flattened tensor with shape **`[1, 256]`** (the sum of all lengths: $32+32+64+128$), implying that no padding is used and the sequences are concatenated.
Could you explain how standard SDPA produces the correct result in this case? Specifically, how does it differentiate between the sequences to prevent cross-sequence attention within this single packed batch?
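For what it's worth, my current mental model (which may be exactly the piece I'm missing) is that the packed layout can only work if the kernel applies something equivalent to a block-diagonal mask built from the per-sequence lengths, e.g.:
```python
import torch

# Conceptual sketch only: a block-diagonal boolean mask over the packed batch,
# so each token can attend only within its own segment.
lengths = [32, 32, 64, 128]
total = sum(lengths)  # 256

allowed = torch.zeros(total, total, dtype=torch.bool)
offset = 0
for n in lengths:
    allowed[offset:offset + n, offset:offset + n] = True
    offset += n

# With SDPA semantics (True = attend), passing `allowed` as attn_mask over
# packed q/k/v of shape [1, heads, 256, head_dim] keeps the sequences separate.
```
Is this effectively what the paged metadata encodes, just without materializing such a mask?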
Thanks for your time!
related PR: #38085
|
https://github.com/huggingface/transformers/issues/42868
|
closed
|
[] | 2025-12-15T08:39:00Z
| 2025-12-16T03:08:27Z
| 4
|
jiqing-feng
|
pytorch/executorch
| 16,244
|
How to make executorch export input/output as int8
|
Hi,
I use https://github.com/pytorch/executorch/blob/main/examples/arm/ethos_u_minimal_example.ipynb to export the example model and run it on the FVP.
The output is 2.0 (float).
But I modified the code so that the input and output are int8, and the output on the FVP shows 1 (char).
I think that is wrong. How can I fix it?
<img width="1488" height="672" alt="Image" src="https://github.com/user-attachments/assets/9d9a3fc9-97a4-4f4d-a7c2-84e936c02e72" />
```
from executorch.backends.arm.ethosu import EthosUPartitioner
from executorch.exir import (
EdgeCompileConfig,
ExecutorchBackendConfig,
to_edge_transform_and_lower,
)
from executorch.extension.export_util.utils import save_pte_program
from executorch.exir.passes.quantize_io_pass import QuantizeInputs, QuantizeOutputs
# Create partitioner from compile spec
partitioner = EthosUPartitioner(compile_spec)
# Lower the exported program to the Ethos-U backend
edge_program_manager = to_edge_transform_and_lower(
quantized_exported_program,
partitioner=[partitioner],
compile_config=EdgeCompileConfig(
_check_ir_validity=False,
),
)
edge_program_manager.transform(passes=[QuantizeInputs(edge_program_manager, [0, 1]), QuantizeOutputs(edge_program_manager, [0])])
# Convert edge program to executorch
executorch_program_manager = edge_program_manager.to_executorch(
config=ExecutorchBackendConfig(extract_delegate_segments=False)
)
_ = executorch_program_manager.exported_program().graph_module.print_readable()
# Save pte file
save_pte_program(executorch_program_manager, "ethos_u_minimal_example_test_inout_int8.pte")
```
By the way, following https://github.com/pytorch/executorch/issues/7590:
how can the embedded application finally access the quantisation scale & zero point?
Thanks,
Kris
cc @freddan80 @per @zingo @oscarandersson8218 @digantdesai
|
https://github.com/pytorch/executorch/issues/16244
|
open
|
[
"partner: arm"
] | 2025-12-15T06:45:32Z
| 2025-12-24T01:36:24Z
| null |
kris-himax
|
huggingface/trl
| 4,692
|
LLVM error during GRPO training with Apple M4 Max
|
I get the error below while doing GRPO training, using the Hugging Face example code for GRPO. I couldn't run the model on MPS because of this issue.
How can I run GRPO on MPS?
loc("mps_matmul"("(mpsFileLoc): /AppleInternal/Library/BuildRoots/4~B_wkugAG-524HdEQLaK0kvU7Y_D8Jtm6UxMaIoY/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShadersGraph/mpsgraph/MetalPerformanceShadersGraph/Core/Files/MPSGraphUtilities.mm":43:0)): error: incompatible dimensions
loc("mps_matmul"("(mpsFileLoc): /AppleInternal/Library/BuildRoots/4~B_wkugAG-524HdEQLaK0kvU7Y_D8Jtm6UxMaIoY/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShadersGraph/mpsgraph/MetalPerformanceShadersGraph/Core/Files/MPSGraphUtilities.mm":43:0)): error: invalid shape
LLVM ERROR: Failed to infer result type(s).
Details:
OS: Tahoe 26.2
pytorch 2.9.1
trl: 0.26.1
MLX:0.30.0
|
https://github.com/huggingface/trl/issues/4692
|
open
|
[
"๐ bug",
"๐ GRPO"
] | 2025-12-14T23:01:49Z
| 2025-12-14T23:02:11Z
| 0
|
neslihaneti
|
vllm-project/vllm
| 30,654
|
[Feature][Attention][UX]: Incorporate Features into Attention Selection
|
### 🚀 The feature, motivation and pitch
SUMMARY:
* we have default attention backends by priority and a notion of which backend supports what hw
* however, certain features are not considered in this (e.g. fp8 kv cache, e.g. attention sinks)
Recent example: we had test failures because we updated the logic to load kv cache quantization from the model config. Since CUTLASS_MLA is the default backend on B200, we started seeing test failures (CUTLASS MLA does not support fp8 kv cache) because we were not automatically falling back to FLASHINFER_MLA (which does).
So the proposal is to:
- make sure all attention backends report what features are supported
- update the attention selector to consider these features in the selection (rough sketch below)
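A possible shape for this, purely as a hypothetical sketch (names are made up, not vLLM's actual classes):
```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AttentionFeatures:
    fp8_kv_cache: bool = False
    attention_sinks: bool = False

@dataclass(frozen=True)
class BackendInfo:
    name: str
    features: AttentionFeatures

def select_backend(candidates: list[BackendInfo], required: AttentionFeatures) -> BackendInfo:
    # Candidates are assumed to already be ordered by platform priority.
    for backend in candidates:
        ok = (not required.fp8_kv_cache or backend.features.fp8_kv_cache) and (
            not required.attention_sinks or backend.features.attention_sinks
        )
        if ok:
            return backend
    raise ValueError("no attention backend supports the required features")

# With fp8 kv cache required, CUTLASS_MLA would be skipped in favor of FLASHINFER_MLA.
candidates = [
    BackendInfo("CUTLASS_MLA", AttentionFeatures(fp8_kv_cache=False)),
    BackendInfo("FLASHINFER_MLA", AttentionFeatures(fp8_kv_cache=True)),
]
print(select_backend(candidates, AttentionFeatures(fp8_kv_cache=True)).name)
```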
### Alternatives
_No response_
### Additional context
_No response_
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/30654
|
open
|
[
"help wanted",
"good first issue",
"feature request"
] | 2025-12-14T18:04:14Z
| 2025-12-30T05:38:40Z
| 11
|
robertgshaw2-redhat
|
pytorch/pytorch
| 170,400
|
Clarify inverted boolean mask logic between nn.MultiHeadAttention and F.scaled_dot_product_attention
|
### 📚 The doc issue
### Motivation
I am opening this issue to suggest a documentation improvement regarding a common "gotcha" when migrating between `nn.MultiHeadAttention` (MHA) and `F.scaled_dot_product_attention` (SDPA).
Many users (including myself) have noticed that the boolean mask semantics are inverted between these two APIs, which can lead to silent bugs during migration.
### ๐ The Inconsistency
* **`nn.MultiHeadAttention` (`key_padding_mask`)**: `True` means **PADDING** (Ignore/Mask out).
* **`F.scaled_dot_product_attention` (`attn_mask`)**: `True` means **KEEP** (Attend to).
While this behavior is hinted at in the SDPA docstring's pseudo-code implementation:
```python
if attn_mask.dtype == torch.bool:
attn_bias.masked_fill_(attn_mask.logical_not(), float("-inf"))
```
The use of `.logical_not()` confirms that SDPA expects `True` to be kept, whereas MHA expects `True` to be masked. This implicit difference is easy to overlook if one relies solely on parameter names or prior MHA experience.
### ✅ Verification
I have verified this behavior with a minimal reproduction script on PyTorch 2.5.1, confirming that passing the identical boolean mask to both APIs results in opposite attention patterns (MHA ignores the `True` index, while SDPA attends to it).
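A minimal sketch of the kind of check I ran (exact numbers vary with the seed):
```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
B, L, E = 1, 4, 8
q = k = v = torch.randn(B, L, E)

# Boolean mask that is True only at the last key position.
mask = torch.zeros(B, L, dtype=torch.bool)
mask[:, -1] = True

# nn.MultiheadAttention: True in key_padding_mask means "ignore this key".
mha = torch.nn.MultiheadAttention(E, num_heads=1, batch_first=True)
_, attn_mha = mha(q, k, v, key_padding_mask=mask, need_weights=True)
print(attn_mha[0, :, -1])  # ~0 for every query: the last key is masked OUT

# F.scaled_dot_product_attention: True in attn_mask means "attend to this key".
out_sdpa = F.scaled_dot_product_attention(
    q.unsqueeze(1), k.unsqueeze(1), v.unsqueeze(1),
    attn_mask=mask[:, None, None, :],
)
# Here every query can only see the last key, i.e. the opposite behavior.
```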
Thanks for considering this clarification!
### Suggest a potential alternative/fix
To improve Developer Experience (DX) and prevent confusion for users moving in either direction (MHA -> SDPA or SDPA -> MHA), I suggest adding a **Note** or **Warning** block in the documentation for `F.scaled_dot_product_attention`.
**Example phrasing:**
> **Note:** The boolean mask semantics for `attn_mask` here are the **inverse** of `nn.MultiHeadAttention.forward`'s `key_padding_mask`.
>
> * In `F.scaled_dot_product_attention`, `True` indicates values to **participate** in attention.
> * In `nn.MultiHeadAttention`, `True` indicates values to be **masked out** (padding).
>
> If migrating from MHA, ensure you invert your boolean mask (e.g., using `~mask` or `mask.logical_not()`).
cc @svekars @sekyondaMeta @AlannaBurke @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki
|
https://github.com/pytorch/pytorch/issues/170400
|
closed
|
[
"module: docs",
"module: nn",
"triaged",
"module: sdpa"
] | 2025-12-14T13:07:19Z
| 2025-12-23T20:44:24Z
| 1
|
konodiodaaaaa1
|
huggingface/diffusers
| 12,838
|
Merge Loras for FLUX
|
The issue is based on https://huggingface.co/docs/diffusers/main/using-diffusers/merge_loras
Is there a similar procedure for merging LoRAs for FLUX models? The guide seems to be specific to UNet-based methods. I'm working on FLUX-dev and I would like to perform a linear merge of my LoRAs.
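What I have in mind is something like the sketch below (repo ids and adapter paths are placeholders); is `set_adapters` with weights the intended way to do a linear merge for FLUX as well?
```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.load_lora_weights("path/to/lora_one", adapter_name="one")
pipe.load_lora_weights("path/to/lora_two", adapter_name="two")

# Linear combination of the two adapters at inference time.
pipe.set_adapters(["one", "two"], adapter_weights=[0.5, 0.5])

# Optionally bake the active combination into the weights.
pipe.fuse_lora()
```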
|
https://github.com/huggingface/diffusers/issues/12838
|
open
|
[] | 2025-12-14T12:39:41Z
| 2025-12-14T12:39:41Z
| 0
|
shrikrishnalolla
|
vllm-project/vllm
| 30,633
|
[Installation]: How to install vLLM 0.11.0 with CUDA < 12.9 (Driver 535)? No matching wheels found
|
### Your current environment
Iโm trying to install vLLM 0.11.0 on a machine with NVIDIA Driver 535, and I ran into issues related to CUDA version compatibility.
Environment
OS: Linux (Ubuntu 20.04 / 22.04)
GPU: NVIDIA GPU H20
NVIDIA Driver: 535.xx
Python: 3.10
vLLM version: 0.11.0
Problem
According to the release information for vLLM 0.11.0, the available prebuilt wheels appear to target CUDA 12.9+.
However, with Driver 535, CUDA 12.9 is not supported, and I cannot find any official wheels for CUDA 12.1 / 12.2 / 12.4 or lower.
This leads to the following questions:
Is vLLM 0.11.0 officially compatible with CUDA versions < 12.9?
If yes, what is the recommended way to install it on systems with Driver 535?
Build from source with a specific CUDA version?
Use a specific Docker image?
Pin to an older vLLM release?
Are there plans to provide prebuilt wheels for CUDA 12.1 / 12.4, or is CUDA 12.9+ now a hard requirement going forward?
What I've tried
Checked the GitHub Releases page for vLLM 0.11.0 → no wheels for CUDA < 12.9
Verified that upgrading CUDA to 12.9 is not possible with Driver 535
Looked for documentation on source builds for older CUDA versions, but didn't find clear guidance
Any clarification or recommended workflow would be greatly appreciated.
Thanks in advance!
### How you are installing vllm
```sh
pip install -vvv vllm
```
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/30633
|
open
|
[
"installation"
] | 2025-12-14T04:29:41Z
| 2026-01-01T16:50:50Z
| 1
|
whu125
|
vllm-project/vllm
| 30,630
|
[Usage]: SymmMemCommunicator: Device capability 10.3 not supported
|
### Your current environment
```text
The output of `python collect_env.py`
```
### How would you like to use vllm
Hi, I am seeing following warning using vllm serve on B300 instances.
```
WARNING 12-13 16:31:15 [symm_mem.py:67] SymmMemCommunicator: Device capability 10.3 not supported, communicator is not available.
```
vllm launch command
```
vllm serve \
--tensor-parallel-size 4 \
--kv-cache-dtype fp8 \
--tool-call-parser glm45 \
--reasoning-parser glm45 \
--enable-auto-tool-choice \
--model zai-org/GLM-4.6-FP8'
```
I built a docker image using the latest vLLM main-branch commit 0e71eaa6447d99e76de8e03213ec22bc1d3b07df, and updated the triton version to 3.5.1 and the torch version to 2.9.1 to avoid a compatibility issue from triton ([issue](https://github.com/triton-lang/triton/issues/8473)).
For the same benchmarking config, B300 shows roughly the same perf as H200 (actually slightly worse). Is B300 fully supported on vLLM yet?
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/30630
|
open
|
[
"usage",
"nvidia"
] | 2025-12-14T01:00:34Z
| 2025-12-18T21:17:42Z
| 4
|
navmarri14
|
huggingface/transformers.js
| 1,484
|
Should npm @xenova/transformers be deleted or marked deprecated?
|
### Question
Hello,
I was surprised that none of the models I tried were supported by transformers.js, even though they were using transformers.js in their README, until I realized that I was using the old npm package.
Shouldn't this package be removed? Or marked as deprecated in favour of huggingface's?
Best,
|
https://github.com/huggingface/transformers.js/issues/1484
|
open
|
[
"question"
] | 2025-12-13T19:49:08Z
| 2025-12-17T12:21:12Z
| null |
matthieu-talbot-ergonomia
|
huggingface/tokenizers
| 1,910
|
[Docs] `Visualizer` dead links
|
It seems like documentation for `Visualizer` is out of date and all the links return 404.
Docs: https://huggingface.co/docs/tokenizers/api/visualizer
Github Source: https://github.com/huggingface/tokenizers/blob/main/bindings/python/py_src/tokenizers/tools/visualizer.py
|
https://github.com/huggingface/tokenizers/issues/1910
|
open
|
[] | 2025-12-13T19:23:33Z
| 2025-12-13T19:23:33Z
| 0
|
dudeperf3ct
|
vllm-project/vllm
| 30,621
|
[Feature]: Remove MXFP4 Logic From `fused_experts`
|
### 🚀 The feature, motivation and pitch
SUMMARY:
* as part of effort to refactor MoE, trying to reduce cruft
* we currently only have MX emulation in vLLM
* the logic for this emulation should be moved into quark
https://github.com/vllm-project/vllm/blame/main/vllm/model_executor/layers/fused_moe/fused_moe.py#L1866-L1899
### Alternatives
_No response_
### Additional context
_No response_
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/30621
|
open
|
[
"help wanted",
"good first issue",
"feature request"
] | 2025-12-13T18:30:30Z
| 2026-01-04T14:47:45Z
| 13
|
robertgshaw2-redhat
|
vllm-project/vllm
| 30,620
|
[Feature]: Remove Chunking From FusedMoE
|
### 🚀 The feature, motivation and pitch
* we have some chunking logic in the triton kernels to avoid IMA: https://github.com/vllm-project/vllm/blob/main/vllm/model_executor/layers/fused_moe/fused_moe.py#L1807
* we chunk in ~65k tokens
* this case does not happen anymore because of chunked prefill
We should remove this
### Alternatives
_No response_
### Additional context
_No response_
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/30620
|
open
|
[
"help wanted",
"good first issue",
"feature request"
] | 2025-12-13T18:22:30Z
| 2025-12-13T23:27:22Z
| 3
|
robertgshaw2-redhat
|
pytorch/pytorch
| 170,361
|
[Dynamo] Use VariableBuilder/SourcelessBuilder consistently
|
There are many places in Dynamo where we directly call a VariableTracker subclass' `create`/`__init__` from a different VariableTracker's, e.g. `call_function`, `var_getattr`. This was done in order to skip the overhead required to go through `VariableBuilder`/`SourcelessBuilder`.
However, this has resulted in a number of soundness issues in the past (I can't find an example off the top of my head though). The reason is that when we directly construct a `VariableTracker`, we are assuming that the wrapped value is represented a certain way, which `VariableBuilder`/`SourcelessBuilder` may represent differently. The latter often has additional checks that result in greater specialization and slightly differing behavior.
We should:
- Audit places where we manually construct `VariableTracker`s and make the construction go through `VariableBuilder`/`SourcelessBuilder` more conservatively
- Reduce the overhead of `VariableBuilder` and `SourcelessBuilder` (esp. `VariableBuilder`, since it has a large if-statement)
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @kadeng @amjames @Lucaskabela @jataylo
|
https://github.com/pytorch/pytorch/issues/170361
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamo",
"dynamo-variable-tracker"
] | 2025-12-13T01:26:10Z
| 2025-12-13T02:18:29Z
| 1
|
williamwen42
|
vllm-project/vllm
| 30,570
|
[Usage]: Why is VLLM still using SSE at all for mcp?
|
### Your current environment
This is a broad question: why is vLLM still using/hardcoding SSE at all, when it's been deprecated for well over six months at this point?
### How would you like to use vllm
I want to run inference of a [specific model](put link here). I don't know how to integrate it with vllm.
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/30570
|
open
|
[
"usage"
] | 2025-12-12T20:02:08Z
| 2025-12-18T10:50:37Z
| 1
|
bags307
|
pytorch/pytorch
| 170,320
|
Can't find 'action.yml', 'action.yaml' or 'Dockerfile' under '/home/ec2-user/actions-runner/_work/pytorch/pytorch/.github/actions/check-tpu'
|
> NOTE: Remember to label this issue with "`ci: sev`"
> If you want autorevert to be disabled, keep the ci: disable-autorevert label
<!-- Add the `merge blocking` label to this PR to prevent PRs from being merged while this issue is open -->
> [!IMPORTANT]
> Comment the following on your PR to rebase
> ```
> @pytorchbot rebase -b main
> ```
## Current Status
*Status could be: preemptive, ongoing, mitigated, closed. Also tell people if they need to take action to fix it (i.e. rebase)*.
With the introduction of:
* #170269
Developers may experience workflow failures related to a composite action named check-tpu.
## Error looks like
*Provide some way users can tell that this SEV is causing their issue.*
```
Can't find 'action.yml', 'action.yaml' or 'Dockerfile' under '/home/ec2-user/actions-runner/_work/pytorch/pytorch/.github/actions/check-tpu'
```
## Incident timeline (all times pacific)
*Include when the incident began, when it was detected, mitigated, root caused, and finally closed.*
## User impact
*How does this affect users of PyTorch CI?*
Developers should rebase their PRs past:
* #170269
> [!IMPORTANT]
> Comment the following on your PR to rebase
> ```
> @pytorchbot rebase -b main
> ```
## Root cause
*What was the root cause of this issue?*
* #170269
## Mitigation
*How did we mitigate the issue?*
Developers should rebase their PRs past:
* #170269
## Prevention/followups
*How do we prevent issues like this in the future?*
We should probably introduce a linter that prevents us from adding composite actions and referencing them in a workflow in the same PR.
|
https://github.com/pytorch/pytorch/issues/170320
|
closed
|
[
"ci: sev"
] | 2025-12-12T19:30:03Z
| 2025-12-14T15:36:06Z
| 1
|
seemethere
|
pytorch/pytorch
| 170,302
|
DISABLED test_opaque_obj_training_ir_to_decomp_nonstrict (__main__.TrainingIRToRunDecompExportNonStrictTestExport)
|
Platforms: rocm, xpu
This test was disabled because it is failing on [main and PRs](https://hud.pytorch.org/failure?name=rocm-mi200%20%2F%20linux-jammy-rocm-py3.10%20%2F%20test%20(default%2C%201%2C%206%2C%20linux.rocm.gpu.2%2C%20unstable)&jobName=linux-jammy-rocm-py3.10%20%2F%20test%20(default%2C%201%2C%206%2C%20linux.rocm.gpu.2%2C%20unstable)&failureCaptures=RuntimeError%3A%20Type%20%27test_export.TestExport.test_opaque_obj.%3Clocals%3E.MyInput%27) (couldn't find a more targeted link to show just this test failure). Example of [MI200 failure](https://github.com/pytorch/pytorch/actions/runs/20158110270/job/57866162668) and [MI300 failure](https://github.com/pytorch/pytorch/actions/runs/20160197548/job/57872771424)
cc @gujinghui @EikanWang @fengyuan14 @guangyey @jeffdaily @sunway513 @pruthvistony @ROCmSupport @jataylo @hongxiayang @naromero77amd @pragupta @jerrymannil @xinyazhang
|
https://github.com/pytorch/pytorch/issues/170302
|
open
|
[
"triaged",
"skipped",
"rocm-skipped-tests"
] | 2025-12-12T16:04:41Z
| 2025-12-25T00:24:56Z
| 2
|
jithunnair-amd
|
pytorch/pytorch
| 170,293
|
[wheels] Missing CUDA wheels for pytorch<2.6.0
|
### 🐛 Describe the bug
For older versions of pytorch<2.6.0, the CUDA wheels cannot be reached anymore.
System: Windows-11-10.0.22631-SP0
Python version: 3.13
Using pip 25.3
Example of failing installation:
` pip install torch==2.5.1 --index-url https://download.pytorch.org/whl/cu124 --isolated --verbose`
Output is mentioning pytorch 2.6.0:
```
Looking in indexes: https://download.pytorch.org/whl/cu124
ERROR: Could not find a version that satisfies the requirement torch==2.5.1 (from versions: 2.6.0+cu124)
ERROR: No matching distribution found for torch==2.5.1
```
Reproducible with other pytorch versions and CUDA variants, when pytorch<2.6.0.
Example of successful installation:
` pip install torch==2.6.0 --index-url https://download.pytorch.org/whl/cu124 --isolated --verbose`
### Versions
Python version: 3.13.7 (main, Sep 18 2025, 19:43:45) [MSC v.1944 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-11-10.0.22631-SP0
|
https://github.com/pytorch/pytorch/issues/170293
|
closed
|
[] | 2025-12-12T10:33:42Z
| 2025-12-12T12:02:58Z
| 1
|
guibruand
|
sgl-project/sglang
| 14,984
|
Can building and installing sgl-kernel from source support SM86 with CUDA 12.9?
|
### Checklist
- [x] I searched related issues but found no solution.
- [ ] The bug persists in the latest version.
- [ ] Issues without environment info and a minimal reproducible demo are hard to resolve and may receive no feedback.
- [ ] If this is not a bug report but a general question, please start a discussion at https://github.com/sgl-project/sglang/discussions. Otherwise, it will be closed.
- [ ] Please use English. Otherwise, it will be closed.
### Describe the bug
Encountered problem: unable to find the .so file for sm86 when installing the latest sgl-kernel 0.3.19; only sm90 and higher are available.
### Reproduction
Question: the machine's GPU is sm86. Can building sgl-kernel from source inside the nvcc 12.9 container be made to target sm86?
### Environment
Environment: the host GPU is sm86, the nvcc version in the Docker container is 12.9, and torch and flash-attn are cu129 builds.
|
https://github.com/sgl-project/sglang/issues/14984
|
open
|
[] | 2025-12-12T10:29:50Z
| 2025-12-15T09:41:18Z
| 1
|
zwt-1234
|
vllm-project/vllm
| 30,548
|
[Feature]: Support for Q.ANT Photonic Computing ?
|
### 🚀 The feature, motivation and pitch
https://qant.com/
https://qant.com/wp-content/uploads/2025/11/20251111_QANT-Photonic-AI-Accelerator-Gen-2.pdf
### Alternatives
_No response_
### Additional context
_No response_
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/30548
|
open
|
[
"feature request"
] | 2025-12-12T10:16:53Z
| 2025-12-12T14:45:53Z
| 2
|
plitc
|
pytorch/data
| 1,520
|
Are there any plans to optimize the fetcher_state in StatefulDataLoader?
|
Since `_IterableDatasetFetcher` has no state attribute: https://github.com/pytorch/pytorch/blob/v2.6.0/torch/utils/data/_utils/fetch.py#L19, and the current `fetcher_state:dataset_iter_state` is None: https://github.com/meta-pytorch/data/blob/v0.11.0/torchdata/stateful_dataloader/worker.py#L277, could this cause prefetched data to be discarded during resume?
|
https://github.com/meta-pytorch/data/issues/1520
|
open
|
[] | 2025-12-12T09:50:08Z
| 2025-12-17T05:23:35Z
| 5
|
howitry
|
huggingface/tokenizers
| 1,909
|
[Docs] `Encode Inputs` rendering issues
|
It seems like the documentation for Encode Inputs is not rendered properly.
Official URL: https://huggingface.co/docs/tokenizers/main/en/api/encode-inputs?code=python
GitHub URL: https://github.com/huggingface/tokenizers/blob/main/docs/source-doc-builder/api/encode-inputs.mdx
|
https://github.com/huggingface/tokenizers/issues/1909
|
open
|
[] | 2025-12-12T09:47:48Z
| 2025-12-12T09:47:48Z
| 0
|
ariG23498
|
pytorch/pytorch
| 170,286
|
Can torch have relaxed dependencies instead of strict pins on nvidia-cuda-runtime
|
### 🐛 Describe the bug
Right now, torch uses strict == pins for these packages (see
https://github.com/pytorch/pytorch/blob/main/.github/scripts/generate_binary_build_matrix.py#L106C2-L123C7).
Is there a specific reason these must be strict == requirements? Would it be possible to relax them to version ranges instead?
For example, in my setup:
torch==2.10.0.dev depends on nvidia-cuda-runtime==13.0.96
tensorrt==10.14 depends on nvidia-cuda-runtime==13.0.88
This conflict causes uv to resolve to a much older torch version:
https://github.com/pytorch/pytorch/issues/170286
If torch could declare a version range for nvidia-cuda-runtime instead of a strict pin, it would make dependency resolution much easier for downstream users who also depend on other CUDA-related packages.
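Purely to illustrate what I mean by a range (the bounds are made up from my local resolution, not a concrete proposal):
```
nvidia-cuda-runtime >= 13.0.88, < 13.1
```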
### Versions
Collecting environment information...
PyTorch version: 2.10.0.dev20251210+cu130
Is debug build: False
CUDA used to build PyTorch: 13.0
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.3 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: 22.0.0 (++20251015042503+856555bfd843-1~exp1~20251015042630.2731)
CMake version: version 4.2.0
Libc version: glibc-2.39
Python version: 3.10.0 (default, Mar 3 2022, 09:58:08) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-6.14.0-37-generic-x86_64-with-glibc2.39
Is CUDA available: True
CUDA runtime version: 13.1.80
CUDA_MODULE_LOADING set to:
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4080 SUPER
Nvidia driver version: 580.95.05
cuDNN version: Could not collect
Is XPU available: False
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Caching allocator config: N/A
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Vendor ID: AuthenticAMD
Model name: AMD Ryzen 9 7950X 16-Core Processor
CPU family: 25
Model: 97
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 1
Stepping: 2
Frequency boost: enabled
CPU(s) scaling MHz: 79%
CPU max MHz: 5883.0000
CPU min MHz: 545.0000
BogoMIPS: 8982.91
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good amd_lbr_v2 nopl xtopology nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba perfmon_v2 ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local user_shstk avx512_bf16 clzero irperf xsaveerptr rdpru wbnoinvd cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic vgif x2avic v_spec_ctrl vnmi avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid overflow_recov succor smca fsrm flush_l1d amd_lbr_pmc_freeze
Virtualization: AMD-V
L1d cache: 512 KiB (16 instances)
L1i cache: 512 KiB (16 instances)
L2 cache: 16 MiB (16 instances)
L3 cache: 64 MiB (2 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-31
Vulnerability Gather data sampling: Not affected
Vulnerability Ghostwrite: Not affected
Vulnerability Indirect target selection: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1:
|
https://github.com/pytorch/pytorch/issues/170286
|
closed
|
[
"module: binaries",
"triaged"
] | 2025-12-12T08:39:43Z
| 2025-12-13T00:30:46Z
| 3
|
lanluo-nvidia
|
vllm-project/vllm
| 30,541
|
[Usage]: missing dsml token "| DSML | " with DeepSeek-V3.2 tools call
|
### Your current environment
Collecting environment information...
==============================
System Info
==============================
OS : Ubuntu 22.04.5 LTS (x86_64)
GCC version : (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version : Could not collect
CMake version : version 4.0.3
Libc version : glibc-2.35
==============================
PyTorch Info
==============================
PyTorch version : 2.9.0+cu128
Is debug build : False
CUDA used to build PyTorch : 12.8
ROCM used to build PyTorch : N/A
==============================
Python Environment
==============================
Python version : 3.12.11 (main, Jun 4 2025, 08:56:18) [GCC 11.4.0] (64-bit runtime)
Python platform : Linux-5.15.0-50-generic-x86_64-with-glibc2.35
==============================
CUDA / GPU Info
==============================
Is CUDA available : True
CUDA runtime version : 12.8.93
CUDA_MODULE_LOADING set to :
GPU models and configuration :
GPU 0: NVIDIA H20
GPU 1: NVIDIA H20
GPU 2: NVIDIA H20
GPU 3: NVIDIA H20
GPU 4: NVIDIA H20
GPU 5: NVIDIA H20
GPU 6: NVIDIA H20
GPU 7: NVIDIA H20
Nvidia driver version : 565.57.01
cuDNN version : Could not collect
HIP runtime version : N/A
MIOpen runtime version : N/A
Is XNNPACK available : True
==============================
CPU Info
==============================
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 208
On-line CPU(s) list: 0-207
Vendor ID: GenuineIntel
Model name: INTEL(R) XEON(R) PLATINUM 8563C
CPU family: 6
Model: 207
Thread(s) per core: 2
Core(s) per socket: 52
Socket(s): 2
Stepping: 2
Frequency boost: enabled
CPU max MHz: 4000.0000
CPU min MHz: 800.0000
BogoMIPS: 5200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 4.9 MiB (104 instances)
L1i cache: 3.3 MiB (104 instances)
L2 cache: 208 MiB (104 instances)
L3 cache: 640 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-51,104-155
NUMA node1 CPU(s): 52-103,156-207
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Vulnerable, IBPB: disabled, STIBP: disabled, PBRSB-eIBRS: Vulnerable
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
==============================
Versions of relevant libraries
==============================
[pip3] flashinfer-python==0.5.3
[pip3] numpy==2.2.6
[pip3] nvidia-cublas-cu12==12.8.4.1
[pip3] nvidia-cuda-cupti-cu12==12.8.90
[pip3] nvidia-cuda-nvrtc-cu12==12.8.93
[pip3] nvidia-cuda-runtime-cu12==12.8.90
[pip3] nvidia-cudnn-cu12==9.10.2.21
[pip3] nvidia-cudnn-frontend==1.16.0
[pip3] nvidia-cufft-cu12==11.3.3.83
[pip3] nvidia-cufile-cu12==1.1
| https://github.com/vllm-project/vllm/issues/30541 | open | ["usage"] | 2025-12-12T06:47:03Z | 2025-12-12T20:59:40Z | 1 | crischeng |
| pytorch/executorch | 16,217 | make building stop at Built target portable_kernels |
Hey, I want to export a Llama .pte model and deploy it on an SA8255 device. I referred to https://github.com/pytorch/executorch/blob/main/examples/models/llama/README.md and https://docs.pytorch.ac.cn/executorch/stable/llm/build-run-llama3-qualcomm-ai-engine-direct-backend.html, but when I built the llama runner binary for Android I got this error:
[ 98%] Linking CXX static library libportable_kernels.a
[ 98%] Built target portable_kernels
[ 98%] Linking CXX static library liboptimized_portable_kernels.a
[ 98%] Built target optimized_portable_kernels
gmake: *** [Makefile:156: all] Error 2
How can I solve this? I've attached the error.log and build.sh files below.
I'd appreciate a reply!
[build.sh](https://github.com/user-attachments/files/24119047/build.sh)
[error.log](https://github.com/user-attachments/files/24119048/error.log)
cc @cccclai @winskuo-quic @shewu-quic @haowhsu-quic @DannyYuyang-quic @cbilgin
| https://github.com/pytorch/executorch/issues/16217 | open | ["partner: qualcomm", "module: qnn"] | 2025-12-12T03:24:46Z | 2025-12-21T00:59:11Z | 16 | imjking |
| vllm-project/vllm | 30,511 | Potential Deadlock? |
Consider using proper synchronization primitives like `threading.Event` or `queue.Queue.get(timeout=...)`.
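A minimal sketch of the pattern being suggested (my own illustration, not code from vLLM): bound every blocking wait with a timeout and pair it with a `threading.Event`, so a consumer can always notice shutdown instead of waiting forever.

```python
import queue
import threading

# Illustrative names only; nothing here is taken from the vLLM codebase.
stop_event = threading.Event()
work_queue: "queue.Queue[int]" = queue.Queue()

def worker() -> None:
    while not stop_event.is_set():
        try:
            # Bounded wait: periodically wake up to re-check the stop flag
            # instead of blocking forever on an empty queue.
            item = work_queue.get(timeout=0.5)
        except queue.Empty:
            continue
        print(f"processed {item}")
        work_queue.task_done()

t = threading.Thread(target=worker, daemon=True)
t.start()
for i in range(3):
    work_queue.put(i)
work_queue.join()   # wait until all queued items are processed
stop_event.set()    # signal the worker to exit its loop
t.join()
```

The same idea applies to any consumer loop: as long as no wait is unbounded, a stalled producer cannot deadlock the consumer.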
| https://github.com/vllm-project/vllm/issues/30511 | closed | [] | 2025-12-11T19:57:43Z | 2025-12-12T18:00:20Z | 1 | ChuanLi1101 |
| sgl-project/sglang | 14,903 | Does the current Qwen3-VL (or Qwen3-VL-MoE) officially support TBO? |
Hi team,
I noticed that Qwen3-VL and Qwen3-MoE adopt different model architectures.
When profiling the execution path, I found that:
- Qwen3-MoE eventually falls back to the Qwen2-MoE implementation, which explicitly supports TBO (Two-Batch Overlap).
- Qwen3-VL, however, takes the Qwen3-VL-MoE path, and I did not find any clear implementation or code path that indicates TBO support for this variant.
Based on the current codebase, it seems that Qwen3-VL-MoE may not have full TBO support, or its TBO integration is not obvious from the trace.
| https://github.com/sgl-project/sglang/issues/14903 | open | [] | 2025-12-11T13:26:50Z | 2025-12-11T13:26:50Z | 0 | jerry-dream-fu |
| pytorch/pytorch | 170,183 | [docs] Unable to `git clone` PyTorch wiki on Windows due to colon(`:`) in filename |
### 📚 The doc issue
> Summary: `git checkout` fails when cloning the PyTorch wiki on Windows.
Windows filesystems do not allow the use of colons (`:`) in filenames.
However, the wiki currently contains a page titled: [PyTorch CI Metrics Dashboards: the HUD](https://github.com/pytorch/pytorch/wiki/PyTorch-CI-Metrics-Dashboards:-the-HUD)
Because this filename contains a colon, cloning the wiki repository on a Windows environment results in an error.
- Error message:
```sh
(base) PS D:\Git_Repo\Open_Source> git clone https://github.com/pytorch/pytorch.wiki.git
Cloning into 'pytorch.wiki'...
remote: Enumerating objects: 3525, done.
remote: Total 3525 (delta 0), reused 0 (delta 0), pack-reused 3525 (from 1)
Receiving objects: 100% (3525/3525), 1.73 MiB | 4.51 MiB/s, done.
Resolving deltas: 100% (2173/2173), done.
error: invalid path 'PyTorch-CI-Metrics-Dashboards:-the-HUD.md'
fatal: unable to checkout working tree
warning: Clone succeeded, but checkout failed.
You can inspect what was checked out with 'git status'
and retry with 'git restore --source=HEAD :/'
```
Thank you.
### Suggest a potential alternative/fix
Rename the wiki page to remove the colon or replace it with a hyphen.
cc @peterjc123 @mszhanyi @skyline75489 @nbcsm @iremyux @Blackhex
| https://github.com/pytorch/pytorch/issues/170183 | open | ["module: windows", "triaged", "module: infra"] | 2025-12-11T13:00:52Z | 2025-12-15T18:07:48Z | 6 | daehyun99 |
| huggingface/transformers | 42,804 | [`Quantization FP8`] Native `from_config` support |
### Feature request
Related to https://github.com/huggingface/transformers/pull/42028#discussion_r2592235170
Since FP8 is becoming more and more standard, it would be nice to create FP8-native models via config, i.e. using `from_config`. At the moment, quant configs are apparently not respected; either that, or we need to update the docs to show how to use it properly.
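For illustration, the call pattern this request describes might look like the sketch below. This is a hedged sketch of the *desired* behaviour, not something the issue confirms works today; `FineGrainedFP8Config` as the FP8 config class and the exact way `from_config` forwards `quantization_config` are my assumptions.

```python
# Sketch of the requested usage (assumed API surface): build an FP8-native
# model directly from a config instead of quantizing a checkpoint at load time.
from transformers import AutoConfig, AutoModelForCausalLM, FineGrainedFP8Config

config = AutoConfig.from_pretrained("Qwen/Qwen2.5-0.5B")  # any causal-LM config
quant_config = FineGrainedFP8Config()  # assumed FP8 quantization config class

# Per the issue, a quantization_config passed this way is currently not
# respected; the feature request is for this to create FP8 weights natively.
model = AutoModelForCausalLM.from_config(config, quantization_config=quant_config)
print(getattr(model.config, "quantization_config", None))
```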
### Motivation
FP8 is becoming increasingly important.
### Your contribution
๐
| https://github.com/huggingface/transformers/issues/42804 | open | ["Feature request"] | 2025-12-11T10:17:47Z | 2025-12-14T22:49:48Z | 3 | vasqu |
| huggingface/trl | 4,679 | [SFT] High vRAM consumption during eval loop |
### Reproduction
### Unexpected behavior
When training a model on large sequences (>=20k tokens) with `PEFT LoRA` + `SFTTrainer` + `liger-kernel`, vRAM usage spikes during the evaluation loop, consuming far more vRAM than during training.
The size of this vRAM spike seems to scale with the length of the input sequence: with `max_length=40000`, we end up with spikes of ~50 GB of vRAM, far exceeding the amount used during training.
Here's a MLFlow GPU vRAM extract showcasing this on an A100 for this 40k token scenario with Qwen3-0.6B:
<img width="1003" height="556" alt="Image" src="https://github.com/user-attachments/assets/8d909f73-6cbe-4c3e-8d6a-e6b8c6c56dbe" />
And same goes for Qwen3-4B, 40k token:
<img width="1006" height="552" alt="Image" src="https://github.com/user-attachments/assets/aa74b9c3-14eb-4c35-851f-c6802d2d420d" />
### Minimal reproduction script
Below is the [default SFT example from the documentation](https://github.com/huggingface/trl/blob/main/trl/scripts/sft.py), slightly altered to artificially create long input sequences (>=20k tokens) in both the training and evaluation dataset splits.
By running `watch -n 1 nvidia-smi` while training is running, you can see that vRAM usage is much higher during the evaluation phase than during training. If your GPU has enough vRAM, you can increase the `max_length` parameter and this becomes even more visible. _For some reason, I can't get `trackio` to properly report vRAM usage, hence the use of `nvidia-smi`._
You can launch the script with the following command:
```bash
python sft_example.py \
--model_name_or_path Qwen/Qwen3-0.6B \
--dataset_name trl-lib/Capybara \
--learning_rate 2.0e-4 \
--max-steps 10 \
--per_device_train_batch_size 1 \
--per_device_eval_batch_size 1 \
--eval_accumulation_steps 1 \
--gradient_accumulation_steps 1 \
--gradient_checkpointing \
--eos_token '<|im_end|>' \
--eval_strategy steps \
--eval_steps 10 \
--use_peft \
--lora_r 8 \
--lora_alpha 16 \
--use_liger \
--max_length 10000
```
```python
# Copyright 2020-2025 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# /// script
# dependencies = [
# "trl",
# "peft",
# "trackio",
# "kernels"
# ]
# ///
import argparse
import os
from accelerate import logging
from datasets import load_dataset
from transformers import AutoConfig, AutoModelForCausalLM
from transformers.models.auto.modeling_auto import (
MODEL_FOR_IMAGE_TEXT_TO_TEXT_MAPPING_NAMES,
)
from trl import (
DatasetMixtureConfig,
ModelConfig,
ScriptArguments,
SFTConfig,
SFTTrainer,
TrlParser,
get_dataset,
get_kbit_device_map,
get_peft_config,
get_quantization_config,
)
logger = logging.get_logger(__name__)
# Enable logging in a Hugging Face Space
os.environ.setdefault("TRACKIO_SPACE_ID", "trl-trackio")
def main(script_args, training_args, model_args, dataset_args):
################
# Model init kwargs
################
model_kwargs = dict(
revision=model_args.model_revision,
trust_remote_code=model_args.trust_remote_code,
attn_implementation=model_args.attn_implementation,
dtype=model_args.dtype,
)
quantization_config = get_quantization_config(model_args)
if quantization_config is not None:
# Passing None would not be treated the same as omitting the argument, so we include it only when valid.
model_kwargs["device_map"] = get_kbit_device_map()
model_kwargs["quantization_config"] = quantization_config
# Create model
config = AutoConfig.from_pretrained(model_args.model_name_or_path)
valid_image_text_architectures = MODEL_FOR_IMAGE_TEXT_TO_TEXT_MAPPING_NAMES.values()
if config.architectures and any(
arch in valid_image_text_architectures for arch in config.architectures
):
from transformers import AutoModelForImageTextToText
model = AutoModelForImageTextToText.from_pretrained(
model_args.model_name_or_path, **model_kwargs
)
else:
model = AutoModelForCausalLM.from_pretrained(
model_args.model_name_or_path, **model_kwargs
)
# Load the dataset
if dataset_args.datasets and script_args.dataset_name:
logger.warning(
"Both `datasets` and `dataset_name` are provided. The `datasets` argument will be used to load the "
"dataset and `dataset_name` will be ignored."
)
| https://github.com/huggingface/trl/issues/4679 | open | ["🐛 bug", "🏋 SFT", "⚡ PEFT"] | 2025-12-11T10:01:49Z | 2026-01-02T09:23:17Z | 3 | Khreas |
| vllm-project/vllm | 30,477 | [Usage]: How to disable thinking for Qwen-8B |
### Your current environment
```text
Collecting environment information...
==============================
System Info
==============================
OS : Ubuntu 24.04.3 LTS (x86_64)
GCC version : (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version : Could not collect
CMake version : Could not collect
Libc version : glibc-2.39
==============================
PyTorch Info
==============================
PyTorch version : 2.5.1+cu121
Is debug build : False
CUDA used to build PyTorch : 12.1
ROCM used to build PyTorch : N/A
==============================
Python Environment
==============================
Python version : 3.12.12 (main, Nov 19 2025, 22:46:53) [Clang 21.1.4 ] (64-bit runtime)
Python platform : Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.39
==============================
CUDA / GPU Info
==============================
Is CUDA available : True
CUDA runtime version : 12.1.105
CUDA_MODULE_LOADING set to : LAZY
GPU models and configuration : GPU 0: NVIDIA GeForce RTX 4090 Laptop GPU
Nvidia driver version : 546.26
cuDNN version : Could not collect
HIP runtime version : N/A
MIOpen runtime version : N/A
Is XNNPACK available : True
==============================
CPU Info
==============================
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Vendor ID: GenuineIntel
Model name: Intel(R) Core(TM) i9-14900HX
CPU family: 6
Model: 183
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 1
Stepping: 1
BogoMIPS: 4838.39
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology tsc_reliable nonstop_tsc cpuid pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves avx_vnni umip waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize flush_l1d arch_capabilities
Virtualization: VT-x
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 768 KiB (16 instances)
L1i cache: 512 KiB (16 instances)
L2 cache: 32 MiB (16 instances)
L3 cache: 36 MiB (1 instance)
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
==============================
Versions of relevant libraries
==============================
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.1.3.1
[pip3] nvidia-cuda-cupti-cu12==12.1.105
[pip3] nvidia-cuda-nvrtc-cu12==12.1.105
[pip3] nvidia-cuda-runtime-cu12==12.1.105
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.0.2.54
[pip3] nvidia-curand-cu12==10.3.2.106
[pip3] nvidia-cusolver-cu12==11.4.5.107
[pip3] nvidia-cusparse-cu12==12.1.0.106
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.9.86
[pip3] nvidia-nvtx-cu12==12.1.105
[pip3] pyzmq==27.1.0
[pip3] torch==2.5.1+cu121
[pip3] torchaudio==2.5.1+cu121
[pip3] torchvision==0.20.1+cu121
[pip3] transformers==4.57.3
[pip3] triton==3.1.0
[conda] Could not collect
| https://github.com/vllm-project/vllm/issues/30477 | closed | ["usage"] | 2025-12-11T09:28:40Z | 2025-12-22T06:10:43Z | 3 | fancyerii |
| huggingface/diffusers | 12,823 | How to use quantizer after pipeline loaded? |
How to use quantizer after pipeline loaded?
- Currently
```python
# Quantization occurs at load time.
pipe = QwenImagePipeline.from_pretrained(
(
args.model_path
if args.model_path is not None
else os.environ.get(
"QWEN_IMAGE_DIR",
"Qwen/Qwen-Image",
)
),
scheduler=scheduler,
torch_dtype=torch.bfloat16,
quantization_config=quantization_config,
)
```
- What I want
```python
# Load on CPU -> Load and fuse lora -> quantize -> to GPU
```
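One way to approximate that flow today is to quantize the transformer in place after the pipeline is loaded, for example with torchao's `quantize_`. Below is a hedged sketch under my own assumptions: the LoRA repo id is a placeholder, and whether post-load quantization behaves identically to the load-time `quantization_config` path is exactly the open question in this issue.

```python
# Hedged sketch: load on CPU -> load and fuse LoRA -> quantize in place -> move to GPU.
import torch
from diffusers import QwenImagePipeline
from torchao.quantization import quantize_, int8_weight_only

# 1) Load on CPU in bf16, without any quantization_config.
pipe = QwenImagePipeline.from_pretrained("Qwen/Qwen-Image", torch_dtype=torch.bfloat16)

# 2) Load and fuse the LoRA while the weights are still unquantized.
pipe.load_lora_weights("your-org/your-qwen-image-lora")  # placeholder repo id
pipe.fuse_lora()
pipe.unload_lora_weights()

# 3) Quantize the heavy transformer in place (int8 weight-only as an example).
quantize_(pipe.transformer, int8_weight_only())

# 4) Move to GPU for inference.
pipe.to("cuda")
```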
| https://github.com/huggingface/diffusers/issues/12823 | open | [] | 2025-12-11T06:32:38Z | 2025-12-11T14:18:28Z | null | DefTruth |
| huggingface/transformers | 42,794 | `decoder_start_token_id` or `bos_token_id` has to be defined for encoder-decoder generation. |
### System Info
latest transformers
### Who can help?
@zucchini-nlp
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
import torch
from transformers import pipeline
pipe = pipeline(
"document-question-answering",
model="naver-clova-ix/donut-base-finetuned-docvqa",
dtype=torch.float16,
)
image = "https://huggingface.co/spaces/impira/docquery/resolve/2359223c1837a7587402bda0f2643382a6eefeab/invoice.png"
question = "What is the invoice number?"
result = pipe(image=image, question=question)
print(result)
```
error:
```
Traceback (most recent call last):
File "/home/jiqingfe/transformers/test_dqa.py", line 13, in <module>
result = pipe(image=image, question=question)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jiqingfe/transformers/src/transformers/pipelines/document_question_answering.py", line 310, in __call__
return super().__call__(inputs, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jiqingfe/transformers/src/transformers/pipelines/base.py", line 1278, in __call__
return next(
^^^^^
File "/home/jiqingfe/transformers/src/transformers/pipelines/pt_utils.py", line 126, in __next__
item = next(self.iterator)
^^^^^^^^^^^^^^^^^^^
File "/home/jiqingfe/transformers/src/transformers/pipelines/pt_utils.py", line 271, in __next__
processed = self.infer(next(self.iterator), **self.params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jiqingfe/transformers/src/transformers/pipelines/base.py", line 1185, in forward
model_outputs = self._forward(model_inputs, **forward_params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jiqingfe/transformers/src/transformers/pipelines/document_question_answering.py", line 468, in _forward
model_outputs = self.model.generate(**model_inputs, **generate_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/sgl-workspace/miniforge3/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 120, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/jiqingfe/transformers/src/transformers/generation/utils.py", line 2551, in generate
self._prepare_special_tokens(generation_config, kwargs_has_attention_mask, device=device)
File "/home/jiqingfe/transformers/src/transformers/generation/utils.py", line 2145, in _prepare_special_tokens
raise ValueError(
ValueError: `decoder_start_token_id` or `bos_token_id` has to be defined for encoder-decoder generation.
```
### Expected behavior
I cannot locate which PR caused this regression because there have been too many errors recently. Transformers 4.57.3 works fine with this script.
| https://github.com/huggingface/transformers/issues/42794 | closed | ["bug"] | 2025-12-11T06:22:58Z | 2025-12-18T18:33:40Z | 1 | jiqing-feng |
| vllm-project/vllm | 30,464 | [Usage]: How can I use the local pre-compiled wheel of vllm |
### Your current environment
```text
The output of `python collect_env.py`
```
### How would you like to use vllm
Every time I use `VLLM_USE_PRECOMPILED=1 uv pip install --editable .` to build vLLM, it takes a long time to download the pre-compiled wheel. Would it be possible to build it using a locally downloaded wheel file instead?
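If I'm not mistaken, vLLM's build can be pointed at a wheel that is already on disk. A hedged sketch follows; the `VLLM_PRECOMPILED_WHEEL_LOCATION` variable and the local path are assumptions to verify against your vLLM checkout's `setup.py`, not confirmed by this issue.

```python
# Hedged sketch: drive the precompiled-wheel install with a wheel that is
# already on disk. VLLM_PRECOMPILED_WHEEL_LOCATION is an assumption to verify
# against setup.py; /path/to/vllm-precompiled.whl is a placeholder path.
import os
import subprocess

env = dict(os.environ)
env["VLLM_USE_PRECOMPILED"] = "1"
env["VLLM_PRECOMPILED_WHEEL_LOCATION"] = "/path/to/vllm-precompiled.whl"

# Run from the vLLM source checkout.
subprocess.run(["uv", "pip", "install", "--editable", "."], env=env, check=True)
```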
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
| https://github.com/vllm-project/vllm/issues/30464 | open | ["usage"] | 2025-12-11T06:22:43Z | 2025-12-12T01:02:22Z | 1 | gcanlin |
| huggingface/transformers | 42,791 | Add support for GPT_OSS with tp_plan or enable native tensor parallelism |
### Model description
https://huggingface.co/docs/transformers/main/perf_infer_gpu_multi?tp_plan=auto+plan
> https://github.com/huggingface/transformers/issues/41819
There is a list of supported models there, but GPT-OSS is not one of them. Please add support for GPT_OSS as well so that `tp_plan` can be enabled. Please also help me understand: when the model is prepared for TP during accelerate initialization, is some native support needed in the model itself to enable TP?
I have tried the example TP script https://github.com/huggingface/accelerate/blob/main/examples/torch_native_parallelism/nd_parallel.py with pure TP on the GPT-OSS-20B model and get the same error as in the already open issue https://github.com/huggingface/transformers/issues/41819.
After handling the `DTensor` sinks as described in that issue, I still find many such `DTensor`s in multiple other places, which causes the error below due to the incompatibility between `DTensor` and `torch.Tensor` (a sketch of the requested usage follows):
`[rank0]: RuntimeError: aten.bmm.default: got mixed torch.Tensor and DTensor, need to convert all torch.Tensor to DTensor before calling distributed operators!`
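For reference, here is a sketch of the usage being requested. It assumes the generic `tp_plan="auto"` entry point from the linked docs and is not a working example for GPT-OSS today, since (per this issue) the model does not yet define a tensor-parallel plan.

```python
# Sketch of the requested usage; launch with something like
# `torchrun --nproc-per-node 4 run_tp.py`. GPT-OSS currently lacks a tp_plan,
# so this is the target call pattern rather than a working example.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "openai/gpt-oss-20b"
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    tp_plan="auto",  # would require a tensor-parallel plan defined for GPT-OSS
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

inputs = tokenizer("Hello, my name is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```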
### Open source status
- [x] The model implementation is available
- [x] The model weights are available
### Provide useful links for the implementation
_No response_
| https://github.com/huggingface/transformers/issues/42791 | open | ["New model"] | 2025-12-11T04:31:19Z | 2025-12-19T08:38:31Z | 1 | quic-akuruvil |
| sgl-project/sglang | 14,868 | How to train vicuna EAGLE3 model? |
I have carefully reviewed the official tutorials and source code, but I was unable to find the relevant config and template files specific to Vicuna.
Could you please provide an example, specifically regarding the template structure?
| https://github.com/sgl-project/sglang/issues/14868 | open | [] | 2025-12-11T03:59:39Z | 2025-12-11T03:59:39Z | 0 | Sylvan820 |
| vllm-project/vllm | 30,447 | [Usage]: how to load kv cache data into local file |
### Your current environment
Python 3.10 + vLLM 0.10.0
### How would you like to use vllm
I want to get int8 KV cache data from [qwen-int8](https://www.modelscope.cn/models/Qwen/Qwen-7B-Chat-Int8). I don't know whether vLLM can do that. Thank you.
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
| https://github.com/vllm-project/vllm/issues/30447 | open | ["usage"] | 2025-12-11T01:43:58Z | 2025-12-12T15:11:50Z | 1 | chx725 |
| vllm-project/vllm | 30,441 | [Usage]: vllm serve setup issues on B300 |
### Your current environment
The output of `python collect_env.py`
```text
Collecting environment information...
uv is set
==============================
System Info
==============================
OS : Amazon Linux 2023.9.20251208 (x86_64)
GCC version : (GCC) 11.5.0 20240719 (Red Hat 11.5.0-5)
Clang version : Could not collect
CMake version : version 3.22.2
Libc version : glibc-2.34
==============================
PyTorch Info
==============================
PyTorch version : 2.9.0+cu130
Is debug build : False
CUDA used to build PyTorch : 13.0
ROCM used to build PyTorch : N/A
==============================
Python Environment
==============================
Python version : 3.11.14 (main, Nov 12 2025, 00:00:00) [GCC 11.5.0 20240719 (Red Hat 11.5.0-5)] (64-bit runtime)
Python platform : Linux-6.1.158-180.294.amzn2023.x86_64-x86_64-with-glibc2.34
==============================
CUDA / GPU Info
==============================
Is CUDA available : True
CUDA runtime version : 13.0.88
CUDA_MODULE_LOADING set to :
GPU models and configuration :
GPU 0: NVIDIA B300 SXM6 AC
GPU 1: NVIDIA B300 SXM6 AC
GPU 2: NVIDIA B300 SXM6 AC
GPU 3: NVIDIA B300 SXM6 AC
GPU 4: NVIDIA B300 SXM6 AC
GPU 5: NVIDIA B300 SXM6 AC
GPU 6: NVIDIA B300 SXM6 AC
GPU 7: NVIDIA B300 SXM6 AC
Nvidia driver version : 580.105.08
cuDNN version : Could not collect
HIP runtime version : N/A
MIOpen runtime version : N/A
Is XNNPACK available : True
==============================
CPU Info
==============================
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 192
On-line CPU(s) list: 0-191
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8559C
CPU family: 6
Model: 207
Thread(s) per core: 2
Core(s) per socket: 48
Socket(s): 2
Stepping: 2
BogoMIPS: 4800.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq monitor ssse3 fma cx16 pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx_vnni avx512_bf16 wbnoinvd ida arat avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq rdpid cldemote movdiri movdir64b md_clear serialize amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 4.5 MiB (96 instances)
L1i cache: 3 MiB (96 instances)
L2 cache: 192 MiB (96 instances)
L3 cache: 640 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-47,96-143
NUMA node1 CPU(s): 48-95,144-191
Vulnerability Gather data sampling: Not affected
Vulnerability Indirect target selection: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsa: Not affected
Vulnerability Tsx async abort: Not affected
Vulnerability Vms
| https://github.com/vllm-project/vllm/issues/30441 | open | ["usage"] | 2025-12-10T23:50:27Z | 2025-12-13T02:01:04Z | 1 | navmarri14 |