repo: stringclasses (147 values)
number: int64 (1 to 172k)
title: stringlengths (2 to 476)
body: stringlengths (0 to 5k)
url: stringlengths (39 to 70)
state: stringclasses (2 values)
labels: listlengths (0 to 9)
created_at: timestamp[ns, tz=UTC] (2017-01-18 18:50:08 to 2026-01-06 07:33:18)
updated_at: timestamp[ns, tz=UTC] (2017-01-18 19:20:07 to 2026-01-06 08:03:39)
comments: int64 (0 to 58)
user: stringlengths (2 to 28)
vllm-project/vllm
28,246
[Bug]: Return Token Ids not returning Gen Token Ids for GPT-OSS-120b
### Your current environment <details> Using docker image vllm/vllm-openai:latest </details> ### 🐛 Describe the bug When passing in return_token_ids flag to v1/chat/completions endpoint for GPTOSS-120b, only prompt_token_ids are returned and not token_ids. We have not seen this happen with any other model except GPTOSS-120b ``` curl --location 'http://localhost:8015/v1/chat/completions' \ --header 'Content-Type: application/json' \ --data '{ "model": "gpt-oss-120b", "messages": [{"content": "Hello!", "role": "user"}], "temperature": 0, "return_token_ids": true }' ``` `{"id":"chatcmpl-a19161b8131141e2a79495025adb40eb","object":"chat.completion","created":1762462711,"model":"gpt-oss-120b","choices":[{"index":0,"message":{"role":"assistant","content":"Hello! How can I help you today?","refusal":null,"annotations":null,"audio":null,"function_call":null,"tool_calls":[],"reasoning_content":"The user says \"Hello!\" We should respond politely. No special instructions. Just greet back."},"logprobs":null,"finish_reason":"stop","stop_reason":null,"token_ids":null}],"service_tier":null,"system_fingerprint":null,"usage":{"prompt_tokens":71,"total_tokens":109,"completion_tokens":38,"prompt_tokens_details":null},"prompt_logprobs":null,"prompt_token_ids":[200006,17360,200008,3575,553,17554,162016,11,261,4410,6439,2359,22203,656,7788,17527,558,87447,100594,25,220,1323,19,12,3218,198,6576,3521,25,220,1323,20,12,994,12,3218,279,30377,289,25,14093,279,2,13888,18403,25,8450,11,1721,13,21030,2804,413,7360,395,1753,3176,13,200007,200006,77944,200008,200007,200006,1428,200008,13225,0,200007,200006,173781],"kv_transfer_params":null}` I've also included in the docker container setup ``` docker run --rm -d --name vllm-gpt-oss-120b \ --gpus '"device=4,5"' \ --shm-size=16g \ -e TORCH_CUDA_ARCH_LIST="9.0" \ -v /mlf1-shared/user/gpt-oss-120b:/opt/model \ -p ${PORT}:${PORT} \ vllm/vllm-openai:latest\ --model /opt/model \ --served-model-name "${SERVED_MODEL_NAME}" \ --tensor-parallel-size "${TP_SIZE}" \ --gpu-memory-utilization "${GPU_UTIL}" \ --max-num-seqs 64 \ --port ${PORT} ``` ### Before submitting a new issue... - [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
https://github.com/vllm-project/vllm/issues/28246
open
[ "bug" ]
2025-11-06T21:08:16Z
2025-11-07T00:18:25Z
1
sophies-cerebras
pytorch/pytorch
167,242
CUDNN version in nightly pytorch 2.10.0 builds
Hi, I mainly use pytorch with ComfyUI. I know there is an issue with pytorch and CUDNN for which workarounds have been made in the ComfyUI code. I have seen here https://github.com/pytorch/pytorch/issues/166122 that CUDNN 9.15 solves the problem (from what I can understand, as I'm not a developer). Checking today's torch nightly 2.10.0+cu130 for Windows, it shows, if I'm not mistaken, CUDNN version 9.12: ``` >>> import torch >>> print(torch.__version__) 2.10.0.dev20251106+cu130 >>> torch.backends.cudnn.version() 91200 ``` My question is: when can we expect to see CUDNN 9.15 in the nightly 2.10.0+cu130 builds? And another question: seeing that CUDNN 9.15 is already available from nvidia (in fact it is already downloaded and installed on my computer), is there a way to use 9.15 with the current torch build, as this comment suggests? https://github.com/pytorch/pytorch/issues/166122#issuecomment-3487979692 > We have aligned not to bump this for the minor version release; as a workaround, we encourage users to manually install cudnn 9.15+ if they want to work around My apologies if I ask, maybe, trivial questions. cc @seemethere @malfet @atalman @csarofeen @ptrblck @xwang233 @eqy
https://github.com/pytorch/pytorch/issues/167242
open
[ "module: binaries", "module: cudnn", "triaged" ]
2025-11-06T20:16:08Z
2025-11-30T16:25:21Z
13
jovan2009
pytorch/ao
3,305
[MXFP8 MoE] What's the expected inference solution on H100s, after training with TorchAO MXFP8 MoE?
Hi team, Thanks for your great implementation of the new MXFP8 MoE! I have integrated it and am considering using it for production training, but I have a concern about how to do inference. MXFP8 is only available on B200. What is the expected inference solution on H100 or even non-Nvidia GPUs after training with MXFP8? Other quantizations, even another FP8 quantization, are not guaranteed to work well with a model trained with MXFP8. Is QAT finetuning with another quantization method expected? Should we just run inference with another quantization method without finetuning? I guess FP4 training is a similar case. I think the question is not only for the TorchAO team. Anyone, please share your ideas/insights if you would like to. Thanks in advance!
https://github.com/pytorch/ao/issues/3305
open
[ "question", "mx", "moe" ]
2025-11-06T18:45:31Z
2025-11-07T19:20:18Z
null
goldhuang
vllm-project/vllm
28,236
[Feature]: Implement naive prepare/finalize class to replace naive dispatching in fused_moe/layer.py
### 🚀 The feature, motivation and pitch The `FusedMoE` layer has a special case dispatch/combine for EP+DP when there is no specific all2all backend specified. This makes the code in `layer.py` a bit confusing and hard to follow. One way to simplify this is to implement a proper `FusedMoEPrepareAndFinalize` subclass for naive dispatch/combine. ### Alternatives _No response_ ### Additional context _No response_ ### Before submitting a new issue... - [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
https://github.com/vllm-project/vllm/issues/28236
open
[ "help wanted", "good first issue", "feature request" ]
2025-11-06T18:38:38Z
2025-11-12T06:36:29Z
4
bnellnm
vllm-project/vllm
28,233
[Usage]: LogitProcessor vLLM 0.9.1 run the same prompt 50 times with batching, apply logitprocessor independently on each
### Your current environment Goal Run the same prompt 50 times through vLLM 0.9.1, generating independent outputs with a custom LogitsProcessor that forces a comma token after some pattern "xyz" appears in each generation. What You Want Batched execution: Process all 50 generations efficiently in parallel Independent state: Each of the 50 generations should have its own state in the logits processor Pattern detection: When text ends with "xyz", mask all tokens except comma }, One-time application: Each generation should only apply the comma mask once Current Hurdles 1. Processor Signature Confusion vLLM V0 (0.9.1) uses signature: __call__(prompt_token_ids, generated_token_ids, logits) prompt_token_ids: The input prompt tokens (same for all 50) generated_token_ids: Tokens generated so far (different per generation) Problem: No built-in request ID to distinguish between the 50 generations 2. State Management When using the same prompt 50 times: All generations share identical prompt_token_ids Can't use prompt as unique identifier Using generated_token_ids as key works initially, but becomes complex as sequences diverge State dictionary grows indefinitely without cleanup 3. Batching vs Sequential Batching (llm.generate([prompt]*50)): Processor is called for all 50 in interleaved order, making state tracking difficult Sequential (50 separate calls): Works reliably but loses parallel efficiency Working Solution (Sequential) for i in range(50): processor = LookAheadProcessor(tokenizer) # Fresh processor each time sampling_params = SamplingParams(..., logits_processors=[processor]) output = llm.generate([prompt], sampling_params) This works because each generation gets its own processor instance. The Core Problem vLLM V0's logits processor API doesn't provide per-request identifiers in batched scenarios, making it impossible to maintain independent state for identical prompts without workarounds like using (prompt_tokens, generated_tokens) tuples as keys - which still fails when generations produce identical token sequences early on. Anyone knows a solution to this problem ? ### How would you like to use vllm ### Before submitting a new issue... - [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
https://github.com/vllm-project/vllm/issues/28233
open
[ "usage" ]
2025-11-06T18:11:32Z
2025-11-06T18:11:32Z
0
jindalankush28
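Regarding the vLLM 0.9.1 issue above: a minimal sketch, assuming the V0 engine as in the report, of the pattern the author says works, i.e. a stateless processor using the V0 three-argument signature combined with one processor instance per generation. The model name, trigger string, comma token lookup and sampling settings are illustrative placeholders, not the author's actual code.

```python
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer
import torch


class LookAheadProcessor:
    """Force a comma once the generated text ends with a trigger pattern."""

    def __init__(self, tokenizer, trigger: str = "xyz"):
        self.tokenizer = tokenizer
        self.trigger = trigger
        # First token id produced for "," (placeholder for the real comma token).
        self.comma_id = tokenizer.encode(",", add_special_tokens=False)[0]

    def __call__(self, prompt_token_ids, generated_token_ids, logits):
        # V0 three-argument signature described in the issue; `logits` is the
        # 1-D tensor for the current sequence.
        text = self.tokenizer.decode(generated_token_ids)
        if text.endswith(self.trigger):
            masked = torch.full_like(logits, float("-inf"))
            masked[self.comma_id] = logits[self.comma_id]
            return masked
        return logits


model_id = "Qwen/Qwen2.5-0.5B-Instruct"  # placeholder model
tokenizer = AutoTokenizer.from_pretrained(model_id)
llm = LLM(model=model_id)
prompt = "Repeat the pattern xyz"

outputs = []
for _ in range(50):
    # A fresh processor per generation keeps state independent, at the cost of
    # losing batched execution (the exact trade-off discussed in the issue).
    params = SamplingParams(temperature=0.8, max_tokens=64,
                            logits_processors=[LookAheadProcessor(tokenizer)])
    outputs.append(llm.generate([prompt], params)[0].outputs[0].text)
```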
vllm-project/vllm
28,230
[Bug]: GPU VRAM continuously increase during Qwen3-VL usage over days until OOM
### Your current environment Setup: docker run -d \ --runtime nvidia \ --gpus '"device=3,4,5,6"' \ -e TRANSFORMERS_OFFLINE=1 \ -e DEBUG="true" \ -p 8000:8000 \ --ipc=host \ vllm/vllm-openai:v0.11.0 \ --gpu-memory-utilization 0.95 \ --model Qwen/Qwen3-VL-235B-A22B-Instruct-FP8 \ --tensor-parallel-size 4 \ --mm-encoder-tp-mode data \ --enable-auto-tool-choice \ --tool-call-parser hermes \ --limit-mm-per-prompt.video 0 Server: 8*H200 with CUDA=12.6. ### 🐛 Describe the bug This is the same issue described in https://github.com/vllm-project/vllm/issues/27466 https://github.com/vllm-project/vllm/issues/27452 VRAM continuously increases over days of usage with vision. When available VRAM drops below 500MB, OOM occurs during new requests. As described in the other posts, removing mm_encoder_tp_mode="data" or adding --enforce-eager does not help either. There is currently no acceptable solution. Is there a memory leak? It is understood that VRAM usage may go up during vision tasks, but that memory should be freed afterwards; VRAM should not continuously increase and eventually hit OOM. ### Before submitting a new issue... - [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
https://github.com/vllm-project/vllm/issues/28230
open
[ "bug" ]
2025-11-06T17:19:18Z
2025-12-02T16:50:26Z
15
yz342
pytorch/pytorch
167,219
Are there limitations to dtensor's registration strategy?
I have an IR schema like this: func: my_scatter_add(Tensor x, Tensor(a!) y, Tensor index, Tensor? scale=None, bool use_high_prec=False) -> () This function has no return value, and the second parameter is an in-place parameter. I tried the `register_sharding` method described in the DTensor documentation. However, it threw an error. It seems this method doesn't support IR schemas without outputs. Can this IR schema support DTensor registration? cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @pragupta @msaroufim @dcci @tianyu-l @XilunWu @SherlockNoMad
https://github.com/pytorch/pytorch/issues/167219
open
[ "oncall: distributed", "module: dtensor" ]
2025-11-06T14:50:40Z
2025-11-11T13:37:24Z
4
Bin1024
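To make the schema above concrete: a small, self-contained sketch of declaring such an operator with torch.library (the op body is a placeholder, not the author's kernel). Whether `register_sharding` accepts a schema with no outputs is exactly the open question in this issue.

```python
import torch
from typing import Optional

# Same shape of schema as in the issue: in-place second argument, no outputs.
torch.library.define(
    "mylib::my_scatter_add",
    "(Tensor x, Tensor(a!) y, Tensor index, Tensor? scale=None, "
    "bool use_high_prec=False) -> ()",
)

@torch.library.impl("mylib::my_scatter_add", "CompositeExplicitAutograd")
def my_scatter_add_impl(x: torch.Tensor, y: torch.Tensor, index: torch.Tensor,
                        scale: Optional[torch.Tensor] = None,
                        use_high_prec: bool = False) -> None:
    # Placeholder body: scatter-add x into y in place along dim 0
    # (shapes of x and index are assumed to be compatible).
    src = x if scale is None else x * scale
    y.scatter_add_(0, index, src)

# The DTensor-side registration the author attempted would then look roughly
# like:
#   from torch.distributed.tensor.experimental import register_sharding
#   @register_sharding(torch.ops.mylib.my_scatter_add.default)
#   def my_scatter_add_strategy(...): ...
# which, per the report, currently rejects output-less schemas.
```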
huggingface/datasets
7,852
Problems with NifTI
### Describe the bug There are currently 2 problems with the new NifTI feature: 1. dealing with zipped files, this is mentioned and explained [here](https://github.com/huggingface/datasets/pull/7815#issuecomment-3496199503) 2. when uploading via the `niftifolder` feature, the resulting parquet only contains relative paths to the nifti files: ```bash table['nifti'] <pyarrow.lib.ChunkedArray object at 0x798245d37d60> [ -- is_valid: all not null -- child 0 type: binary [ null, null, null, null, null, null ] -- child 1 type: string [ "/home/tobias/programming/github/datasets/nifti_extracted/T1.nii", "/home/tobias/programming/github/datasets/nifti_extracted/T2-interleaved.nii", "/home/tobias/programming/github/datasets/nifti_extracted/T2.nii", "/home/tobias/programming/github/datasets/nifti_extracted/T2_-interleaved.nii", "/home/tobias/programming/github/datasets/nifti_extracted/T2_.nii", "/home/tobias/programming/github/datasets/nifti_extracted/fieldmap.nii" ] ] ``` instead of containing bytes. The code is copy pasted from PDF, so I wonder what is going wrong here. ### Steps to reproduce the bug see the linked comment ### Expected behavior downloading should work as smoothly as for pdf ### Environment info - `datasets` version: 4.4.2.dev0 - Platform: Linux-6.14.0-33-generic-x86_64-with-glibc2.39 - Python version: 3.12.3 - `huggingface_hub` version: 0.35.3 - PyArrow version: 21.0.0 - Pandas version: 2.3.3 - `fsspec` version: 2025.9.0
https://github.com/huggingface/datasets/issues/7852
closed
[]
2025-11-06T11:46:33Z
2025-11-06T16:20:38Z
2
CloseChoice
huggingface/peft
2,901
AttributeError: 'float' object has no attribute 'meta'
### System Info peft== 0.17.1 torch== 2.5.1+cu118 transformers==4.57.0 python==3.12.7 ### Who can help? I am trying to use LoRA with DINOv3 (so a slightly modified vit-b). However, I am hitting after a random number of iterations this error. It is sadly difficult to reproduce. Maybe someone can hint at what is going on? ``` Traceback (most recent call last): File "/dkfz/cluster/gpu/data/OE0441/k539i/miniforge3/envs/nnunetv2/lib/python3.12/site-packages/torch/_dynamo/output_graph.py", line 1446, in _call_user_compiler compiled_fn = compiler_fn(gm, self.example_inputs()) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/dkfz/cluster/gpu/data/OE0441/k539i/miniforge3/envs/nnunetv2/lib/python3.12/site-packages/torch/_dynamo/repro/after_dynamo.py", line 129, in __call__ compiled_gm = compiler_fn(gm, example_inputs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/dkfz/cluster/gpu/data/OE0441/k539i/miniforge3/envs/nnunetv2/lib/python3.12/site-packages/torch/__init__.py", line 2234, in __call__ return compile_fx(model_, inputs_, config_patches=self.config) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/dkfz/cluster/gpu/data/OE0441/k539i/miniforge3/envs/nnunetv2/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 1521, in compile_fx return aot_autograd( ^^^^^^^^^^^^^ File "/dkfz/cluster/gpu/data/OE0441/k539i/miniforge3/envs/nnunetv2/lib/python3.12/site-packages/torch/_dynamo/backends/common.py", line 72, in __call__ cg = aot_module_simplified(gm, example_inputs, **self.kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/dkfz/cluster/gpu/data/OE0441/k539i/miniforge3/envs/nnunetv2/lib/python3.12/site-packages/torch/_functorch/aot_autograd.py", line 1071, in aot_module_simplified compiled_fn = dispatch_and_compile() ^^^^^^^^^^^^^^^^^^^^^^ File "/dkfz/cluster/gpu/data/OE0441/k539i/miniforge3/envs/nnunetv2/lib/python3.12/site-packages/torch/_functorch/aot_autograd.py", line 1056, in dispatch_and_compile compiled_fn, _ = create_aot_dispatcher_function( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/dkfz/cluster/gpu/data/OE0441/k539i/miniforge3/envs/nnunetv2/lib/python3.12/site-packages/torch/_functorch/aot_autograd.py", line 522, in create_aot_dispatcher_function return _create_aot_dispatcher_function( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/dkfz/cluster/gpu/data/OE0441/k539i/miniforge3/envs/nnunetv2/lib/python3.12/site-packages/torch/_functorch/aot_autograd.py", line 759, in _create_aot_dispatcher_function compiled_fn, fw_metadata = compiler_fn( ^^^^^^^^^^^^ File "/dkfz/cluster/gpu/data/OE0441/k539i/miniforge3/envs/nnunetv2/lib/python3.12/site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py", line 179, in aot_dispatch_base compiled_fw = compiler(fw_module, updated_flat_args) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/dkfz/cluster/gpu/data/OE0441/k539i/miniforge3/envs/nnunetv2/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 1350, in fw_compiler_base return _fw_compiler_base(model, example_inputs, is_inference) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/dkfz/cluster/gpu/data/OE0441/k539i/miniforge3/envs/nnunetv2/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 1421, in _fw_compiler_base return inner_compile( ^^^^^^^^^^^^^^ File "/dkfz/cluster/gpu/data/OE0441/k539i/miniforge3/envs/nnunetv2/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 475, in compile_fx_inner return wrap_compiler_debug(_compile_fx_inner, compiler_name="inductor")( 
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/dkfz/cluster/gpu/data/OE0441/k539i/miniforge3/envs/nnunetv2/lib/python3.12/site-packages/torch/_dynamo/repro/after_aot.py", line 85, in debug_wrapper inner_compiled_fn = compiler_fn(gm, example_inputs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/dkfz/cluster/gpu/data/OE0441/k539i/miniforge3/envs/nnunetv2/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 661, in _compile_fx_inner compiled_graph = FxGraphCache.load( ^^^^^^^^^^^^^^^^^^ File "/dkfz/cluster/gpu/data/OE0441/k539i/miniforge3/envs/nnunetv2/lib/python3.12/site-packages/torch/_inductor/codecache.py", line 1334, in load compiled_graph = compile_fx_fn( ^^^^^^^^^^^^^^ File "/dkfz/cluster/gpu/data/OE0441/k539i/miniforge3/envs/nnunetv2/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 570, in codegen_and_compile compiled_graph = fx_codegen_and_compile(gm, example_inputs, **fx_kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/dkfz/cluster/gpu/data/OE0441/k539i/miniforge3/envs/nnunetv2/lib/python3.12/site-packages/t
https://github.com/huggingface/peft/issues/2901
closed
[]
2025-11-06T11:24:18Z
2025-11-17T15:34:08Z
6
Karol-G
vllm-project/vllm
28,192
[RFC]: Support separate NICs for KV cache traffic and MoE traffic
### Motivation. In MoE models with large KV caches, KV cache all-to-all and MoE expert communication share the same RNIC, causing congestion and degrading performance. Using dedicated NICs for each traffic type can improve bandwidth utilization and reduce interference. ### Proposed Change. Does vLLM currently support routing KV cache traffic and MoE traffic through different NICs? ### Feedback Period. _No response_ ### CC List. _No response_ ### Any Other Things. _No response_ ### Before submitting a new issue... - [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
https://github.com/vllm-project/vllm/issues/28192
open
[ "RFC" ]
2025-11-06T07:31:17Z
2025-11-06T08:19:56Z
1
JayFzh
vllm-project/vllm
28,186
[Bug] Cannot load qwen3-vl series with lora adapter
I fine-tuned the `Qwen3-VL-8B-Instruct` model using Unsloth. I moved the saved QLoRA adapter and the `Qwen3-VL-2B-Instruct` model to my vLLM server. Then I ran a command to start model serving with vLLM as shown below. (For reference, the vLLM server has no issues—it was already serving official Qwen3-VL models.) ``` command = [ sys.executable, "-m", "vllm.entrypoints.openai.api_server", "--model", "./Qwen3-VL-2B-Instruct", "--max_model_len", "3500", "--gpu_memory_utilization", "0.85", "--trust-remote-code", "--host", "0.0.0.0", "--port", "8888", # for lora adapter "--enable-lora", "--max-lora-rank", "16", # LoRA rank "--max-loras", "1", "--max-cpu-loras", "1", "--lora-modules", "adapter0=./my_lora_adapter" ] ``` I waited for vLLM to properly load the QLoRA adapter, but the following problem occurred : https://github.com/vllm-project/vllm/issues/26991 When I was feeling hopeless, I tried merging the model instead of saving the LoRA adapter separately by using the `save_pretrained_merged()` function as shown below, and then vLLM was able to load and perform inference normally: ``` save_pretrained_merged( f"my_16bit_model", tokenizer, save_method="merged_16bit") ``` However, I don't want to merge the models—I want to load VL model with **LoRA** adapter. I’ve seen many posts from others experiencing the same error. As of now, what can I do to resolve this issue?
https://github.com/vllm-project/vllm/issues/28186
open
[ "bug" ]
2025-11-06T06:02:33Z
2025-11-09T11:16:27Z
4
deepNoah
pytorch/pytorch
167,186
scripts/build_android.sh missing
### 🐛 Build scripts for android deleted, README outdated I was trying to build pytorch v2.9.0 for android, but it seems the build_android.sh script was deleted. Is there any reason why it was deleted? The odd thing is that https://github.com/pytorch/pytorch/blob/v2.9.0/android/README.md references bash ./scripts/build_pytorch_android.sh which doesn't exist. ``` commit 91602a92548d1dd351979cdc6e778c505c32c2b9 Author: albanD <desmaison.alban@gmail.com> Date: Wed Jul 23 01:21:25 2025 +0000 Cleanup old caffe2 scripts (#158475) Testing on this one is grep based: if there were no reference to that script I can find, I deleted. We can easily add any of these back if needed! Pull Request resolved: https://github.com/pytorch/pytorch/pull/158475 Approved by: https://github.com/seemethere, https://github.com/huydhn, https://github.com/cyyever ``` ### Versions v2.9.0
https://github.com/pytorch/pytorch/issues/167186
closed
[ "triaged", "oncall: mobile" ]
2025-11-06T04:15:16Z
2025-11-07T00:56:14Z
1
ppavacic
pytorch/torchtitan
1,998
[Documentation] [BE] Add docs for MXFP8 training on Blackwell
We have [float8](https://github.com/pytorch/torchtitan/blob/main/docs/float8.md) docs, we should add mxfp8 docs as well, especially since we have a public blog post on accelerating training with torchtitan mxfp8 training: https://pytorch.org/blog/accelerating-2k-scale-pre-training-up-to-1-28x-with-torchao-mxfp8-and-torchtitan-on-crusoe-b200-cluster/
https://github.com/pytorch/torchtitan/issues/1998
closed
[ "documentation" ]
2025-11-06T02:53:06Z
2025-12-03T21:54:51Z
0
danielvegamyhre
pytorch/pytorch
167,172
[Profiler][XPU] Is there a miss?
Found something: https://github.com/pytorch/pytorch/blob/943227f57bcd638ab288331442748769f907d8c1/torch/csrc/autograd/init.cpp#L390-L419 Should the XPU code also be inside the #if branch? It seems the XPU path depends on the macro `LIBKINETO_NOXPUPTI`? Or is the #if condition also missing a `|| !defined(LIBKINETO_NOXPUPTI)`? I'm not an XPU expert, so please correct me if something here is wrong. cc @gujinghui @EikanWang @fengyuan14 @guangyey
https://github.com/pytorch/pytorch/issues/167172
closed
[ "triaged", "module: xpu" ]
2025-11-06T02:15:45Z
2025-11-19T05:42:57Z
1
KarhouTam
huggingface/trl
4,481
DPOTrainer._prepare_dataset() adds an extra eos_token to conversationally formatted inputs
## Overview The DPOTrainer unconditionally appends the eos_token to both the "chosen" and "rejected" sequences. Because conversationally formatted inputs will already have the chat template applied, this causes them to have duplicate eos_tokens (Ex. `...<|im_end|><|im_end|>`). A related problem was reported for the [SFTTrainer](https://github.com/huggingface/trl/issues/3318), where Qwen2.5’s chat template confused the trainer’s logic for detecting whether a sequence already ended with an eos_token_id. The DPO case is slightly different: [DPOTrainer.tokenize_row](https://github.com/huggingface/trl/blob/main/trl/trainer/dpo_trainer.py#L738-L739) explicitly appends tokenizer.eos_token_id to both chosen_input_ids and rejected_input_ids, regardless of whether the text is standard or conversational. Even if the chat template already added the token, it will be added again. ## Repro ```python import trl from trl import DPOTrainer, DPOConfig from transformers import AutoModelForCausalLM, AutoTokenizer from datasets import Dataset import torch MODEL_ID = "Qwen/Qwen2.5-0.5B-Instruct" # Conversational format sample_data = { "prompt": [[{"role": "user", "content": "What is 2+2?"}]], "chosen": [[{"role": "assistant", "content": "2+2 equals 4."}]], "rejected": [[{"role": "assistant", "content": "I don't know math."}]] } # Convert to dataset train_dataset = Dataset.from_dict(sample_data) # Load model and tokenizer tokenizer = AutoTokenizer.from_pretrained(MODEL_ID) model = AutoModelForCausalLM.from_pretrained( MODEL_ID, dtype=torch.bfloat16, device_map="auto" ) # Setup DPO config dpo_config = DPOConfig( output_dir="./dpo_output", per_device_train_batch_size=2, num_train_epochs=1, logging_steps=1, remove_unused_columns=False, ) # Initialize DPOTrainer trainer = DPOTrainer( model=model, args=dpo_config, train_dataset=train_dataset, processing_class=tokenizer, ) # Get the processed batch train_dataloader = trainer.get_train_dataloader() batch = next(iter(train_dataloader)) # Decode and display the preprocessed sequences for idx in range(len(batch["chosen_input_ids"])): # Show prompt if available if "prompt_input_ids" in batch: prompt_tokens = batch["prompt_input_ids"][idx] print("-"*80) print(f"PROMPT:") print("-"*80) print(tokenizer.decode(prompt_tokens, skip_special_tokens=False)) print("-"*80) # Show full chosen sequence chosen_tokens = batch["chosen_input_ids"][idx] print(f"CHOSEN SEQUENCE:") print("-"*80) print(tokenizer.decode(chosen_tokens, skip_special_tokens=False)) print("-"*80 + "\n") # Show full rejected sequence rejected_tokens = batch["rejected_input_ids"][idx] print(f"REJECTED SEQUENCE:") print("-"*80) print(tokenizer.decode(rejected_tokens, skip_special_tokens=False)) print("-"*80) ``` ## Outputs: Notice the double `<|im_end|>` tokens for the 'chosen' and 'rejected' columns. ``` -------------------------------------------------------------------------------- PROMPT: -------------------------------------------------------------------------------- <|im_start|>system You are Qwen, created by Alibaba Cloud. 
You are a helpful assistant.<|im_end|> <|im_start|>user What is 2+2?<|im_end|> <|im_start|>assistant -------------------------------------------------------------------------------- CHOSEN SEQUENCE: -------------------------------------------------------------------------------- 2+2 equals 4.<|im_end|> <|im_end|> -------------------------------------------------------------------------------- REJECTED SEQUENCE: -------------------------------------------------------------------------------- I don't know math.<|im_end|> <|im_end|> -------------------------------------------------------------------------------- ``` ### System Info - Platform: Linux-6.11.0-1016-nvidia-x86_64-with-glibc2.39 - Python version: 3.12.11 - TRL version: 0.24.0 - PyTorch version: 2.7.1+cu128 - accelerator(s): NVIDIA H200 - Transformers version: 4.57.1 - Accelerate version: 1.11.0 - Accelerate config: not found - Datasets version: 4.4.1 - HF Hub version: 0.36.0 - bitsandbytes version: not installed - DeepSpeed version: not installed - Liger-Kernel version: not installed - LLM-Blender version: not installed - OpenAI version: not installed - PEFT version: not installed - vLLM version: not installed ### Checklist - [x] I have checked that my issue isn't already filed (see [open issues](https://github.com/huggingface/trl/issues?q=is%3Aissue)) - [x] I have included my system information - [x] Any code provided is minimal, complete, and reproducible ([more on MREs](https://docs.github.com/en/get-started/writing-on-github/working-with-advanced-formatting/creating-and-highlighting-code-blocks)) - [x] Any code provided is properly formatted in code blocks, (no screenshot, [more on code blocks](https://docs.github.com/en/get-started/writing-on-github/wo
https://github.com/huggingface/trl/issues/4481
open
[ "🐛 bug", "🏋 DPO" ]
2025-11-06T01:17:05Z
2025-11-06T18:40:39Z
0
DevonPeroutky
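A minimal sketch of the guard the report above implies: only append the EOS token when the tokenized completion does not already end with it. This is illustrative logic written against the Qwen tokenizer from the repro, not TRL's actual `tokenize_row` implementation.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct")
eos = tokenizer.eos_token_id  # <|im_end|> for Qwen2.5-Instruct

def append_eos_once(input_ids: list[int], eos_token_id: int) -> list[int]:
    """Append eos_token_id only if the sequence does not already end with it."""
    if input_ids and input_ids[-1] == eos_token_id:
        return input_ids
    return input_ids + [eos_token_id]

# One sequence that is already terminated, one that is not.
already_terminated = tokenizer.encode("2+2 equals 4.", add_special_tokens=False) + [eos]
not_terminated = tokenizer.encode("I don't know math.", add_special_tokens=False)

assert append_eos_once(already_terminated, eos)[-1] == eos
assert append_eos_once(already_terminated, eos).count(eos) == 1   # no duplicate
assert append_eos_once(not_terminated, eos)[-1] == eos            # added once
# Note: an endswith-style check can still be fooled by chat templates that emit
# a trailing newline after <|im_end|>, as in the linked SFTTrainer issue.
```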
huggingface/trl
4,468
Move RLOOTrainer to trl.experimental
## Context Part of #4223 and #4374 - Moving trainers to experimental submodule for V1. ## Task Move RLOOTrainer from main trl module to trl.experimental: - [ ] Move trainer file to trl/experimental/ - [ ] Update imports in __init__.py files - [ ] Update documentation - [ ] Add deprecation warning in old location - [ ] Update tests - [ ] Verify examples still work ## Post-V1 Plan May stay in trl.experimental as maintenance cost is low. ## Related - Parent tracking issue: #4374 - RFC: #4223 - BCO migration (completed): #4312
https://github.com/huggingface/trl/issues/4468
closed
[ "📚 documentation", "✨ enhancement" ]
2025-11-05T21:30:15Z
2025-12-05T18:21:41Z
2
behroozazarkhalili
huggingface/trl
4,466
Move PPOTrainer to trl.experimental
## Context Part of #4223 and #4374 - Moving trainers to experimental submodule for V1. ## Task Move PPOTrainer from main trl module to trl.experimental: - [ ] Move trainer file to trl/experimental/ - [ ] Update imports in __init__.py files - [ ] Update documentation - [ ] Add deprecation warning in old location - [ ] Update tests - [ ] Verify examples still work ## Post-V1 Plan May stay in trl.experimental as it's an important baseline but requires heavy refactoring. ## Related - Parent tracking issue: #4374 - RFC: #4223 - BCO migration (completed): #4312
https://github.com/huggingface/trl/issues/4466
closed
[ "📚 documentation", "✨ enhancement", "🏋 PPO" ]
2025-11-05T21:29:54Z
2025-11-13T19:01:20Z
0
behroozazarkhalili
huggingface/trl
4,465
Move ORPOTrainer to trl.experimental
## Context Part of #4223 and #4374 - Moving trainers to experimental submodule for V1. ## Task Move ORPOTrainer from main trl module to trl.experimental: - [ ] Move trainer file to trl/experimental/ - [ ] Update imports in __init__.py files - [ ] Update documentation - [ ] Add deprecation warning in old location - [ ] Update tests - [ ] Verify examples still work ## Post-V1 Plan May stay in trl.experimental. ## Related - Parent tracking issue: #4374 - RFC: #4223 - BCO migration (completed): #4312
https://github.com/huggingface/trl/issues/4465
closed
[ "📚 documentation", "✨ enhancement", "🏋 ORPO" ]
2025-11-05T21:29:44Z
2025-11-21T06:36:32Z
0
behroozazarkhalili
huggingface/trl
4,463
Move KTOTrainer to trl.experimental
## Context Part of #4223 and #4374 - Moving trainers to experimental submodule for V1. ## Task Move KTOTrainer from main trl module to trl.experimental: - [ ] Move trainer file to trl/experimental/ - [ ] Update imports in __init__.py files - [ ] Update documentation - [ ] Add deprecation warning in old location - [ ] Update tests - [ ] Verify examples still work ## Post-V1 Plan May be promoted to main codebase after refactoring. ## Related - Parent tracking issue: #4374 - RFC: #4223 - BCO migration (completed): #4312
https://github.com/huggingface/trl/issues/4463
open
[ "📚 documentation", "✨ enhancement", "🏋 KTO" ]
2025-11-05T21:29:25Z
2025-11-05T21:29:50Z
0
behroozazarkhalili
huggingface/trl
4,461
Move OnlineDPOTrainer to trl.experimental
## Context Part of #4223 and #4374 - Moving trainers to experimental submodule for V1. ## Task Move OnlineDPOTrainer from main trl module to trl.experimental: - [ ] Move trainer file to trl/experimental/ - [ ] Update imports in __init__.py files - [ ] Update documentation - [ ] Add deprecation warning in old location - [ ] Update tests - [ ] Verify examples still work ## Post-V1 Plan May be removed based on usage and maintenance requirements. ## Related - Parent tracking issue: #4374 - RFC: #4223 - BCO migration (completed): #4312
https://github.com/huggingface/trl/issues/4461
closed
[ "📚 documentation", "✨ enhancement", "🏋 Online DPO" ]
2025-11-05T21:28:08Z
2025-11-24T01:13:07Z
1
behroozazarkhalili
pytorch/pytorch
167,118
[CI][CUDA][B200] Why does job keep encountering "No devices were found" while "nvidia-smi" on bare-metal returns normal results
### 🐛 Describe the bug JOB link: https://github.com/pytorch/pytorch/actions/runs/19096449521/job/54559623146 Runner/user: dgxb200-08-1003 Nvidia-smi output when logged on the machine: <img width="673" height="560" alt="Image" src="https://github.com/user-attachments/assets/28d124a2-3a4e-408a-8301-4437b2541af5" /> ### Versions Infra cc @ezyang @gchanan @kadeng @msaroufim @ptrblck @eqy @tinglvv @atalman @malfet @huydhn @seemethere
https://github.com/pytorch/pytorch/issues/167118
closed
[ "high priority", "triage review" ]
2025-11-05T20:06:16Z
2025-11-10T17:16:16Z
4
nWEIdia
vllm-project/vllm
28,152
[Feature]: Factor out `zero_expert_num` from `FusedMoE`
### 🚀 The feature, motivation and pitch We have many special cases in `FusedMoE` for `zero_expert_num` This parameter is used exclusively for `LongCatFlash`. We should factor this out of `FusedMoe` and put the complexity into the model file. ### Alternatives _No response_ ### Additional context _No response_ ### Before submitting a new issue... - [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
https://github.com/vllm-project/vllm/issues/28152
open
[ "help wanted", "feature request" ]
2025-11-05T19:05:54Z
2025-11-06T20:08:23Z
0
robertgshaw2-redhat
pytorch/ao
3,295
Examples of using llms with PT2E workflow?
Are there examples of using llms with PT2E workflow? I'm interested in static quantization using qwen3 .
https://github.com/pytorch/ao/issues/3295
closed
[ "triaged" ]
2025-11-05T18:33:13Z
2025-12-05T01:12:56Z
3
cjm715
vllm-project/vllm
28,150
[Bug]: -O.mode=NONE (or -cc.mode=NONE) should work
### Your current environment main ### 🐛 Describe the bug Right now -O.mode only accepts integer levels. Ideally it would accept ints and the string. `vllm serve -O.mode=NONE` # doesn't work `vllm serve -O.mode=0` # does work ### Before submitting a new issue... - [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
https://github.com/vllm-project/vllm/issues/28150
closed
[ "bug", "help wanted", "good first issue", "torch.compile" ]
2025-11-05T18:28:23Z
2025-11-12T00:46:20Z
1
zou3519
vllm-project/vllm
28,137
[Feature]: Refactor `aiter_shared_expert_fusion`
### 🚀 The feature, motivation and pitch We have a special case in the `FusedMoE` layer for `aiter_shared_expert_fusion` which creates various if branches scattered across the layer. We should factor this out. ### Alternatives _No response_ ### Additional context _No response_ ### Before submitting a new issue... - [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
https://github.com/vllm-project/vllm/issues/28137
open
[ "help wanted" ]
2025-11-05T15:54:09Z
2025-12-20T22:00:55Z
3
robertgshaw2-redhat
vllm-project/vllm
28,132
[Usage]: How do I assign a specific GPU to a vLLM docker container?
### Your current environment stock vllm-openai:v0.11.0 docker image rootless Docker v.27.5.1 on Ubuntu 22.04.5 LTS on physical hardware Nvidia Driver Version: 570.133.20 CUDA Version: 12.8 GPUs: 4x H100 (NVLink), numbered 0,1,2,3 ### How would you like to use vllm I want to run inference of [SmolLM3-3B](https://huggingface.co/HuggingFaceTB/SmolLM3-3B). The exact model doesn't matter, this happens with other models as well. i want to run this model using Docker. This basically works. However, it alway picks a different GPU than what i specify in CUDA_VISIBLE_DEVICES. Out of my four GPUs, 0 and 1 are idle. I would like the container to use GPU 0. But no matter what I try, it always decides to run on GPU 1. I can verify this using `nvtop`. This is my compose file: ```yaml services: vllm-smol: container_name: smollm-3b image: vllm/vllm-openai:v0.11.0 volumes: - ./smollm-3b/models:/models gpus: "all" environment: HF_HOME: "/models" CUDA_VISIBLE_DEVICES: "0" command: > --model HuggingFaceTB/SmolLM3-3B --enable-auto-tool-choice --tool-call-parser=hermes --gpu-memory-utilization 0.1875 labels: ``` This way, the vLLM container starts and inferencing runs fine. But it decides to use GPU 1 instead of GPU 0 i have also tried this, as docker compose will only accept `gpus: "all"`: ```yaml docker run -d \ --name smollm-3b \ -v "$(pwd)/smollm-3b/models:/models" \ --gpus "device=0" \ -e HF_HOME="/models" \ -e CUDA_VISIBLE_DEVICES="0" \ vllm/vllm-openai:v0.11.0 \ --model HuggingFaceTB/SmolLM3-3B \ --enable-auto-tool-choice \ --tool-call-parser=hermes \ --gpu-memory-utilization 0.1875 ``` This gives me an error during container startup: `RuntimeError: No CUDA GPUs are available` Omitting `CUDA_VISIBLE_DEVICES` gives the same error. And finally, there is also this attempt: ```yaml services: vllm-smol: container_name: smollm-3b image: vllm/vllm-openai:v0.11.0 volumes: - ./smollm-3b/models:/models deploy: resources: reservations: devices: - driver: nvidia device_ids: ['0'] capabilities: [gpu] environment: HF_HOME: "/models" # CUDA_VISIBLE_DEVICES: "0" command: > --model HuggingFaceTB/SmolLM3-3B --enable-auto-tool-choice --tool-call-parser=hermes --gpu-memory-utilization 0.1875 ``` Errors are, once again, identical with and without `CUDA_VISIBLE_DEVICES`: `RuntimeError: No CUDA GPUs are available` Am I doing something fundamentally wrong here? All i want is to use a specific GPU (GPU 0 in my case) ### Before submitting a new issue... - [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
https://github.com/vllm-project/vllm/issues/28132
closed
[ "usage" ]
2025-11-05T14:42:17Z
2025-11-06T14:54:41Z
1
lindner-tj
huggingface/lerobot
2,389
How to resolve the issue that GROOT cannot train properly? Below is my training configuration and error log.
How to resolve the issue that GROOT cannot train properly? Below is my training configuration and error log. accelerate launch \ --multi_gpu \ --num_processes=2 \ $(which lerobot-train) \ --output_dir=./outputs/groot_training \ --save_checkpoint=true \ --batch_size=8 \ --steps=200000 \ --save_freq=20000 \ --log_freq=200 \ --policy.type=groot \ --policy.push_to_hub=false \ --policy.repo_id=your_repo_id \ --dataset.root=/home/ruijia/wxl/data/train_segdata_wrist_20251028_200/ \ --dataset.repo_id=ur_wrist_data \ --wandb.enable=false \ --wandb.disable_artifact=false \ --job_name=grapdata [rank1]:[W1105 18:09:16.255729052 CUDAGuardImpl.h:119] Warning: CUDA warning: an illegal memory access was encountered (function destroyEvent) terminate called after throwing an instance of 'c10::Error' [rank1]:[E1105 18:09:16.257152106 ProcessGroupNCCL.cpp:1899] [PG ID 0 PG GUID 0(default_pg) Rank 1] Process group watchdog thread terminated with exception: CUDA error: an illegal memory access was encountered CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect. For debugging consider passing CUDA_LAUNCH_BLOCKING=1 Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions. Exception raised from c10_cuda_check_implementation at /pytorch/c10/cuda/CUDAException.cpp:43 (most recent call first): frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x98 (0x7c3dcab785e8 in /home/ruijia/miniconda3/envs/lerobot_pi05/lib/python3.10/site-packages/torch/lib/libc10.so)
https://github.com/huggingface/lerobot/issues/2389
open
[ "training" ]
2025-11-05T10:17:59Z
2025-11-07T17:47:50Z
null
wuxiaolianggit
huggingface/lerobot
2,388
how to improve the generalization of the vla model like gr00t
After fine-tuning gr00t, I found that it only works for prompts from within the dataset; it has difficulty understanding new words and new items that it needs to grab. Is there a method to preserve generalization, for example by creating a new layer that maps the output of the model to a new dimensionality?
https://github.com/huggingface/lerobot/issues/2388
open
[]
2025-11-05T10:06:11Z
2025-11-05T10:44:38Z
null
Temmp1e
vllm-project/vllm
28,119
[Feature]: Will we support async scheduler for pipeline parallel?
### 🚀 The feature, motivation and pitch SGLang already has this: https://github.com/sgl-project/sglang/pull/11852 and I see a huge perf gap for PP on SM120 because of this. ### Alternatives _No response_ ### Additional context _No response_ ### Before submitting a new issue... - [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
https://github.com/vllm-project/vllm/issues/28119
closed
[ "feature request" ]
2025-11-05T09:55:57Z
2025-11-07T06:14:19Z
4
weireweire
huggingface/gsplat.js
122
I want to add an object (such as a robot) to move around in the model. How can this be achieved?
I want to add an object (such as a robot) to move around in the model. How can this be achieved?
https://github.com/huggingface/gsplat.js/issues/122
open
[]
2025-11-05T09:16:39Z
2025-11-05T09:16:39Z
null
ThinkingInGIS
pytorch/pytorch
167,062
How to use torch.compile on Windows GPU?
### 🐛 Describe the bug I have installed Python 3.13.9 and PyTorch 2.9+cuda3.13 pip3 install torch torchvision --index-url https://download.pytorch.org/whl/cu130, And my GPU is RTX 380 12 GB. I have Windows 11 I followed up on those steps - MSVC v143 - VS 2022 C++ x64/x86 build tools - Windows 11 SDK - C++ CMake tools for Windows - C++ core features and added the cl.exe into my environment path "C:\Program Files\Microsoft Visual Studio\2022\Enterprise\VC\Tools\MSVC\14.44.35207\bin\Hostx64\x64" I tried this code ``` import torch device="cuda" def foo(x, y): a = torch.sin(x) b = torch.cos(x) return a + b opt_foo1 = torch.compile(foo) print(opt_foo1(torch.randn(10, 10).to(device), torch.randn(10, 10).to(device))) ``` ### Error logs CppCompileError: C++ compile error Command: cl /I c:/Users/Emad Younan/AppData/Local/Programs/Python/Python313/Include /I c:/Users/Emad Younan/AppData/Local/Programs/Python/Python313/Lib/site-packages/torch/include /I c:/Users/Emad Younan/AppData/Local/Programs/Python/Python313/Lib/site-packages/torch/include/torch/csrc/api/include /D NOMINMAX /D TORCH_INDUCTOR_CPP_WRAPPER /D STANDALONE_TORCH_HEADER /D C10_USING_CUSTOM_GENERATED_MACROS /O2 /DLL /MD /std:c++20 /wd4819 /wd4251 /wd4244 /wd4267 /wd4275 /wd4018 /wd4190 /wd4624 /wd4067 /wd4068 /EHsc /Zc:__cplusplus /permissive- /openmp /openmp:experimental C:/temp/torch_compile/bi/cbil2ud2wplsgzj6esiu72j2t7zq6phvrdyun5pl56vn2g26y5qg.main.cpp /FeC:/temp/torch_compile/bi/cbil2ud2wplsgzj6esiu72j2t7zq6phvrdyun5pl56vn2g26y5qg.main.pyd /LD /link /LIBPATH:c:/Users/Emad Younan/AppData/Local/Programs/Python/Python313/libs /LIBPATH:c:/Users/Emad Younan/AppData/Local/Programs/Python/Python313/Lib/site-packages/torch/lib torch.lib torch_cpu.lib torch_python.lib sleef.lib c10.lib Output: Microsoft (R) C/C++ Optimizing Compiler Version 19.44.35219 for x64 Copyright (C) Microsoft Corporation. All rights reserved. cl : Command line warning D9025 : overriding ‘/openmp’ with ‘/openmp:experimental’ cbil2ud2wplsgzj6esiu72j2t7zq6phvrdyun5pl56vn2g26y5qg.main.cpp c:/Users/Emad Younan/AppData/Local/Programs/Python/Python313/Lib/site-packages/torch/include\torch/csrc/inductor/cpp_prefix.h(3): fatal error C1083: Cannot open include file: ‘omp.h’: No such file or directory ### Versions pip3 install torch torchvision --index-url https://download.pytorch.org/whl/cu130 cc @peterjc123 @mszhanyi @skyline75489 @nbcsm @iremyux @Blackhex @chauhang @penguinwu
https://github.com/pytorch/pytorch/issues/167062
open
[ "module: windows", "triaged", "oncall: pt2" ]
2025-11-05T09:04:27Z
2025-11-11T18:16:46Z
null
emadyounan
vllm-project/vllm
28,104
[Usage]: vllm bench serve cannot use the ShareGPT dataset
### Your current environment ```text I ran the following benchmarks command: vllm bench serve --model Qwen3 --tokenizer /mnt/workspace/models --host 127.0.0.1 --port 80 --num-prompts 400 --percentile-metrics ttft,tpot,itl,e2el --metric-percentiles 90,95,99 --dataset-name sharegpt --dataset-path /mnt/workspace/benchmarks/sharegpt/ShareGPT_V3_unfiltered_cleaned_split.json --sharegpt-output-len 512 and it reports the following error: /usr/local/lib/python3.12/dist-packages/torch/cuda/init.py:61: FutureWarning: The pynvml package is deprecated. Please install nvidia-ml-py instead. If you did not install pynvml directly, please report this to the maintainers of the package that installed pynvml for you. import pynvml # type: ignore[import] INFO 11-04 22:14:30 [init.py:243] Automatically detected platform cuda. INFO 11-04 22:14:32 [init.py:31] Available plugins for group vllm.general_plugins: INFO 11-04 22:14:32 [init.py:33] - lora_filesystem_resolver -> vllm.plugins.lora_resolvers.filesystem_resolver:register_filesystem_resolver INFO 11-04 22:14:32 [init.py:36] All plugins in this group will be loaded. Set VLLM_PLUGINS to control which plugins to load. usage: vllm bench serve [options] vllm bench <bench_type> [options] serve: error: argument --dataset-name: invalid choice: 'sharegpt' (choose from random). Why does this report an error??? ``` ### How would you like to use vllm How to solve it? ### Before submitting a new issue... - [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
https://github.com/vllm-project/vllm/issues/28104
open
[ "usage" ]
2025-11-05T06:18:02Z
2025-11-06T14:24:46Z
1
uOnePiece
pytorch/pytorch
167,042
Requesting Cuda 13 support
### 🚀 The feature, motivation and pitch Hi! I am trying to run Torch with GPU support. I am running on Windows, with CUDA toolkit 13 installed, and the latest nvidia drivers. `torch.cuda.is_available()` is showing as False. Is it safe to assume this is because it needs CUDA 12? I'm brand new to Torch, but do a bit of CUDA FFI from rust in my own code, and have been able to get Python FFI working with that. The gist is, if you just use the CUDA Driver API, the application (In this case me running Pytorch) doesn't even need CUDA installed; just compatible drivers. The PC *compiling* the program needs CUDA. For things like cuFFT, you can ship the DLL/SO with the program, then it will work. Maybe we need something like that? What specific things beyond the Driver API does Torch use? Or do you think something else is wrong? Thank you! Happy to help narrow this down and solve. ### Why this is something we should add When you go to the nvidia site and download CUDA, it is downloading by default a version that doesn't work with Torch (?).
https://github.com/pytorch/pytorch/issues/167042
closed
[]
2025-11-05T01:41:01Z
2025-11-05T01:51:37Z
1
David-OConnor
pytorch/pytorch
167,027
combine compiled vectorized function without recompiling already compiled part
### 🚀 The feature, motivation and pitch The nice thing of `torch.compile` is that it fuses the vectorized operations and avoid big intermediate tensors. For example, if I have ``` def func(x): y = f1(x) z = f2(y) return z ``` After `torch.compile` it becomes something like ``` for(int i=0;i<len(x);i++) { tmp_scalar = f1(x[i]) z[i] = f2(tmp_scalar) } ``` However if `f1` and `f2` are big functions, it expands everything inside. Is there a way to prevent the expansion of `f1` and `f2`, while still keeping the fusing behavior, for the purpose of reducing compilation time? In C++, `f1` and `f2` should be compiled into a non-inlined scalar function, and I would just like to do another compilation to combine `f1` and `f2` and then loop over `i`. If I understand correctly, graph break does not try to fuse the separated parts, although it can avoid recompilation. ### Alternatives _No response_ ### Additional context _No response_ cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @chauhang @penguinwu @voznesenskym @EikanWang @Guobing-Chen @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @aakhundov @coconutruben
https://github.com/pytorch/pytorch/issues/167027
open
[ "triaged", "intel", "oncall: pt2", "module: inductor" ]
2025-11-05T00:16:52Z
2025-11-11T18:15:06Z
1
SUSYUSTC
vllm-project/vllm
28,070
[Usage]: Is there a way to control default thinking behaviour of a model?
### Your current environment Is there a way to control the default thinking behaviour for models deployed through vllm? As per https://docs.vllm.ai/en/stable/features/reasoning_outputs.html, IBM Granite 3.2 reasoning is disabled by default. Qwen3, GLM 4.6, and Deepseek V3.1 all have reasoning enabled by default. It would be great if there were a way to control this from vllm. --override-generation-config allows the user to override temperature and other params at deployment, but this does not work for reasoning. I have tried `docker run -d --runtime nvidia -e TRANSFORMERS_OFFLINE=1 -e DEBUG="true" -p 8000:8000 --ipc=host vllm/vllm-openai:v0.11.0 --reasoning-parser qwen3 --model Qwen/Qwen3-4B --override-generation-config '{"chat_template_kwargs": {"enable_thinking": false}}'` ### How would you like to use vllm I want to run inference of a [specific model](put link here). I don't know how to integrate it with vllm. ### Before submitting a new issue... - [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
https://github.com/vllm-project/vllm/issues/28070
closed
[ "usage" ]
2025-11-04T22:03:32Z
2025-12-30T03:38:48Z
0
yz342
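For the issue above, a per-request alternative to the server-side override the author tried: passing chat_template_kwargs through the OpenAI client's extra_body, which vLLM forwards to the chat template. Whether the flag is honored depends on the model's template (Qwen3 supports enable_thinking); the endpoint URL is a placeholder for the deployment above.

```python
from openai import OpenAI

# Placeholder endpoint; point at the vLLM server started above.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

resp = client.chat.completions.create(
    model="Qwen/Qwen3-4B",
    messages=[{"role": "user", "content": "What is 2+2?"}],
    # Forwarded to the chat template by vLLM; disables the thinking block.
    extra_body={"chat_template_kwargs": {"enable_thinking": False}},
)
print(resp.choices[0].message.content)
```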
vllm-project/vllm
28,056
[Bug]: Missing libarm_compute.so in Arm CPU pip installed wheels
### Your current environment <details> <summary>The output of <code>python collect_env.py</code></summary> ```text Your output of `python collect_env.py` here ``` </details> ### 🐛 Describe the bug We now have vllm wheels for Arm CPUs in pypi thanks to https://github.com/vllm-project/vllm/pull/26931 and https://github.com/vllm-project/vllm/pull/27331 You can install Arm CPU wheels with: ``` pip install --pre vllm==0.11.1rc3+cpu --extra-index-url https://wheels.vllm.ai/0.11.1rc3%2Bcpu/ ``` However it will currently fail, unless you LD_PRELOAD ACL: ``` WARNING 10-29 12:33:18 [interface.py:171] Failed to import from vllm._C: ImportError('libarm_compute.so: cannot open shared object file: No such file or directory') ``` Best way to reproduce this locally is: - build vllm from main locally with `VLLM_TARGET_DEVICE=cpu python3 setup.py bdist_wheel` - remove `vllm/deps` which contains the libarm_compute.so - pip install the wheel you built Then you will run into the issue (because it will try to load libarm_compute.so under vllm/.deps/arm_compute-src/build/) Note: ACL/oneDNN are built in vllm here: We need to figure out how to bundle `libarm_compute.so` in the wheel to avoid this. ### Before submitting a new issue... - [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
https://github.com/vllm-project/vllm/issues/28056
closed
[ "bug" ]
2025-11-04T17:22:55Z
2025-11-13T05:43:10Z
2
fadara01
pytorch/torchtitan
1,989
Should MFU/tflops take tensor parallelism into account?
Right now model flops is computed before TP is applied. But TP changes the sizes of the matrices so I think the flops computation should be different as well?
https://github.com/pytorch/torchtitan/issues/1989
open
[ "question" ]
2025-11-04T16:51:12Z
2025-11-05T00:04:49Z
null
chelsea0x3b
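A small numeric illustration of the trade-off being asked about above (it is not torchtitan's actual MFU code): for a column-parallel matmul, per-rank FLOPs are the global FLOPs divided by the TP degree, so whether reported MFU changes depends on whether model FLOPs are divided across ranks before being compared against per-GPU peak throughput.

```python
def matmul_flops(m: int, k: int, n: int) -> int:
    """FLOPs of an (m x k) @ (k x n) matmul, counting a multiply-add as 2."""
    return 2 * m * k * n

m, k, n, tp = 4096, 8192, 8192, 4          # illustrative shapes and TP degree
global_flops = matmul_flops(m, k, n)       # full weight, TP ignored
per_rank_flops = matmul_flops(m, k, n // tp)  # each rank owns n/tp columns

# Summing per-rank work recovers the global count, so per-GPU MFU computed as
# (global_flops / tp) / (peak_flops * step_time) matches the per-rank view.
assert per_rank_flops * tp == global_flops
print(global_flops, per_rank_flops)
```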
vllm-project/vllm
28,046
Qwen3-Omni model inference : ValueError: Either SamplingParams or PoolingParams must be provided.
### Your current environment ```text The output of `python web_demo.py` ``` The above mentioned method provides the error below ``` qwen/Qwen3-Omni/collect_env.py", line 287, in get_vllm_version from vllm import __version__, __version_tuple__ ImportError: cannot import name '__version__' from 'vllm' (unknown location) ``` while the envs installed are below: ``` pip list Package Version Editable project location --------------------------------- --------------------------------- ---------------------------------------------------------- accelerate 1.11.0 aiofiles 24.1.0 aiohappyeyeballs 2.6.1 aiohttp 3.13.2 aiosignal 1.4.0 airportsdata 20250909 annotated-doc 0.0.3 annotated-types 0.7.0 anyio 4.11.0 astor 0.8.1 async-timeout 5.0.1 attrs 25.4.0 audioread 3.1.0 av 16.0.1 blake3 1.0.8 Brotli 1.1.0 cachetools 6.2.1 certifi 2025.10.5 cffi 2.0.0 charset-normalizer 3.4.4 click 8.2.1 cloudpickle 3.1.2 cmake 4.1.2 compressed-tensors 0.10.2 cupy-cuda12x 13.6.0 decorator 5.2.1 depyf 0.18.0 dill 0.4.0 diskcache 5.6.3 distro 1.9.0 dnspython 2.8.0 einops 0.8.1 email-validator 2.3.0 exceptiongroup 1.3.0 fastapi 0.121.0 fastapi-cli 0.0.14 fastapi-cloud-cli 0.3.1 fastrlock 0.8.3 ffmpy 0.6.4 filelock 3.20.0 flash_attn 2.8.3 frozenlist 1.8.0 fsspec 2025.10.0 gguf 0.17.1 gradio 5.44.1 gradio_client 1.12.1 groovy 0.1.2 h11 0.16.0 hf-xet 1.2.0 httpcore 1.0.9 httptools 0.7.1 httpx 0.28.1 huggingface-hub 0.36.0 idna 3.11 interegular 0.3.3 Jinja2 3.1.6 jiter 0.11.1 joblib 1.5.2 jsonschema 4.25.1 jsonschema-specifications 2025.9.1 lark 1.2.2 lazy_loader 0.4 librosa 0.11.0 llguidance 0.7.30 llvmlite 0.44.0 lm-format-enforcer 0.10.12 markdown-it-py 4.0.0 MarkupSafe 3.0.3 mdurl 0.1.2 mistral_common 1.8.5 mpmath 1.3.0 msgpack 1.1.2 msgspec 0.19.0 multidict 6.7.0 nest-asyncio 1.6.0 networkx 3.4.2 ninja 1.13.0 numba 0.61.2 numpy 2.2.6 nvidia-cublas-cu12 12.6.4.1 nvidia-cuda-cupti-cu12 12.6.80 nvidia-cuda-nvrtc-cu12 12.6.77 nvidia-cuda-runtime-cu12 12.6.77 nvidia-cudnn-cu12 9.5.1.17 nvidia-cufft-cu12 11.3.0.4 nvidia-cufile-cu12 1.11.1.6 nvidia-curand-cu12 10.3.7.77 nvidia-cusolver-cu12 11.7.1.2 nvidia-cusparse-cu12 12.5.4.2 nvidia-cusparselt-cu12 0.6.3 nvidia-nccl-cu12 2.26.2 nvidia-nvjitlink-cu12 12.6.85 nvidia-nvtx-cu12 12.6.77 openai 1.90.0 opencv-python-headless 4.12.0.88 orjson 3.11.4 outlines 0.1.11 outlines_core 0.1.26 packaging 25.0 pandas 2.3.3 partial-json-parser 0.2.1.1.post6 pillow 11.3.0 pip 25.2 platformdirs 4.5.0 pooch 1.8.2 prometheus_client 0.23.1 prometheus-fastapi-instrumentator 7.1.0 propcache
https://github.com/vllm-project/vllm/issues/28046
closed
[ "usage" ]
2025-11-04T13:59:57Z
2025-11-24T19:24:39Z
22
Tortoise17
vllm-project/vllm
28,045
[Doc]: Any detailed documentation about how to load_weights in customized vllm model?
### 📚 The doc issue I don't know how to modify the attention or how the model loading (`load_weights`) works. The documentation says too little, and I find it hard to understand. Does anyone have more detailed experience? Thank you! ### Suggest a potential alternative/fix _No response_ ### Before submitting a new issue... - [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
https://github.com/vllm-project/vllm/issues/28045
open
[ "documentation" ]
2025-11-04T13:23:25Z
2025-11-05T02:07:55Z
0
sleepwalker2017
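For the documentation question above, a simplified sketch of the load_weights pattern that many in-tree vLLM model files follow. The class name is a placeholder, and real models also remap checkpoint names onto fused/stacked parameters, which is omitted here.

```python
from collections.abc import Iterable

import torch
from torch import nn
from vllm.model_executor.model_loader.weight_utils import default_weight_loader


class MyModelForCausalLM(nn.Module):  # placeholder custom model
    def load_weights(self, weights: Iterable[tuple[str, torch.Tensor]]):
        # `weights` yields (checkpoint_name, tensor) pairs from the loader.
        params_dict = dict(self.named_parameters())
        loaded = set()
        for name, loaded_weight in weights:
            if name not in params_dict:
                continue  # e.g. weights this rank/model does not own
            param = params_dict[name]
            # Parallel layers attach a custom weight_loader that handles
            # sharding; plain parameters fall back to a simple copy.
            weight_loader = getattr(param, "weight_loader", default_weight_loader)
            weight_loader(param, loaded_weight)
            loaded.add(name)
        return loaded
```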
vllm-project/vllm
28,035
[Usage]: deepseek-ocr The output token count is too low and unstable.
### Your current environment ```text The output of `python collect_env.py` ``` ### How would you like to use vllm python3 -m vllm.entrypoints.openai.api_server --served-model-name deepseek-ocr --model deepseekocr --tensor-parallel-size 1 --gpu-memory-utilization 0.95 --disable-log-requests --logits_processors vllm.model_executor.models.deepseek_ocr:NGramPerReqLogitsProcessor { "model": "DeepSeek-OCR", "messages": [{ "role": "user", "content": [ { "type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{self.image_to_base64(image_path)}"} }, {"type": "text", "text": "<image>\nFree OCR."} ] }], "vllm_xargs": { "ngram_size": 30, "window_size": 100, "whitelist_token_ids": "[128821, 128822]" }, "temperature": 0.0, "max_tokens": 4096 } The response ends with "finish_reason":"stop" but "completion_tokens" is only around 200, so it cannot output the complete image content. ### Before submitting a new issue... - [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
https://github.com/vllm-project/vllm/issues/28035
open
[ "usage" ]
2025-11-04T09:50:53Z
2025-11-04T09:50:53Z
0
sixgod-666
vllm-project/vllm
28,031
[Usage]: Error: Failed to initialize the TMA descriptor 700
### Your current environment vllm0.11.0 to train Qwen3-vl-8B The following error message appears intermittently during training. ``` [36m(WorkerDict pid=82555) TMA Desc Addr: 0x7f4e2736b080 (WorkerDict pid=82555) format 9 (WorkerDict pid=82555) dim 4 (WorkerDict pid=82555) gmem_address 0xa9bdcd0000 (WorkerDict pid=82555) globalDim (128,415,2,1,1) (WorkerDict pid=82555) globalStrides (2,2048,1024,0,0) (WorkerDict pid=82555) boxDim (64,128,1,1,1) (WorkerDict pid=82555) elementStrides (1,1,1,1,1) (WorkerDict pid=82555) interleave 0 (WorkerDict pid=82555) swizzle 3 (WorkerDict pid=82555) l2Promotion 2 (WorkerDict pid=82555) oobFill 0 (WorkerDict pid=82555) Error: Failed to initialize the TMA descriptor 700 (WorkerDict pid=82555) TMA Desc Addr: 0x7f4e2736b080 (WorkerDict pid=82555) format 9 (WorkerDict pid=82555) dim 4 (WorkerDict pid=82555) gmem_address 0xa46a000000 (WorkerDict pid=82555) globalDim (128,16,2,61647,1) (WorkerDict pid=82555) globalStrides (2,512,256,8192,0) (WorkerDict pid=82555) boxDim (64,128,1,1,1) (WorkerDict pid=82555) elementStrides (1,1,1,1,1) (WorkerDict pid=82555) interleave 0 (WorkerDict pid=82555) swizzle 3 (WorkerDict pid=82555) l2Promotion 2 (WorkerDict pid=82555) oobFill 0 (WorkerDict pid=82555) Error: Failed to initialize the TMA descriptor 700 (WorkerDict pid=82555) TMA Desc Addr: 0x7f4e2736b080 (WorkerDict pid=82555) format 9 (WorkerDict pid=82555) dim 4 (WorkerDict pid=82555) gmem_address 0xa48819e000 (WorkerDict pid=82555) globalDim (128,16,2,61647,1) (WorkerDict pid=82555) globalStrides (2,512,256,8192,0) (WorkerDict pid=82555) boxDim (64,128,1,1,1) (WorkerDict pid=82555) elementStrides (1,1,1,1,1) (WorkerDict pid=82555) interleave 0 (WorkerDict pid=82555) swizzle 3 (WorkerDict pid=82555) l2Promotion 2 (WorkerDict pid=82555) oobFill 0 (WorkerDict pid=82555) Error: Failed to initialize the TMA descriptor 700 (WorkerDict pid=82555) TMA Desc Addr: 0x7f4e2736b080 (WorkerDict pid=82555) format 9 (WorkerDict pid=82555) dim 4 (WorkerDict pid=82555) gmem_address 0xa46a000000 (WorkerDict pid=82555) globalDim (128,16,2,61647,1) (WorkerDict pid=82555) globalStrides (2,512,256,8192,0) (WorkerDict pid=82555) boxDim (64,128,1,1,1) (WorkerDict pid=82555) elementStrides (1,1,1,1,1) (WorkerDict pid=82555) interleave 0 (WorkerDict pid=82555) swizzle 3 (WorkerDict pid=82555) l2Promotion 2 (WorkerDict pid=82555) oobFill 0 (WorkerDict pid=82555) Error: Failed to initialize the TMA descriptor 700 (WorkerDict pid=82555) TMA Desc Addr: 0x7f4e2736b080 (WorkerDict pid=82555) format 9 (WorkerDict pid=82555) dim 4 (WorkerDict pid=82555) gmem_address 0xa48819e000 (WorkerDict pid=82555) globalDim (128,16,2,61647,1) (WorkerDict pid=82555) globalStrides (2,512,256,8192,0) (WorkerDict pid=82555) boxDim (64,128,1,1,1) (WorkerDict pid=82555) elementStrides (1,1,1,1,1) (WorkerDict pid=82555) interleave 0 (WorkerDict pid=82555) swizzle 3 (WorkerDict pid=82555) l2Promotion 2 (WorkerDict pid=82555) oobFill 0 (WorkerDict pid=82555) Error: Failed to initialize the TMA descriptor 700 (WorkerDict pid=82555) CUDA error (/workspace/.deps/vllm-flash-attn-src/hopper/flash_fwd_launch_template.h:191): an illegal memory access was encountered (WorkerDict pid=82558) l2Promotion 2 (WorkerDict pid=82558) l2Promotion 2 (WorkerDict pid=82558) l2Promotion 2 (WorkerDict pid=82558) l2Promotion 2 (WorkerDict pid=82558) l2Promotion 2 ``` then the error message below is being repeated, but training has not stopped. 
``` [36m(WorkerDict pid=134586) [rank7]:[W1104 07:52:01.751088784 TCPStore.cpp:125] [c10d] recvValue failed on SocketImpl(fd=90, addr=[train-kubeflow-72-46805-20251104102107-master-0]:49384, remote=[train-kubeflow-72-46805-20251104102107-master-0]:32991): Connection reset by peer [repeated 6x across cluster] (WorkerDict pid=134586) Exception raised from recvBytes at /pytorch/torch/csrc/distributed/c10d/Utils.hpp:679 (most recent call first): [repeated 6x across cluster] (WorkerDict pid=134580) frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::ba
https://github.com/vllm-project/vllm/issues/28031
open
[ "usage" ]
2025-11-04T08:13:45Z
2025-12-11T08:18:15Z
4
DBMing
vllm-project/vllm
28,016
[Usage]: How to recognize PDFs in DeepSeek-OCR with openai
### Your current environment ``` vllm serve deepseek-ai/DeepSeek-OCR --logits_processors vllm.model_executor.models.deepseek_ocr.NGramPerReqLogitsProcessor --no-enable-prefix-caching --mm-processor-cache-gb 0 ``` ### How would you like to use vllm How to recognize PDFs and convert PDFs to Markdown with DeepSeek-OCR via an OpenAI-compatible API? ### Before submitting a new issue... - [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
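The OpenAI-compatible chat endpoint accepts images rather than PDFs, so a common client-side pattern is to rasterize each page and send it as a base64 image. The sketch below is only that pattern, not an official DeepSeek-OCR recipe: it assumes `pdf2image` (with poppler installed), the serve command above running on localhost:8000, and a placeholder prompt that may need to be adapted to the model's expected OCR prompt.

```python
# Minimal client-side sketch: rasterize each PDF page to PNG and send it to vLLM's
# OpenAI-compatible /v1/chat/completions endpoint, collecting the per-page output.
import base64
import io

from openai import OpenAI
from pdf2image import convert_from_path

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

def pdf_to_markdown(pdf_path: str, model: str = "deepseek-ai/DeepSeek-OCR") -> str:
    pages_md = []
    for page in convert_from_path(pdf_path, dpi=200):
        buf = io.BytesIO()
        page.save(buf, format="PNG")
        b64 = base64.b64encode(buf.getvalue()).decode()
        resp = client.chat.completions.create(
            model=model,
            messages=[{
                "role": "user",
                "content": [
                    {"type": "image_url",
                     "image_url": {"url": f"data:image/png;base64,{b64}"}},
                    # Placeholder prompt; adjust to the OCR prompt the model expects.
                    {"type": "text", "text": "Convert the document to markdown."},
                ],
            }],
        )
        pages_md.append(resp.choices[0].message.content)
    return "\n\n".join(pages_md)
```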
https://github.com/vllm-project/vllm/issues/28016
open
[ "usage" ]
2025-11-04T03:35:38Z
2025-11-04T07:33:07Z
2
shoted
vllm-project/vllm
28,003
[Usage]:
### Your current environment ```text Collecting environment information... ============================== System Info ============================== OS : Ubuntu 22.04.5 LTS (x86_64) GCC version : (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 Clang version : Could not collect CMake version : version 4.1.0 Libc version : glibc-2.35 ============================== PyTorch Info ============================== PyTorch version : 2.8.0+cu128 Is debug build : False CUDA used to build PyTorch : 12.8 ROCM used to build PyTorch : N/A ============================== Python Environment ============================== Python version : 3.12.11 (main, Jun 4 2025, 08:56:18) [GCC 11.4.0] (64-bit runtime) Python platform : Linux-6.8.0-54-generic-x86_64-with-glibc2.35 ============================== CUDA / GPU Info ============================== Is CUDA available : True CUDA runtime version : 12.8.93 CUDA_MODULE_LOADING set to : LAZY GPU models and configuration : GPU 0: NVIDIA H100 NVL Nvidia driver version : 570.86.10 cuDNN version : Could not collect HIP runtime version : N/A MIOpen runtime version : N/A Is XNNPACK available : True ============================== CPU Info ============================== Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Address sizes: 52 bits physical, 57 bits virtual Byte Order: Little Endian CPU(s): 48 On-line CPU(s) list: 0-47 Vendor ID: AuthenticAMD Model name: AMD EPYC 9654 96-Core Processor CPU family: 25 Model: 17 Thread(s) per core: 1 Core(s) per socket: 1 Socket(s): 48 Stepping: 1 BogoMIPS: 4799.59 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl cpuid extd_apicid pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw perfctr_core ssbd ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx512_bf16 clzero xsaveerptr wbnoinvd arat npt lbrv nrip_save tsc_scale vmcb_clean flushbyasid pausefilter pfthreshold v_vmsave_vmload vgif vnmi avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq la57 rdpid fsrm flush_l1d arch_capabilities Virtualization: AMD-V L1d cache: 3 MiB (48 instances) L1i cache: 3 MiB (48 instances) L2 cache: 24 MiB (48 instances) L3 cache: 768 MiB (48 instances) NUMA node(s): 1 NUMA node0 CPU(s): 0-47 Vulnerability Gather data sampling: Not affected Vulnerability Itlb multihit: Not affected Vulnerability L1tf: Not affected Vulnerability Mds: Not affected Vulnerability Meltdown: Not affected Vulnerability Mmio stale data: Not affected Vulnerability Reg file data sampling: Not affected Vulnerability Retbleed: Not affected Vulnerability Spec rstack overflow: Vulnerable: Safe RET, no microcode Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected Vulnerability Srbds: Not affected Vulnerability Tsx async abort: Not affected ============================== Versions of relevant libraries ============================== [pip3] 
flashinfer-python==0.3.1 [pip3] numpy==2.2.6 [pip3] nvidia-cublas-cu12==12.8.4.1 [pip3] nvidia-cuda-cupti-cu12==12.8.90 [pip3] nvidia-cuda-nvrtc-cu12==12.8.93 [pip3] nvidia-cuda-runtime-cu12==12.8.90 [pip3] nvidia-cudnn-cu12==9.10.2.21 [pip3] nvidia-cudnn-frontend==1.14.1 [pip3] nvidia-cufft-cu12==11.3.3.83 [pip3] nvidia-cufile-cu12==1.13.1.3 [pip3] nvidia-curand-cu12==10.3.9.90 [pip3] nvidia-cusolver-cu12==11.7.3.90 [pip3] nvidia-cusparse-cu12==12.5.8.93 [pip3] nvidia-cuspa
https://github.com/vllm-project/vllm/issues/28003
open
[ "usage" ]
2025-11-03T21:19:15Z
2025-11-26T15:32:40Z
1
amitmvyas
pytorch/ao
3,281
[moe training] Update torchao docsite with MoE training docs
Currently the MoE training docs live in this [README](https://github.com/pytorch/ao/blob/main/torchao/prototype/moe_training/README.md). To make the prototype more discoverable and usable, we should: 1. Update the [docsite](https://docs.pytorch.org/ao/stable/index.html) 2. Update the torchtitan docs with examples for mxfp8 MoE training
https://github.com/pytorch/ao/issues/3281
open
[ "topic: documentation", "moe" ]
2025-11-03T18:34:01Z
2025-11-03T18:34:10Z
0
danielvegamyhre
vllm-project/vllm
27,995
[RFC]: Make PassConfig flags less verbose
### Motivation. Almost all `PassConfig` field names have `enable_` in the name, which is unnecessarily verbose. They are also pretty long, and sometimes not descriptive enough. Finally, `enable_fusion` should be split into rmsnorm+quant and activation+quant flags as we want to control these flags separately. ### Proposed Change. We should rename the flags: - `enable_async_tp` -> `fuse_gemm_comms` - `enable_attn_fusion` -> `fuse_attn_quant` - `enable_fi_allreduce_fusion` -> `fuse_allreduce_rms` - `enable_fusion` -> `fuse_norm_quant`, `fuse_act_quant` - `enable_noop` -> `eliminate_noops` - `enable_sequence_parallelism` -> `enable_sp` For future RoPE-based fusion passes, the flags will look like: - `enable_qknorm_rope_fusion` -> `fuse_qknorm_rope` - `enable_rope_cache_fusion` -> `fuse_rope_cache` - ... We can deprecate the original flags in the next release and map them to the new ones, and remove them 1 or even 2 releases later (shouldn't be hard to support). These flags will be used less commonly after `-O` optimization levels land anyway. ### Feedback Period. 1 week, 11/3 - 11/7 ### CC List. @zou3519 @youkaichao @mgoin @ilmarkov @nvpohanh @pavanimajety ### Any Other Things. With passes following a common construction convention, we can also add a `full_pass_pipeline` arg where users can control the exact order of the passes if necessary, but that is less likely to be needed urgently and can be added later. ### Before submitting a new issue... - [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
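To make the deprecation path concrete, here is a purely illustrative sketch (not vLLM's actual implementation) of how the old flag names from the table above could be remapped to the new ones during the transition window:

```python
# Illustrative only: accept deprecated PassConfig flag names and map them to the
# new names proposed in the RFC, warning the user once per remapped flag.
import warnings

_RENAMED_FLAGS = {
    "enable_async_tp": "fuse_gemm_comms",
    "enable_attn_fusion": "fuse_attn_quant",
    "enable_fi_allreduce_fusion": "fuse_allreduce_rms",
    "enable_noop": "eliminate_noops",
    "enable_sequence_parallelism": "enable_sp",
}

def remap_pass_config(kwargs: dict) -> dict:
    """Translate deprecated flag names; `enable_fusion` fans out to two new flags."""
    out = dict(kwargs)
    for old, new in _RENAMED_FLAGS.items():
        if old in out:
            warnings.warn(f"PassConfig.{old} is deprecated; use {new}", DeprecationWarning)
            out[new] = out.pop(old)
    if "enable_fusion" in out:
        value = out.pop("enable_fusion")
        warnings.warn(
            "PassConfig.enable_fusion is deprecated; use fuse_norm_quant / fuse_act_quant",
            DeprecationWarning,
        )
        out.setdefault("fuse_norm_quant", value)
        out.setdefault("fuse_act_quant", value)
    return out
```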
https://github.com/vllm-project/vllm/issues/27995
closed
[ "help wanted", "good first issue", "RFC", "torch.compile" ]
2025-11-03T17:49:29Z
2025-12-03T19:53:01Z
7
ProExpertProg
huggingface/peft
2,888
Potential remote code execution via untrusted tokenizer_kwargs in PromptEmbedding
### Description A remote code execution vector exists in the PEFT prompt-tuning flow. A remote `adapter_config.json` can inject loader kwargs that are forwarded to `AutoTokenizer.from_pretrained` calls. If an attacker sets `"tokenizer_kwargs": {"trust_remote_code": true}` and points `tokenizer_name_or_path` at an attacker-controlled repo, constructing the prompt embedding will cause `AutoTokenizer.from_pretrained(...)` to import and run code from that repo. This happens during normal initialization and requires no further user interaction. ### Root Cause `PromptEmbedding` trusts and forwards fields from config into `AutoTokenizer.from_pretrained` without validating or sanitizing them: https://github.com/huggingface/peft/blob/30a19a08f9ef85ce1095b9ac69e78269121525e2/src/peft/tuners/prompt_tuning/model.py#L78-L84 ### Impact This issue turns remote configuration files into attack vectors. Any user who loads a malicious adapter config can have arbitrary code executed on their machine. The compromise is silent, requires no extra user action beyond `from_pretrained`, and is easy to weaponize by publishing a seemingly legitimate config that explicitly set `trust_remote_code=True` and points to attacker code. Consequences include command execution, credential and data theft, file tampering, and worm infection if environment tokens or write permissions are present. This should be fixed urgently by treating config-supplied kwargs as untrusted: filter or reject sensitive parameters such as `trust_remote_code`. ### Who can help? @benjaminbossan @githubnemo ### Reproduction A malicious remote config can look like: ```json { "base_model_name_or_path": "XManFromXlab/peft-prompt-embedding-rce", "tokenizer_name_or_path": "XManFromXlab/peft-prompt-embedding-rce" "tokenizer_kwargs": { "trust_remote_code": true } } ``` When users are attracted to the repo and use peft to load the config from remote repo ```python from peft import PromptEmbedding, PromptTuningConfig from transformers import AutoModelForSeq2SeqLM t5_model = AutoModelForSeq2SeqLM.from_pretrained("t5-base") example_model = "XManFromXlab/peft-prompt-embedding-rce" config = PromptTuningConfig.from_pretrained(example_model, trust_remote_code=False) prompt_embedding = PromptEmbedding(config, t5_model.shared) ``` During `PromptEmbedding` initialization the code reads `tokenizer_kwargs` from the remote config and calls `AutoTokenizer.from_pretrained(config.tokenizer_name_or_path, **tokenizer_kwargs)`. Because `trust_remote_code` was injected via the config, the loader imports and executes the attacker’s backend code, demonstrating RCE. ### Expected behavior In my example, the above code will print the message 'Execute Malicious Payload!!!!!!', which indicates the execution of malicious scripts. ```bash $ python3 main.py Execute Malicious Payload!!!!!! Execute Malicious Payload!!!!!! Execute Malicious Payload!!!!!! ```
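A hypothetical mitigation along the lines the report recommends (treat config-supplied kwargs as untrusted and strip security-sensitive keys before forwarding them). This is not PEFT's actual patch, and the helper name is made up:

```python
# Hypothetical mitigation sketch, not PEFT's actual fix: drop security-sensitive
# keys from adapter-config-supplied tokenizer kwargs before forwarding them.
from transformers import AutoTokenizer

_BLOCKED_KEYS = {"trust_remote_code"}  # extend as needed

def load_tokenizer_safely(tokenizer_name_or_path, tokenizer_kwargs=None):
    kwargs = dict(tokenizer_kwargs or {})
    dropped = sorted(_BLOCKED_KEYS & kwargs.keys())
    for key in dropped:
        kwargs.pop(key)
    if dropped:
        print(f"Ignoring untrusted tokenizer kwargs from adapter config: {dropped}")
    # trust_remote_code is now always the caller's explicit choice, never the config's.
    return AutoTokenizer.from_pretrained(tokenizer_name_or_path, **kwargs)
```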
https://github.com/huggingface/peft/issues/2888
closed
[]
2025-11-03T16:04:52Z
2025-11-04T17:50:28Z
3
Vancir
pytorch/pytorch
166,866
ROCm failures during provisioning step due to network issues
## Current Status Mitigated MI250 Cirrascale cluster had a network outage causing jobs to fail ## Error looks like Error during Set up job: ``` Download action repository 'pytorch/pytorch@main' (SHA:335b5c7d4bf3295d517902370142f007ca024cd0) Warning: Failed to download action 'https://api.github.com/repos/pytorch/pytorch/tarball/335b5c7d4bf3295d517902370142f007ca024cd0'. Error: The request was canceled due to the configured HttpClient.Timeout of 100 seconds elapsing. Warning: Back off 14.448 seconds before retry. Warning: Failed to download action 'https://api.github.com/repos/pytorch/pytorch/tarball/335b5c7d4bf3295d517902370142f007ca024cd0'. Error: The request was canceled due to the configured HttpClient.Timeout of 100 seconds elapsing. Warning: Back off 28.951 seconds before retry. Error: Action 'https://api.github.com/repos/pytorch/pytorch/tarball/335b5c7d4bf3295d517902370142f007ca024cd0' download has timed out. Error: The request was canceled due to the configured HttpClient.Timeout of 100 seconds elapsing. ``` ## Incident timeline (all times pacific) Started - 11PM PST, Nov 2 Reduce the frequency of MI2xx-based workflows - rocm.yml and inductor-rocm.yml - to once every hour - 9:33AM PST, Nov 3 Lower runner check gpu count for distributed jobs - 10:44AM PST, Nov 4 ## User impact Multiple ROCm related failures ## Root cause Network issues on MI250 Cirrascale cluster ## Mitigation *How did we mitigate the issue?* Since the networking issues were taking too long to resolve, we decided to reduce/move the workloads to the other MI2xx nodes if possible: * Reduce the frequency of MI2xx-based workflows - rocm.yml and inductor-rocm.yml - to once every hour: https://github.com/pytorch/pytorch/pull/166870 * Allow distributed jobs to run on 2-GPU MI2xx nodes: https://github.com/pytorch/pytorch/pull/166961 ## Prevention/followups *How do we prevent issues like this in the future?* We will try to implement more monitoring of metrics such as network speed at the cluster level to catch such issues faster before they impact PyTorch CI more widely. cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
https://github.com/pytorch/pytorch/issues/166866
closed
[ "module: rocm", "ci: sev" ]
2025-11-03T15:57:42Z
2025-11-04T23:54:15Z
5
atalman
huggingface/lerobot
2,371
Memory increases continuously during Groot training
### System Info ```Shell - lerobot version: 0.4.1 - Platform: Linux-5.4.250-2-velinux1u3-amd64-x86_64-with-glibc2.31 - Python version: 3.10.15 - Huggingface Hub version: 0.35.3 - Datasets version: 4.1.1 - Numpy version: 2.1.3 - PyTorch version: 2.7.1+cu126 - Is PyTorch built with CUDA support?: True - Cuda version: 12.6 - GPU model: NVIDIA GeForce RTX 4090 - Using GPU in script?: <fill in> ``` ### Information - [ ] One of the scripts in the examples/ folder of LeRobot - [ ] My own task or dataset (give details below) ### Reproduction run ` lerobot-train \ --output_dir=$OUTPUT_DIR \ --save_checkpoint=true \ --batch_size=64 \ --steps=10000 \ --save_freq=1000 \ --log_freq=100 \ --policy.push_to_hub=false \ --policy.type=groot \ --dataset.repo_id=$DATASET_ID \ --dataset.root=$DATASET_ROOT_DIR \ --dataset.streaming=false \ --dataset.image_transforms.enable=true \ --wandb.enable=true \ --wandb.mode=offline \ --wandb.project=groot_test \ --job_name=$JOB_NAME \` ### Expected behavior memory increase until out of memory
https://github.com/huggingface/lerobot/issues/2371
open
[ "question", "policies", "performance" ]
2025-11-03T14:38:52Z
2025-12-31T13:17:11Z
null
caoran2025
pytorch/torchtitan
1,979
question of PP x aux_loss for MoE
In short, does PP allow multiple-args input and multiple-args output? —— Hey, we’ve been stuck for a while on how to properly integrate aux loss for MoE training with PP and compile(full_graph). For context, both DeepSeek V3 and GLM 4.5 mention that > “We also applied an auxiliary sequence-level balance loss with a 0.0001 weight to avoid extreme imbalance within any single sequence.” (We could open a PR for the sequence-level balance loss if you’re interested.) To make this work, we need to compute the extra loss at each block, either by: - Caching the per-layer aux_loss loss (which breaks compile, but not PP), or - Passing both activations and aux_loss to the next PP stage (which doesn’t affect compile). The second option basically requires the PP API to support multiple-args input and output. We tried earlier this year to explicitly pass arguments when building PP stages, but it didn’t work. I’m wondering if there have been any updates since then, or if we might have missed something. Do you have any other suggestions or better solutions? @tianyu-l @H-Huang CC: @janEbert @garrett361
https://github.com/pytorch/torchtitan/issues/1979
open
[]
2025-11-03T13:37:44Z
2025-11-20T02:22:30Z
13
rakkit
vllm-project/vllm
27,982
[Usage]: How can I access or return hidden states (representations) after generation?
### Your current environment In my training pipeline (GRPO), I need to access hidden-state representations of all layers and store prompt representations alongside generated sequences. Is there any supported way to extract or return hidden states from the vLLM inference engine? Environment vllm==0.11.0 Python 3.12 ### How would you like to use vllm ### Before submitting a new issue... - [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
https://github.com/vllm-project/vllm/issues/27982
open
[ "usage" ]
2025-11-03T13:01:51Z
2025-11-04T03:07:40Z
1
hakbari14
huggingface/lerobot
2,368
Release 0.5.0
A Github Issue created for the upcoming release to discuss the planned features & changes: * Audio PR #967 * Bump transformers dependency to +v5
https://github.com/huggingface/lerobot/issues/2368
open
[ "bug", "question", "dependencies" ]
2025-11-03T12:46:51Z
2025-12-24T00:08:16Z
null
imstevenpmwork
vllm-project/vllm
27,981
[Usage]: How to specify max_pixels for Qwen2.5-VL
### Your current environment As the title says: I tried ``--mm-processor-kwargs {"max_pixels": $MAX_PIXELS}``, but it had no effect. ### How would you like to use vllm I want to run inference of a [specific model](put link here). I don't know how to integrate it with vllm. ### Before submitting a new issue... - [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
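For reference, a hedged sketch of the offline-inference equivalent; the checkpoint name and pixel bounds below are illustrative, not a verified fix for the report above. When using the CLI flag instead, one common pitfall is shell quoting: the JSON needs to be passed as a single quoted argument containing literal integers, e.g. `--mm-processor-kwargs '{"min_pixels": 200704, "max_pixels": 1003520}'`.

```python
# Sketch: mm_processor_kwargs is the programmatic counterpart of
# --mm-processor-kwargs; the values here are placeholders.
from vllm import LLM, SamplingParams

llm = LLM(
    model="Qwen/Qwen2.5-VL-7B-Instruct",   # placeholder checkpoint
    mm_processor_kwargs={
        "min_pixels": 256 * 28 * 28,       # lower bound on image resolution
        "max_pixels": 1280 * 28 * 28,      # upper bound on image resolution
    },
)
print(llm.generate("Describe the image.", SamplingParams(max_tokens=8)))
```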
https://github.com/vllm-project/vllm/issues/27981
open
[ "usage" ]
2025-11-03T12:38:34Z
2025-11-04T08:19:54Z
3
aJupyter
huggingface/accelerate
3,829
Does Accelerate automatically set the DataLoader’s sampler to a DistributedSampler?
```python from accelerate import Accelerator accelerator = Accelerator() device = accelerator.device model, optimizer, training_dataloader, scheduler = accelerator.prepare( model, optimizer, training_dataloader, scheduler ) for batch in training_dataloader: optimizer.zero_grad() inputs, targets = batch outputs = model(inputs) loss = loss_function(outputs, targets) accelerator.backward(loss) optimizer.step() scheduler.step() ``` We know that in PyTorch DDP training the DataLoader must use torch.utils.data.DistributedSampler. In this code, when using Accelerate, do we need to manually set DistributedSampler when constructing the `training_dataloader`, or will Accelerate automatically modify the dataloader’s sampler to support DDP later? (In other words, when we build the dataloader for Accelerate, can we completely ignore DistributedSampler and just leave it as we would for single‑GPU training?)
https://github.com/huggingface/accelerate/issues/3829
closed
[]
2025-11-03T07:17:29Z
2025-12-16T15:09:43Z
2
caixxiong
vllm-project/vllm
27,957
[Usage]: What is the difference between embedding task and pooler task?
### Your current environment Any document about this? ### How would you like to use vllm I want to run inference of a [specific model](put link here). I don't know how to integrate it with vllm. ### Before submitting a new issue... - [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
https://github.com/vllm-project/vllm/issues/27957
closed
[ "usage" ]
2025-11-03T03:38:39Z
2025-11-03T10:20:18Z
1
sleepwalker2017
vllm-project/vllm
27,949
[Usage]: How do I deploy GGUF models with vLLM via Docker correct?
### Your current environment ```text The output of `python collect_env.py` ``` Here is the output from `sudo python3 collect_env.py` ``` Traceback (most recent call last): File "/export/nvme/vllm/collect_env.py", line 18, in <module> import regex as re ModuleNotFoundError: No module named 'regex' ``` ### How would you like to use vllm I am using an Ubuntu 22.04 LTS LXC in Proxmox. I have Docker installed. I downloaded `https://huggingface.co/unsloth/DeepSeek-R1-Distill-Llama-70B-GGUF/resolve/main/DeepSeek-R1-Distill-Llama-70B-Q4_K_M.gguf?download=true` to `/export/nvme/huggingface/DeepSeek-R1-Distill-Llama-70B-GGUF/DeepSeek-R1-Distill-Llama-70B-Q4_K_M.gguf` via `wget`. The command that I am trying to use to start said Docker container is: ``` sudo docker run --runtime nvidia --gpus all \ --name vllm \ -v /export/nvme/huggingface/DeepSeek-R1-Distill-Llama-70B-Q4_K_M-GGUF:/root/.cache/huggingface/DeepSeek-R1-Distill-Llama-70B-Q4_K_M-GGUF \ -v /export/nvme/vllm:/export/nvme/vllm \ -e TRANSFORMERS_OFFLINE=1 \ --shm-size=16G \ -v /dev/shm:/dev/shm \ -p 0.0.0.0:8000:8000 \ --security-opt apparmor:unconfined \ vllm/vllm-openai:v0.8.5 \ --model /root/.cache/huggingface/DeepSeek-R1-Distill-Llama-70B-Q4_K_M-GGUF/DeepSeek-R1-Distill-Llama-70B-Q4_K_M.gguf \ --tokenizer /root/.cache/huggingface/DeepSeek-R1-Distill-Llama-70B \ --tensor-parallel-size 2 \ --max-model-len=32K \ --chat-template=/export/nvme/vllm/examples/tool_chat_template_deepseekr1.jinja ``` But this is the error message that I get: ``` INFO 11-02 15:21:55 [__init__.py:239] Automatically detected platform cuda. INFO 11-02 15:21:59 [api_server.py:1043] vLLM API server version 0.8.5 INFO 11-02 15:21:59 [api_server.py:1044] args: Namespace(host=None, port=8000, uvicorn_log_level='info', disable_uvicorn_access_log=False, allow_credentials=False, allowed_origins=['*'], allowed_methods=['*'], allowed_headers=['*'], api_key=None, lora_modules=None, prompt_adapters=None, chat_template='/export/nvme/vllm/examples/tool_chat_template_deepseekr1.jinja', chat_template_content_format='auto', response_role='assistant', ssl_keyfile=None, ssl_certfile=None, ssl_ca_certs=None, enable_ssl_refresh=False, ssl_cert_reqs=0, root_path=None, middleware=[], return_tokens_as_token_ids=False, disable_frontend_multiprocessing=False, enable_request_id_headers=False, enable_auto_tool_choice=False, tool_call_parser=None, tool_parser_plugin='', model='/root/.cache/huggingface/DeepSeek-R1-Distill-Llama-70B-Q4_K_M-GGUF/DeepSeek-R1-Distill-Llama-70B-Q4_K_M.gguf', task='auto', tokenizer='/root/.cache/huggingface/DeepSeek-R1-Distill-Llama-70B', hf_config_path=None, skip_tokenizer_init=False, revision=None, code_revision=None, tokenizer_revision=None, tokenizer_mode='auto', trust_remote_code=False, allowed_local_media_path=None, load_format='auto', download_dir=None, model_loader_extra_config={}, use_tqdm_on_load=True, config_format=<ConfigFormat.AUTO: 'auto'>, dtype='auto', max_model_len=32768, guided_decoding_backend='auto', reasoning_parser=None, logits_processor_pattern=None, model_impl='auto', distributed_executor_backend=None, pipeline_parallel_size=1, tensor_parallel_size=2, data_parallel_size=1, enable_expert_parallel=False, max_parallel_loading_workers=None, ray_workers_use_nsight=False, disable_custom_all_reduce=False, block_size=None, gpu_memory_utilization=0.9, swap_space=4, kv_cache_dtype='auto', num_gpu_blocks_override=None, enable_prefix_caching=None, prefix_caching_hash_algo='builtin', cpu_offload_gb=0, calculate_kv_scales=False, disable_sliding_window=False, 
use_v2_block_manager=True, seed=None, max_logprobs=20, disable_log_stats=False, quantization=None, rope_scaling=None, rope_theta=None, hf_token=None, hf_overrides=None, enforce_eager=False, max_seq_len_to_capture=8192, tokenizer_pool_size=0, tokenizer_pool_type='ray', tokenizer_pool_extra_config={}, limit_mm_per_prompt={}, mm_processor_kwargs=None, disable_mm_preprocessor_cache=False, enable_lora=None, enable_lora_bias=False, max_loras=1, max_lora_rank=16, lora_extra_vocab_size=256, lora_dtype='auto', long_lora_scaling_factors=None, max_cpu_loras=None, fully_sharded_loras=False, enable_prompt_adapter=None, max_prompt_adapters=1, max_prompt_adapter_token=0, device='auto', speculative_config=None, ignore_patterns=[], served_model_name=None, qlora_adapter_name_or_path=None, show_hidden_metrics_for_version=None, otlp_traces_endpoint=None, collect_detailed_traces=None, disable_async_output_proc=False, max_num_batched_tokens=None, max_num_seqs=None, max_num_partial_prefills=1, max_long_partial_prefills=1, long_prefill_token_threshold=0, num_lookahead_slots=0, scheduler_delay_factor=0.0, preemption_mode=None, num_scheduler_steps=1, multi_step_stream_outputs=True, scheduling_policy='fcfs', enable_chunked_prefill=None, disable_chunked_mm_input=False, scheduler_cls='vllm.core.scheduler.Scheduler', override_neuron_config=None, override_pooler_config=None, compilati
https://github.com/vllm-project/vllm/issues/27949
open
[ "usage" ]
2025-11-02T23:33:49Z
2025-11-02T23:36:44Z
1
alpha754293
huggingface/xet-core
549
How to get the "Xet backed hash"?
Hi, On HuggingFace, every page has a "Xet backed hash" (I've attached an example below) and I am trying to figure out how to compute that locally. I've read the documentation and it says there are 4 different types of hashes, but it's not really clear how a "Xet backed hash" is calculated. So I was just wondering if you could tell me how I can get the "Xet backed hash" for a local file? Thank you for your time. <img width="630" height="308" alt="Image" src="https://github.com/user-attachments/assets/9fad42a3-e15b-4734-b57a-a769b5b77577" />
https://github.com/huggingface/xet-core/issues/549
closed
[]
2025-11-02T09:40:39Z
2025-11-06T16:20:25Z
null
arch-btw
huggingface/lerobot
2,360
diffusion transformer
Has anyone replaced the diffusion UNet with a DiT (Diffusion Transformer) in lerobot?
https://github.com/huggingface/lerobot/issues/2360
open
[ "question", "policies" ]
2025-11-02T09:05:30Z
2025-11-12T09:01:59Z
null
Benxiaogu
vllm-project/vllm
27,928
[Bug]: What happened to /get_world_size ?
### Your current environment vllm 0.11.0 trl 0.24.0 python 3.12 linux amd64 ### 🐛 Describe the bug TRL is expecting a `/get_world_size` route https://github.com/huggingface/trl/blob/main/trl/extras/vllm_client.py#L279 for its GRPO trainer. That gives a 404 on the latest version of vLLM. Was this changed to another route? I can't seem to find it ### Before submitting a new issue... - [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
https://github.com/vllm-project/vllm/issues/27928
open
[ "bug" ]
2025-11-01T22:56:45Z
2025-11-03T02:42:14Z
1
pbarker-synth
huggingface/lerobot
2,356
AsyncInference only running one action chunk
I have my SO101 arms connected to my computer, and I'm running an asynchronous server on a cloud GPU with a RTX 4090. When I start running Pi0.5, the model is loaded and the SO101 makes its first move by setting the robot to be at its middle position, but then no further actions are made although the server logs new observations and action sequences being generated. The robot moves to this position and doesn't move further: <img width="332" height="413" alt="Image" src="https://github.com/user-attachments/assets/0499680b-4072-4c90-acda-e4fc1af18e64" /> I have one wrist camera and one top-down view camera. Here is my client command: ``` python3 -m lerobot.async_inference.robot_client \ --server_address=ip:port \ --robot.type=so101_follower \ --robot.port=/dev/ttyACM0 \ --robot.id=arm \ --robot.cameras="{ base_0_rgb: {type: opencv, index_or_path: \"/dev/video2\", width: 640, height: 480, fps: 30}, left_wrist_0_rgb: {type: opencv, index_or_path: \"/dev/video0\", width: 640, height: 480, fps: 30}}" \ --policy_device=cuda \ --aggregate_fn_name=weighted_average \ --debug_visualize_queue_size=True \ --task="Pick up the orange and place it on the plate" \ --policy_type=pi05 \ --pretrained_name_or_path=lerobot/pi05_base \ --actions_per_chunk=50 \ --chunk_size_threshold=0.0 \ --debug_visualize_queue_size=True ``` Here are my server logs: ``` (lerobot) root@eff66f201198:/workspace/arm-x64# ./robot.sh runpod async-server INFO 2025-11-01 20:17:34 y_server.py:421 {'fps': 30, 'host': '0.0.0.0', 'inference_latency': 0.03333333333333333, 'obs_queue_timeout': 2, 'port': 8080} INFO 2025-11-01 20:17:34 y_server.py:431 PolicyServer started on 0.0.0.0:8080 INFO 2025-11-01 20:18:03 y_server.py:112 Client ipv4:129.97.131.28:23025 connected and ready INFO 2025-11-01 20:18:03 y_server.py:138 Receiving policy instructions from ipv4:129.97.131.28:23025 | Policy type: pi05 | Pretrained name or path: lerobot/pi05_base | Actions per chunk: 50 | Device: cuda The PI05 model is a direct port of the OpenPI implementation. This implementation follows the original OpenPI structure for compatibility. Original implementation: https://github.com/Physical-Intelligence/openpi INFO 2025-11-01 20:18:03 ils/utils.py:43 Cuda backend detected, using cuda. WARNING 2025-11-01 20:18:03 /policies.py:82 Device 'mps' is not available. Switching to 'cuda'. INFO 2025-11-01 20:18:03 ils/utils.py:43 Cuda backend detected, using cuda. WARNING 2025-11-01 20:18:03 /policies.py:82 Device 'mps' is not available. Switching to 'cuda'. 
Loading model from: lerobot/pi05_base ✓ Loaded state dict from model.safetensors WARNING 2025-11-01 20:19:08 ng_pi05.py:1023 Vision embedding key might need handling: paligemma_with_expert.paligemma.model.vision_tower.vision_model.embeddings.patch_embedding.bias WARNING 2025-11-01 20:19:08 ng_pi05.py:1023 Vision embedding key might need handling: paligemma_with_expert.paligemma.model.vision_tower.vision_model.embeddings.patch_embedding.weight Remapped: action_in_proj.bias -> model.action_in_proj.bias Remapped: action_in_proj.weight -> model.action_in_proj.weight Remapped: action_out_proj.bias -> model.action_out_proj.bias Remapped: action_out_proj.weight -> model.action_out_proj.weight Remapped: paligemma_with_expert.gemma_expert.lm_head.weight -> model.paligemma_with_expert.gemma_expert.lm_head.weight Remapped: paligemma_with_expert.gemma_expert.model.layers.0.input_layernorm.dense.bias -> model.paligemma_with_expert.gemma_expert.model.layers.0.input_layernorm.dense.bias Remapped: paligemma_with_expert.gemma_expert.model.layers.0.input_layernorm.dense.weight -> model.paligemma_with_expert.gemma_expert.model.layers.0.input_layernorm.dense.weight Remapped: paligemma_with_expert.gemma_expert.model.layers.0.mlp.down_proj.weight -> model.paligemma_with_expert.gemma_expert.model.layers.0.mlp.down_proj.weight Remapped: paligemma_with_expert.gemma_expert.model.layers.0.mlp.gate_proj.weight -> model.paligemma_with_expert.gemma_expert.model.layers.0.mlp.gate_proj.weight Remapped: paligemma_with_expert.gemma_expert.model.layers.0.mlp.up_proj.weight -> model.paligemma_with_expert.gemma_expert.model.layers.0.mlp.up_proj.weight Remapped 812 state dict keys Warning: Could not remap state dict keys: Error(s) in loading state_dict for PI05Policy: Missing key(s) in state_dict: "model.paligemma_with_expert.paligemma.model.language_model.embed_tokens.weight". INFO 2025-11-01 20:19:43 y_server.py:171 Time taken to put policy on cuda: 99.9787 seconds INFO 2025-11-01 20:19:43 ort/utils.py:74 <Logger policy_server (NOTSET)> Starting receiver INFO 2025-11-01 20:20:02 y_server.py:226 Running inference for observation #0 (must_go: True) INFO 2025-11-01 20:20:03 ort/utils.py:74 <Logger policy_server (NOTSET)> Starting receiver INFO 2025-11-01 20:20:04 y_server.py:362 Preprocessing and inference took 1.3530s, action shape: torch.Size([1, 50, 32]) INFO 2025-11-01 20:20:04 y_server.py:392 Observation
https://github.com/huggingface/lerobot/issues/2356
open
[ "question", "robots" ]
2025-11-01T20:31:10Z
2025-12-23T01:10:35Z
null
kevinjosethomas
pytorch/pytorch
166,802
add ability to automatically set `set_per_process_memory_fraction` using env variable
### 🚀 The feature, motivation and pitch Hi, In multi-user / multi-tenant GPU environments (e.g., Slurm clusters, Kubernetes GPU slicing, or MPS-based sharing), it is often desirable to constrain the GPU memory usage of a process externally, without modifying the application code. Currently, torch.cuda.set_per_process_memory_fraction(fraction, device) can only be applied programmatically in Python. If there were a way to set it automatically via an environment variable, it would be very efficient, as it would remove the requirement of adding this call to each Python script. **Proposed Feature** Support an optional environment variable, for example: ``` TORCH_CUDA_MEMORY_FRACTION=<float> # e.g., 0.25 TORCH_CUDA_MEMORY_FRACTION_DEVICE=<device> # e.g., 0 or "all" ``` If set at process startup, PyTorch would internally call: ``` torch.cuda.set_per_process_memory_fraction( float(os.environ["TORCH_CUDA_MEMORY_FRACTION"]), device = os.environ.get("TORCH_CUDA_MEMORY_FRACTION_DEVICE", "all") ) ``` **Motivation & Use Cases** 1. Slurm GPU shards: e.g., cluster configured with GRES=shard or MIG. We want processes to auto-scale memory usage based on how many shards they were allocated. 2. JupyterHub / multi-user labs: enforce memory fairness without requiring users to modify their notebooks. 3. Inference services: multiple models share one GPU; memory partitioning prescribed via environment-level configuration. 4. Containerized deployments (Kubernetes): memory constraints should be set from deployment manifests (yaml), not Python code. ### Alternatives Adding the suggested code to each of my Python scripts. ### Additional context Conversation with ChatGPT - https://chatgpt.com/share/69065d93-3f28-8013-b3a2-52b2dd01dd5d; it already has a pull request ready. cc @ptrblck @msaroufim @eqy @jerryzh168
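As a stop-gap for the "Alternatives" above, a small helper like the following could be imported once at startup. The environment variable names come from the proposal itself; the helper is hypothetical and not part of PyTorch:

```python
# Workaround sketch for today: read the proposed environment variables at startup
# and apply them with the existing public torch.cuda API.
import os
import torch

def apply_memory_fraction_from_env() -> None:
    raw = os.environ.get("TORCH_CUDA_MEMORY_FRACTION")
    if raw is None or not torch.cuda.is_available():
        return
    fraction = float(raw)
    device_spec = os.environ.get("TORCH_CUDA_MEMORY_FRACTION_DEVICE", "all")
    devices = range(torch.cuda.device_count()) if device_spec == "all" else [int(device_spec)]
    for device in devices:
        torch.cuda.set_per_process_memory_fraction(fraction, device=device)

apply_memory_fraction_from_env()  # call once, before any large allocations
```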
https://github.com/pytorch/pytorch/issues/166802
closed
[ "module: cuda", "module: memory usage", "triaged" ]
2025-11-01T19:22:40Z
2025-11-07T16:58:15Z
4
orena1
pytorch/pytorch
166,796
[ROCm][CI] Machines under the label linux.rocm.gpu.2, label linux.rocm.gpu.4, linux.rocm.gpu.gfx1100 are undergoing maintenance.
> NOTE: Remember to label this issue with "`ci: sev`" > If you want autorevert to be disabled, keep the ci: disable-autorevert label <!-- Add the `merge blocking` label to this PR to prevent PRs from being merged while this issue is open --> ## Current Status *Status could be: preemptive, ongoing, mitigated, closed. Also tell people if they need to take action to fix it (i.e. rebase)*. ongoing ## Error looks like *Provide some way users can tell that this SEV is causing their issue.* Occasional rocm workflow failures for workflows with label linux.rocm.gpu.2, linux.rocm.gpu.4, linux.rocm.gpu.gfx1100. Also, potentially longer queue times for linux.rocm.gpu.2, linux.rocm.gpu.4, linux.rocm.gpu.gfx1100 workflows. ## Incident timeline (all times pacific) *Include when the incident began, when it was detected, mitigated, root caused, and finally closed.* 11/01/2025 ## User impact *How does this affect users of PyTorch CI?* Occasional rocm workflow failures for workflows with label linux.rocm.gpu.2, linux.rocm.gpu.4, linux.rocm.gpu.gfx1100. Also, potentially longer queue times for linux.rocm.gpu.2, linux.rocm.gpu.4, linux.rocm.gpu.gfx1100 workflows. ## Root cause *What was the root cause of this issue?* System Maintenance ## Mitigation *How did we mitigate the issue?* Will be resolve by EOD 11/01/2025 ## Prevention/followups *How do we prevent issues like this in the future?* N/A cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
https://github.com/pytorch/pytorch/issues/166796
closed
[ "module: rocm", "ci: sev" ]
2025-11-01T14:59:52Z
2025-11-03T11:04:50Z
0
amdfaa
vllm-project/vllm
27,916
[Feature]: Does the latest version support LoRA for visual models?
### 🚀 The feature, motivation and pitch When I loaded a Qwen2.5-VL model fine-tuned with LoRA using vLLM version 0.8.4, I encountered the following warning: > Regarding multimodal models, vLLM currently only supports adding LoRA to language model, visual.blocks.31.mlp.up_proj will be ignored. I found an issue https://github.com/vllm-project/vllm/issues/26422 with a similar problem, but it seems the PR hasn't been merged into master. How can I enable loading visual-side LoRA parameters and use vLLM to accelerate inference? Looking forward to your reply. ### Alternatives _No response_ ### Additional context _No response_ ### Before submitting a new issue... - [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
https://github.com/vllm-project/vllm/issues/27916
closed
[ "feature request" ]
2025-11-01T12:23:36Z
2025-12-26T12:48:22Z
1
SmartNight-cc
huggingface/lerobot
2,354
Cannot reproduce SmolVLA results on LIBERO benchmark
Hello, I am trying to reproduce LIBERO benchmark results of [SmolVLA](https://huggingface.co/HuggingFaceVLA/smolvla_libero). However, I can't reproduce results on neither [leaderboard](https://huggingface.co/spaces/HuggingFaceVLA/libero-vla-leaderboard) and [paper](https://arxiv.org/abs/2506.01844) I am working on NVIDIA Jetson AGX Orin Developer Kit (Jetpack 6.2.1, Jetson Linux 36.4.4) and below is my pip list Hello, I am trying to reproduce the LIBERO benchmark results of [SmolVLA](https://huggingface.co/HuggingFaceVLA/smolvla_libero). However, I can't reproduce the results on either the [leaderboard](https://huggingface.co/spaces/HuggingFaceVLA/libero-vla-leaderboard) or the [paper](https://arxiv.org/abs/2506.01844). I am working on an NVIDIA Jetson AGX Orin Developer Kit (JetPack 6.2.1, Jetson Linux 36.4.4), and below is my pip list. <details> <summary>pip list</summary> ``` absl-py==2.3.1 accelerate==1.10.1 aiohappyeyeballs==2.6.1 aiohttp==3.13.0 aiosignal==1.4.0 annotated-types==0.7.0 antlr4-python3-runtime==4.9.3 anyio==4.9.0 argon2-cffi==23.1.0 argon2-cffi-bindings==21.2.0 arrow==1.3.0 asttokens==3.0.0 async-lru==2.0.5 attrs==23.2.0 av==15.1.0 babel==2.17.0 bddl==1.0.1 beautifulsoup4==4.13.4 bleach==6.2.0 blinker==1.7.0 certifi==2025.1.31 cffi==1.17.1 charset-normalizer==3.4.1 click==8.3.0 cloudpickle==3.1.1 cmake==3.31.6 comm==0.2.2 contourpy==1.3.2 cryptography==41.0.7 cuda-bindings==12.8.0 cuda-python==12.8.0 cycler==0.12.1 Cython==3.0.12 dataclasses==0.6 datasets==4.1.1 dbus-python==1.3.2 debugpy==1.8.14 decorator==5.2.1 deepdiff==8.6.1 defusedxml==0.7.1 diffusers @ file:///opt/diffusers-0.34.0.dev0-py3-none-any.whl#sha256=cf07a8004c994f02e0d41e9bface90486f53a98cd3abdda39972c5ffe7009d87 dill==0.4.0 distro==1.9.0 docopt==0.6.2 docutils==0.21.2 draccus==0.10.0 easydict==1.13 egl_probe @ git+https://github.com/huggingface/egl_probe.git@eb5e5f882236a5668e43a0e78121aaa10cdf2243 einops==0.8.1 etils==1.13.0 evdev==1.9.2 executing==2.2.0 Farama-Notifications==0.0.4 fastjsonschema==2.21.1 filelock==3.18.0 fonttools==4.57.0 fqdn==1.5.1 frozenlist==1.8.0 fsspec==2025.3.2 future==1.0.0 gitdb==4.0.12 GitPython==3.1.45 glfw==2.10.0 grpcio==1.75.1 gym==0.26.2 gym-notices==0.1.0 gymnasium==0.29.1 h11==0.14.0 h5py==3.13.0 hf-xet==1.1.10 hf_transfer==0.1.9 httpcore==1.0.8 httplib2==0.20.4 httpx==0.28.1 huggingface-hub==0.35.3 hydra-core==1.3.2 id==1.5.0 idna==3.10 imageio==2.37.0 imageio-ffmpeg==0.6.0 importlib_metadata==8.6.1 importlib_resources==6.5.2 iniconfig==2.1.0 inquirerpy==0.3.4 ipykernel==6.29.5 ipython==9.1.0 ipython_pygments_lexers==1.1.1 ipywidgets==8.1.6 isoduration==20.11.0 jaraco.classes==3.4.0 jaraco.context==6.0.1 jaraco.functools==4.1.0 jedi==0.19.2 jeepney==0.9.0 Jinja2==3.1.6 json5==0.12.0 jsonlines==4.0.0 jsonpointer==3.0.0 jsonschema==4.23.0 jsonschema-specifications==2025.4.1 jupyter==1.1.1 jupyter-console==6.6.3 jupyter-events==0.12.0 jupyter-lsp==2.2.5 jupyter_client==8.6.3 jupyter_core==5.7.2 jupyter_server==2.15.0 jupyter_server_terminals==0.5.3 jupyterlab==4.4.1 jupyterlab_myst==2.4.2 jupyterlab_pygments==0.3.0 jupyterlab_server==2.27.3 jupyterlab_widgets==3.0.14 jupytext==1.17.3 keyring==25.6.0 kiwisolver==1.4.8 launchpadlib==1.11.0 lazr.restfulclient==0.14.6 lazr.uri==1.0.6 -e git+https://github.com/huggingface/lerobot@6f5bb4d4a49fbdb47acfeaa2c190b5fa125f645a#egg=lerobot libero @ git+https://github.com/huggingface/lerobot-libero.git@b053a4b0de70a3f2d736abe0f9a9ee64477365df llvmlite==0.45.1 Mako==1.3.10 Markdown==3.9 markdown-it-py==3.0.0 MarkupSafe==3.0.2 
matplotlib==3.10.1 matplotlib-inline==0.1.7 mdit-py-plugins==0.5.0 mdurl==0.1.2 mergedeep==1.3.4 mistune==3.1.3 more-itertools==10.7.0 mpmath==1.3.0 mujoco==3.3.2 multidict==6.7.0 multiprocess==0.70.16 mypy_extensions==1.1.0 nbclient==0.10.2 nbconvert==7.16.6 nbformat==5.10.4 nest-asyncio==1.6.0 networkx==3.4.2 nh3==0.2.21 ninja==1.11.1.4 notebook==7.4.1 notebook_shim==0.2.4 num2words==0.5.14 numba==0.62.1 numpy==2.2.5 oauthlib==3.2.2 omegaconf==2.3.0 onnx==1.17.0 opencv-contrib-python==4.11.0.86 opencv-python==4.11.0 opencv-python-headless==4.12.0.88 optimum==1.24.0 orderly-set==5.5.0 overrides==7.7.0 packaging==25.0 pandas==2.3.3 pandocfilters==1.5.1 parso==0.8.4 pexpect==4.9.0 pfzy==0.3.4 pillow==11.2.1 pkginfo==1.12.1.2 platformdirs==4.3.7 pluggy==1.6.0 prometheus_client==0.21.1 prompt_toolkit==3.0.51 propcache==0.4.1 protobuf==6.30.2 psutil==7.0.0 ptyprocess==0.7.0 pure_eval==0.2.3 pyarrow==21.0.0 pyav==14.2.1 pycparser==2.22 pycuda==2025.1 pydantic==2.12.1 pydantic_core==2.41.3 Pygments==2.19.1 PyGObject==3.48.2 PyJWT==2.7.0 pynput==1.8.1 PyOpenGL==3.1.10 PyOpenGL-accelerate==3.1.10 pyparsing==3.1.1 pyrsistent==0.20.0 pyserial==3.5 pytest==8.4.2 python-apt==2.7.7+ubuntu4 python-dateutil==2.9.0.post0 python-json-logger==3.3.0 python-xlib==0.33 pytools==2025.1.2 pytz==2025.2 PyYAML==6.0.2 pyyaml-include==1.4.1 pyzmq==26.4.0 readme_renderer==44.0 referencing==0.36.2 regex==2024.11.6 requests==2.32.3 requests-toolbelt=
https://github.com/huggingface/lerobot/issues/2354
open
[ "question", "policies", "simulation" ]
2025-11-01T11:20:05Z
2026-01-05T08:38:48Z
null
Hesh0629
huggingface/trl
4,419
GRPO with reward model. CUDA out of memory. How to fix? Thank you very much.
train_grpo.py: ```python import argparse import os from typing import Callable, Dict, List, Optional import torch from datasets import Dataset, load_dataset from transformers import ( AutoModelForCausalLM, AutoTokenizer, AutoModelForSequenceClassification, pipeline, set_seed, ) from trl import GRPOConfig, GRPOTrainer class CombinedReward: """Combine multiple reward sources with weights. Each reward function follows signature: reward_fn(completions: List[str], prompts: List[str], **kwargs) -> List[float] """ def __init__( self, reward_fns: List[Callable[[List[str], List[str]], List[float]]], weights: Optional[List[float]] = None, ) -> None: if not reward_fns: raise ValueError("reward_fns must not be empty") self.reward_fns = reward_fns self.weights = weights or [1.0] * len(reward_fns) if len(self.weights) != len(self.reward_fns): raise ValueError("weights length must match reward_fns length") def __call__(self, completions: List[str], prompts: List[str], **kwargs) -> List[float]: if not completions: return [] all_scores: List[List[float]] = [] for reward_fn in self.reward_fns: scores = reward_fn(completions, prompts, **kwargs) if len(scores) != len(completions): raise ValueError("All reward functions must return scores for each completion") all_scores.append(scores) # weighted sum totals: List[float] = [0.0] * len(completions) for w, scores in zip(self.weights, all_scores): for i, s in enumerate(scores): totals[i] += w * float(s) return totals def build_reward_model_fn( reward_model_name: str, device: Optional[str] = None, normalize: bool = True, ) -> Callable[[List[str], List[str]], List[float]]: """Create a reward function using a sequence classification model. Returns a function that outputs a scalar reward per completion. """ rm_tokenizer = AutoTokenizer.from_pretrained(reward_model_name, use_fast=True) # ensure padding token exists for batched inference if rm_tokenizer.pad_token is None: candidate = rm_tokenizer.eos_token or rm_tokenizer.sep_token or rm_tokenizer.cls_token or rm_tokenizer.unk_token if candidate is not None: rm_tokenizer.pad_token = candidate else: rm_tokenizer.add_special_tokens({"pad_token": "[PAD]"}) rm_model = AutoModelForSequenceClassification.from_pretrained(reward_model_name, torch_dtype=torch.float16, device_map="auto") if getattr(rm_model.config, "pad_token_id", None) is None and rm_tokenizer.pad_token_id is not None: rm_model.config.pad_token_id = rm_tokenizer.pad_token_id # use a pipeline for batching and device placement pipe_device = 0 if (device == "cuda" or (device is None and torch.cuda.is_available())) else -1 rm_pipe = pipeline( task="text-classification", model=rm_model, tokenizer=rm_tokenizer, # device=pipe_device, truncation=True, top_k=None, function_to_apply="none", # use raw logits so we can map scores directly return_all_scores=True, ) def reward_fn(completions: List[str], prompts: List[str], **kwargs) -> List[float]: del prompts # unused here outputs = rm_pipe(completions, batch_size=kwargs.get("batch_size", 2)) scores: List[float] = [] for out in outputs: # If binary classifier, use logit of positive class; otherwise sum weighted by label index if len(out) == 1: scores.append(float(out[0]["score"])) else: # prefer last class as "more positive" scores.append(float(out[-1]["score"])) if not normalize: return scores # z-norm for stability (per-batch) t = torch.tensor(scores, dtype=torch.float32) std = float(t.std().clamp(min=1e-6)) mean = float(t.mean()) normed = ((t - mean) / std).tolist() return [float(x) for x in normed] return reward_fn def 
build_keyword_reward_fn(keywords: List[str], case_sensitive: bool = False, bonus: float = 1.0) -> Callable[[List[str], List[str]], List[float]]: ks = keywords if case_sensitive else [k.lower() for k in keywords] def reward_fn(completions: List[str], prompts: List[str], **kwargs) -> List[float]: del prompts scores: List[float] = [] for text in completions: t = text if case_sensitive else text.lower() count = sum(1 for k in ks if k in t) scores.append(bonus * float(count)) return scores return reward_fn def build_length_reward_fn(target_min: int, target_max: int, scale: float = 1.0) -> Callable[[List[str], List[str]], Li
https://github.com/huggingface/trl/issues/4419
open
[ "🏋 Reward", "🏋 GRPO" ]
2025-11-01T10:29:28Z
2025-11-20T12:26:50Z
null
guotong1988
pytorch/ao
3,274
Proposal to add a beginner-friendly introduction tutorial for TorchAO
Hello TorchAO community, I would like to contribute a beginner-friendly notebook tutorial that introduces TorchAO to users who are new to model optimization and to TorchAO (or even PyTorch in general). As someone coming from a different background with limited experience in quantization and model optimization, I found that it can be challenging to understand: - What TorchAO is, - What its main capabilities are, and - How someone can start using it effectively in a simple workflow. While TorchAO already provides strong documentation and tutorials for quantization, some of them seem to assume a level of prior familiarity that newcomers might not yet have or they may target more advanced workflows. I would like to put together a simple notebook tutorial that demonstrates one simple TorchAO quantization flow on a very small model/toy model (e.g. 2-layer MLP or simple CNN). The goal isn't to duplicate the Quick Start or advanced tutorials, but to provide a high-level guide that can help absolute beginners understand what TorchAO is and when to use it. The notebook would include clear descriptions and references to relevant PyTorch blog posts and documentation pages that already exist, so that users can easily explore more advanced material as well. Would this be useful to the community to add under tutorials/ or examples/? I’m also open to suggestions on which specific tutorial topics might be most helpful for newcomers who are just starting out with TorchAO. I appreciate your consideration and feedback!
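For illustration, the proposed toy-model flow might look roughly like this; it is a sketch only, and exact config names can differ across torchao releases:

```python
# Possible shape of the proposed beginner example: quantize a tiny MLP's Linear
# weights to int8 in place, then run a forward pass to confirm it still works.
import torch
from torchao.quantization import quantize_, Int8WeightOnlyConfig

class TinyMLP(torch.nn.Module):
    def __init__(self) -> None:
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(128, 256), torch.nn.ReLU(), torch.nn.Linear(256, 10)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

model = TinyMLP().eval()
quantize_(model, Int8WeightOnlyConfig())   # swaps Linear weights to int8 in place
with torch.no_grad():
    print(model(torch.randn(4, 128)).shape)
```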
https://github.com/pytorch/ao/issues/3274
open
[ "topic: documentation" ]
2025-11-01T07:47:08Z
2025-11-04T04:25:26Z
2
smishra8
vllm-project/vllm
27,912
[Usage]: How should I use the CPU to deploy QWEN3 VL 30B-A3B?
### Your current environment ```text The output of `python collect_env.py` ``` (APIServer pid=1033476) Traceback (most recent call last): (APIServer pid=1033476) File "/home/maxgameone/anaconda3/bin/vllm", line 33, in <module> (APIServer pid=1033476) sys.exit(load_entry_point('vllm==0.11.1rc6.dev33+g3a5de7d2d.cpu', 'console_scripts', 'vllm')()) (APIServer pid=1033476) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ (APIServer pid=1033476) File "/home/maxgameone/anaconda3/lib/python3.12/site-packages/vllm-0.11.1rc6.dev33+g3a5de7d2d.cpu-py3.12-linux-x86_64.egg/vllm/entrypoints/cli/main.py", line 73, in main (APIServer pid=1033476) args.dispatch_function(args) (APIServer pid=1033476) File "/home/maxgameone/anaconda3/lib/python3.12/site-packages/vllm-0.11.1rc6.dev33+g3a5de7d2d.cpu-py3.12-linux-x86_64.egg/vllm/entrypoints/cli/serve.py", line 59, in cmd (APIServer pid=1033476) uvloop.run(run_server(args)) (APIServer pid=1033476) File "/home/maxgameone/.local/lib/python3.12/site-packages/uvloop/__init__.py", line 109, in run (APIServer pid=1033476) return __asyncio.run( (APIServer pid=1033476) ^^^^^^^^^^^^^^ (APIServer pid=1033476) File "/home/maxgameone/anaconda3/lib/python3.12/asyncio/runners.py", line 194, in run (APIServer pid=1033476) return runner.run(main) (APIServer pid=1033476) ^^^^^^^^^^^^^^^^ (APIServer pid=1033476) File "/home/maxgameone/anaconda3/lib/python3.12/asyncio/runners.py", line 118, in run (APIServer pid=1033476) return self._loop.run_until_complete(task) (APIServer pid=1033476) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ (APIServer pid=1033476) File "uvloop/loop.pyx", line 1518, in uvloop.loop.Loop.run_until_complete (APIServer pid=1033476) File "/home/maxgameone/.local/lib/python3.12/site-packages/uvloop/__init__.py", line 61, in wrapper (APIServer pid=1033476) return await main (APIServer pid=1033476) ^^^^^^^^^^ (APIServer pid=1033476) File "/home/maxgameone/anaconda3/lib/python3.12/site-packages/vllm-0.11.1rc6.dev33+g3a5de7d2d.cpu-py3.12-linux-x86_64.egg/vllm/entrypoints/openai/api_server.py", line 1910, in run_server (APIServer pid=1033476) await run_server_worker(listen_address, sock, args, **uvicorn_kwargs) (APIServer pid=1033476) File "/home/maxgameone/anaconda3/lib/python3.12/site-packages/vllm-0.11.1rc6.dev33+g3a5de7d2d.cpu-py3.12-linux-x86_64.egg/vllm/entrypoints/openai/api_server.py", line 1926, in run_server_worker (APIServer pid=1033476) async with build_async_engine_client( (APIServer pid=1033476) ^^^^^^^^^^^^^^^^^^^^^^^^^^ (APIServer pid=1033476) File "/home/maxgameone/anaconda3/lib/python3.12/contextlib.py", line 210, in __aenter__ (APIServer pid=1033476) return await anext(self.gen) (APIServer pid=1033476) ^^^^^^^^^^^^^^^^^^^^^ (APIServer pid=1033476) File "/home/maxgameone/anaconda3/lib/python3.12/site-packages/vllm-0.11.1rc6.dev33+g3a5de7d2d.cpu-py3.12-linux-x86_64.egg/vllm/entrypoints/openai/api_server.py", line 185, in build_async_engine_client (APIServer pid=1033476) async with build_async_engine_client_from_engine_args( (APIServer pid=1033476) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ (APIServer pid=1033476) File "/home/maxgameone/anaconda3/lib/python3.12/contextlib.py", line 210, in __aenter__ (APIServer pid=1033476) return await anext(self.gen) (APIServer pid=1033476) ^^^^^^^^^^^^^^^^^^^^^ (APIServer pid=1033476) File "/home/maxgameone/anaconda3/lib/python3.12/site-packages/vllm-0.11.1rc6.dev33+g3a5de7d2d.cpu-py3.12-linux-x86_64.egg/vllm/entrypoints/openai/api_server.py", line 232, in 
build_async_engine_client_from_engine_args (APIServer pid=1033476) async_llm = AsyncLLM.from_vllm_config( (APIServer pid=1033476) ^^^^^^^^^^^^^^^^^^^^^^^^^^ (APIServer pid=1033476) File "/home/maxgameone/anaconda3/lib/python3.12/site-packages/vllm-0.11.1rc6.dev33+g3a5de7d2d.cpu-py3.12-linux-x86_64.egg/vllm/utils/func_utils.py", line 116, in inner (APIServer pid=1033476) return fn(*args, **kwargs) (APIServer pid=1033476) ^^^^^^^^^^^^^^^^^^^ (APIServer pid=1033476) File "/home/maxgameone/anaconda3/lib/python3.12/site-packages/vllm-0.11.1rc6.dev33+g3a5de7d2d.cpu-py3.12-linux-x86_64.egg/vllm/v1/engine/async_llm.py", line 218, in from_vllm_config (APIServer pid=1033476) return cls( (APIServer pid=1033476) ^^^^ (APIServer pid=1033476) File "/home/maxgameone/anaconda3/lib/python3.12/site-packages/vllm-0.11.1rc6.dev33+g3a5de7d2d.cpu-py3.12-linux-x86_64.egg/vllm/v1/engine/async_llm.py", line 140, in __init__ (APIServer pid=1033476) self.engine_core = EngineCoreClient.make_async_mp_client( (APIServer pid=1033476) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ (APIServer pid=1033476) File "/home/maxgameone/anaconda3/lib/python3.12/site-packages/vllm-0.11.1rc6.dev33+g3a5de7d2d.cpu-py3.12-linux-x86_64.egg/vllm
https://github.com/vllm-project/vllm/issues/27912
open
[ "usage" ]
2025-11-01T07:40:04Z
2025-11-01T07:40:04Z
0
maxgameone
pytorch/torchtitan
1,977
Why is the ep mesh derived from a factoring of the dp mesh, instead of its own dimension?
I see that the data parallel shard dimension is factored into two dimensions, `dp_shard_mod_ep` and `dp_shard_in_ep`. The experts use `dp_shard_mod_ep` submesh for FSDP while the rest of the blocks use the regular `dp_shard_cp` submesh. Why can't the experts use FSDP on the regular `dp_mesh`? The reason for this is unclear after reading the code. If only expert parallelism is used without data parallel or if the data parallel size is less than expert parallel, then the `dp_shard_mod_ep` dimension size would be 0, which doesn't make sense. Furthermore, the `ep` submesh is not actually a bona fide actual dimension, but rather a combination of `dp_shard_in_ep`, `cp` and `tp`. Why can't `ep` be its own dimension? Currently `ep` is like some weird factored submesh of `dp_shard` instead of being its own dimension, and I don't understand why. I understand the combining of various mesh dimensions into `dp_shard_cp` is used to limit those dimensions to a 1D mesh as FSDP accepts a 1D mesh and HSDP a 2D mesh. But why can't the mesh dims be for example: (assuming cp = 1, tp = 1, etp = 1) world mesh: `['pp', 'dp_replicate', 'dp_shard', 'ep', 'cp', 'tp']` dp_shard mesh: `['dp_shard']` (not flattening of `['dp_shard_in_ep', 'dp_shard_mod_ep']` ep mesh: `['ep']` (not `'dp_shard_in_ep'`) Sorry for all the questions I'm just pretty confused as to whats going on. The most important question is why does dp_shard need to be factored into two dimensions? I also think the ._flatten() function should be exposed publicly if so many places use that function.
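Purely to illustrate the layout the question proposes (not how torchtitan builds its meshes today), the alternative with `ep` as its own dimension could be written with the public DeviceMesh API; the sizes below assume an 8-rank run with cp = tp = 1:

```python
# Illustration of the proposed mesh layout only. Run under torchrun with 8 ranks;
# the dimension sizes are placeholders.
from torch.distributed.device_mesh import init_device_mesh

world_mesh = init_device_mesh(
    "cuda",
    mesh_shape=(1, 1, 4, 2, 1, 1),
    mesh_dim_names=("pp", "dp_replicate", "dp_shard", "ep", "cp", "tp"),
)
dp_shard_mesh = world_mesh["dp_shard"]  # 1-D mesh handed to FSDP, no factoring
ep_mesh = world_mesh["ep"]              # expert-parallel group as its own dimension
```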
https://github.com/pytorch/torchtitan/issues/1977
open
[ "question" ]
2025-11-01T02:07:24Z
2025-12-02T01:34:16Z
null
man2machine
vllm-project/vllm
27,899
[Bug]: Inductor specialize after 2.9 rebase
### Your current environment NA ### 🐛 Describe the bug Could you or someone have a look at compile ranges [PR](https://github.com/vllm-project/vllm/pull/24252) again? It seems to stop working with the update to pytorch 2.9. We started getting failed assertions in generated code like it was compiled for a single shape. Could you explain how to let the inductor know that we compile for a range not for a single shape? Example of the assertion. Compilation was done for a range (512, 8192) assert_size_stride(arg0_1, (8192, s4, s94), (s4*s94, s94, 1)) Can you add quick repro instructions? Sure, on the PR branch: vllm serve meta-llama/Meta-Llama-3.1-70B-Instruct --disable-log-requests --no-enable-prefix-caching -tp 4 -dp 1 --max-num-seqs 256 --load-format dummy --port 8001 --compilation-config '{"pass_config":{"enable_fusion":false,"enable_attn_fusion":false,"enable_noop":true,"enable_sequence_parallelism":false,"enable_async_tp":false,"enable_fi_allreduce_fusion":true}}' cc @ilmarkov
https://github.com/vllm-project/vllm/issues/27899
closed
[ "bug" ]
2025-10-31T22:16:27Z
2025-11-07T00:03:25Z
7
laithsakka
vllm-project/vllm
27,898
[Doc]: Multi-node EP on EFA (i.e. no IBGDA/DeepEP)
### 📚 The doc issue Usecase: On AWS we have EFA for high bandwidth interconnect, not Infiniband, so no IBGDA. The [documentation](https://docs.vllm.ai/en/latest/serving/expert_parallel_deployment.html#backend-selection-guide) indicates that the DeepEP kernels should be used for multi/inter-node EP, and pplx for single node. However, [DeepEP indicates that they only support IBGDA for inter-node comms](https://github.com/deepseek-ai/DeepEP/issues/369). pplx has good support for EFA. Is pplx for single node, DeepEP for multi-node a suggestion based on testing, or a hard requirement? In addition, it appears that the EP size cannot be configured and is always TP x DP. Is there any way to set EP size to equal TP size (for example), so we can have each node be a DP group and limit EP alltoall's to intra-node (NVLink) only? Thank you! EDIT: per https://github.com/vllm-project/vllm/issues/27633 it appears this may be problematic, although since pplx supports EFA as a transportation layer, this seems bizarre. Specific docs around usage on EFA would be helpful.
https://github.com/vllm-project/vllm/issues/27898
open
[ "documentation" ]
2025-10-31T21:22:28Z
2025-11-06T19:50:07Z
1
nathan-az
huggingface/peft
2,884
[Question/Bug] How to safely continue LoRA fine-tuning under DeepSpeed ZeRO-3 (multi-stage training with modules_to_save)
Hi, I’m trying to perform multi-stage LoRA fine-tuning under DeepSpeed ZeRO-3 using PEFT. However, continuing training on an existing LoRA checkpoint without merging causes a series of errors and conflicts. Problem When I load the LoRA from Stage 1 and attempt to continue training: • load_state_dict() throws shape mismatch (e.g. [0, hidden_size]) • resize_token_embeddings() fails (empty tensor) • GPU memory usage explodes (batch size drops from 4 → 1) Question What’s the recommended practice for continuing LoRA fine-tuning under ZeRO-3? • Should we always merge the previous adapter (merge_and_unload()) before starting Stage 2? • Or is there a way to safely keep the existing adapter and continue training? ### Who can help? _No response_ ### Reproduction Setup • Stage 1: LoRA fine-tuning with modules_to_save=['wte','ff_out'] • Stage 2: Continue training on a new dataset (without merging) • Using DeepSpeed ZeRO-3 (zero3_init_flag=False) ### Expected behavior Expected Behavior PEFT should provide a consistent way to: • Continue fine-tuning LoRA adapters across multiple stages with ZeRO-3 enabled. • Avoid re-initialization or memory explosion when modules_to_save is used.
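For reference, a minimal sketch of the merge-before-stage-2 workaround (paths and the stage-2 LoRA settings are placeholders; whether this is the officially recommended ZeRO-3 flow is exactly the question):

```python
from peft import LoraConfig, PeftModel, get_peft_model
from transformers import AutoModelForCausalLM

# Merge on a single process, before DeepSpeed/ZeRO-3 initialization, so the
# weights are not sharded (and not empty [0, hidden_size] placeholders) yet.
base = AutoModelForCausalLM.from_pretrained("stage1-base-model")     # placeholder path
stage1 = PeftModel.from_pretrained(base, "stage1-lora-checkpoint")   # placeholder path
merged = stage1.merge_and_unload()      # folds the LoRA (and modules_to_save) into the base
merged.save_pretrained("stage1-merged") # becomes the "base" model for stage 2

# Stage 2: attach a fresh adapter to the merged model and train under ZeRO-3 as usual.
stage2_cfg = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules="all-linear",          # placeholder
    modules_to_save=["wte", "ff_out"],
)
stage2_model = get_peft_model(AutoModelForCausalLM.from_pretrained("stage1-merged"), stage2_cfg)
```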
https://github.com/huggingface/peft/issues/2884
closed
[]
2025-10-31T20:13:12Z
2025-12-09T15:05:26Z
null
XiangZhang-zx
pytorch/ao
3,270
[DOCS] Quick Start Guide PT2E Example does not work as is. Undefined objects
The PT2E example in the quick start guide does not work as-is: several objects are undefined. For example, there is no import for `convert_pt2e`, and `example_inputs` is never defined. There are also some indentation issues. See: https://docs.pytorch.org/ao/0.13/quick_start.html#pytorch-2-export-quantization
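A hedged reconstruction of a complete PT2E flow, using the older `torch.ao.*` import locations and the XNNPACK quantizer; newer torchao releases move `prepare_pt2e`/`convert_pt2e` and the quantizers under `torchao.quantization.pt2e`, so the exact imports the guide intends may differ:

```python
import torch
from torch.export import export
from torch.ao.quantization.quantize_pt2e import prepare_pt2e, convert_pt2e
from torch.ao.quantization.quantizer.xnnpack_quantizer import (
    XNNPACKQuantizer,
    get_symmetric_quantization_config,
)

model = torch.nn.Sequential(torch.nn.Linear(16, 16), torch.nn.ReLU()).eval()
example_inputs = (torch.randn(1, 16),)  # the object the guide never defines

exported = export(model, example_inputs).module()   # graph module for PT2E
quantizer = XNNPACKQuantizer().set_global(get_symmetric_quantization_config())
prepared = prepare_pt2e(exported, quantizer)
prepared(*example_inputs)                            # calibration pass
quantized = convert_pt2e(prepared)
```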
https://github.com/pytorch/ao/issues/3270
open
[ "topic: documentation", "triaged" ]
2025-10-31T18:46:28Z
2025-12-05T01:14:53Z
1
cjm715
pytorch/pytorch
166,736
Aarch64 unit test failures from nightly/manylinux build, jammy upgrade to gcc13 needed
### 🐛 Describe the bug We have noticed 2 test failures on AArch64 ( neoverse-v2 / c8g ) which are not happening in https://github.com/pytorch/pytorch/actions/workflows/linux-aarch64.yml ``` Mismatched elements: 1 / 513 (0.2%) Greatest absolute difference: 253 at index (512,) Greatest relative difference: 1.0 at index (512,) To execute this test, run the following from the base repo dir: python test/test_unary_ufuncs.py TestUnaryUfuncsCPU.test_contig_vs_every_other__refs__conversions_byte_cpu_float32 ``` and ``` Mismatched elements: 9 / 40 (22.5%) Greatest absolute difference: 1 at index (0, 0, 5) Greatest relative difference: 1.0 at index (0, 0, 5) The failure occurred for item [3] To execute this test, run the following from the base repo dir: python test/inductor/test_torchinductor.py CpuTests.test_to_dtype_cpu ``` These problems exist on nightly build. We have investigated and it looks like it happens since nightly 10.25 which looks like this commit https://github.com/pytorch/pytorch/commit/b31bad1b8f1331bf43d47f46602cf6141db56844 Actions Requested. Can we upgrade jammy images to GCC13 @malfet which should show these problems and then we might need to revert https://github.com/pytorch/pytorch/commit/b31bad1b8f1331bf43d47f46602cf6141db56844 ### Versions Collecting environment information... PyTorch version: 2.10.0.dev20251031+cpu Is debug build: False CUDA used to build PyTorch: None ROCM used to build PyTorch: N/A OS: Ubuntu 22.04.5 LTS (aarch64) GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04.2) 11.4.0 Clang version: Could not collect CMake version: version 3.31.6 Libc version: glibc-2.35 Python version: 3.10.19 | packaged by conda-forge | (main, Oct 22 2025, 22:26:30) [GCC 14.3.0] (64-bit runtime) Python platform: Linux-6.8.0-1040-aws-aarch64-with-glibc2.35 Is CUDA available: False CUDA runtime version: No CUDA CUDA_MODULE_LOADING set to: N/A GPU models and configuration: No CUDA Nvidia driver version: No CUDA cuDNN version: No CUDA Is XPU available: False HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Architecture: aarch64 CPU op-mode(s): 64-bit Byte Order: Little Endian CPU(s): 32 On-line CPU(s) list: 0-31 Vendor ID: ARM Model name: Neoverse-V2 Model: 1 Thread(s) per core: 1 Core(s) per cluster: 32 Socket(s): - Cluster(s): 1 Stepping: r0p1 BogoMIPS: 2000.00 Flags: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma lrcpc dcpop sha3 asimddp sha512 sve asimdfhm dit uscat ilrcpc flagm ssbs sb paca pacg dcpodp sve2 sveaes svepmull svebitperm svesha3 flagm2 frint svei8mm svebf16 i8mm bf16 dgh rng bti L1d cache: 2 MiB (32 instances) L1i cache: 2 MiB (32 instances) L2 cache: 64 MiB (32 instances) L3 cache: 36 MiB (1 instance) NUMA node(s): 1 NUMA node0 CPU(s): 0-31 Vulnerability Gather data sampling: Not affected Vulnerability Itlb multihit: Not affected Vulnerability L1tf: Not affected Vulnerability Mds: Not affected Vulnerability Meltdown: Not affected Vulnerability Mmio stale data: Not affected Vulnerability Reg file data sampling: Not affected Vulnerability Retbleed: Not affected Vulnerability Spec rstack overflow: Not affected Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl Vulnerability Spectre v1: Mitigation; __user pointer sanitization Vulnerability Spectre v2: Not affected Vulnerability Srbds: Not affected Vulnerability Tsx async abort: Not affected Versions of relevant libraries: [pip3] mypy==1.16.0 [pip3] mypy_extensions==1.1.0 [pip3] numpy==1.22.4 [pip3] onnx==1.19.1 
[pip3] onnx-ir==0.1.11 [pip3] onnxscript==0.5.4 [pip3] optree==0.13.0 [pip3] torch==2.10.0.dev20251031+cpu [pip3] torchvision==0.25.0.dev20251031 [conda] No relevant packages cc @seemethere @malfet @atalman @pytorch/pytorch-dev-infra @snadampal @milpuz01 @aditew01 @nikhil-arm @fadara01
https://github.com/pytorch/pytorch/issues/166736
closed
[ "module: binaries", "module: ci", "triaged", "module: arm" ]
2025-10-31T17:25:47Z
2025-12-09T20:47:45Z
11
robert-hardwick
huggingface/lerobot
2,351
Details of adapting SmolVLA to other robotic arms with different configurations
I want to deploy the untuned `smolvla_base` model directly onto my AgileX PIPER robotic arm.I ran into the following two issues along the way: 1. Missing normalization parameters in the metadata. ``` File "/home/zwt/miniconda3/envs/lerobot/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context return func(*args, **kwargs) File "/home/zwt/Projects/lerobot/lerobot/common/policies/smolvla/modeling_smolvla.py", line 434, in select_action batch = self._prepare_batch(batch) File "/home/zwt/Projects/lerobot/lerobot/common/policies/smolvla/modeling_smolvla.py", line 412, in _prepare_batch batch = self.normalize_inputs(batch) File "/home/zwt/miniconda3/envs/lerobot/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "/home/zwt/miniconda3/envs/lerobot/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl return forward_call(*args, **kwargs) File "/home/zwt/miniconda3/envs/lerobot/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context return func(*args, **kwargs) File "/home/zwt/Projects/lerobot/lerobot/common/policies/normalize.py", line 170, in forward assert not torch.isinf(mean).any(), _no_stats_error_str("mean") AssertionError: `mean` is infinity. You should either initialize with `stats` as an argument, or use a pretrained model. ``` The error was resolved when I copied the normalization parameters from other training results, but I'm not sure if this is the correct way to run `smolvla_base` directly. 2. I've noticed that different robotic arms may have different degrees of freedom, or even if they have the same degrees of freedom, the range of rotation of the same joint can vary. I'm unsure whether this range of rotation mapping is necessary when transferring the model to other robotic arms.It seems there is similar operation for the aloha in the code. ``` def _pi_aloha_decode_state(self, state): # Flip the joints. for motor_idx in [1, 2, 8, 9]: state[:, motor_idx] *= -1 # Reverse the gripper transformation that is being applied by the Aloha runtime. for motor_idx in [6, 13]: state[:, motor_idx] = aloha_gripper_to_angular(state[:, motor_idx]) return state def _pi_aloha_encode_actions(self, actions): # Flip the joints. for motor_idx in [1, 2, 8, 9]: actions[:, :, motor_idx] *= -1 # Reverse the gripper transformation that is being applied by the Aloha runtime. for motor_idx in [6, 13]: actions[:, :, motor_idx] = aloha_gripper_from_angular(actions[:, :, motor_idx]) return actions def _pi_aloha_encode_actions_inv(self, actions): # Flip the joints again. for motor_idx in [1, 2, 8, 9]: actions[:, :, motor_idx] *= -1 # Reverse the gripper transformation that is being applied by the Aloha runtime. for motor_idx in [6, 13]: actions[:, :, motor_idx] = aloha_gripper_from_angular_inv(actions[:, :, motor_idx]) return actions ``` btw, is it a meaningful operation to directly run smolvla_base? This is just one of my sudden thoughts.
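On question 2, a generic per-joint remapping sketch (not LeRobot code; the joint limits below are made up) of what such a range mapping could look like when moving to an arm with different limits or flipped joint directions:

```python
import numpy as np

def remap_joints(q, src_lo, src_hi, dst_lo, dst_hi, flip=None):
    """Linearly remap joint values from a source arm's range to a target arm's range."""
    q = np.asarray(q, dtype=np.float32).copy()
    if flip is not None:                 # joints whose rotation direction is inverted
        q[..., flip] *= -1.0
    t = (q - src_lo) / (src_hi - src_lo)      # normalize into [0, 1] on the source range
    return dst_lo + t * (dst_hi - dst_lo)     # rescale into the target range

src_lo, src_hi = np.array([-3.1, -1.5]), np.array([3.1, 1.5])   # source limits (made up)
dst_lo, dst_hi = np.array([-2.6, -1.2]), np.array([2.6, 1.2])   # PIPER limits (made up)
print(remap_joints([0.5, -0.3], src_lo, src_hi, dst_lo, dst_hi, flip=[1]))
```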
https://github.com/huggingface/lerobot/issues/2351
closed
[ "question", "policies" ]
2025-10-31T14:55:35Z
2025-12-14T14:47:04Z
null
yquanli
vllm-project/vllm
27,880
[Installation]: [HELP] How to install the latest main version of vLLM
### Your current environment I clone the vllm code, and run install commands, but it fails, Help!! ### How you are installing vllm ```sh VLLM_USE_PRECOMPILED=1 uv pip install --editable . Using Python 3.10.12 environment at: /home/alice/.venv × No solution found when resolving dependencies: ╰─▶ Because there is no version of xformers{platform_machine == 'x86_64' and sys_platform == 'linux'}==0.0.33+5d4b92a5.d20251029 and vllm==0.11.1rc6.dev16+g933cdea44.precompiled depends on xformers{platform_machine == 'x86_64' and sys_platform == 'linux'}==0.0.33+5d4b92a5.d20251029, we can conclude that vllm==0.11.1rc6.dev16+g933cdea44.precompiled cannot be used. And because only vllm==0.11.1rc6.dev16+g933cdea44.precompiled is available and you require vllm, we can conclude that your requirements are unsatisfiable. (alice) alice@dc53-p31-t0-n067:~/vllm_bak$ uv pip install -e . Using Python 3.10.12 environment at: /home/alice/.venv × No solution found when resolving dependencies: ╰─▶ Because there is no version of xformers{platform_machine == 'x86_64' and sys_platform == 'linux'}==0.0.33+5d4b92a5.d20251029 and vllm==0.11.1rc6.dev16+g933cdea44.cu126 depends on xformers{platform_machine == 'x86_64' and sys_platform == 'linux'}==0.0.33+5d4b92a5.d20251029, we can conclude that vllm==0.11.1rc6.dev16+g933cdea44.cu126 cannot be used. And because only vllm==0.11.1rc6.dev16+g933cdea44.cu126 is available and you require vllm, we can conclude that your requirements are unsatisfiable.``` ### Before submitting a new issue... - [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
https://github.com/vllm-project/vllm/issues/27880
closed
[ "installation" ]
2025-10-31T13:57:20Z
2025-11-13T07:25:13Z
7
sleepwalker2017
vllm-project/vllm
27,877
[Usage]: How to install the nightly version? Why doesn't this command work?
### Your current environment I run this to install vllm with the latest code. But, the installed vllm doesn't include the code I need. I check the `siglip.py` file, it's modified 4 days ago. But in the vllm installed, it doesn't contain this commit! https://github.com/vllm-project/vllm/pull/27566/files#diff-ca771e5a262cbf32fb481c518bea41d0e341414e021d6542e421abb98cceec61 why is this? I use this command. ```text pip install -U vllm \ --pre \ --extra-index-url https://wheels.vllm.ai/nightly``` `pip install -U vllm \ --pre \ --extra-index-url https://wheels.vllm.ai/nightly Defaulting to user installation because normal site-packages is not writeable Looking in indexes: https://bytedpypi.byted.org/simple, https://bytedpypi.byted.org/simple, https://wheels.vllm.ai/nightly Requirement already satisfied: vllm in /home/alice/.local/lib/python3.10/site-packages (0.11.0) Collecting vllm Downloading https://wheels.vllm.ai/nightly/vllm-0.11.1rc6.dev16%2Bg933cdea44.cu129-cp38-abi3-manylinux1_x86_64.whl (479.0 MB) ━╺━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 17.8/479.0 MB 575.3 kB/s eta 0:13:22` ### How would you like to use vllm I want to run inference of a [specific model](put link here). I don't know how to integrate it with vllm. ### Before submitting a new issue... - [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
https://github.com/vllm-project/vllm/issues/27877
open
[ "usage" ]
2025-10-31T12:29:51Z
2025-10-31T12:38:19Z
0
sleepwalker2017
pytorch/pytorch
166,721
Reference cycle in PyCodegen keeps tensors alive longer than necessary, leading to OOM issues
### 🐛 Describe the bug PR with fix: https://github.com/pytorch/pytorch/pull/166714 Recursive function call creates a reference cycle: closure <- function <- cell inside closure Capturing self (PyCodegen instance) in same closure prolongs it's life until next gc.collect() which might result in worse resource management After the introduction of https://github.com/pytorch/pytorch/commit/e9209e08540e9edc69259ef0c6c715e0aa7c1b07 OOM issues has been observed. Looking for reference cycles one has been uncovered that would result in the prolonging lifetime of tensors. As the result of that OOM issues might occur. Such a dependency chain has been uncovered: <img width="1059" height="540" alt="Image" src="https://github.com/user-attachments/assets/f242f45a-04b3-4520-9e97-692f02b1ba66" /> At the end of it a reference cycle can be found that consists of a closure for function collect_temp_source, the function itself, and a cell object inside closure that would point to the function due to the recursive call. This issue can either be resolved by removing recurrency or removing PyCodegen instance from the closure. Another precaution that can be made is to explicitly empty f_locals dict. This way we cut the tensor from the chain leading to reference cycle. ### Error logs _No response_ ### Versions PyTorch version: 2.9.0+hpu_1.24.0-97.git4c6d653 Is debug build: False CUDA used to build PyTorch: None ROCM used to build PyTorch: N/A OS: Ubuntu 24.04.3 LTS (x86_64) GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0 Clang version: Could not collect CMake version: version 3.28.3 Libc version: glibc-2.39 Python version: 3.12.3 (main, Aug 14 2025, 17:47:21) [GCC 13.3.0] (64-bit runtime) Python platform: Linux-6.8.0-57-generic-x86_64-with-glibc2.39 Is CUDA available: False CUDA runtime version: No CUDA CUDA_MODULE_LOADING set to: N/A GPU models and configuration: No CUDA Nvidia driver version: No CUDA cuDNN version: No CUDA Is XPU available: False HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True 13:56:57 [32/1983] CPU: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Address sizes: 52 bits physical, 57 bits virtual Byte Order: Little Endian CPU(s): 224 On-line CPU(s) list: 0-223 Vendor ID: GenuineIntel Model name: Intel(R) Xeon(R) Platinum 8480+ CPU family: 6 Model: 143 Thread(s) per core: 2 Core(s) per socket: 56 Socket(s): 2 Stepping: 8 CPU(s) scaling MHz: 34% CPU max MHz: 3800.0000 CPU min MHz: 800.0000 BogoMIPS: 4000.00 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid ap erfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 intel_ppin cd p_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpci d cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xget bv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect user_shstk avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req vnmi avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni av x512_bitalg tme 
avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr ibt amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities Virtualization: VT-x L1d cache: 5.3 MiB (112 instances) L1i cache: 3.5 MiB (112 instances) L2 cache: 224 MiB (112 instances) L3 cache: 210 MiB (2 instances) NUMA node(s): 2 NUMA node0 CPU(s): 0-55,112-167 NUMA node1 CPU(s): 56-111,168-223 Vulnerability Gather data sampling: Not affected Vulnerability Itlb multihit: Not affected Vulnerability L1tf: Not affected Vulnerability Mds: Not affected Vulnerability Meltdown: N
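A minimal standalone reproduction of the cycle pattern described above (plain Python, no Dynamo; `FakeTensor` stands in for a tensor held in `f_locals`):

```python
import gc

class FakeTensor:  # stands in for a real tensor kept alive by the cycle
    pass

def codegen_like():
    payload = FakeTensor()
    def collect_temp_source(depth=0):
        _ = payload                          # closure also captures the payload
        if depth < 3:
            collect_temp_source(depth + 1)   # recursive call: the closure cell for
                                             # `collect_temp_source` points back at the
                                             # function itself, forming a reference cycle

    collect_temp_source()

gc.collect()          # start from a clean state
codegen_like()        # the function <-> cell cycle now keeps FakeTensor alive
print(gc.collect())   # non-zero: the payload is only freed when the cyclic GC runs
```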
https://github.com/pytorch/pytorch/issues/166721
closed
[ "triaged", "oncall: pt2", "module: dynamo" ]
2025-10-31T12:02:30Z
2025-11-07T17:52:57Z
1
jwieczorekhabana
vllm-project/vllm
27,875
[Usage]: How to get profiler output from the OpenAI server
### Your current environment ```text INFO 10-31 10:27:06 [importing.py:17] Triton not installed or not compatible; certain GPU-related functions will not be available. WARNING 10-31 10:27:06 [importing.py:29] Triton is not installed. Using dummy decorators. Install it via `pip install triton` to enable kernel compilation. INFO 10-31 10:27:08 [__init__.py:39] Available plugins for group vllm.platform_plugins: INFO 10-31 10:27:08 [__init__.py:41] - ascend -> vllm_ascend:register INFO 10-31 10:27:08 [__init__.py:44] All plugins in this group will be loaded. Set `VLLM_PLUGINS` to control which plugins to load. INFO 10-31 10:27:08 [__init__.py:235] Platform plugin ascend is activated WARNING 10-31 10:27:12 [_custom_ops.py:22] Failed to import from vllm._C with ModuleNotFoundError("No module named 'vllm._C'") Collecting environment information... PyTorch version: 2.5.1 Is debug build: False OS: Ubuntu 22.04.5 LTS (aarch64) GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 Clang version: Could not collect CMake version: version 4.1.0 Libc version: glibc-2.35 Python version: 3.11.13 (main, Jul 26 2025, 07:27:32) [GCC 11.4.0] (64-bit runtime) Python platform: Linux-5.10.0-60.18.0.50.r865_35.hce2.aarch64-aarch64-with-glibc2.35 CPU: Architecture: aarch64 CPU op-mode(s): 64-bit Byte Order: Little Endian CPU(s): 192 On-line CPU(s) list: 0-191 Vendor ID: HiSilicon BIOS Vendor ID: HiSilicon Model name: Kunpeng-920 BIOS Model name: HUAWEI Kunpeng 920 5250 Model: 0 Thread(s) per core: 1 Core(s) per socket: 48 Socket(s): 4 Stepping: 0x1 BogoMIPS: 200.00 Flags: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma dcpop asimddp asimdfhm ssbs L1d cache: 12 MiB (192 instances) L1i cache: 12 MiB (192 instances) L2 cache: 96 MiB (192 instances) L3 cache: 192 MiB (8 instances) NUMA node(s): 8 NUMA node0 CPU(s): 0-23 NUMA node1 CPU(s): 24-47 NUMA node2 CPU(s): 48-71 NUMA node3 CPU(s): 72-95 NUMA node4 CPU(s): 96-119 NUMA node5 CPU(s): 120-143 NUMA node6 CPU(s): 144-167 NUMA node7 CPU(s): 168-191 Vulnerability Itlb multihit: Not affected Vulnerability L1tf: Not affected Vulnerability Mds: Not affected Vulnerability Meltdown: Not affected Vulnerability Mmio stale data: Not affected Vulnerability Retbleed: Not affected Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl Vulnerability Spectre v1: Mitigation; __user pointer sanitization Vulnerability Spectre v2: Not affected Vulnerability Srbds: Not affected Vulnerability Tsx async abort: Not affected Versions of relevant libraries: [pip3] numpy==1.26.4 [pip3] pyzmq==27.0.2 [pip3] torch==2.5.1 [pip3] torch-npu==2.5.1.post1 [pip3] torchvision==0.20.1 [pip3] transformers==4.52.4 [conda] Could not collect vLLM Version: 0.9.1 vLLM Ascend Version: 0.9.2.dev0+g0740d1021.d20251029 (git sha: 0740d1021, date: 20251029) ENV Variables: ATB_OPSRUNNER_KERNEL_CACHE_LOCAL_COUNT=1 ATB_STREAM_SYNC_EVERY_RUNNER_ENABLE=0 ATB_OPSRUNNER_SETUP_CACHE_ENABLE=1 ATB_WORKSPACE_MEM_ALLOC_GLOBAL=0 ATB_DEVICE_TILING_BUFFER_BLOCK_NUM=32 ATB_STREAM_SYNC_EVERY_KERNEL_ENABLE=0 VLLM_TORCH_PROFILER_DIR=/workspace/prof ATB_OPSRUNNER_KERNEL_CACHE_GLOABL_COUNT=5 ATB_HOME_PATH=/usr/local/Ascend/nnal/atb/latest/atb/cxx_abi_0 ASCEND_TOOLKIT_HOME=/usr/local/Ascend/ascend-toolkit/latest ATB_COMPARE_TILING_EVERY_KERNEL=0 ASCEND_OPP_PATH=/usr/local/Ascend/ascend-toolkit/latest/opp 
LD_LIBRARY_PATH=/usr/local/Ascend/nnal/atb/latest/atb/cxx_abi_0/lib:/usr/local/Ascend/nnal/atb/latest/atb/cxx_abi_0/examples:/usr/local/Ascend/nnal/atb/latest/atb/cxx_abi_0/tests/atbopstest:/usr/local/Ascend/ascend-toolkit/latest/tools/aml/lib64:/usr/local/Ascend/ascend-toolkit/latest/tools/aml/lib64/plugin:/usr/local/Ascend/ascend-toolkit/latest/lib64:/usr/local/Ascend/ascend-toolkit/latest/lib64/plugin/opskernel:/usr/local/Ascend/ascend-toolkit/latest/lib64/plugin/nnengine:/usr/local/Ascend/ascend-toolkit/latest/opp/built-in/op_impl/ai_core/tbe/op_tiling/lib/linux/aarch64:/usr/local/Ascend/nnal/atb/latest/atb/cxx_abi_0/lib:/usr/local/Ascend/nnal/atb/latest/atb/cxx_abi_0/examples:/usr/local/Ascend/nnal/atb/latest/atb/cxx_abi_0/tests/atbopstest:/usr/local/Ascend/ascend-toolkit/latest/tools/aml/lib64:/usr/local/Ascend/ascend-toolkit/latest/tools/aml/lib64/plugin:/usr/local/Ascend/ascend-toolkit/latest/lib64:/usr/local/Ascend/ascend-toolkit/latest/lib64/plugin/opskernel:/usr
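For reference, a sketch of the profiling flow on the upstream OpenAI-compatible server, assuming `VLLM_TORCH_PROFILER_DIR` is set (as in the environment above) and that the `/start_profile` / `/stop_profile` routes are available in this vllm-ascend build:

```python
import requests
from openai import OpenAI

BASE = "http://localhost:8000"  # adjust to the server's host/port

requests.post(f"{BASE}/start_profile")  # begin a torch.profiler trace on the workers

client = OpenAI(base_url=f"{BASE}/v1", api_key="EMPTY")
client.completions.create(
    model="your-served-model-name",  # placeholder served model name
    prompt="Hello",
    max_tokens=16,
)

requests.post(f"{BASE}/stop_profile")   # traces are written under VLLM_TORCH_PROFILER_DIR
```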
https://github.com/vllm-project/vllm/issues/27875
closed
[ "usage" ]
2025-10-31T10:33:49Z
2025-10-31T14:38:04Z
1
zhaohaixu
vllm-project/vllm
27,872
[Feature]: AFD: support loading a custom connector module from a local path
### 🚀 The feature, motivation and pitch Add an `afd_connector_module_path` field to AFDConfig so that users can implement a custom AFD connector without having to change vLLM code. To be done after https://github.com/vllm-project/vllm/pull/25162 is merged. ### Alternatives _No response_ ### Additional context _No response_ ### Before submitting a new issue... - [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
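A rough illustration of the proposed field (not actual vLLM code; the class and field names follow the pitch above):

```python
from dataclasses import dataclass
from importlib import import_module

@dataclass
class AFDConfig:                                   # illustrative only, not vLLM's class
    afd_connector: str = "default"
    afd_connector_module_path: str | None = None   # the proposed field, e.g. "my_pkg.my_connector.MyAFDConnector"

def load_connector_cls(cfg: AFDConfig):
    # If a module path is given, import the user's connector class from it instead of
    # requiring the class to live inside the vLLM source tree.
    if cfg.afd_connector_module_path:
        module_name, _, class_name = cfg.afd_connector_module_path.rpartition(".")
        return getattr(import_module(module_name), class_name)
    raise ValueError("no custom connector module path configured")
```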
https://github.com/vllm-project/vllm/issues/27872
open
[ "feature request" ]
2025-10-31T09:08:50Z
2025-12-08T03:32:33Z
1
lengrongfu
huggingface/trl
4,413
What is the default value of num_processes?
Based on the documentation page docs/source/grpo_trainer.md, num_processes is used, but nowhere does the documentation define what num_processes is or what its default value is.
https://github.com/huggingface/trl/issues/4413
closed
[ "📚 documentation", "❓ question", "🏋 GRPO" ]
2025-10-31T05:01:23Z
2025-10-31T17:31:33Z
null
thisisraghavkumar
huggingface/diffusers
12,564
[Proposals Welcome] Fal Flashpack integration for faster model loading
Hey! 👋 We've had a request to explore integrating Fal's Flashpack for faster DiT and Text Encoder loading (https://github.com/huggingface/diffusers/issues/12550). Before we jump into implementation, we wanted to open this up to the community to gather ideas and hear from anyone who's experimented with this. We'd love your input on: 1. Performance: Has anyone tried it? What kind of speedups did you see? Are there any performance trade-offs? 2. Integration Design: How would you approach it if you were to integrating this into Diffusers? Describe your design at a high level - how would we support this in our existing framework and what would the API look like? We're looking for proposals and ideas rather than PRs at this stage. We're genuinely interested in hearing different approaches and perspectives from the community on this. Feel free to share your thoughts!
https://github.com/huggingface/diffusers/issues/12564
open
[ "help wanted", "contributions-welcome" ]
2025-10-31T02:25:55Z
2025-10-31T12:26:13Z
2
yiyixuxu
vllm-project/vllm
27,832
[RFC]: Remap `CompilationConfig` from `-O` to `-cc` in CLI
### Motivation. With #20283 (and #26847), we're repurposing `-O0`/`-O1`/`-O2`/`-O3` to map to `optimization_level` instead of `CompilationConfig.level`/`CompilationConfig.mode`. This leaves us in a slightly confusing state where `-O` can refer to optimization level or compilation config depending on what follows it: - `-O0` -> `optimization_level=0` - `-O 3` -> `optimization_level=3` - `-O {"cudagraph_mode": "NONE"}` -> `CompilationConfig(cudagraph_mode="NONE")` - `-O.use_inductor=False` -> `CompilationConfig(use_inductor=False)` - `--compilation-config.backend=eager` -> `CompilationConfig(backend="eager")` This is bad UX, and we should fix it. However, a CLI shorthand for `CompilationConfig` is still needed so users can easily compose different properties. ### Proposed Change. We should create a new shorthand for `CompilationConfig` should be `-cc`. Other options are `-c` and `-C`, but as discussed [here](https://github.com/vllm-project/vllm/pull/26847#discussion_r2439248068), single letters are not "pythonic" and capital letters are worse (extra `Shift` keystroke + less pythonic). However, the exact shorthand is up for discussion. React below to cast your vote. Example changes: - `-O0` -> `-O0` (unchanged) - `-O 3` -> `-O 3` (unchanged) - `-O {"cudagraph_mode": "NONE"}` -> `-cc {"cudagraph_mode": "NONE"}` - `-O.use_inductor=False` -> `-cc.use_inductor=False` - `--compilation-config.backend=eager` -> `--compilation-config.backend=eager` (unchanged) ### Feedback Period. One week, 10/30 - 11/5 ### CC List. @hmellor @morrison-turnansky @zou3519 ### Any Other Things. Vote for your preferred shorthand: - 👍 for `-cc` - 👎 for `-O` (keep it the same) - 🎉 for `-C` - 🚀 for `-c` ### Before submitting a new issue... - [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
https://github.com/vllm-project/vllm/issues/27832
closed
[ "help wanted", "good first issue", "RFC", "torch.compile" ]
2025-10-30T20:29:31Z
2025-11-28T21:51:13Z
3
ProExpertProg
huggingface/trl
4,407
Complete paper index
These are the papers mentioned at least one in the codebase. - [ ] https://huggingface.co/papers/1707.06347 - [x] https://huggingface.co/papers/1909.08593 (only mentioned in notebook, no need to have in paper index) - [x] https://huggingface.co/papers/1910.02054 #4551 - [ ] https://huggingface.co/papers/1910.10683 - [x] https://huggingface.co/papers/2106.09685 #4441 - [ ] https://huggingface.co/papers/2211.14275 - [x] https://huggingface.co/papers/2305.10425 #3990 - [x] https://huggingface.co/papers/2305.18290 #3937 - [ ] https://huggingface.co/papers/2306.13649 - [x] https://huggingface.co/papers/2307.09288 #4094 - [x] https://huggingface.co/papers/2309.06657 #4441 - [ ] https://huggingface.co/papers/2309.16240 #3906 - [x] https://huggingface.co/papers/2310.12036 #3990 - [ ] https://huggingface.co/papers/2312.00886 - [x] https://huggingface.co/papers/2312.09244 #4094 - [ ] https://huggingface.co/papers/2401.08417 - [x] https://huggingface.co/papers/2402.00856 #3990 - [x] https://huggingface.co/papers/2402.01306 #4440 - [x] https://huggingface.co/papers/2402.03300 #4441 - [ ] https://huggingface.co/papers/2402.04792 - [x] https://huggingface.co/papers/2402.05369 #3990 - [ ] https://huggingface.co/papers/2402.09353 - [x] https://huggingface.co/papers/2402.14740 #3801 - [x] https://huggingface.co/papers/2403.00409 #3990 - [ ] https://huggingface.co/papers/2403.07691 - [x] https://huggingface.co/papers/2403.17031 (these are implementations details, no need to have in paper index) - [x] https://huggingface.co/papers/2404.04656 #3990 - [ ] https://huggingface.co/papers/2404.09656 - [ ] https://huggingface.co/papers/2404.19733 - [x] https://huggingface.co/papers/2405.00675 #3900 - [ ] https://huggingface.co/papers/2405.14734 - [ ] https://huggingface.co/papers/2405.16436 - [ ] https://huggingface.co/papers/2405.21046 - [x] https://huggingface.co/papers/2406.05882 #3990 - [x] https://huggingface.co/papers/2406.08414 #3990 - [ ] https://huggingface.co/papers/2406.11827 #3906 - [x] https://huggingface.co/papers/2407.21783 (LLaMA 3 paper, no need to have in paper index) - [x] https://huggingface.co/papers/2408.06266 #3990 - [ ] https://huggingface.co/papers/2409.06411 #3906 - [ ] https://huggingface.co/papers/2409.20370 - [ ] https://huggingface.co/papers/2411.10442 - [ ] https://huggingface.co/papers/2501.03262 - [x] https://huggingface.co/papers/2501.03884 #3824 - [ ] https://huggingface.co/papers/2501.12599 (Kimi 1.5 paper mentioned in an example, no need to have in paper index) - [ ] https://huggingface.co/papers/2501.12948 - [x] https://huggingface.co/papers/2503.14476 #3937 - [x] https://huggingface.co/papers/2503.20783 #3937 - [x] https://huggingface.co/papers/2503.24290 (link to justify beta=0 in the doc, no need to have in paper index) - [ ] https://huggingface.co/papers/2505.07291 - [x] https://huggingface.co/papers/2506.01939 #4580 - [x] https://huggingface.co/papers/2507.18071 #3775 - [x] https://huggingface.co/papers/2508.00180 #3855 - [x] https://huggingface.co/papers/2508.05629 #4042 - [x] https://huggingface.co/papers/2508.08221 #3935 - [x] https://huggingface.co/papers/2508.09726 #3989
https://github.com/huggingface/trl/issues/4407
open
[ "📚 documentation" ]
2025-10-30T20:23:26Z
2025-12-24T05:50:21Z
4
qgallouedec
vllm-project/vllm
27,830
[Usage]: GPT-OSS 120b on L40S (Ada)
### Your current environment (Just a general question) ### How would you like to use vllm I want to run inference of GPT-OSS 120b with multiple L40S GPUs. I read the [docs](https://docs.vllm.ai/projects/recipes/en/latest/OpenAI/GPT-OSS.html), which clearly say it is not natively supported yet. After I had no success with vLLM, it worked plug-and-play with Ollama. My question is whether there is a roadmap where I can see the progress. Or is it even possible to contribute to solving that problem? Unfortunately I am not familiar with GPUs; however, I need to get it running. Any suggestion is highly appreciated; even a clear description of the problem and of what would be required to solve it would be a real help. Thank you. ### Before submitting a new issue... - [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
https://github.com/vllm-project/vllm/issues/27830
closed
[ "usage" ]
2025-10-30T20:07:42Z
2025-11-17T12:46:43Z
6
Hansehart
vllm-project/vllm
27,823
[Doc]: Multi-node distributed guide issues
### 📚 The doc issue For context, see a recent issue (https://github.com/ROCm/ROCm/issues/5567) where a user was trying to set up distributed inference with `ray` by following guidance at https://docs.vllm.ai/en/v0.8.0/serving/distributed_serving.html#running-vllm-on-multiple-nodes. I ran into several issues setting this up on AMD GPUs that I believe might be deficiencies in the vLLM docs: - The `run_cluster.sh` script passes `--gpus all` which I believe is NVIDIA-only, needed to remove this from the script - I had to add `--distributed_executor_backend="ray"` to the `vllm serve` command to get vLLM to use the `ray` cluster that the script sets up - I had to set NCCL_SOCKET_IFNAME and GLOO_SOCKET_IFNAME to the appropriate network interfaces, otherwise ran into a NCCL connection error - Relevant environment variables (NCCL_SOCKET_IFNAME, GLOO_SOCKET_IFNAME, NCCL_DEBUG) are not propagated to the Docker containers that the script creates; I worked around this by adding them to the `ray` invocation in `run_cluster.sh`, but I don't see a reason why the script shouldn't pass these to the container automatically I also needed to set `--enforce-eager` but I believe that is an issue specific to our current rocm/vllm Docker images. For the above issues I'm not sure which are general gaps in the documentation, which are AMD-specific, and which might have arisen from our Docker images. The image I used and got working was `rocm/vllm:latest` which at the time had vLLM 0.11. ### Suggest a potential alternative/fix _No response_ ### Before submitting a new issue... - [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
https://github.com/vllm-project/vllm/issues/27823
open
[ "documentation" ]
2025-10-30T18:33:04Z
2025-10-30T18:33:04Z
0
schung-amd
huggingface/trl
4,399
Update or remove some of the notebooks
I suspect these notebooks are outdated; if so, they should be either updated or removed. - gpt2-sentiment-control.ipynb - best_of_n.ipynb - gpt2-sentiment.ipynb
https://github.com/huggingface/trl/issues/4399
closed
[ "📚 documentation" ]
2025-10-30T15:34:36Z
2025-11-04T23:52:50Z
0
qgallouedec
huggingface/trl
4,397
Remove or move Multi Adapter RL
I don't think it makes sense to have this as a whole section in the doc. Either remove it, or update it and move it to the PEFT integration guide.
https://github.com/huggingface/trl/issues/4397
closed
[ "📚 documentation", "⚡ PEFT" ]
2025-10-30T15:12:58Z
2025-11-04T23:57:56Z
0
qgallouedec
pytorch/pytorch
166,633
Command '['ninja', '-v']' returned non-zero exit status 255.
### 🐛 Describe the bug I'm not sure it's linked to this warning message #[166580](https://github.com/pytorch/pytorch/issues/166580) and if it's a bug or how to correct it ``` ptxas info : Used 128 registers, used 16 barriers, 104 bytes cumulative stack size ptxas info : Compile time = 486.393 ms ptxas info : Compiling entry function '_ZN7cutlass13device_kernelIN5flash20enable_sm90_or_laterINS1_16FlashAttnFwdSm90INS1_25CollectiveMainloopFwdSm90ILi2EN4cute5tupleIJNS5_1CILi1EEES8_S8_EEENS6_IJNS7_ILi192EEENS7_ILi160EEENS7_ILi64EEEEEELi64ENS_12float_e4m3_tEfNS_4arch4Sm90ELb1ELb0ELb0ELb0ELb1ELb0ELb0ELb1ELb1ELb1ELb1ELb0EEENS1_21CollectiveEpilogueFwdINS6_IJSA_SC_SB_EEES9_NS_10bfloat16_tESG_Li384ELb0ELb1ELb1ELb1EEENS1_19SingleTileSchedulerILb0ELb1ELb1ELi192EEEEEEEEEvNT_6ParamsE' for 'sm_90a' ptxas info : Function properties for _ZN7cutlass13device_kernelIN5flash20enable_sm90_or_laterINS1_16FlashAttnFwdSm90INS1_25CollectiveMainloopFwdSm90ILi2EN4cute5tupleIJNS5_1CILi1EEES8_S8_EEENS6_IJNS7_ILi192EEENS7_ILi160EEENS7_ILi64EEEEEELi64ENS_12float_e4m3_tEfNS_4arch4Sm90ELb1ELb0ELb0ELb0ELb1ELb0ELb0ELb1ELb1ELb1ELb1ELb0EEENS1_21CollectiveEpilogueFwdINS6_IJSA_SC_SB_EEES9_NS_10bfloat16_tESG_Li384ELb0ELb1ELb1ELb1EEENS1_19SingleTileSchedulerILb0ELb1ELb1ELi192EEEEEEEEEvNT_6ParamsE 0 bytes stack frame, 0 bytes spill stores, 0 bytes spill loads ptxas info : Used 128 registers, used 9 barriers ptxas info : Compile time = 187.196 ms ptxas info : Compiling entry function '_ZN7cutlass13device_kernelIN5flash20enable_sm90_or_laterINS1_16FlashAttnFwdSm90INS1_25CollectiveMainloopFwdSm90ILi2EN4cute5tupleIJNS5_1CILi1EEES8_S8_EEENS6_IJNS7_ILi192EEENS7_ILi160EEENS7_ILi64EEEEEELi64ENS_12float_e4m3_tEfNS_4arch4Sm90ELb1ELb0ELb0ELb1ELb1ELb0ELb0ELb1ELb1ELb1ELb1ELb0EEENS1_21CollectiveEpilogueFwdINS6_IJSA_SC_SB_EEES9_NS_10bfloat16_tESG_Li384ELb1ELb1ELb1ELb1EEENS1_36VarlenDynamicPersistentTileSchedulerILi192ELi384ELi128ELb1ELb1ELb1EEEEEEEEEvNT_6ParamsE' for 'sm_90a' ptxas info : Function properties for _ZN7cutlass13device_kernelIN5flash20enable_sm90_or_laterINS1_16FlashAttnFwdSm90INS1_25CollectiveMainloopFwdSm90ILi2EN4cute5tupleIJNS5_1CILi1EEES8_S8_EEENS6_IJNS7_ILi192EEENS7_ILi160EEENS7_ILi64EEEEEELi64ENS_12float_e4m3_tEfNS_4arch4Sm90ELb1ELb0ELb0ELb1ELb1ELb0ELb0ELb1ELb1ELb1ELb1ELb0EEENS1_21CollectiveEpilogueFwdINS6_IJSA_SC_SB_EEES9_NS_10bfloat16_tESG_Li384ELb1ELb1ELb1ELb1EEENS1_36VarlenDynamicPersistentTileSchedulerILi192ELi384ELi128ELb1ELb1ELb1EEEEEEEEEvNT_6ParamsE 64 bytes stack frame, 140 bytes spill stores, 156 bytes spill loads ptxas info : Used 128 registers, used 9 barriers, 64 bytes cumulative stack size ptxas info : Compile time = 260.783 ms ptxas info : Compiling entry function '_ZN7cutlass13device_kernelIN5flash20enable_sm90_or_laterINS1_16FlashAttnFwdSm90INS1_25CollectiveMainloopFwdSm90ILi2EN4cute5tupleIJNS5_1CILi1EEES8_S8_EEENS6_IJNS7_ILi192EEENS7_ILi160EEENS7_ILi64EEEEEELi64ENS_12float_e4m3_tEfNS_4arch4Sm90ELb1ELb0ELb0ELb1ELb1ELb1ELb0ELb1ELb1ELb1ELb1ELb0EEENS1_21CollectiveEpilogueFwdINS6_IJSA_SC_SB_EEES9_NS_10bfloat16_tESG_Li384ELb1ELb1ELb1ELb1EEENS1_36VarlenDynamicPersistentTileSchedulerILi192ELi384ELi128ELb1ELb1ELb1EEEEEEEEEvNT_6ParamsE' for 'sm_90a' ptxas info : Function properties for 
_ZN7cutlass13device_kernelIN5flash20enable_sm90_or_laterINS1_16FlashAttnFwdSm90INS1_25CollectiveMainloopFwdSm90ILi2EN4cute5tupleIJNS5_1CILi1EEES8_S8_EEENS6_IJNS7_ILi192EEENS7_ILi160EEENS7_ILi64EEEEEELi64ENS_12float_e4m3_tEfNS_4arch4Sm90ELb1ELb0ELb0ELb1ELb1ELb1ELb0ELb1ELb1ELb1ELb1ELb0EEENS1_21CollectiveEpilogueFwdINS6_IJSA_SC_SB_EEES9_NS_10bfloat16_tESG_Li384ELb1ELb1ELb1ELb1EEENS1_36VarlenDynamicPersistentTileSchedulerILi192ELi384ELi128ELb1ELb1ELb1EEEEEEEEEvNT_6ParamsE 104 bytes stack frame, 280 bytes spill stores, 344 bytes spill loads ptxas info : Used 128 registers, used 16 barriers, 104 bytes cumulative stack size ptxas info : Compile time = 384.035 ms ninja: build stopped: subcommand failed. Traceback (most recent call last): File "/workspace/LightX2V/venv/lib/python3.11/site-packages/torch/utils/cpp_extension.py", line 2506, in _run_ninja_build subprocess.run( File "/usr/lib/python3.11/subprocess.py", line 571, in run raise CalledProcessError(retcode, process.args, subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 255. The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/workspace/LightX2V/flash-attention/hopper/setup.py", line 622, in <module> setup( File "/workspace/LightX2V/venv/lib/python3.11/site-packages/setuptools/__init__.py", line 87, in setup return distutils.core.setup(**attrs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/workspace/LightX2V/venv/lib/python3.11/site-packages/setuptools/_distutils/core.py", line 185, in setup return run_commands(dist) ^^^^^^^^^^^^^^^^^^ File "/workspace/LightX2V/venv/lib/python3.11/site-packages/setuptools/_distuti
https://github.com/pytorch/pytorch/issues/166633
open
[ "needs reproduction", "module: cpp-extensions", "module: cuda", "triaged" ]
2025-10-30T11:07:43Z
2025-12-31T18:42:43Z
2
christopher5106
pytorch/torchtitan
1,968
Avoiding device-to-host sync for input/output split sizes in expert parallel
I want to use the torchtitan code for a different MoE model, and I saw that if EP is used, then for FSDP, the module prefetching for forward and backward has to be manually set. This would be quite cumbersome as more models are used, and there would not be an easy standard way to do EP + FSDP. I looked through the code in expert_parallel.py and it seems that the input_sizes and output_sizes are set based on the number of tokens assigned to each expert. Since the input/output split size arguments to dist.all_to_all_single are lists of ints, I understand that the expert counts must be moved from GPU -> CPU, which causes the D2H sync. However, it seems that dist.all_to_all just accepts a list of tensors, without any split size arguments. Would that avoid the D2H sync altogether? Or is the implementation underneath the same? For example, you could retrieve the list of inputs to each expert by using a mask or using index_select (instead of token reordering), and then use that as the input to dist.all_to_all. Would such an implementation simplify things and remove the D2H sync? Furthermore, DeepSpeed's MoE implementation seems to use the maximum capacity among all the experts and then doesn't specify the input/output split sizes (as it is an even split). Would this circumvent the D2H sync (at the expense of extra padded communication)?
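To make the comparison concrete, a sketch of the two collective APIs being discussed (not torchtitan code; it assumes an already-initialized process group, e.g. under torchrun):

```python
import torch
import torch.distributed as dist

# dist.all_to_all takes lists of tensors, so no Python-int split sizes are needed,
# but the *output* tensor shapes still have to be known on the host to allocate
# them, so the per-expert counts generally still reach the CPU somewhere.
def a2a_with_lists(inputs_per_rank: list[torch.Tensor], out_shapes: list[tuple[int, int]]):
    outputs = [
        torch.empty(s, device=inputs_per_rank[0].device, dtype=inputs_per_rank[0].dtype)
        for s in out_shapes
    ]
    dist.all_to_all(outputs, inputs_per_rank)
    return outputs

# The current path: flat tensor plus lists of ints, which is where the D2H sync
# on the expert counts comes from.
def a2a_single(x: torch.Tensor, in_splits: list[int], out_splits: list[int]):
    out = x.new_empty((sum(out_splits), x.shape[-1]))
    dist.all_to_all_single(out, x, out_splits, in_splits)
    return out
```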
https://github.com/pytorch/torchtitan/issues/1968
closed
[ "question" ]
2025-10-30T10:00:34Z
2025-11-12T22:29:19Z
null
man2machine
huggingface/transformers
41,948
Does Qwen2VLImageProcessor treat two consecutive images as one group/feature?
When looking at the Qwen3-VL model's image processor (which uses Qwen2-VL's), I found the following lines of code hard to understand. `L296-300` checks the number of input images (`patches.shape[0]`) and repeats the last one to make it divisible by `temporal_patch_size`. This would make the model process two consecutive images as a single feature, due to the use of a 3D conv with temporal_patch_size=2 by default. https://github.com/huggingface/transformers/blob/76fc50a1527a7db593a6057903b749598f7000a9/src/transformers/models/qwen2_vl/image_processing_qwen2_vl.py#L293-L300 But as I understand it, the Qwen2-VL paper mentions that it repeats each input image `temporal_patch_size` times. Did I misunderstand the code? <img width="787" height="205" alt="Image" src="https://github.com/user-attachments/assets/fc697460-e0a2-49fa-99b8-ea3e733bb097" />
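A small shape-only sketch of the two readings (NumPy only, mirroring the repeat logic quoted above). Note that if, as one reading of the processor goes, `_preprocess` is called once per still image, then `patches.shape[0]` is 1 and the pad-to-divisible branch reduces to duplicating that single image, which would match the paper; the two readings only diverge when several frames go through one call (e.g. video):

```python
import numpy as np

temporal_patch_size = 2

# Single still image: patches has shape (1, C, H, W); the processor pads the
# time axis by repeating the last (here: only) frame until it is divisible by 2.
patches = np.zeros((1, 3, 28, 28))
if patches.shape[0] % temporal_patch_size != 0:
    repeats = np.repeat(patches[-1][np.newaxis], temporal_patch_size - 1, axis=0)
    patches = np.concatenate([patches, repeats], axis=0)
print(patches.shape)   # (2, 3, 28, 28): the one image duplicated -> one temporal group

# The paper's wording ("repeat each image temporal_patch_size times") applied to a
# batch of N distinct frames would instead give N groups rather than N/2:
frames = np.zeros((3, 3, 28, 28))
per_image = np.repeat(frames, temporal_patch_size, axis=0)
print(per_image.shape)  # (6, 3, 28, 28)
```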
https://github.com/huggingface/transformers/issues/41948
closed
[]
2025-10-30T09:23:50Z
2025-10-31T01:01:09Z
3
priancho
huggingface/transformers
41,947
Why is SmolVLM-256M-Instruct slower than InternVL2-1B?
As title, Smolvlm have smaller model size (1/4 less matrix multiplication), smaller input embedding. But, both torch.CudaEvent, timer.perf_counter with torch.sync report the slower inference time ? I wonder that does this related with the wrong implementation of Smolvlm in transformers ? inference performance comparison : internvl-1B > inp_embed : (1, 547, 896) trainable params: 17,596,416 || all params: 647,260,288 || trainable%: 2.7186 smolvlm-256M > inp_embed : (1, 171, 576) trainable params: 9,768,960 || all params: 172,742,976 || trainable%: 5.6552 --- model init (all flags turns on, especially flash attention!) : ```python if 'internvl' in self.variant.lower(): if '3_5' in self.variant: self.model = AutoModelForImageTextToText.from_pretrained(self.variant, torch_dtype=torch.bfloat16, low_cpu_mem_usage=True, use_flash_attn=True, trust_remote_code=True) # internvl3.5, lm_head is not part of language_model !? lm_head = self.model.lm_head self.model = self.model.language_model self.model.lm_head = lm_head else: self.model = AutoModel.from_pretrained("OpenGVLab/InternVL2-1B", torch_dtype=torch.bfloat16, low_cpu_mem_usage=True, use_flash_attn=True, trust_remote_code=True) self.model = self.model.language_model try: self.model.embed_tokens = self.model.base_model.embed_tokens except: self.model.embed_tokens = self.model.model.tok_embeddings elif 'smolvlm' in self.variant.lower(): self.model = AutoModelForImageTextToText.from_pretrained("HuggingFaceTB/SmolVLM-256M-Instruct", torch_dtype=torch.bfloat16, _attn_implementation="flash_attention_2", trust_remote_code=True) lm_head = self.model.lm_head self.model = self.model.model.text_model self.model.lm_head = lm_head # self.model.embed_tokens already built-in! else: raise ValueError(f"Carefull: Variant {self.variant} not tested.") ``` code snippet to measure fps : ```python for _ in range(30): _, _, _ = self.model(model_input) print('warm up done!') prof = Tch_prof(device=self.device) #prof = CudaEvent_Tch_prof(device=self.device) with torch.no_grad(): with prof: pred_speed_wps, pred_route, language = self.model(model_input, device=self.device) # timer + sync : # internvl v2-1b, lang mode : 0.3302s > 330ms ; no-lang mode : 0.0972s > 97ms (10 FPS) ? # smolvlm 256m, 0.3974s > 390ms ; no-lang : 0.1 s > 100ms ? # CudaEvent + sync : # internvl v2-1b, no-lang : 82.55ms ? # smolvlm 256m > no-lang : 90.68ms ? print(prof.get_profile()) ``` code snippet for timer classes : ```python class Tch_prof(object): def __init__(self, device): self.device = device self.hw_type = 'gpu' self.tlt_time = { 'cpu' : 0, 'gpu' : 0 } def __enter__(self): torch.cuda.current_stream(self.device).synchronize() self.s = time.perf_counter() def __exit__(self, *exc): torch.cuda.current_stream(self.device).synchronize() self.tlt_time[self.hw_type] += time.perf_counter() - self.s def get_profile(self, hw_type='all'): if hw_type == 'all': return self.tlt_time elif hw_type in self.tlt_time.keys(): return self.tlt_time[hw_type] else: raise RuntimeError(f"No such hardware type {hw_type}") class CudaEvent_Tch_prof(object): def __init__(self, device): self.device = device self.start = torch.cuda.Event(enable_timing=True) self.end = torch.cuda.Event(enable_timing=True) def __enter__(self): self.start.record() def __exit__(self, *exc): self.end.record() torch.cuda.current_stream(self.device).synchronize() self.tlt_time = self.start.elapsed_time(self.end) def get_profile(self): return self.tlt_time ``` Any suggestion will be helpful !!
https://github.com/huggingface/transformers/issues/41947
closed
[]
2025-10-30T08:10:28Z
2025-10-31T11:47:44Z
4
HuangChiEn
huggingface/trl
4,386
Reference supported trainers in Liger Kernel integration guide
Currently, we only have an example with SFT, and it's hard to know which trainers support Liger. We should list the trainers that support Liger.
https://github.com/huggingface/trl/issues/4386
closed
[ "📚 documentation", "🏋 SFT" ]
2025-10-30T04:08:04Z
2025-11-03T18:16:04Z
0
qgallouedec
huggingface/trl
4,385
Use a common `trl-lib` namespace for the models/datasets/spaces
In the doc, we have examples using different namespaces, like `kashif/stack-llama-2`, `edbeeching/gpt-neo-125M-imdb`, etc. We should unify all these examples to use a common `trl-lib` namespace.
https://github.com/huggingface/trl/issues/4385
open
[ "📚 documentation", "✨ enhancement" ]
2025-10-30T04:04:10Z
2025-10-30T04:04:38Z
0
qgallouedec
huggingface/trl
4,384
Write the subsection "Multi-Node Training"
This section must be written, with a simple code example, and a link to the `accelerate` documentation
https://github.com/huggingface/trl/issues/4384
open
[ "📚 documentation", "⚡accelerate" ]
2025-10-30T03:57:53Z
2025-12-08T16:23:23Z
2
qgallouedec
huggingface/trl
4,383
Add PEFT subsection to "Reducing Memory Usage"
PEFT is a major technique for reducing the memory usage of training. We should have a small section pointing to the PEFT integration guide.
https://github.com/huggingface/trl/issues/4383
closed
[ "📚 documentation", "✨ enhancement", "⚡ PEFT" ]
2025-10-30T03:55:55Z
2025-11-07T00:03:01Z
0
qgallouedec
huggingface/trl
4,382
Populate "Speeding Up Training"
Currently, this section only mentions vLLM. We should have a small guide for other methods, like flash attention. Ideally, to avoid repetition, we should have a very light example and a link to the place in the doc where it's discussed more extensively; for example, vLLM pointing to the vLLM integration guide.
https://github.com/huggingface/trl/issues/4382
closed
[ "📚 documentation", "⚡accelerate" ]
2025-10-30T03:54:34Z
2025-12-01T09:47:23Z
0
qgallouedec
huggingface/trl
4,380
Fully transition from `flash-attn` to `kernels`
The new recommended way to use flash attention is to use kernels. We should update our tests, and documentation to use `kernels` instead of "flash_attention2". Eg https://github.com/huggingface/trl/blob/1eb561c3e9133892a2e907d84123b46e40cbc5a0/docs/source/reducing_memory_usage.md#L149 ```diff - training_args = DPOConfig(..., padding_free=True, model_init_kwargs={"attn_implementation": "flash_attention_2"}) + training_args = DPOConfig(..., padding_free=True, model_init_kwargs={"attn_implementation": "kernels-community/flash-attn2"}) ```
https://github.com/huggingface/trl/issues/4380
closed
[ "📚 documentation", "✨ enhancement" ]
2025-10-30T03:46:07Z
2025-11-13T04:07:35Z
0
qgallouedec
huggingface/trl
4,379
Remove or populate "Training customization"
Currently, this part of the documentation shows some possible customizations that apply to all trainers: https://huggingface.co/docs/trl/main/en/customization However, it only features a few examples. This section would make sense if it were populated with other customizations; otherwise it should be removed. This thread can be used to discuss additional customizations.
https://github.com/huggingface/trl/issues/4379
closed
[ "📚 documentation" ]
2025-10-30T03:41:02Z
2025-12-01T09:39:09Z
0
qgallouedec
huggingface/trl
4,378
Extend basic usage example to all supported CLIs
Currently, https://huggingface.co/docs/trl/main/en/clis?command_line=Reward#basic-usage only shows basic usage examples for SFT, DPO, and Reward. We should have them for all supported CLIs (i.e., GRPO, RLOO, KTO).
https://github.com/huggingface/trl/issues/4378
closed
[ "📚 documentation", "🏋 KTO", "🏋 RLOO", "📱 cli", "🏋 GRPO" ]
2025-10-30T03:35:36Z
2025-11-14T01:13:17Z
0
qgallouedec
vllm-project/vllm
27,783
[Usage]: Model performance differs from the API
### Your current environment ```text vllm==0.10.0 ``` ### How would you like to use vllm I'm running the Qwen3-8B model with vLLM. I also run the same experiment using the Qwen3-8B API. But I find the results are quite different: the accuracy of the API model on my task is much higher than that of the vLLM model. I use the same temperature and top_k. Has anyone else run into the same issue (the API model being stronger than the vLLM model)? ### Before submitting a new issue... - [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
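A sketch of pinning the sampling configuration explicitly on the vLLM side so it matches the API call; the sampling values shown are only examples (e.g. Qwen's published recommendations), and provider defaults for top_p, top_k, max tokens, and the chat template often differ from vLLM's defaults:

```python
from vllm import LLM, SamplingParams

llm = LLM(model="Qwen/Qwen3-8B")

# Pin every sampling knob rather than relying on defaults when comparing accuracy.
params = SamplingParams(temperature=0.6, top_p=0.95, top_k=20, max_tokens=2048)

outputs = llm.chat(
    [{"role": "user", "content": "Solve: 12 * 13 = ?"}],
    sampling_params=params,
)
print(outputs[0].outputs[0].text)
```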
https://github.com/vllm-project/vllm/issues/27783
open
[ "usage" ]
2025-10-30T03:30:02Z
2025-10-30T03:30:02Z
0
fny21
vllm-project/vllm
27,782
[Usage]: The same configuration reports insufficient GPU memory on v0.11.0 but not on v0.8.5
### Your current environment ```text The output of `python collect_env.py` ``` ### How would you like to use vllm The server is a 4090 with 4 cards Docker runs vllm openai: v0.8.5 deployment command: "command: --model /models/Qwen3/Qwen3-30B-A3B --enable-reasoning --reasoning-parser deepseek_r1 --tensor_parallel_size 4" Can be deployed and started normally, switch the image version to v0.11.0, and run the command "command: --model /models/Qwen3/Qwen3-30B-A3B --reasoning-parser deepseek_r1 --tensor_parallel_size 4" It will report that the graphics card memory is insufficient, and the error log is: Capturing CUDA graphs (mixed prefill-decode, PIECEWISE): 100%|██████████| 67/67 [00:19<00:00, 3.43it/s] Capturing CUDA graphs (decode, FULL): 100%|██████████| 35/35 [00:07<00:00, 4.78it/s] vllm | (Worker_TP3 pid=263) INFO 10-29 19:57:20 [gpu_model_runner.py:3480] Graph capturing finished in 28 secs, took 1.88 GiB vllm | (Worker_TP1 pid=261) INFO 10-29 19:57:20 [gpu_model_runner.py:3480] Graph capturing finished in 28 secs, took 1.88 GiB vllm | (Worker_TP0 pid=260) INFO 10-29 19:57:20 [gpu_model_runner.py:3480] Graph capturing finished in 28 secs, took 1.88 GiB vllm | (Worker_TP2 pid=262) INFO 10-29 19:57:20 [gpu_model_runner.py:3480] Graph capturing finished in 28 secs, took 1.88 GiB vllm | (Worker_TP2 pid=262) ERROR 10-29 19:57:21 [multiproc_executor.py:671] WorkerProc hit an exception. vllm | (Worker_TP2 pid=262) ERROR 10-29 19:57:21 [multiproc_executor.py:671] Traceback (most recent call last): vllm | (Worker_TP2 pid=262) ERROR 10-29 19:57:21 [multiproc_executor.py:671] File "/usr/local/lib/python3.12/dist-packages/vllm/v1/worker/gpu_model_runner.py", line 3217, in _dummy_sampler_run vllm | (Worker_TP2 pid=262) ERROR 10-29 19:57:21 [multiproc_executor.py:671] sampler_output = self.sampler(logits=logits, vllm | (Worker_TP2 pid=262) ERROR 10-29 19:57:21 [multiproc_executor.py:671] ^^^^^^^^^^^^^^^^^^^^^^^^^^^ vllm | (Worker_TP2 pid=262) ERROR 10-29 19:57:21 [multiproc_executor.py:671] File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1773, in _wrapped_call_impl vllm | (Worker_TP2 pid=262) ERROR 10-29 19:57:21 [multiproc_executor.py:671] return self._call_impl(*args, **kwargs) vllm | (Worker_TP2 pid=262) ERROR 10-29 19:57:21 [multiproc_executor.py:671] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ vllm | (Worker_TP2 pid=262) ERROR 10-29 19:57:21 [multiproc_executor.py:671] File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1784, in _call_impl vllm | (Worker_TP2 pid=262) ERROR 10-29 19:57:21 [multiproc_executor.py:671] return forward_call(*args, **kwargs) vllm | (Worker_TP2 pid=262) ERROR 10-29 19:57:21 [multiproc_executor.py:671] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ vllm | (Worker_TP2 pid=262) ERROR 10-29 19:57:21 [multiproc_executor.py:671] File "/usr/local/lib/python3.12/dist-packages/vllm/v1/sample/sampler.py", line 100, in forward vllm | (Worker_TP2 pid=262) ERROR 10-29 19:57:21 [multiproc_executor.py:671] sampled, processed_logprobs = self.sample(logits, sampling_metadata) vllm | (Worker_TP2 pid=262) ERROR 10-29 19:57:21 [multiproc_executor.py:671] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ vllm | (Worker_TP2 pid=262) ERROR 10-29 19:57:21 [multiproc_executor.py:671] File "/usr/local/lib/python3.12/dist-packages/vllm/v1/sample/sampler.py", line 180, in sample vllm | (Worker_TP2 pid=262) ERROR 10-29 19:57:21 [multiproc_executor.py:671] random_sampled, processed_logprobs = self.topk_topp_sampler( vllm | (Worker_TP2 pid=262) ERROR 10-29 19:57:21 
[multiproc_executor.py:671] ^^^^^^^^^^^^^^^^^^^^^^^ vllm | (Worker_TP2 pid=262) ERROR 10-29 19:57:21 [multiproc_executor.py:671] File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1773, in _wrapped_call_impl vllm | (Worker_TP2 pid=262) ERROR 10-29 19:57:21 [multiproc_executor.py:671] return self._call_impl(*args, **kwargs) vllm | (Worker_TP2 pid=262) ERROR 10-29 19:57:21 [multiproc_executor.py:671] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ vllm | (Worker_TP2 pid=262) ERROR 10-29 19:57:21 [multiproc_executor.py:671] File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1784, in _call_impl vllm | (Worker_TP2 pid=262) ERROR 10-29 19:57:21 [multiproc_executor.py:671] return forward_call(*args, **kwargs) vllm | (Worker_TP2 pid=262) ERROR 10-29 19:57:21 [multiproc_executor.py:671] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ vllm | (Worker_TP2 pid=262) ERROR 10-29 19:57:21 [multiproc_executor.py:671] File "/usr/local/lib/python3.12/dist-packages/vllm/v1/sample/ops/topk_topp_sampler.py", line 122, in forward_cuda vllm | (Worker
https://github.com/vllm-project/vllm/issues/27782
open
[ "usage" ]
2025-10-30T03:24:54Z
2025-11-06T06:53:15Z
2
lan-qh