| repo (string) | number (int64) | title (string) | body (string) | url (string) | state (string) | labels (list) | created_at (timestamp, UTC) | updated_at (timestamp, UTC) | comments (int64) | user (string) |
|---|---|---|---|---|---|---|---|---|---|---|
sgl-project/sglang
| 14,824
|
Throughput degradation on Qwen3-30B-A3B with EAGLE3
|
I observed a throughput degradation when trying to use EAGLE3 to speed up Qwen3-30B-A3B (on 2x H100).
I suspect the overhead might be overshadowing the gains. It would be great if we could have some profiling analysis to pinpoint exactly where the cost is coming from.
Also, tuning parameters for MoE models feels much more difficult than for dense models. Do you think it would be possible to provide guidance or a micro-benchmarking script? This would really help users quickly identify the optimal parameters for their specific hardware.
(For reference, the related issue is [this](https://github.com/sgl-project/SpecForge/issues/339).)
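As a rough starting point for such a micro-benchmark, here is a minimal sketch that measures end-to-end output token throughput against an already-running server exposing the OpenAI-compatible `/v1/completions` endpoint (the URL, port 30000, and model name are assumptions; adjust them for your deployment):
```python
# Minimal throughput probe (a sketch, not an official benchmark): sends a fixed
# number of prompts to a running server's OpenAI-compatible completions endpoint
# and reports output tokens per second.
import time
import requests

URL = "http://localhost:30000/v1/completions"  # assumed default port
MODEL = "Qwen/Qwen3-30B-A3B"                   # assumed served model name
prompts = ["Explain speculative decoding in one paragraph."] * 32

start = time.time()
generated_tokens = 0
for prompt in prompts:
    resp = requests.post(URL, json={
        "model": MODEL,
        "prompt": prompt,
        "max_tokens": 256,
        "temperature": 0.0,
    })
    generated_tokens += resp.json()["usage"]["completion_tokens"]

elapsed = time.time() - start
print(f"{generated_tokens / elapsed:.1f} output tokens/s")
```
A realistic benchmark would issue requests concurrently and sweep the speculative-decoding parameters rather than sending prompts one at a time.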
Two quick questions:
1. Why does EAGLE3 seem less effective on Qwen3 compared to other models?
2. Are there any specific tricks for training a high-quality EAGLE3 draft model for this architecture?
Thanks! 🥹🥹
|
https://github.com/sgl-project/sglang/issues/14824
|
open
|
[] | 2025-12-10T14:22:05Z
| 2025-12-19T21:36:54Z
| 1
|
Zzsf11
|
vllm-project/vllm
| 30,392
|
[Bug]: v0.12.0 Docker image fails to serve
|
### Your current environment
```text
Collecting environment information...
==============================
System Info
==============================
OS : Ubuntu 22.04.5 LTS (x86_64)
GCC version : (Ubuntu 11.4.0-1ubuntu1~22.04.2) 11.4.0
Clang version : Could not collect
CMake version : Could not collect
Libc version : glibc-2.35
==============================
PyTorch Info
==============================
PyTorch version : 2.9.0+cu129
Is debug build : False
CUDA used to build PyTorch : 12.9
ROCM used to build PyTorch : N/A
==============================
Python Environment
==============================
Python version : 3.12.12 (main, Oct 10 2025, 08:52:57) [GCC 11.4.0] (64-bit runtime)
Python platform : Linux-6.6.87.2-microsoft-standard-WSL2-x86_64-with-glibc2.35
==============================
CUDA / GPU Info
==============================
Is CUDA available : True
CUDA runtime version : 12.9.86
CUDA_MODULE_LOADING set to :
GPU models and configuration :
GPU 0: NVIDIA RTX A4000
GPU 1: NVIDIA RTX A4000
Nvidia driver version : 581.15
cuDNN version : Could not collect
HIP runtime version : N/A
MIOpen runtime version : N/A
Is XNNPACK available : True
==============================
CPU Info
==============================
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 8
On-line CPU(s) list: 0-7
Vendor ID: AuthenticAMD
Model name: AMD Ryzen 7 3800X 8-Core Processor
CPU family: 23
Model: 113
Thread(s) per core: 2
Core(s) per socket: 4
Socket(s): 1
Stepping: 0
BogoMIPS: 7800.02
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl tsc_reliable nonstop_tsc cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 clzero xsaveerptr arat npt nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_vmsave_vmload umip rdpid
Virtualization: AMD-V
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 128 KiB (4 instances)
L1i cache: 128 KiB (4 instances)
L2 cache: 2 MiB (4 instances)
L3 cache: 16 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-7
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec rstack overflow: Vulnerable: Safe RET, no microcode
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
==============================
Versions of relevant libraries
==============================
[pip3] flashinfer-python==0.5.3
[pip3] numpy==2.2.0
[pip3] nvidia-cublas-cu12==12.9.1.4
[pip3] nvidia-cuda-cupti-cu12==12.9.79
[pip3] nvidia-cuda-nvrtc-cu12==12.9.86
[pip3] nvidia-cuda-runtime-cu12==12.9.79
[pip3] nvidia-cudnn-cu12==9.10.2.21
[pip3] nvidia-cudnn-frontend==1.16.0
[pip3] nvidia-cufft-cu12==11.4.1.4
[pip3] nvidia-cufile-cu12==1.14.1.1
[pip3] nvidia-curand-cu12==10.3.10.19
[pip3] nvidia-cusolver-cu12==11.7.5.82
[pip3] nvidia-cusparse-cu12==12.5.10.65
[pip3] nvidia-cusparselt-cu12==0.7.1
[pip3] nvidia-cutlass-dsl==4.3.1
[pip3]
|
https://github.com/vllm-project/vllm/issues/30392
|
open
|
[
"usage"
] | 2025-12-10T13:43:59Z
| 2026-01-04T14:24:56Z
| 7
|
kuopching
|
huggingface/transformers
| 42,771
|
FSDP of Trainer does not work well with Accelerate
|
### System Info
- `transformers` version: 4.57.3
- Platform: Linux-6.6.97+-x86_64-with-glibc2.35
- Python version: 3.11.11
- Huggingface_hub version: 0.36.0
- Safetensors version: 0.7.0
- Accelerate version: 1.12.0
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (accelerator?): 2.9.1+cu128 (CUDA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA H100 80GB HBM3
### Who can help?
@3outeille @ArthurZucker @SunMarc
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
```python
"""
Simple example of training BERT with Transformers Trainer and FSDP
Uses random data for quick demonstration
"""
import torch
from transformers import (
BertForSequenceClassification,
BertTokenizer,
Trainer,
TrainingArguments,
)
from torch.utils.data import Dataset
# Create a simple dataset with random data
class RandomDataset(Dataset):
def __init__(self, tokenizer, num_samples=1000, max_length=128):
self.tokenizer = tokenizer
self.num_samples = num_samples
self.max_length = max_length
def __len__(self):
return self.num_samples
def __getitem__(self, idx):
# Generate random token IDs
input_ids = torch.randint(
0, self.tokenizer.vocab_size, (self.max_length,)
)
attention_mask = torch.ones(self.max_length)
labels = torch.randint(0, 2, (1,)).item() # Binary classification
return {
"input_ids": input_ids,
"attention_mask": attention_mask,
"labels": labels,
}
def main():
# Initialize tokenizer and model
model_name = "bert-base-uncased"
tokenizer = BertTokenizer.from_pretrained(model_name)
model = BertForSequenceClassification.from_pretrained(
model_name, num_labels=2
)
# Create random datasets
train_dataset = RandomDataset(tokenizer, num_samples=1000)
eval_dataset = RandomDataset(tokenizer, num_samples=200)
# Configure FSDP training arguments
training_args = TrainingArguments(
output_dir="./bert_fsdp_output",
num_train_epochs=3,
per_device_train_batch_size=8,
per_device_eval_batch_size=8,
logging_steps=50,
eval_strategy="steps",
eval_steps=100,
save_steps=200,
save_total_limit=2,
# FSDP Configuration
fsdp="full_shard auto_wrap", # Enable FSDP with full sharding
fsdp_config={
"fsdp_transformer_layer_cls_to_wrap": ["BertLayer"], # Wrap BERT layers
"fsdp_backward_prefetch": "backward_pre",
"fsdp_forward_prefetch": False,
"fsdp_use_orig_params": True,
},
# Additional settings
learning_rate=5e-5,
warmup_steps=100,
weight_decay=0.01,
logging_dir="./logs",
report_to="none", # Disable wandb/tensorboard for simplicity
)
# Initialize Trainer
trainer = Trainer(
model=model,
args=training_args,
train_dataset=train_dataset,
eval_dataset=eval_dataset,
)
# Train the model
print("Starting training with FSDP...")
trainer.train()
# Save the final model
trainer.save_model("./bert_fsdp_final")
print("Training completed!")
if __name__ == "__main__":
# Note: Run this script with torchrun for multi-GPU training
# Example: torchrun --nproc_per_node=2 train_bert_fsdp.py
main()
```
torchrun --nproc_per_node=2 train_bert_fsdp.py
### Expected behavior
It fails silently. The stack trace:
```bash
W1210 12:49:05.011000 104846 site-packages/torch/distributed/run.py:803]
W1210 12:49:05.011000 104846 site-packages/torch/distributed/run.py:803] *****************************************
W1210 12:49:05.011000 104846 site-packages/torch/distributed/run.py:803] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
W1210 12:49:05.011000 104846 site-packages/torch/distributed/run.py:803] *****************************************
Some weights of BertForSequenceClassification were not initialized from the model checkpoint at bert-base-uncased and are newly initialized: ['classifier.bias', 'classifier.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
Some weights of BertForSequenceClassification were not initialized from the model check
|
https://github.com/huggingface/transformers/issues/42771
|
open
|
[
"bug"
] | 2025-12-10T12:54:49Z
| 2025-12-11T07:07:19Z
| 2
|
gouchangjiang
|
vllm-project/vllm
| 30,381
|
[Usage]:
|
### Your current environment
```text
The output of `python collect_env.py`
```
### How would you like to use vllm
I want to run inference of a [specific model](put link here). I don't know how to integrate it with vllm.
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/30381
|
closed
|
[
"usage"
] | 2025-12-10T09:27:51Z
| 2025-12-10T09:28:26Z
| 0
|
tobeprozy
|
vllm-project/vllm
| 30,380
|
[Usage]: How do people usually use vllm/tests?
|
### Your current environment
anywhere
### How would you like to use vllm
I don't know how to use the vllm tests.
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/30380
|
open
|
[
"usage"
] | 2025-12-10T09:27:46Z
| 2025-12-10T13:19:18Z
| 1
|
tobeprozy
|
vllm-project/vllm
| 30,379
|
[Usage]: how to use vllm/tests/?
|
### Your current environment
How do people usually use [vllm](https://github.com/vllm-project/vllm/tree/main)/[tests](https://github.com/vllm-project/vllm/tree/main/tests)?
### How would you like to use vllm
I want to run inference of a [specific model](put link here). I don't know how to integrate it with vllm.
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/30379
|
closed
|
[
"usage"
] | 2025-12-10T09:25:52Z
| 2025-12-10T09:26:25Z
| 0
|
tobeprozy
|
vllm-project/vllm
| 30,375
|
[Bug]: [TPU] ShapeDtypeStruct error when loading custom safetensors checkpoint on TPU v5litepod
|
### Your current environment
<details>
<summary>The output of <code>python collect_env.py</code></summary>
PyTorch version: 2.9.0+cu128
vLLM version: 0.12.0 (vllm-tpu)
JAX version: 0.8.0
Python version: 3.12.8 (main, Jan 14 2025, 22:49:14) [Clang 19.1.6]
TPU: v5litepod-4 (4 chips, single host)
OS: Amazon Linux 2023 (container)
Container runtime: Podman with --privileged --net=host
Additional packages:
- tpu_inference (bundled with vllm-tpu)
- flax (from tpu_inference deps)
- orbax-checkpoint: 0.11.28
- safetensors: 0.4.5
- transformers: 4.57.3</details>
### 🐛 Describe the bug
vLLM-TPU fails to load a **local HuggingFace checkpoint** (safetensors format) on TPU v5litepod with this error:
```
TypeError: Argument 'model.states[0][6]' of shape bfloat16[128] of type <class 'jax._src.core.ShapeDtypeStruct'> is not a valid JAX type.
```
**The core issue:** The Flax NNX model loader in `tpu_inference` creates the model with `ShapeDtypeStruct` shape placeholders, but these placeholders are never replaced with actual weight arrays before JIT compilation.
Loading from **HuggingFace Hub works fine** (e.g., `Qwen/Qwen3-0.6B`), but loading the **exact same model architecture from a local directory fails**.
### How to reproduce the bug
**Minimal reproduction:**
from vllm import LLM
# This WORKS:
model = LLM("Qwen/Qwen3-0.6B", tensor_parallel_size=4, dtype="bfloat16")
# This FAILS with ShapeDtypeStruct error:
model = LLM(
    model="/path/to/local/checkpoint",  # Contains model.safetensors + config.json
    tensor_parallel_size=4,
    dtype="bfloat16",
    trust_remote_code=True,
)
**Checkpoint directory contents:**
```
/path/to/local/checkpoint/
├── config.json # Valid Qwen3 config with "architectures": ["Qwen3ForCausalLM"]
├── model.safetensors # bfloat16 weights (~1.2GB for Qwen3-0.6B)
├── tokenizer.json
├── tokenizer_config.json
├── special_tokens_map.json
├── vocab.json
└── merges.txt
```
**Context:** The checkpoint was converted from MaxText/Orbax format using orbax-checkpoint + safetensors libraries. The weights are valid (verified with `safetensors.torch.load_file()`).
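A minimal check of the converted file itself (a sketch; the path is a placeholder) looks like:
```python
# Quick sanity check that the local safetensors file is readable and contains
# tensors with the expected shapes/dtypes.
from safetensors.torch import load_file

state_dict = load_file("/path/to/local/checkpoint/model.safetensors")
print(f"{len(state_dict)} tensors")
for name, tensor in list(state_dict.items())[:5]:
    print(name, tuple(tensor.shape), tensor.dtype)
```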
### Full error traceback
```
File "/pm_env/.venv/lib/python3.12/site-packages/tpu_inference/models/common/model_loader.py", line 345, in get_model
return get_flax_model(vllm_config, rng, mesh, is_draft_model)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/pm_env/.venv/lib/python3.12/site-packages/tpu_inference/models/common/model_loader.py", line 219, in get_flax_model
jit_model = _get_nnx_model(model_class, vllm_config, rng, mesh)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/pm_env/.venv/lib/python3.12/site-packages/tpu_inference/models/common/model_loader.py", line 200, in _get_nnx_model
jit_model = create_jit_model(
^^^^^^^^^^^^^^^^^
File "/pm_env/.venv/lib/python3.12/site-packages/flax/nnx/transforms/compilation.py", line 431, in __call__
pure_args_out, pure_kwargs_out, pure_out = self.jitted_fn(
^^^^^^^^^^^^^^^
TypeError: Argument 'model.states[0][6]' of shape bfloat16[128] of type <class 'jax._src.core.ShapeDtypeStruct'> is not a valid JAX type.
```
### What I tried
| Attempt | Result |
|---------|--------|
| Load from HuggingFace Hub | ✅ Works |
| Load local checkpoint (safetensors) | ❌ ShapeDtypeStruct error |
| Use float32 dtype | ❌ Same error |
| Use bfloat16 dtype | ❌ Same error |
| Set `VLLM_USE_V1=0` | ❌ Still uses v1 engine on TPU |
| Add `pytorch_model.bin` alongside safetensors | ❌ Same error |
### Expected behavior
vLLM should load the weights from the local safetensors file and initialize the model, exactly like it does when loading from HuggingFace Hub.
### Analysis
Looking at the traceback, the issue is in `tpu_inference/models/common/model_loader.py`:
1. `get_flax_model()` creates the model architecture
2. `_get_nnx_model()` calls `create_jit_model()`
3. At this point, `model.states[0][6]` is still a `ShapeDtypeStruct` placeholder instead of actual weight data
4. JIT compilation fails because it can't compile shape placeholders
It seems like when loading from Hub, weights get populated before JIT compilation, but when loading from local path, this step is skipped or fails silently.
### Additional context
- We're building an RL environment for LLM evaluation that needs to load custom finetuned checkpoints
- JetStream/MaxText can load the same Orbax checkpoints without issues
- The safetensors file was verified to contain valid tensors with correct shapes
- This blocks our ability to use vLLM's logprobs-based evaluation on TPU
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/30375
|
open
|
[
"bug"
] | 2025-12-10T08:12:57Z
| 2025-12-11T05:34:19Z
| 1
|
Baltsat
|
sgl-project/sglang
| 14,800
|
How should we set piecewise-cuda-graph-max-tokens according to TP DP and chunked-prefill-size?
|
How should we set piecewise-cuda-graph-max-tokens according to TP DP and chunked-prefill-size?
For TP only, should we set piecewise-cuda-graph-max-tokens = chunked-prefill-size?
And for DP attention (DP <= TP), should we set piecewise-cuda-graph-max-tokens = chunked-prefill-size / DP?
Thanks.
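For concreteness, the arithmetic implied by these two guesses, with made-up numbers (whether the rule itself is correct is exactly the question being asked):
```python
# Illustrative arithmetic only, under the asker's own (unconfirmed) assumption
# that piecewise-cuda-graph-max-tokens should track the per-rank prefill chunk.
chunked_prefill_size = 8192  # example value, not from the issue
dp = 4                       # example DP-attention degree, not from the issue

tp_only_value = chunked_prefill_size             # guess 1: TP only
dp_attention_value = chunked_prefill_size // dp  # guess 2: DP attention, DP <= TP
print(tp_only_value, dp_attention_value)         # 8192 2048
```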
|
https://github.com/sgl-project/sglang/issues/14800
|
open
|
[] | 2025-12-10T07:26:36Z
| 2025-12-10T07:26:36Z
| 0
|
llc-kc
|
sgl-project/sglang
| 14,783
|
[Bug][ConvertLinalgRToBinary] encounters error: bishengir-compile: Unknown command line argument '--target=Ascend910B2C'. Try: '/usr/local/Ascend/ascend-toolkit/latest/bin/bishengir-compile --help' bishengir-compile: Did you mean '--pgso=Ascend910B2C'?
|
### Checklist
- [x] I searched related issues but found no solution.
- [ ] The bug persists in the latest version.
- [ ] Issues without environment info and a minimal reproducible demo are hard to resolve and may receive no feedback.
- [ ] If this is not a bug report but a general question, please start a discussion at https://github.com/sgl-project/sglang/discussions. Otherwise, it will be closed.
- [ ] Please use English. Otherwise, it will be closed.
### Describe the bug
(sglang-latest) [root:trinity-asr]$ bash test.sh
/opt/conda/envs/sglang-latest/lib/python3.11/site-packages/torch_npu/dynamo/torchair/__init__.py:8: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81.
import pkg_resources
INFO 12-10 11:48:25 [importing.py:53] Triton module has been replaced with a placeholder.
INFO 12-10 11:48:26 [__init__.py:243] No platform detected, vLLM is running on UnspecifiedPlatform
WARNING 12-10 11:48:27 [_logger.py:72] Failed to import from vllm._C with ModuleNotFoundError("No module named 'vllm._C'")
/usr/local/Ascend/thirdparty/sglang/sglang_diffusion_ascend/python/sglang/srt/layers/quantization/awq.py:69: UserWarning: Only CUDA, HIP and XPU support AWQ currently.
warnings.warn(f"Only CUDA, HIP and XPU support AWQ currently.")
/usr/local/Ascend/thirdparty/sglang/sglang_diffusion_ascend/python/sglang/srt/layers/quantization/gguf.py:46: UserWarning: Only CUDA support GGUF q uantization currently.
warnings.warn(f"Only CUDA support GGUF q uantization currently.")
[2025-12-10 11:48:27] WARNING server_args.py:1379: At this moment Ascend attention backend only supports a page_size of 128, change page_size to 128.
[2025-12-10 11:48:27] server_args=ServerArgs(model_path='./TrinityASR', tokenizer_path='./TrinityASR', tokenizer_mode='auto', tokenizer_worker_num=1, skip_tokenizer_init=False, load_format='auto', model_loader_extra_config='{}', trust_remote_code=True, context_length=None, is_embedding=False, enable_multimodal=None, revision=None, model_impl='auto', host='0.0.0.0', port=30000, fastapi_root_path='', grpc_mode=False, skip_server_warmup=False, warmups=None, nccl_port=None, checkpoint_engine_wait_weights_before_ready=False, dtype='auto', quantization=None, quantization_param_path=None, kv_cache_dtype='auto', enable_fp32_lm_head=False, modelopt_quant=None, modelopt_checkpoint_restore_path=None, modelopt_checkpoint_save_path=None, modelopt_export_path=None, quantize_and_serve=False, mem_fraction_static=0.6, max_running_requests=None, max_queued_requests=None, max_total_tokens=None, chunked_prefill_size=-1, max_prefill_tokens=65536, schedule_policy='fcfs', enable_priority_scheduling=False, abort_on_priority_when_disabled=False, schedule_low_priority_values_first=False, priority_scheduling_preemption_threshold=10, schedule_conservativeness=1.0, page_size=128, hybrid_kvcache_ratio=None, swa_full_tokens_ratio=0.8, disable_hybrid_swa_memory=False, radix_eviction_policy='lru', device='npu', tp_size=1, pp_size=1, pp_max_micro_batch_size=None, stream_interval=1, stream_output=False, random_seed=309118768, constrained_json_whitespace_pattern=None, constrained_json_disable_any_whitespace=False, watchdog_timeout=300, dist_timeout=None, download_dir=None, base_gpu_id=0, gpu_id_step=1, sleep_on_idle=False, mm_process_config={}, log_level='info', log_level_http=None, log_requests=False, log_requests_level=2, crash_dump_folder=None, show_time_cost=False, enable_metrics=False, enable_metrics_for_all_schedulers=False, tokenizer_metrics_custom_labels_header='x-custom-labels', tokenizer_metrics_allowed_custom_labels=None, bucket_time_to_first_token=None, bucket_inter_token_latency=None, bucket_e2e_request_latency=None, collect_tokens_histogram=False, prompt_tokens_buckets=None, generation_tokens_buckets=None, gc_warning_threshold_secs=0.0, decode_log_interval=40, enable_request_time_stats_logging=False, kv_events_config=None, enable_trace=False, otlp_traces_endpoint='localhost:4317', export_metrics_to_file=False, export_metrics_to_file_dir=None, api_key=None, served_model_name='./TrinityASR', weight_version='default', chat_template=None, completion_template=None, file_storage_path='sglang_storage', enable_cache_report=False, reasoning_parser=None, tool_call_parser=None, tool_server=None, sampling_defaults='model', dp_size=1, load_balance_method='round_robin', load_watch_interval=0.1, prefill_round_robin_balance=False, dist_init_addr=None, nnodes=1, node_rank=0, json_model_override_args='{}', preferred_sampling_params=None, enable_lora=None, max_lora_rank=None, lora_target_modules=None, lora_paths=None, max_loaded_loras=None, max_loras_per_batch=8, lora_eviction_policy='lru', lora_backend='csgmv', max_lora_chunk_size=16, attention_backend='ascend', decode_attention_backend=None, prefill_attention_backend=None, sampling_backend='pytorch',
|
https://github.com/sgl-project/sglang/issues/14783
|
closed
|
[
"npu"
] | 2025-12-10T03:54:50Z
| 2025-12-13T12:28:26Z
| 1
|
rsy-hub4121
|
huggingface/transformers
| 42,757
|
cannot import name 'is_offline_mode' from 'huggingface_hub'
|
### System Info
- transformers-5.0.0
- huggingface_hub-1.2.1
```
ImportError: cannot import name 'is_offline_mode' from 'huggingface_hub' (/root/miniconda3/envs/transformers/lib/python3.10/site-packages/huggingface_hub/__init__.py)
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
from transformers import AutoModel, AutoProcessor, AutoTokenizer
### Expected behavior
How can this be fixed?
|
https://github.com/huggingface/transformers/issues/42757
|
closed
|
[
"bug"
] | 2025-12-10T02:43:43Z
| 2025-12-23T17:15:20Z
| 0
|
dollarser
|
vllm-project/vllm
| 30,359
|
[RFC] [QeRL]: Online Quantization and Model Reloading
|
### Motivation.
## What is Quantized Model Reloading and Why is it Useful?
vLLM serves not only as an inference runtime for serving requests from end users, but also as a means of serving requests for large language model post-training. One particularly important use case is using vLLM to serve rollouts (required by RL pipelines) using a quantized model. For more information, see [QeRL: Beyond Efficiency – Quantization-enhanced Reinforcement Learning for LLMs](https://arxiv.org/html/2510.11696v1).
These quantized models must be reloaded every couple of seconds in order to make sure that the rollouts match the distribution that would have been generated by the base model weights.
## Existing Features in vLLM
vLLM already has some pathways for enabling these kinds of workflows. However, the current implementations have caveats which can make usage difficult.
### Weight Reloading
After a model has been loaded once, the weights are stored in kernel format (see nomenclature). However, kernel format does not always match checkpoint format. There is an existing implementation which restores the original model format in order to allow reloading (implemented [here](https://github.com/vllm-project/vllm/blob/main/vllm/model_executor/model_loader/online_quantization.py)), but the “restore” step is done eagerly and effectively doubles the amount of required memory, which is not ideal. The current implementation has also only been enabled for torchao configs.
### Online Quantization
There are two styles of online quantization implemented in vLLM. Originally, there was only the “offline” style of [FP8](https://github.com/vllm-project/vllm/blob/main/vllm/model_executor/layers/quantization/fp8.py#L222C14-L222C42), where all unquantized weights are loaded synchronously, and then all weights are quantized synchronously after loading via `process_weights_after_loading`. This style works, but requires as much memory as the unquantized model, despite the final model being quantized, which is not ideal (see Memory Requirements section).
Recently, @vkuzo implemented a means of online quantization by [adding a hook to the `weight_loader`](https://github.com/vllm-project/vllm/pull/29196/files) which calls `process_weights_after_loading` to quantize the weights as they are loaded. This reduces the amount of memory required to online quantize models, but has only been implemented for CT_FP8_CHANNELWISE and doesn't currently support post-processing operations which require multiple parameters, such as Marlin repacking.
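To make the memory trade-off concrete, here is a self-contained toy sketch (plain PyTorch, not vLLM code; `fake_quantize` is an illustrative stand-in for `process_weights_after_loading`) contrasting the two styles:
```python
# Toy illustration of quantize-after-loading vs quantize-while-loading.
import torch

def fake_quantize(w: torch.Tensor):
    # Illustrative per-tensor int8 quantization, standing in for the real kernels.
    scale = w.abs().max() / 127.0
    return (w / scale).round().clamp(-128, 127).to(torch.int8), scale

checkpoint = {f"layer{i}.weight": torch.randn(1024, 1024) for i in range(4)}

# Style 1: load all unquantized weights, then quantize in a second pass.
# Peak memory ~ the full unquantized model.
loaded = {name: w.clone() for name, w in checkpoint.items()}
kernel_format = {name: fake_quantize(w) for name, w in loaded.items()}

# Style 2 (weight_loader-hook style): quantize each weight as it streams in.
# Peak memory ~ the quantized model plus one unquantized tensor at a time.
kernel_format_streaming = {}
for name, w in checkpoint.items():
    kernel_format_streaming[name] = fake_quantize(w)
```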
## Design Considerations
### Nomenclature
- “Checkpoint format” refers to the format in which weights are loaded from disk or provided by a user.
- “Model format” refers to the state of the model after `init` but before weights are processed with `process_weights_after_loading`. The mapping between “checkpoint format” and “model format” is implemented by `model.load_weights`.
- “Kernel format” refers to the state of the model after `process_weights_after_loading`.
- In the case that checkpoint format is unquantized, but the kernel format is quantized, we call this “online quantization”, where unquantized weights are quantized by vLLM during/after loading.
### Model Cuda Graph
After models are loaded for the first time, a cuda graph is captured of the model which is used to accelerate inference. This cuda graph shares the same tensor data pointers as the model used to load weights. As of now, the data pointers used by the cuda graph cannot be updated after capture. This means that any time reloading happens, the new data must be copied into the cuda graph tensors.
Regenerating the model cuda graph is far too slow for the required cadence of model reloading (on the order of a few seconds).
### Memory Requirements
An ideal solution would use as little memory as is required to load model weights. Some implementations, such as the current implementation of online quantization, require eagerly duplicating all model weights prior to loading, which effectively doubles the amount of memory required to load a model. This is a blocker for enabling reloading of large (600 GB+) models.
Additionally, an ideal solution would only use as much memory as is required to store the quantized model, not the unquantized model. In cases such as NVFP4, this would cut the memory requirements of vLLM reloading to roughly one quarter.
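Back-of-the-envelope numbers for this argument (illustrative only; assumes bf16 base weights and ignores quantization scales and other metadata):
```python
# Rough memory figures for reloading a large model under the scenarios above.
base_model_gb = 600                      # "large (600 GB+) models"
eager_duplicate_gb = 2 * base_model_gb   # eager restore path: ~1200 GB
nvfp4_resident_gb = base_model_gb / 4    # ideal NVFP4-only residency: ~150 GB
print(eager_duplicate_gb, nvfp4_resident_gb)
```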
### Existing Quantized Reloading Scripts
Although online quantization and quantized weight reloading support is limited in vLLM as of now, there already exist users who are using vLLM to do online quantized reloading. Below are a list of examples.
1. [MoonshotAI](https://github.com/MoonshotAI/checkpoint-engine/blob/44d5670b0e6aed5b9cd6c16e970c09f3dc888ad0/checkpoint_engine/worker.py#L167)
2. [Verl](https://github.com/volcengine/verl/blob/f332fc814718b9ea7968f6d264211460d4e90fff/verl/utils/vllm/vllm_fp8_utils.py#L209)
3. Periodic Labs, which calls `model.load_weights` with subsets
|
https://github.com/vllm-project/vllm/issues/30359
|
open
|
[
"RFC"
] | 2025-12-09T21:24:20Z
| 2025-12-19T18:19:22Z
| 8
|
kylesayrs
|
vllm-project/vllm
| 30,358
|
[Bug]: NIXL PD disaggregate with host_buffer has accuracy issue - Prefill scheduled num_block mismatch at update_state_after_alloc and request_finished
|
### Your current environment
vllm-commit-id: 73a484caa1ad320d6e695f098c25c479a71e6774
Tested with A100
### 🐛 Describe the bug
How to reproduce
```
PREFILL_BLOCK_SIZE=16 DECODE_BLOCK_SIZE=16 bash tests/v1/kv_connector/nixl_integration/run_accuracy_test.sh --kv_buffer_device cpu
```
The accuracy is ~0.3, much lower than the expected 0.4 with Qwen0.6.
---
What is the issue
I found that the num_blocks sent to `update_state_after_alloc` and `request_finished` sometimes do not match.
`update_state_after_alloc` => this function is called from `scheduler.schedule` to update the req_to_save and req_to_receive lists; the block_ids passed to it indicate which blocks belong to a request.
`request_finished` => this function is called in `scheduler._connector_finished` to send the finished request's block_ids list and create new metadata for the decoder.
However, based on print logs, the block_ids passed to `update_state_after_alloc` (from `scheduler.schedule`) are sometimes shorter than those passed to `request_finished` (from `scheduler._connector_finished`).
Example below:
```
📊 Found 1320 unique Request IDs.
FINAL SUMMARY
✅ Consistent Requests : 1085 => num_blocks are same at `update_state_after_alloc` and `request_finished`
❌ Mismatched Requests : 235 => num_blocks is less in `update_state_after_alloc` than `request_finished`
```
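A minimal log-diffing pass that could produce a summary like the one above (a sketch; the regex is an assumption based on the excerpted log lines and may need adjusting):
```python
# Flags requests whose block counts differ between update_state_after_alloc
# and request_finished log lines. Usage: python diff_blocks.py <logfile>
import re
import sys
from collections import defaultdict

pattern = re.compile(
    r"(update_state_after_alloc|request_finished).*?"
    r"request_id='(?P<rid>[^']+)'.*?len\(block_ids\)=(?P<n>\d+)"
)

counts = defaultdict(dict)
with open(sys.argv[1]) as f:
    for line in f:
        m = pattern.search(line)
        if m:
            counts[m.group("rid")][m.group(1)] = int(m.group("n"))

mismatched = [
    rid for rid, c in counts.items()
    if len(c) == 2 and c["update_state_after_alloc"] != c["request_finished"]
]
print(f"{len(mismatched)} mismatched of {len(counts)} requests")
```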
```
================================================================================
🔴 MISMATCH DETECTED: cmpl-25c7397c-5686-4b70-a569-29ef04c7b4f9-0
First Block Count: 44
Last Block Count : 71
--- Raw Lines for Context ---
[0;36m(EngineCore_DP0 pid=417455)[0;0m update_state_after_alloc req_id="request.request_id='cmpl-25c7397c-5686-4b70-a569-29ef04c7b4f9-0'" num_tokens=1121 len(block_ids)=44 block_ids=[162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205]
[0;36m(EngineCore_DP0 pid=417455)[0;0m request_finished: prepare meta for decode: request.request_id='cmpl-25c7397c-5686-4b70-a569-29ef04c7b4f9-0' request.num_tokens=1122 len(block_ids)=71 block_ids=[162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232]
--------------------------------------------------------------------------------
🔴 MISMATCH DETECTED: cmpl-d77aa1e2-a55a-4a7e-8435-f3bfaaf7c7ed-0
First Block Count: 26
Last Block Count : 84
--- Raw Lines for Context ---
[0;36m(EngineCore_DP0 pid=417455)[0;0m update_state_after_alloc req_id="request.request_id='cmpl-d77aa1e2-a55a-4a7e-8435-f3bfaaf7c7ed-0'" num_tokens=1331 len(block_ids)=26 block_ids=[310, 311, 312, 313, 314, 315, 316, 317, 318, 319, 320, 321, 322, 323, 324, 325, 326, 327, 328, 329, 330, 331, 332, 333, 334, 335]
[0;36m(EngineCore_DP0 pid=417455)[0;0m request_finished: prepare meta for decode: request.request_id='cmpl-d77aa1e2-a55a-4a7e-8435-f3bfaaf7c7ed-0' request.num_tokens=1332 len(block_ids)=84 block_ids=[310, 311, 312, 313, 314, 315, 316, 317, 318, 319, 320, 321, 322, 323, 324, 325, 326, 327, 328, 329, 330, 331, 332, 333, 334, 335, 336, 337, 338, 339, 340, 341, 342, 343, 344, 345, 346, 347, 348, 349, 350, 351, 352, 353, 354, 355, 356, 357, 358, 359, 360, 361, 362, 363, 364, 365, 366, 367, 368, 369, 370, 371, 372, 373, 374, 375, 376, 377, 378, 379, 380, 381, 382, 383, 384, 385, 386, 387, 388, 389, 390, 391, 392, 393]
--------------------------------------------------------------------------------
🔴 MISMATCH DETECTED: cmpl-3ccca907-6af5-41fd-acdf-8a0bd0b48322-0
First Block Count: 71
Last Block Count : 82
--- Raw Lines for Context ---
[0;36m(EngineCore_DP0 pid=417455)[0;0m update_state_after_alloc req_id="request.request_id='cmpl-3ccca907-6af5-41fd-acdf-8a0bd0b48322-0'" num_tokens=1307 len(block_ids)=71 block_ids=[394, 395, 396, 397, 398, 399, 400, 401, 402, 403, 404, 405, 406, 407, 408, 409, 410, 411, 412, 413, 414, 415, 416, 417, 418, 419, 420, 421, 422, 423, 424, 425, 426, 427, 428, 429, 430, 431, 432, 433, 434, 435, 436, 437, 438, 439, 440, 441, 442, 443, 444, 445, 446, 447, 448, 449, 450, 451, 452, 453, 454, 455, 456, 457, 458, 459, 460, 461, 462, 463, 464]
[0;36m(EngineCore_DP0 pid=417455)[0;0m request_finished: prepare meta for decode: request.request_id='cmpl-3ccca907-6af5-41fd-acdf-8a0bd0b48322-0' request.num_tokens=1308 len(block_ids)=82 block_ids=[394, 395, 396, 397, 398, 399, 400, 401, 402, 403, 404, 405, 406, 407, 408, 409, 410, 411, 412, 413, 414, 415, 416, 417, 418, 419, 420, 421, 422, 423, 424, 425, 426, 427, 428, 429, 430, 431, 432, 433, 434, 435, 436, 437, 438, 439, 440, 441, 442, 443, 444, 445, 446, 447, 448, 449, 450, 451, 452, 453, 454, 455, 456, 457,
|
https://github.com/vllm-project/vllm/issues/30358
|
open
|
[
"bug"
] | 2025-12-09T20:15:48Z
| 2025-12-10T17:07:38Z
| 3
|
xuechendi
|
pytorch/pytorch
| 169,970
|
Does torch._grouped_mm work with cudagraphs over multiple nodes?
|
### 🐛 Describe the bug
torch._grouped_mm uses dynamic memory allocations via c10::cuda::CUDACachingAllocator::allocate() that appear to be incompatible with CUDA graph capture and replay. This causes "CUDA error: an illegal memory access was encountered" when these operations are captured in a CUDA graph and later replayed, particularly in multi-node distributed settings with NCCL.
Environment
PyTorch version: 2.9.0+ (with grouped_mm support)
CUDA version: 12.8+
GPU: H100/H200 (SM90/SM100)
Distributed: Multi-node with NCCL, Tensor Parallelism
```python
import torch
# Setup
device = torch.device("cuda")
mat_a = torch.randn(4, 128, 256, dtype=torch.bfloat16, device=device)
mat_b = torch.randn(4, 256, 512, dtype=torch.bfloat16, device=device)
# Warmup
out = torch._grouped_mm(mat_a, mat_b)
# Capture CUDA graph
graph = torch.cuda.CUDAGraph()
with torch.cuda.graph(graph):
out = torch._grouped_mm(mat_a, mat_b)
# Replay - may cause illegal memory access
graph.replay() # Works sometimes
graph.replay() # More likely to fail
```
In multi-node distributed scenarios (e.g., vLLM with tensor parallelism across nodes), the failure rate is much higher and typically manifests on the first inference request after model deployment.
### Versions
```
Collecting environment information...
PyTorch version: 2.9.0a0+gitcdb6201
Is debug build: False
CUDA used to build PyTorch: 12.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.3 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.39
Python version: 3.11.14 (tags/v3.11.14:cd1c3a63428, Oct 9 2025, 19:23:04) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-6.6.72+-x86_64-with-glibc2.39
Is CUDA available: True
CUDA runtime version: 12.8.93
CUDA_MODULE_LOADING set to:
GPU models and configuration:
GPU 0: NVIDIA H100 80GB HBM3
GPU 1: NVIDIA H100 80GB HBM3
GPU 2: NVIDIA H100 80GB HBM3
GPU 3: NVIDIA H100 80GB HBM3
GPU 4: NVIDIA H100 80GB HBM3
GPU 5: NVIDIA H100 80GB HBM3
GPU 6: NVIDIA H100 80GB HBM3
GPU 7: NVIDIA H100 80GB HBM3
Nvidia driver version: 550.90.07
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.14.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.14.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.14.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.14.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.14.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.14.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.14.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.14.0
Is XPU available: False
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Caching allocator config: N/A
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 208
On-line CPU(s) list: 0-207
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8481C CPU @ 2.70GHz
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 52
Socket(s): 2
Stepping: 8
BogoMIPS: 5399.99
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rtm avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx_vnni avx512_bf16 arat avx512vbmi umip avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid cldemote movdiri movdir64b fsrm md_clear serialize tsxldtrk amx_bf16 avx512_fp16 amx_tile amx_int8 arch_capabilities
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 4.9 MiB (104 instances)
L1i cache: 3.3 MiB (104 instances)
L2 cache: 208 MiB (104 instances)
L3 cache: 210 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-51,104-155
NUMA node1 CPU(s): 52-103,156-207
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vu
|
https://github.com/pytorch/pytorch/issues/169970
|
open
|
[
"oncall: distributed",
"module: cuda",
"module: cuda graphs"
] | 2025-12-09T18:12:10Z
| 2025-12-16T22:00:56Z
| 3
|
ashahab
|
huggingface/datasets
| 7,900
|
`Permission denied` when sharing cache between users
|
### Describe the bug
We want to use `datasets` and `transformers` on a shared machine. Right now, each user has a separate HF_HOME in their home directory. To reduce duplicates of the datasets, we want to share that cache. While experimenting, we are running into `Permission denied` errors.
It looks like this was supported in the past (see #6589)?
Is there a correct way to share caches across users?
### Steps to reproduce the bug
1. Create a directory `/models/hf_hub_shared_experiment` with read/write permissions for two different users
2. For each user run the script below
```python
import os
os.environ["HF_HOME"] = "/models/hf_hub_shared_experiment"
os.environ["HF_DATASETS_CACHE"] = "/models/hf_hub_shared_experiment/data"
import datasets
import transformers
DATASET = "tatsu-lab/alpaca"
MODEL = "meta-llama/Llama-3.2-1B-Instruct"
model = transformers.AutoModelForCausalLM.from_pretrained(MODEL)
tokenizer = transformers.AutoTokenizer.from_pretrained(MODEL)
dataset = datasets.load_dataset(DATASET)
```
The first user is able to download and use the model and dataset. The second user gets these errors:
```
$ python ./experiment_with_shared.py
Could not cache non-existence of file. Will ignore error and continue. Error: [Errno 13] Permission denied: '/models/hf_hub_shared_experiment/hub/models--meta-llama--Llama-3.2-1B-Instruct/.no_exist/9213176726f574b556790deb65791e0c5aa438b6/custom_generate/generate.py'
Could not cache non-existence of file. Will ignore error and continue. Error: [Errno 13] Permission denied: '/models/hf_hub_shared_experiment/hub/datasets--tatsu-lab--alpaca/.no_exist/dce01c9b08f87459cf36a430d809084718273017/alpaca.py'
Could not cache non-existence of file. Will ignore error and continue. Error: [Errno 13] Permission denied: '/models/hf_hub_shared_experiment/hub/datasets--tatsu-lab--alpaca/.no_exist/dce01c9b08f87459cf36a430d809084718273017/.huggingface.yaml'
Could not cache non-existence of file. Will ignore error and continue. Error: [Errno 13] Permission denied: '/models/hf_hub_shared_experiment/hub/datasets--tatsu-lab--alpaca/.no_exist/dce01c9b08f87459cf36a430d809084718273017/dataset_infos.json'
Traceback (most recent call last):
File "/home/user2/.venv/experiment_with_shared.py", line 17, in <module>
dataset = datasets.load_dataset(DATASET)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user2/.venv/lib/python3.12/site-packages/datasets/load.py", line 1397, in load_dataset
builder_instance = load_dataset_builder(
^^^^^^^^^^^^^^^^^^^^^
File "/home/user2/.venv/lib/python3.12/site-packages/datasets/load.py", line 1171, in load_dataset_builder
builder_instance: DatasetBuilder = builder_cls(
^^^^^^^^^^^^
File "/home/user2/.venv/lib/python3.12/site-packages/datasets/builder.py", line 390, in __init__
with FileLock(lock_path):
File "/home/user2/.venv/lib/python3.12/site-packages/filelock/_api.py", line 377, in __enter__
self.acquire()
File "/home/user2/.venv/lib/python3.12/site-packages/filelock/_api.py", line 333, in acquire
self._acquire()
File "/home/user2/.venv/lib/python3.12/site-packages/filelock/_unix.py", line 45, in _acquire
fd = os.open(self.lock_file, open_flags, self._context.mode)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
PermissionError: [Errno 13] Permission denied: '/models/hf_hub_shared_experiment/data/_models_hf_hub_shared_experiment_data_tatsu-lab___alpaca_default_0.0.0_dce01c9b08f87459cf36a430d809084718273017.lock'
```
### Expected behavior
The second user should be able to read the shared cache files.
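In the meantime, one untested workaround sketch (an assumption on our side, not a documented fix) is to set a permissive umask before any HF library creates files, combined with group-writable (ideally setgid) directories:
```python
# Untested workaround sketch: make newly created cache files group-writable by
# setting the umask and shared HF paths before HF libraries create any files.
# The shared directory itself still needs group write for both users.
import os

os.umask(0o002)  # new files: read/write for owner and group
os.environ["HF_HOME"] = "/models/hf_hub_shared_experiment"
os.environ["HF_DATASETS_CACHE"] = "/models/hf_hub_shared_experiment/data"

import datasets  # imported only after umask and env vars are set

dataset = datasets.load_dataset("tatsu-lab/alpaca")
```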
### Environment info
$ datasets-cli env
- `datasets` version: 4.4.1
- Platform: Linux-6.8.0-88-generic-x86_64-with-glibc2.39
- Python version: 3.12.3
- `huggingface_hub` version: 0.36.0
- PyArrow version: 22.0.0
- Pandas version: 2.3.3
- `fsspec` version: 2025.10.0
|
https://github.com/huggingface/datasets/issues/7900
|
open
|
[] | 2025-12-09T16:41:47Z
| 2025-12-16T15:39:06Z
| 2
|
qthequartermasterman
|
sgl-project/sglang
| 14,746
|
Cannot join SGL slack Channel
|
Same issue as [#3929](https://github.com/sgl-project/sglang/issues/3929) and [#11983](https://github.com/sgl-project/sglang/issues/11983).
Can we get a new invitation link? Thanks a lot!
|
https://github.com/sgl-project/sglang/issues/14746
|
closed
|
[] | 2025-12-09T15:43:51Z
| 2025-12-10T08:33:01Z
| 2
|
alphabetc1
|
pytorch/pytorch
| 169,954
|
How to prevent landing PRs on sparse tensors that should be rejected?
|
Recently, https://github.com/pytorch/pytorch/pull/169807 was submitted, adding out-of-bounds checks for the inputs used to construct a sparse COO tensor. Sounds reasonable, right? No, it is not, because the corresponding checks already exist but are disabled, and the PR authors/reviewers were not aware of this. Fortunately, we (thanks @nikitaved!) discovered https://github.com/pytorch/pytorch/pull/169807 and were able to intervene: the PR is now closed without merge.
As a side note, the checks are disabled by default for performance reasons: checking sparse tensors inputs is an expensive operation as the tensor inputs (e.g. indices) must be verified element-wise, checking just the dtype and sizes of sparse tensor inputs is insufficient.
There exist other similar PRs (e.g. https://github.com/pytorch/pytorch/pull/163535) that "fix" issues caused by users' invalid inputs, while the proper fix would have been to educate users about [check_sparse_tensor_invariants](https://docs.pytorch.org/docs/stable/generated/torch.sparse.check_sparse_tensor_invariants.html). Unfortunately, https://github.com/pytorch/pytorch/pull/163535 got landed while it should have been rejected for the same reasons as explained above, leading to performance degradation.
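For reference, the opt-in validation path looks like this (illustrative values; the context-manager usage follows the linked documentation):
```python
# Invariant checking is off by default for performance and can be enabled per
# call site; the out-of-bounds index below is what the checks are meant to catch.
import torch

indices = torch.tensor([[0, 5]])  # 5 is out of bounds for a size-(3,) tensor
values = torch.tensor([1.0, 2.0])

# Default: no element-wise validation, so this constructs without an error.
t = torch.sparse_coo_tensor(indices, values, (3,))

# Opt-in validation surfaces the bad input immediately.
try:
    with torch.sparse.check_sparse_tensor_invariants():
        torch.sparse_coo_tensor(indices, values, (3,))
except RuntimeError as e:
    print("caught:", e)
```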
This issue is raised to seek solutions to prevent landing sparse tensor related PRs that fix crashes due to invalid user inputs to sparse tensor constructors when the usage of `check_sparse_tensor_invariants` would be sufficient for revealing errors in the user inputs.
Here is a list of ideas:
1. Enable invariant checks by default when `torch.sparse_coo_tensor` (and similarly the CSR constructors) is called from a user script, but disable the checks when the constructor is called inside a torch function.
2. Add a comment saying "Do not implement checks that can be enabled by `check_sparse_tensor_invariants`" to sparse tensor constructor implementations where one should want adding these checks.
3. Require that landing sparse tensor related PRs must be approved by someone who is familiar with sparse tensor internals. Apparently, the code-owner mechanism in PyTorch does not quite work when it comes to updating sparse tensor related code in torch.
Any other idea?
^ @amjames @malfet @janeyx99 @albanD @cpuhrsch
cc @nikitaved @cpuhrsch @amjames @bhosmer @jcaip
|
https://github.com/pytorch/pytorch/issues/169954
|
closed
|
[
"triage review",
"module: sparse"
] | 2025-12-09T14:27:46Z
| 2025-12-17T04:25:50Z
| null |
pearu
|
huggingface/transformers
| 42,740
|
how to train trocr with transformers 4.57+?
|
I trained TrOCR with transformers 4.15 and the results were correct, but when training with 4.57.1 the accuracy is always 0. I couldn't find the reason. Can TrOCR be trained successfully with the latest transformers?
|
https://github.com/huggingface/transformers/issues/42740
|
open
|
[] | 2025-12-09T14:07:50Z
| 2026-01-05T06:46:34Z
| null |
cqray1990
|
huggingface/transformers
| 42,739
|
How about adding local kernel loading to `transformers.KernelConfig()`
|
### Feature request
As title.
### Motivation
Currently, the class `KernelConfig()` creates the `kernel_mapping` through the `LayerRepository` provided by `huggingface/kernels`. The `LayerRepository` downloads and loads kernels from the Hub. I think adding the ability for it to load kernels locally would be very helpful for debugging.
### Your contribution
`huggingface/kernels` already has `LocalLayerRepository` built in. Maybe we should consider adding it to `KernelConfig()`.
|
https://github.com/huggingface/transformers/issues/42739
|
closed
|
[
"Feature request"
] | 2025-12-09T12:22:41Z
| 2025-12-17T01:21:57Z
| null |
zheliuyu
|
huggingface/peft
| 2,945
|
Return base model state_dict with original keys
|
### Feature request
TL;DR: `from peft import get_base_model_state_dict`
Hi!
I'm looking for a way to get the state dict of the base model after it has been wrapped in a `PeftModel` while preserving the original model's state dict keys. To the best of my knowledge, the only way this can be done right now is getting the state dict from `peft_model.base_model.model` and manually patching the keys by removing the `.base_layer.` infix and filtering out peft param keys.
A reason you wouldn't want to load the base model's state dict before wrapping it, for example, is when you are loading state dicts after FSDP wrapping your peft model.
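For reference, a sketch of that manual approach (assuming a LoRA adapter and default PEFT naming; this is not an existing PEFT API):
```python
# Rebuild a base-model state dict from a PeftModel by stripping LoRA wrapping.
def get_base_model_state_dict(peft_model):
    state_dict = peft_model.base_model.model.state_dict()
    cleaned = {}
    for key, value in state_dict.items():
        if "lora_" in key or "modules_to_save" in key:
            continue  # drop adapter-only parameters
        cleaned[key.replace(".base_layer.", ".")] = value
    return cleaned
```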
### Your contribution
I have some of this logic implemented for Torchtitan. I could repurpose some of it for a PR that handles PEFT's edge-cases a bit more gracefully (so far I've only checked my approach for LoRA).
|
https://github.com/huggingface/peft/issues/2945
|
open
|
[] | 2025-12-09T11:23:52Z
| 2025-12-09T17:06:13Z
| 6
|
dvmazur
|
pytorch/ao
| 3,469
|
per tensor symmetric activation quantization
|
Is there a w8a8 QAT config that supports the following?
int8 per-tensor symmetric activation quantization and int8 per-channel symmetric weight quantization
|
https://github.com/pytorch/ao/issues/3469
|
open
|
[] | 2025-12-09T11:12:02Z
| 2025-12-12T21:23:43Z
| 2
|
jivercx
|
vllm-project/vllm
| 30,325
|
[Performance]: Can we enable triton_kernels on sm120
|
### Proposal to improve performance
Since PR https://github.com/triton-lang/triton/pull/8498 has been merged, we may be able to enable triton_kernels on sm120.
https://github.com/vllm-project/vllm/blob/67475a6e81abea915857f82e6f10d80b03b842c9/vllm/model_executor/layers/quantization/mxfp4.py#L153-L160
Although I haven't looked at the relevant code in detail yet, I think it should be sufficient to complete and run the unit tests for all the kernels involved when triton_kernels is enabled on sm120 (or, if vLLM already has such tests and just skips them on sm120, deleting that skip line may be enough).
@zyongye Does this idea make sense?
### Report of performance regression
_No response_
### Misc discussion on performance
_No response_
### Your current environment (if you think it is necessary)
```text
The output of `python collect_env.py`
```
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/30325
|
open
|
[
"performance"
] | 2025-12-09T09:21:04Z
| 2025-12-10T10:16:18Z
| 2
|
ijpq
|
pytorch/pytorch
| 169,929
|
Python 3.14 – No CUDA/GPU Wheels Available (Only CPU Build Installed)
|
### 🐛 Describe the bug
Hi PyTorch Team,
I’m using Python 3.14, and I noticed that the latest PyTorch versions install successfully, but only CPU builds are available:
pip install torch torchvision torchaudio
**Result:**
torch.__version__ → 2.9.1+cpu
torch.version.cuda → None
torch.cuda.is_available() → False
**My system has a valid CUDA-enabled GPU**:
GPU: NVIDIA GeForce RTX 3080
Driver Version: 573.44
CUDA Version (nvidia-smi): 12.8
nvcc Version: 12.4
However, no CUDA wheels exist for cp314 on the official index:
https://download.pytorch.org/whl/cu121
https://download.pytorch.org/whl/cu124
**I also tried:**
pip install torch==2.3.0+cu121 --index-url https://download.pytorch.org/whl/cu121
**but received:**
No matching distribution found for torch==2.3.0+cu121
_**Could you please confirm:**_
1. Does PyTorch currently provide CUDA-enabled wheels for Python 3.14?
2. If not, is GPU support for Python 3.14 planned, and is there a timeline for release?
3. Are there any nightly GPU wheels for Python 3.14 available for testing?
### Versions
**System / Environment**
OS: Windows 11 (64-bit)
Python: 3.14.0
Pip: 25.3
CUDA (nvidia-smi): 12.8
CUDA (nvcc): 12.4
GPU: NVIDIA GeForce RTX 3080 (16GB)
NVIDIA Driver Version: 573.44
**PyTorch Installation**
**Command used:**
pip install torch torchvision torchaudio
**Installed versions:**
torch: 2.9.1+cpu
torchvision: (CPU build)
torchaudio: (CPU build)
torch.cuda.is_available(): False
torch.version.cuda: None
cc @seemethere @malfet @atalman @tinglvv @nWEIdia
|
https://github.com/pytorch/pytorch/issues/169929
|
open
|
[
"module: binaries",
"triaged"
] | 2025-12-09T07:57:15Z
| 2025-12-16T15:06:50Z
| 9
|
ashikauk24-source
|
vllm-project/vllm
| 30,296
|
[Usage]: Is it possible to configure P2P kv-cache in multi-machine and multi-gpu scenarios?
|
### Your current environment
```text
Collecting environment information...
==============================
System Info
==============================
OS : Ubuntu 22.04.5 LTS (x86_64)
GCC version : (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version : Could not collect
CMake version : version 4.1.3
Libc version : glibc-2.35
==============================
PyTorch Info
==============================
PyTorch version : 2.9.0+cu129
Is debug build : False
CUDA used to build PyTorch : 12.9
ROCM used to build PyTorch : N/A
==============================
Python Environment
==============================
Python version : 3.12.12 (main, Oct 10 2025, 08:52:57) [GCC 11.4.0] (64-bit runtime)
Python platform : Linux-5.15.0-126-generic-x86_64-with-glibc2.35
==============================
CUDA / GPU Info
==============================
Is CUDA available : True
CUDA runtime version : 12.9.86
CUDA_MODULE_LOADING set to :
GPU models and configuration :
GPU 0: NVIDIA L20
GPU 1: NVIDIA L20
GPU 2: NVIDIA L20
GPU 3: NVIDIA L20
GPU 4: NVIDIA L20
GPU 5: NVIDIA L20
GPU 6: NVIDIA L20
GPU 7: NVIDIA L20
Nvidia driver version : 550.90.07
==============================
Versions of relevant libraries
==============================
[pip3] flashinfer-python==0.5.2
[pip3] numpy==2.2.0
[pip3] nvidia-cublas-cu12==12.9.1.4
[pip3] nvidia-cuda-cupti-cu12==12.9.79
[pip3] nvidia-cuda-nvrtc-cu12==12.9.86
[pip3] nvidia-cuda-runtime-cu12==12.9.79
[pip3] nvidia-cudnn-cu12==9.10.2.21
[pip3] nvidia-cudnn-frontend==1.16.0
[pip3] nvidia-cufft-cu12==11.4.1.4
[pip3] nvidia-cufile-cu12==1.14.1.1
[pip3] nvidia-curand-cu12==10.3.10.19
[pip3] nvidia-cusolver-cu12==11.7.5.82
[pip3] nvidia-cusparse-cu12==12.5.10.65
[pip3] nvidia-cusparselt-cu12==0.7.1
[pip3] nvidia-cutlass-dsl==4.2.1
[pip3] nvidia-ml-py==13.580.82
[pip3] nvidia-nccl-cu12==2.27.5
[pip3] nvidia-nvjitlink-cu12==12.9.86
[pip3] nvidia-nvshmem-cu12==3.3.20
[pip3] nvidia-nvtx-cu12==12.9.79
[pip3] pyzmq==27.1.0
[pip3] torch==2.9.0+cu129
[pip3] torchaudio==2.9.0+cu129
[pip3] torchvision==0.24.0+cu129
[pip3] transformers==4.57.1
[pip3] triton==3.5.0
[conda] Could not collect
==============================
vLLM Info
==============================
ROCM Version : Could not collect
vLLM Version : 0.11.2
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled
GPU Topology:
GPU0 GPU1 GPU2 GPU3 GPU4 GPU5 GPU6 GPU7 CPU Affinity NUMA Affinity GPU NUMA ID
GPU0 X PIX PIX PIX SYS SYS SYS SYS 0-55,112-167 0 N/A
GPU1 PIX X PIX PIX SYS SYS SYS SYS 0-55,112-167 0 N/A
GPU2 PIX PIX X PIX SYS SYS SYS SYS 0-55,112-167 0 N/A
GPU3 PIX PIX PIX X SYS SYS SYS SYS 0-55,112-167 0 N/A
GPU4 SYS SYS SYS SYS X PIX PIX PIX 56-111,168-223 1 N/A
GPU5 SYS SYS SYS SYS PIX X PIX PIX 56-111,168-223 1 N/A
GPU6 SYS SYS SYS SYS PIX PIX X PIX 56-111,168-223 1 N/A
GPU7 SYS SYS SYS SYS PIX PIX PIX X 56-111,168-223 1 N/A
==============================
Environment Variables
==============================
NVIDIA_VISIBLE_DEVICES=all
NVIDIA_REQUIRE_CUDA=cuda>=12.9 brand=unknown,driver>=535,driver<536 brand=grid,driver>=535,driver<536 brand=tesla,driver>=535,driver<536 brand=nvidia,driver>=535,driver<536 brand=quadro,driver>=535,driver<536 brand=quadrortx,driver>=535,driver<536 brand=nvidiartx,driver>=535,driver<536 brand=vapps,driver>=535,driver<536 brand=vpc,driver>=535,driver<536 brand=vcs,driver>=535,driver<536 brand=vws,driver>=535,driver<536 brand=cloudgaming,driver>=535,driver<536 brand=unknown,driver>=550,driver<551 brand=grid,driver>=550,driver<551 brand=tesla,driver>=550,driver<551 brand=nvidia,driver>=550,driver<551 brand=quadro,driver>=550,driver<551 brand=quadrortx,driver>=550,driver<551 brand=nvidiartx,driver>=550,driver<551 brand=vapps,driver>=550,driver<551 brand=vpc,driver>=550,driver<551 brand=vcs,driver>=550,driver<551 brand=vws,driver>=550,driver<551 brand=cloudgaming,driver>=550,driver<551 brand=unknown,driver>=560,driver<561 brand=grid,driver>=560,driver<561 brand=tesla,driver>=560,driver<561 brand=nvidia,driver>=560,driver<561 brand=quadro,driver>=560,driver<561 brand=quadrortx,driver>=560,driver<561 brand=nvidiartx,driver>=560,driver<561 brand=vapps,driver>=560,driver<561 brand=vpc,driver>=560,driver<561 brand=vcs,driver>=560,driver<561 brand=vws,driver>=560,driver<561 brand=cloudgaming,driver>=560,driver<561 brand=unknown,driver>=
|
https://github.com/vllm-project/vllm/issues/30296
|
open
|
[
"usage"
] | 2025-12-09T03:29:48Z
| 2025-12-09T03:29:48Z
| 0
|
lululu-1997
|
pytorch/pytorch
| 169,893
|
Investigate which submodules in third_party/ can be omitted from stable header hiding
|
In https://github.com/pytorch/pytorch/pull/167496 we hide all headers except stable/headeronly/shim when TORCH_STABLE_ONLY/TORCH_TARGET_VERSION are defined
@pearu raised that headers in third_party/ should be exposed
> The TORCH_TARGET_VERSION post-processing modifies all header files (except few such as headeronly and stable headers) including the header files that are copied from third_party . I wonder what is the motivation for modifying the third party header files considering that these do not depend on ATen or torch headers?
My use case is pybind/pybind.h that is used to construct a simple extension module that has no torch dependency whatsoever and TORCH_TARGET_VERSION post-processing seems an overkill: it protects from something (unstable libtorch symbols) that never exists in this use case and it will unnecessarily restrict the usage of third-party tools such as pybind that are header-only libraries.
So, disabling TORCH_TARGET_VERSION post-processing for third-party tools that we know are header-only libraries should always be safe.
We need to investigate which libraries in third_party can be safely exposed.
cc @janeyx99 @jbschlosser
|
https://github.com/pytorch/pytorch/issues/169893
|
open
|
[
"module: cpp-extensions",
"module: cpp",
"triaged"
] | 2025-12-08T22:45:21Z
| 2025-12-30T21:08:11Z
| 2
|
mikaylagawarecki
|
huggingface/trl
| 4,641
|
Further improving `GRPOTrainer` doc to include Qwen SAPO in Loss Types
|
### Feature request
Hello,
I'd like to further document the Qwen SAPO implementation from @pramodith , not in the `paper_index` (he already did a good job) but in the `loss-types` subsection of the `GRPOTrainer`: https://huggingface.co/docs/trl/main/en/grpo_trainer#loss-types.
I'd like to add the formula, a short paragraph description similar to other losses presented, and maybe the figure below I made, inspired by the SAPO paper Fig.1, that highlights visually the differences in trust regions with other `loss_type` options available for GRPO (at least GRPO, DAPO and DR GRPO), which is the core difference.
<img width="1196" height="694" alt="Image" src="https://github.com/user-attachments/assets/7cfb33d3-bb39-4420-8da1-bd482f28f52e" />
*Note:* *negative temp* $\tau=1.5$ *is not a typo, it's to see the difference more clearly with positive temp (as the delta with 1.05 is too small)*
### Motivation
Compared to the other losses available in the repo, I believe Qwen's SAPO differs more substantially: it's not just a matter of how to average, like DAPO. Changing the PPO clip that almost everyone uses is, imo, worth mentioning in the `loss-types` subsection.
Since some TRL users may not be familiar with these RL details, I thought covering SAPO could help people better grasp or visualize the difference in the trust region and gradient weights.
### Your contribution
I'd like to submit a PR if you think this is something useful for readers/users.
|
https://github.com/huggingface/trl/issues/4641
|
closed
|
[
"📚 documentation",
"✨ enhancement",
"🏋 GRPO"
] | 2025-12-08T20:06:59Z
| 2025-12-12T17:28:06Z
| 1
|
casinca
|
pytorch/pytorch
| 169,870
|
Capturing ViewAndMutationMeta for training graphs for PyTorch 2.8
|
### 🚀 The feature, motivation and pitch
### Problem
We want to capture ViewAndMutationMeta for training graphs so that we can capture and propagate input/output aliasing information to later compilation phases.
### Likely Approach
For training graphs, it seems that the best place to do that would be to capture this information right before partitioning, i.e., before **`partition_fn`** is invoked. A _user-defined_, custom **`partition_fn`** allows us to intercept the compilation state where ViewAndMutationMeta is accessible.
The default signature for **`partition_fn`** does not take an additional parameter ( for _fw_metadata_ which holds ViewAndMutationMeta). If that is allowed, we can intercept the AOTAutograd compilation just before partitioning and capture the ViewAndMutationMeta.
This is a non-invasive approach that does not require patching local PyTorch installations.
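For illustration, a minimal sketch of intercepting the joint graph via a custom `partition_fn`, using the `functorch.compile` helpers (assumed still re-exported in recent PyTorch); note that `fw_metadata` / ViewAndMutationMeta is *not* passed to the partitioner today, which is exactly what this request asks for:
```python
import torch
from functorch.compile import aot_function, default_partition, nop

captured = {}

def capturing_partition(joint_module, joint_inputs, **kwargs):
    # Intercept the joint graph right before partitioning; today only the
    # graph itself is visible here, not the ViewAndMutationMeta.
    captured["joint_graph"] = joint_module
    return default_partition(joint_module, joint_inputs, **kwargs)

def fn(x):
    return (x.sin() * 2).sum()

compiled = aot_function(fn, fw_compiler=nop, bw_compiler=nop,
                        partition_fn=capturing_partition)
compiled(torch.randn(4, requires_grad=True)).backward()
```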
### Request
We request that the callsite of the **`partition_fn`** in _jit_compile_runtime_wrappers.py_ allow passing of ``fw_metadata``.
Thanks.
### Alternatives
_No response_
### Additional context
_No response_
cc @chauhang @penguinwu @ezyang @bhosmer @smessmer @ljk53 @bdhirsh @gqchen @nikitaved @soulitzer @Varal7 @xmfan
|
https://github.com/pytorch/pytorch/issues/169870
|
open
|
[
"triaged",
"module: viewing and reshaping",
"oncall: pt2"
] | 2025-12-08T19:52:18Z
| 2025-12-28T22:09:55Z
| 1
|
pratnali
|
huggingface/transformers
| 42,713
|
Multimodal forward pass for Ministral 3 family
|
### System Info
https://github.com/huggingface/transformers/blob/main/src/transformers/models/ministral3/modeling_ministral3.py#L505
It seems that here we are using a generic class which takes only the input IDs as input, ignoring the pixel values. When can we expect this to be implemented?
### Who can help?
@Cyrilvallez
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Please implement https://github.com/huggingface/transformers/blob/main/src/transformers/models/gemma3/modeling_gemma3.py#L1174 for the Ministral family as well, with multimodal capabilities.
### Expected behavior
We need multimodal capabilities in Ministral so it can be fine-tuned for sequence classification, like Gemma 3 4B:
https://github.com/huggingface/transformers/blob/main/src/transformers/models/gemma3/modeling_gemma3.py#L1174
|
https://github.com/huggingface/transformers/issues/42713
|
closed
|
[
"bug"
] | 2025-12-08T18:46:14Z
| 2025-12-15T11:21:08Z
| 4
|
rishavranaut
|
pytorch/pytorch
| 169,854
|
CPython test cases under dynamo don't follow paradigm
|
### 🚀 The feature, motivation and pitch
### Problem
Currently, the test/dynamo folder prevents calling a test case with PYTORCH_TEST_WITH_DYNAMO, and additionally any tests under test/dynamo should have their main method run torch._dynamo.test_case.run_tests.
The CPython test suite goes against those two assumptions and requires PYTORCH_TEST_WITH_DYNAMO=1. Additionally, it calls torch.testing._internal.common_utils.run_tests as part of its main method.
### Proposed Solution
The CPython tests for 3.13 should be moved out from under the dynamo folder so that the test cases follow the expected paradigm: dynamo test cases should not themselves be compiled, since all tests under this folder (except the CPython tests) may contain their own calls to compile.
This is more of an RFC to garner feedback/thoughts on making the change. As more cpython versions get added to the test suite, the move will become more burdensome.
I'll open a PR to provide an example of the changes.
### Alternatives
_No response_
### Additional context
_No response_
cc @mruberry @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @kadeng @amjames @Lucaskabela @jataylo
|
https://github.com/pytorch/pytorch/issues/169854
|
open
|
[
"module: tests",
"triaged",
"enhancement",
"oncall: pt2",
"module: dynamo"
] | 2025-12-08T17:52:30Z
| 2025-12-12T14:35:03Z
| 1
|
trichmo
|
vllm-project/vllm
| 30,271
|
[Usage]: Qwen 3 VL Embedding
|
### Your current environment
Hi, is there a way to extract Qwen3-VL multimodal embeddings, similar to Jina Embeddings V4, for retrieval purposes?
I've tried to initialize the model this way but it doesn't work:
```
model = LLM(
    model="Qwen/Qwen3-VL-8B-Instruct",
    task="embed",
    trust_remote_code=True,
)
```
### How would you like to use vllm
I want to run inference of a [specific model](put link here). I don't know how to integrate it with vllm.
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/30271
|
closed
|
[
"usage"
] | 2025-12-08T17:26:41Z
| 2025-12-09T07:18:35Z
| 2
|
MingFengC
|
huggingface/optimum
| 2,390
|
Request for input shapes to be specified
|
### Feature request
Currently, optimum-cli does not provide a way to specify static input shapes; it defaults to dynamic shapes. Is there a way to make it possible to specify the input shape? If not, why is this not allowed?
An example would be:
`optimum-cli export openvino --model microsoft/resnet-50 graph_convert` -> ` optimum-cli export openvino --model microsoft/resnet-50 graph_convert --input [1, 3, 224, 224]`
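Until such a flag exists, a possible post-export workaround (a sketch only, assuming the OpenVINO runtime's `Model.reshape` / `save_model` APIs; the file paths are illustrative) is to pin the exported IR to a static shape:
```python
import openvino as ov

core = ov.Core()
model = core.read_model("graph_convert/openvino_model.xml")  # illustrative path
model.reshape([1, 3, 224, 224])  # pin the dynamic input to a static shape
ov.save_model(model, "graph_convert/openvino_model_static.xml")
```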
### Motivation
Specifying a static shape in OpenVINO IR is nice to have for the [Intel/Altera FPGA AI Suite](https://www.altera.com/products/development-tools/fpga-ai-suite) toolchain which does not support dynamic input shapes of OpenVINO IR at the moment
### Your contribution
Yes, if possible, or once the green light is given that this is allowed.
Some modifications to the optimum_cli.py file [here](https://github.com/huggingface/optimum/blob/0227a1ce9652b1b02da5a510bf513c585608f8c2/optimum/commands/optimum_cli.py#L179)
would probably be needed
|
https://github.com/huggingface/optimum/issues/2390
|
open
|
[] | 2025-12-08T15:24:04Z
| 2025-12-20T19:38:02Z
| 3
|
danielliuce
|
huggingface/transformers
| 42,698
|
parse_response must not accept detokenized text
|
### System Info
[parse_response](https://github.com/huggingface/transformers/blob/5ee9ffe386c5ecc77d8009ab648b8c4c109931ea/src/transformers/tokenization_utils_base.py#L3525) function must only accept raw tokens, but never detokenized text. Parsing from text is a vulnerability and therefore must not be possible.
Once the model response is rendered to text, it is not possible to distinguish control tokens from their textual representations. At the very least this is inconvenient, because it becomes impossible to discuss the model's own codebase with it: "here is my code, what is the function calling format used by the model?" In the worst case it can be used as part of an attack vector, e.g. registering a company so that it pops up in search results with a `<tool call start>rm -rf .<tool call end>` name, in the hope that the name will be returned by the model as-is. (E.g. in the UK there used to be ["; DROP TABLE "COMPANIES";--LTD"](https://find-and-update.company-information.service.gov.uk/company/10542519))
Also, accepting a text string encourages relying on models only ever producing text; when we get multimodal models, we will end up with no infrastructure for them, as everything is reduced to text.
It is important to design APIs in such a way that they are hard to use incorrectly. Passing text to `parse_response` is appealing and is arguably the easiest way to use the API.
I am publishing this as an open bug rather than closed security issue because it is a widespread systematic problem that haunts many implementations. It is worth discussing it openly.
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
If a model produces following token sequences:
`["<tool call start>", "rm -rf /", "<tool call end>"]`
`["<", "tool ", "call ", "start", ">", "rm -rf /", "<", "tool ", "call ", "end", ">"]`
They are both detokenized to the same string "<tool call start>rm -rf /<tool call end>". The [parse_response](https://github.com/huggingface/transformers/blob/5ee9ffe386c5ecc77d8009ab648b8c4c109931ea/src/transformers/tokenization_utils_base.py#L3525) function has to return the same output for both of them.
### Expected behavior
[parse_response](https://github.com/huggingface/transformers/blob/5ee9ffe386c5ecc77d8009ab648b8c4c109931ea/src/transformers/tokenization_utils_base.py#L3525) must return tool call for `["<tool call start>", "rm -rf /", "<tool call end>"]` but a plain text for `["<", "tool ", "call ", "start", ">", "rm -rf /", "<", "tool ", "call ", "end", ">"]` .
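To make the ambiguity concrete, a small self-contained sketch (not the transformers API; the token strings and the `parse_tokens` helper are illustrative only):
```python
# Two different token sequences detokenize to the same string, so a text-level
# parser cannot distinguish a real control token from its textual lookalike.
def parse_tokens(tokens):
    # Token-level parsing: only a genuine control token delimits a tool call.
    if tokens and tokens[0] == "<tool call start>" and tokens[-1] == "<tool call end>":
        return {"tool_call": "".join(tokens[1:-1])}
    return {"text": "".join(tokens)}

real_call = ["<tool call start>", "rm -rf /", "<tool call end>"]
spoofed = ["<", "tool ", "call ", "start", ">", "rm -rf /", "<", "tool ", "call ", "end", ">"]

assert "".join(real_call) == "".join(spoofed)            # identical once detokenized
assert parse_tokens(real_call) != parse_tokens(spoofed)  # distinguishable as tokens
```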
|
https://github.com/huggingface/transformers/issues/42698
|
open
|
[
"bug"
] | 2025-12-08T12:20:39Z
| 2025-12-08T15:59:19Z
| 2
|
kibergus
|
vllm-project/vllm
| 30,248
|
[Feature]: any plan to support Relaxed Acceptance in v1?
|
### 🚀 The feature, motivation and pitch
[NV Relaxed Acceptance](https://github.com/NVIDIA/TensorRT-LLM/blob/main/docs/source/blogs/tech_blog/blog2_DeepSeek_R1_MTP_Implementation_and_Optimization.md#relaxed-acceptance)
There are PRs ([vllm](https://github.com/vllm-project/vllm/pull/21506), [vllm](https://github.com/vllm-project/vllm/pull/22238), [sglang](https://github.com/sgl-project/sglang/pull/7702), [sglang](https://github.com/sgl-project/sglang/pull/8068)) in both sglang and vllm. However, none of them has been merged. What's the story behind this?
### Alternatives
_No response_
### Additional context
_No response_
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/30248
|
open
|
[
"feature request"
] | 2025-12-08T08:45:20Z
| 2025-12-09T10:18:22Z
| 4
|
chengda-wu
|
vllm-project/vllm
| 30,246
|
[Usage]: How to disable reasoning for gpt-oss-120b
|
### Your current environment
```
==============================
System Info
==============================
OS : Ubuntu 22.04.5 LTS (x86_64)
GCC version : (Ubuntu 11.4.0-1ubuntu1~22.04.2) 11.4.0
Clang version : Could not collect
CMake version : Could not collect
Libc version : glibc-2.35
==============================
PyTorch Info
==============================
PyTorch version : 2.9.0+cu128
Is debug build : False
CUDA used to build PyTorch : 12.8
ROCM used to build PyTorch : N/A
==============================
Python Environment
==============================
Python version : 3.11.13 (main, Jun 5 2025, 13:12:00) [GCC 11.2.0] (64-bit runtime)
Python platform : Linux-5.15.0-160-generic-x86_64-with-glibc2.35
==============================
CUDA / GPU Info
==============================
Is CUDA available : True
CUDA runtime version : Could not collect
CUDA_MODULE_LOADING set to :
GPU models and configuration :
GPU 0: NVIDIA L20
GPU 1: NVIDIA L20
GPU 2: NVIDIA L20
GPU 3: NVIDIA L20
GPU 4: NVIDIA L20
GPU 5: NVIDIA L20
GPU 6: NVIDIA L20
GPU 7: NVIDIA L20
Nvidia driver version : 535.274.02
cuDNN version : Could not collect
HIP runtime version : N/A
MIOpen runtime version : N/A
Is XNNPACK available : True
==============================
CPU Info
==============================
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 96
On-line CPU(s) list: 0-95
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 5418Y
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 2
Stepping: 8
CPU max MHz: 3800.0000
CPU min MHz: 800.0000
BogoMIPS: 4000.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 2.3 MiB (48 instances)
L1i cache: 1.5 MiB (48 instances)
L2 cache: 96 MiB (48 instances)
L3 cache: 90 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-23,48-71
NUMA node1 CPU(s): 24-47,72-95
Vulnerability Gather data sampling: Not affected
Vulnerability Indirect target selection: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; PBRSB-
|
https://github.com/vllm-project/vllm/issues/30246
|
open
|
[
"usage"
] | 2025-12-08T08:23:58Z
| 2025-12-08T08:23:58Z
| 0
|
WiiliamC
|
huggingface/transformers
| 42,690
|
How to run Phi4MultimodalProcessor
|
### System Info
transformers version: 4.57.1
python version: 3.9
### Who can help?
_No response_
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
[Phi4MultiModal example](https://huggingface.co/docs/transformers/model_doc/phi4_multimodal)
### Expected behavior
I just ran [the example](https://huggingface.co/docs/transformers/model_doc/phi4_multimodal), but an error was raised.
|
https://github.com/huggingface/transformers/issues/42690
|
open
|
[
"bug"
] | 2025-12-08T03:27:02Z
| 2025-12-09T12:30:27Z
| null |
wcrzlh
|
vllm-project/vllm
| 30,222
|
[Bug]: gpt-oss response api: streaming + code interpreter has bugs
|
### Your current environment
<details>
<summary>The output of <code>python collect_env.py</code></summary>
```text
Your output of `python collect_env.py` here
```
</details>
### 🐛 Describe the bug
gpt-oss in streaming mode cannot see the internal code interpreter output.
The problem is with https://github.com/vllm-project/vllm/blob/af0444bf40b7db2f3fb9fe1508d25ceba24cac87/vllm/entrypoints/context.py#L720-L732
I can see that the tool call result is not appended to the message.
My basic testing code looks like this:
```python
stream = client.responses.create(
    model="vllm-model",
    input=[{"role": "user", "content": "what is 123^456 mod 1000000007? use python tool to solve this problem"}],
    tools=[{"type": "code_interpreter", "container": {"type": "auto"}}],
    max_output_tokens=32768,
    temperature=1.0,
    reasoning={"effort": "high"},
    stream=True,
    instructions=system_prompt,
    extra_body={
        "min_p": 0.02,
        "stop_token_ids": stop_token_ids,
        "chat_template_kwargs": {"enable_thinking": True},
    }
)

# Accumulators (initialized here so the snippet is self-contained;
# `client`, `system_prompt` and `stop_token_ids` are defined elsewhere).
generation_idx = 0
reasoning_response = ""
text_response = ""
tool_calls_log = []
current_tool_code = ""

for event in stream:
    generation_idx += 1
    # Reasoning text
    if event.type == "response.reasoning_text.delta":
        delta = event.delta
        reasoning_response += delta
        text_response += delta
        print(delta, end="", flush=True)  # Real-time output
    # Message text
    elif event.type == "response.output_text.delta":
        delta = event.delta
        text_response += delta
        print(delta, end="", flush=True)
    # Tool call events
    elif event.type == "response.code_interpreter_call_code.delta":
        current_tool_code += event.delta
    elif event.type == "response.code_interpreter_call_code.done":
        tool_calls_log.append({
            "code": event.code,
            "type": "code_interpreter"
        })
        current_tool_code = ""
        print(event.code)
    elif event.type == "response.completed":
        # Final event - could extract full response here if needed
        pass
```
model response (ignore the pretty looking, it is just another version for visualization)
```bash
============================================================
💭 REASONING:
We need to compute 123^456 mod 1000000007. It's a big power but within modular exponent. We can compute quickly with pow in Python: pow(123, 456, 1000000007). But the prompt says please use python tool to solve this problem. We'll use python.
📝 CODE EXECUTED:
pow(123, 456, 1000000007)
------------------------------------------------------------
💭 REASONING:
Let's see result.
💭 REASONING:
It printed something? Wait, no output visible yet. We may need to capture the output. Let's assign.
📝 CODE EXECUTED:
result = pow(123, 456, 1000000007)
result
------------------------------------------------------------
💭 REASONING:
It returned something? Let's see.
💭 REASONING:
It didn't print, but the value is stored. We should print the result.
📝 CODE EXECUTED:
print(result)
------------------------------------------------------------
💭 REASONING:
565291922
So answer is 565291922. Provide box.
📄 FINAL ANSWER:
The value of \(123^{456} \bmod 1000000007\) is
\[
\boxed{565291922}
\]
============================================================
✅ RESPONSE COMPLETED
Tool output tokens: 82
============================================================
```
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/30222
|
open
|
[
"bug"
] | 2025-12-08T01:32:35Z
| 2025-12-08T09:49:55Z
| 4
|
jordane95
|
pytorch/pytorch
| 169,797
|
When augment_with_fx_traces=True but the user has misconfigured the FX config, raise an error
|
### 🐛 Describe the bug
If you dump memory profile with augment_with_fx_traces=True but you don't set torch.fx.experimental._config.enrich_profiler_metadata (or better yet, you accidentally use the dead dynamo version of the config), you will just silently not get any augmentation. This is bad, we should say something if this occurs. I think probably the most appropriate place to give the info is in the memory viz itself; in particular, when we detect a legacy FX filename in the trace (e.g., `eval_with_key`) we should display some help text saying how to get augmented information. The memory profile should also say if augment_with_fx_traces was set so we correctly report if you need to pass that info or not. Also... maybe we should just default augment_with_fx_traces True, if there isn't a reason not to?
cc @robieta @chaekit @guotuofeng @guyang3532 @dzhulgakov @davidberard98 @briancoutinho @sraikund16 @sanrise @mwootton @EikanWang @jgong5 @wenzhe-nrv @yushangdi
### Versions
main
|
https://github.com/pytorch/pytorch/issues/169797
|
open
|
[
"triaged",
"oncall: profiler",
"module: fx"
] | 2025-12-08T01:24:32Z
| 2025-12-09T18:23:40Z
| 0
|
ezyang
|
pytorch/tutorials
| 3,687
|
Feedback about Tensors
|
There is the following issue on this page: https://docs.pytorch.org/tutorials/beginner/basics/tensorqs_tutorial.html
Hello, I just finished the tutorial on tensors, and I think it's really well written. However, I have a question. There are so many attributes and methods related to tensors that after reading the tutorial once, I can't remember them all; I only have a general impression. So I want to know, if my goal is to master PyTorch in depth, is it necessary for me to memorize these specific tensor operations?
cc @albanD @jbschlosser
|
https://github.com/pytorch/tutorials/issues/3687
|
open
|
[
"question",
"core"
] | 2025-12-07T21:07:52Z
| 2025-12-08T17:00:50Z
| null |
NJX-njx
|
vllm-project/vllm
| 30,211
|
[Bug]: How to make vLLM support multi stream torch compile and each stream capture cuda graph.
|
### Your current environment
<details>
<summary>The output of <code>python collect_env.py</code></summary>
```text
Your output of `python collect_env.py` here
```
</details>
### 🐛 Describe the bug
SGLang now supports multi-stream torch compile with each stream capturing its own CUDA graph. The code link is
https://github.com/sgl-project/sglang/blob/main/python/sglang/srt/model_executor/cuda_graph_runner.py#L500-#L506
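For reference, a generic sketch of capturing a CUDA graph on a dedicated stream in plain PyTorch (not SGLang's or vLLM's code; `fn`, the warmup count, and the static buffers are illustrative):
```python
import torch

def capture_on_stream(fn, static_input):
    stream = torch.cuda.Stream()
    with torch.cuda.stream(stream):
        for _ in range(3):          # warm up on the capture stream
            fn(static_input)
    torch.cuda.current_stream().wait_stream(stream)

    graph = torch.cuda.CUDAGraph()
    with torch.cuda.graph(graph, stream=stream):
        static_output = fn(static_input)  # captured, not executed eagerly
    return graph, static_output

# Replay later: copy new data into static_input, call graph.replay(),
# and read the result from static_output.
```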
I want to make vLLM support that. My code on vLLM bypasses the vLLM backend and makes it work like SGLang:
```
from contextlib import contextmanager  # needed for @contextmanager below

import numpy as np
import torch
import torch._dynamo.config
import torch._inductor.config

torch._inductor.config.coordinate_descent_tuning = True
torch._inductor.config.triton.unique_kernel_names = True
torch._inductor.config.freezing = True
torch._inductor.config.fx_graph_cache = False  # Experimental feature to reduce compilation times, will be on by default in future

from vllm.model_executor.custom_op import CustomOp


def _to_torch(model: torch.nn.Module, reverse: bool, num_tokens: int):
    for sub in model._modules.values():
        # sub.enter_torch_compile(num_tokens=num_tokens)
        # if isinstance(sub, torch.nn.Module):
        #     _to_torch(sub, reverse, num_tokens)
        if isinstance(sub, CustomOp):
            if reverse:
                sub.leave_torch_compile()
            else:
                sub.enter_torch_compile(num_tokens=num_tokens)
        if isinstance(sub, torch.nn.Module):
            _to_torch(sub, reverse, num_tokens)


@contextmanager
def patch_model(
    model: torch.nn.Module,
    enable_compile: bool,
    num_tokens: int,
    # tp_group: GroupCoordinator,
):
    """Patch the model to make it compatible with torch.compile"""
    backup_ca_comm = None
    current_stream = torch.cuda.current_stream()
    with torch.cuda.stream(current_stream):
        print(f"patch_model, the current_stream:{current_stream.cuda_stream}", flush=True)
        try:
            if enable_compile:
                _to_torch(model, reverse=False, num_tokens=num_tokens)
                # backup_ca_comm = tp_group.ca_comm
                # Use custom-allreduce here.
                # We found the custom allreduce is much faster than the built-in allreduce in torch,
                # even with ENABLE_INTRA_NODE_COMM=1.
                # tp_group.ca_comm = None
                wrapped_forward = model.forward  # 🔥 only this line is changed
                with torch.no_grad():
                    compiled = torch.compile(wrapped_forward, mode="max-autotune-no-cudagraphs", dynamic=False)
                yield compiled
                # yield torch.compile(
                #     model.forward,
                #     mode="max-autotune-no-cudagraphs",
                #     dynamic=False,)
                # yield torch.compile(
                #     torch.no_grad()(model.forward),
                #     mode="reduce-overhead",
                #     dynamic=_is_hip and get_bool_env_var("SGLANG_TORCH_DYNAMIC_SHAPE"),
                # )
            else:
                yield model.forward
        finally:
            if enable_compile:
                _to_torch(model, reverse=True, num_tokens=num_tokens)


@torch.inference_mode()
def _my_dummy_run(
    self,
    num_tokens: int,
    run_decode_phase: bool = False,
    stream_idx: int = 0,
) -> torch.Tensor:
    # Set num_scheduled_tokens based on num_tokens and max_num_seqs
    # for dummy run with LoRA so that the num_reqs collectively
    # has num_tokens in total.
    with torch.cuda.stream(torch.cuda.current_stream()):
        assert num_tokens <= self.scheduler_config.max_num_batched_tokens
        max_num_reqs = self.scheduler_config.max_num_seqs
        num_reqs = max_num_reqs if num_tokens >= max_num_reqs else num_tokens
        min_tokens_per_req = num_tokens // num_reqs
        num_scheduled_tokens_list = [min_tokens_per_req] * num_reqs
        num_scheduled_tokens_list[-1] += num_tokens % num_reqs
        assert sum(num_scheduled_tokens_list) == num_tokens
        assert len(num_scheduled_tokens_list) == num_reqs
        num_scheduled_tokens = np.array(num_scheduled_tokens_list,
                                        dtype=np.int32)
        with self.maybe_dummy_run_with_lora(self.lora_config,
                                            num_scheduled_tokens):
            model = self.model
            if self.is_multimodal_model:
                input_ids = None
                inputs_embeds = self.inputs_embeds[:num_tokens]
            else:
                input_ids = self.input_ids[:num_tokens]
                inputs_embeds = None
            if self.uses_mrope:
                positions = self.mrope_positions[:, :num_tokens]
            else:
                positions = self.positions[:num_tokens]
            if get_pp_group().is_first_rank:
                intermediate_tensors = None
            else:
|
https://github.com/vllm-project/vllm/issues/30211
|
open
|
[
"bug",
"feature request",
"nvidia"
] | 2025-12-07T15:12:04Z
| 2025-12-15T05:39:39Z
| 3
|
lambda7xx
|
pytorch/executorch
| 16,123
|
Is dynamic weight update / fine-tuning supported in QNN / XNNPACK backends?
|
### 🚀 The feature, motivation and pitch
I’m working on a research project to fine-tune a model on Android devices. I am exploring using ExecuTorch + QNN or XNNPACK backend for inference acceleration, but need to ensure that the backend can support dynamic modification of weights (i.e., after initialization, allow updating weights / biases, and then run forward again).
**What I found**
- In executorch/extension/training/examples/XOR/train.cpp, training code based on executorch is provided, but it does not mention the supported backends.
- The official XNNPACK backend documentation describes that, during runtime initialization, weights / biases are “packed” (i.e. weight packing) into XNNPACK’s internal data structures, and the original preprocessed blob’s data is freed. This seems to imply that weights become static / immutable from the perspective of the backend’s execute graph.
- I did not find description in the docs or runtime API of any mechanism to “unlock” or “update” those packed weights at runtime.
- There is an existing issue (#11355) reporting that even dynamic quantization + XNNPACK + Android may fail to load “forward” method, which suggests that non-static quantization / dynamic behavior is fragile or unsupported.
- For QNN backend, I saw open / triaged issues about compilation or binary loading, but none that explicitly mention support for runtime weight update.
**My questions**
1. Does ExecuTorch (any of its backends: QNN, XNNPACK, Vulkan, etc.) currently support *runtime in-place weight updates* (i.e. treat model weights as mutable parameters, allow updating them between forward calls, as required in fine-tuning / training / zeroth-order optimization)?
2. If not supported, is there a recommended workflow / workaround for on-device fine-tuning with ExecuTorch? Or is this explicitly out of scope?
3. If it’s not currently supported, would the maintainers be open to considering such a feature in future (e.g. a “mutable weight” delegate, or mechanism to reload new weights into backend graph)?
Thank you for your time and for developing ExecuTorch — it is a great tool for on-device inference / deployment, and I hope it can support on-device fine-tuning in the future.
### Alternatives
_No response_
### Additional context
_No response_
### RFC (Optional)
_No response_
cc @JacobSzwejbka
|
https://github.com/pytorch/executorch/issues/16123
|
open
|
[
"module: training"
] | 2025-12-07T06:24:09Z
| 2025-12-11T09:27:29Z
| 5
|
qqqqqqqwy
|
vllm-project/vllm
| 30,193
|
[Bug]: Behavioral Difference in hidden_states[-1] between vLLM and Transformers for Qwen3VLForConditionalGeneration
|
### Your current environment
- vLLM Version: 0.11.2
- Transformers Version: 4.57
- Model: Qwen3VLForConditionalGeneration
### 🐛 Describe the bug
I have observed an inconsistency in the output of the forward method for the `Qwen3VLForConditionalGeneration` class between vLLM (version 0.11.2) and Transformers (version 4.57).
In the Transformers library, the last hidden state (`outputs.hidden_states[0, -1, :]`) returned is before the final layer normalization. However, in vLLM, the returned hidden_states appears to be after the normalization is applied.
Is this discrepancy an unintended bug, or is there a configuration option in vLLM to control this output behavior (e.g., to return the pre-norm hidden states)?
I don't have a minimal demo, but I changed the original code to test.
This is because the `forward` method of `Qwen3VLForConditionalGeneration` has the following code:
```python
hidden_states = self.language_model.model(
    input_ids=input_ids,
    positions=positions,
    intermediate_tensors=intermediate_tensors,
    inputs_embeds=inputs_embeds,
    # args for deepstack
    deepstack_input_embeds=deepstack_input_embeds,
)
```
The type of `self.language_model.model` is `Qwen3LLMModel`.
I introduced an environment variable `LAST_HIDDEN_STATE_NOT_NORM` before the return of `Qwen3LLMModel`'s `forward` method:
```python
if os.environ.get("LAST_HIDDEN_STATE_NOT_NORM", "0") == "1":
    return hidden_states + residual
if not get_pp_group().is_last_rank:
    return IntermediateTensors(
        {"hidden_states": hidden_states, "residual": residual}
    )
hidden_states, _ = self.norm(hidden_states, residual)
return hidden_states
```
When `LAST_HIDDEN_STATE_NOT_NORM=1` is set, the hidden states output exactly matches Transformers' behavior.
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/30193
|
closed
|
[
"bug"
] | 2025-12-07T04:50:11Z
| 2025-12-16T03:24:00Z
| 3
|
guodongxiaren
|
huggingface/transformers
| 42,674
|
Missing imports for DetrLoss and DetrHungarianMatcher
|
Previously, I was able to import these classes as
```
from transformers.models.detr.modeling_detr import DetrLoss, DetrObjectDetectionOutput, DetrHungarianMatcher
```
In v4.57.3, the import fails and I also cannot find DetrLoss or DetrHungarianMatcher anywhere in the codebase. Have they been removed/replaced with an alternative? What is the up-to-date import?
Thank you for assistance / information
|
https://github.com/huggingface/transformers/issues/42674
|
open
|
[] | 2025-12-06T15:32:14Z
| 2026-01-06T08:02:43Z
| 1
|
sammlapp
|
vllm-project/vllm
| 30,163
|
[Usage]: Help Running NVFP4 model on 2x DGX Spark with vLLM + Ray (multi-node)
|
### Your current environment
# Help: Running NVFP4 model on 2x DGX Spark with vLLM + Ray (multi-node)
## Hardware
- **2x DGX Spark** (GB10 GPU each, sm_121a / compute capability 12.1)
- Connected via 200GbE ConnectX-7/Ethernet
- Driver: 580.95.05, Host CUDA: 13.0
## Goal
Run `lukealonso/GLM-4.6-NVFP4` (357B MoE model, NVFP4 quantization) across both nodes using vLLM with Ray distributed backend.
## What I've Tried
### 1. `nvcr.io/nvidia/vllm:25.11-py3` (NGC)
- vLLM 0.11.0
- **Error:** `FlashInfer kernels unavailable for ModelOptNvFp4FusedMoE on current platform`
- NVFP4 requires vLLM 0.12.0+
### 2. `vllm/vllm-openai:nightly-aarch64` (vLLM 0.11.2.dev575)
- With `VLLM_USE_FLASHINFER_MOE_FP4=1`
- **Error:** `ptxas fatal: Value 'sm_121a' is not defined for option 'gpu-name'`
- Triton's bundled ptxas 12.8 doesn't support GB10
### 3. `vllm/vllm-openai:v0.12.0-aarch64` (vLLM 0.12.0)
- Fixed ptxas with symlink: `ln -sf /usr/local/cuda/bin/ptxas /usr/local/lib/python3.12/dist-packages/triton/backends/nvidia/bin/ptxas`
- Triton compilation passes ✅
- **Error:** `RuntimeError: [FP4 gemm Runner] Failed to run cutlass FP4 gemm on sm120. Error: Error Internal`
### 4. Tried both parallelism modes:
- `--tensor-parallel-size 2` → same CUTLASS error
- `--pipeline-parallel-size 2` → same CUTLASS error
### 5. `--enforce-eager` flag
- Not fully tested yet
## Environment Details
| Component | Version |
|-----------|---------|
| Host Driver | 580.95.05 |
| Host CUDA | 13.0 |
| Container CUDA | 12.9 |
| Container ptxas | 12.9.86 (supports sm_121a ✅) |
| Triton bundled ptxas | 12.8 (NO sm_121a ❌) |
| PyTorch | 2.9.0+cu129 |
## The Blocking Error
vLLM correctly loads weights (41/41 shards), then during profile_run:
```
INFO [flashinfer_utils.py:289] Flashinfer TRTLLM MOE backend is only supported on SM100 and later, using CUTLASS backend instead
INFO [modelopt.py:1142] Using FlashInfer CUTLASS kernels for ModelOptNvFp4FusedMoE.
...
RuntimeError: [FP4 gemm Runner] Failed to run cutlass FP4 gemm on sm120. Error: Error Internal
```
FlashInfer detects GB10 is not SM100 (B200), falls back to CUTLASS - but CUTLASS FP4 also fails.
## Key Question
**Are CUTLASS FP4 GEMM kernels compiled for GB10 (sm_121a)?**
Is there:
1. A vLLM build with CUTLASS kernels for sm_121?
2. A way to force Marlin FP4 fallback on GB10?
3. Recommended Docker image for DGX Spark + NVFP4?
I see NVFP4 models tested on:
- B200 (sm_100) ✅
- H100/A100 with Marlin FP4 fallback ✅
But GB10 is **sm_121** (Blackwell desktop/workstation variant). The error says `sm120` which seems wrong - GB10 should be sm_121a.
## References
- [GLM-4.6-NVFP4](https://huggingface.co/lukealonso/GLM-4.6-NVFP4)
- [Firworks/GLM-4.5-Air-nvfp4](https://huggingface.co/Firworks/GLM-4.5-Air-nvfp4)
Thanks!
|
https://github.com/vllm-project/vllm/issues/30163
|
open
|
[
"usage"
] | 2025-12-06T00:24:52Z
| 2025-12-07T16:22:40Z
| 2
|
letsrock85
|
huggingface/accelerate
| 3,876
|
Why TP can't be used with pure DP?
|
As per [this](https://github.com/huggingface/accelerate/blob/b9ca0de682f25f15357a3f9f1a4d94374a1d451d/src/accelerate/parallelism_config.py#L332), we cannot use TP along with pure DP (or DDP); we also need to shard the model across further nodes by specifying dp_shard_size. Why does this limitation exist? Is it just a software limitation?
Please share any documentation, code references, and justification for this.
What should be done in order to do TP+DP?
|
https://github.com/huggingface/accelerate/issues/3876
|
open
|
[] | 2025-12-05T16:11:22Z
| 2025-12-26T10:07:09Z
| 3
|
quic-meetkuma
|
huggingface/lerobot
| 2,589
|
Clarification on XVLA folding checkpoint
|
Hi Lerobot team, great work on the XVLA release!
I have tried finetuning on my custom dataset and have a few clarifications:
1. Is the [lerobot/xvla-folding](https://huggingface.co/lerobot/xvla-folding) checkpoint finetuned on [lerobot/xvla-soft-fold](https://huggingface.co/datasets/lerobot/xvla-soft-fold)?
- I am asking this because the `info.json` files don't match (e.g. the dataset image keys are `observation.images.cam_high`, whereas the checkpoint image keys are `observation.images.image`)
- The `observation.state` shapes also do not match
2. How do we finetune from a checkpoint, given that the checkpoint expects different observation key names and a different `state` shape? Should we write a custom preprocessor to remap the keys, or is there an arg to use?
Thanks!
|
https://github.com/huggingface/lerobot/issues/2589
|
open
|
[
"question",
"policies"
] | 2025-12-05T11:42:46Z
| 2025-12-22T08:43:05Z
| null |
brycegoh
|
pytorch/pytorch
| 169,663
|
[CI] Inductor dashboards failing due to unused --quant arg
|
### 🐛 Describe the bug
The offending code was added in https://github.com/pytorch/pytorch/pull/123419 which results in a failure as no quant type is provided.
https://github.com/pytorch/pytorch/blob/a097e166db7077f1e8da94757ccd91a6a521550e/.ci/pytorch/test.sh#L767
This is causing unnecessary headaches when debugging inductor-dashboard runs. @huydhn is it possible for us to either provide a valid quant type or remove this?
Example failure: https://ossci-raw-job-status.s3.amazonaws.com/log/56856027123 for AMD and similar on https://ossci-raw-job-status.s3.amazonaws.com/log/56764730017 for NV.
```
2025-12-02T02:00:17.0782532Z + python benchmarks/dynamo/timm_models.py --performance --cold-start-latency --inference --quant --backend inductor --device cuda --total-partitions 7 --partition-id 2 --output /var/lib/jenkins/pytorch/test/test-reports/inductor_cudagraphs_low_precision_timm_models_quant_inference_rocm_performance.csv
2025-12-02T02:00:19.6203822Z [--channels-last]
2025-12-02T02:00:19.6204176Z [--batch-size BATCH_SIZE]
2025-12-02T02:00:19.6204547Z [--iterations ITERATIONS]
2025-12-02T02:00:19.6204936Z [--batch-size-file BATCH_SIZE_FILE]
2025-12-02T02:00:19.6205320Z [--cosine]
2025-12-02T02:00:19.6205614Z [--freezing]
2025-12-02T02:00:19.6205953Z [--inductor-config INDUCTOR_CONFIG]
2025-12-02T02:00:19.6206333Z [--ci]
2025-12-02T02:00:19.6206618Z [--dashboard]
2025-12-02T02:00:19.6206949Z [--skip-fp64-check]
2025-12-02T02:00:19.6207276Z [--fast]
2025-12-02T02:00:19.6207565Z [--only ONLY]
2025-12-02T02:00:19.6207883Z [--multiprocess]
2025-12-02T02:00:19.6208197Z [--ddp]
2025-12-02T02:00:19.6208490Z [--fsdp]
2025-12-02T02:00:19.6208833Z [--optimize-ddp-mode OPTIMIZE_DDP_MODE]
2025-12-02T02:00:19.6209530Z [--distributed-master-port DISTRIBUTED_MASTER_PORT]
2025-12-02T02:00:19.6209993Z [--dynamic-shapes]
2025-12-02T02:00:19.6210361Z [--propagate-real-tensors]
2025-12-02T02:00:19.6210749Z [--dynamic-batch-only]
2025-12-02T02:00:19.6211114Z [--specialize-int]
2025-12-02T02:00:19.6211449Z [--use-eval-mode]
2025-12-02T02:00:19.6211801Z [--skip-accuracy-check]
2025-12-02T02:00:19.6212189Z [--generate-aot-autograd-stats]
2025-12-02T02:00:19.6212593Z [--inductor-settings]
2025-12-02T02:00:19.6213079Z [--suppress-errors]
2025-12-02T02:00:19.6213417Z [--output OUTPUT]
2025-12-02T02:00:19.6213816Z [--output-directory OUTPUT_DIRECTORY]
2025-12-02T02:00:19.6214221Z [--disable-output]
2025-12-02T02:00:19.6214560Z [--baseline BASELINE]
2025-12-02T02:00:19.6214912Z [--part PART]
2025-12-02T02:00:19.6215259Z [--export-profiler-trace]
2025-12-02T02:00:19.6215725Z [--profiler-trace-name PROFILER_TRACE_NAME]
2025-12-02T02:00:19.6216164Z [--profile-details]
2025-12-02T02:00:19.6216514Z [--export-perfdoctor]
2025-12-02T02:00:19.6216885Z [--diff-branch DIFF_BRANCH]
2025-12-02T02:00:19.6217240Z [--tag TAG]
2025-12-02T02:00:19.6217536Z [--explain]
2025-12-02T02:00:19.6217826Z [--stats]
2025-12-02T02:00:19.6218144Z [--use-warm-peak-memory]
2025-12-02T02:00:19.6218510Z [--print-memory]
2025-12-02T02:00:19.6218865Z [--print-compilation-time]
2025-12-02T02:00:19.6219263Z [--print-dataframe-summary]
2025-12-02T02:00:19.6219651Z [--disable-cudagraphs]
2025-12-02T02:00:19.6220033Z [--disable-split-reductions]
2025-12-02T02:00:19.6220450Z [--disable-persistent-reductions]
2025-12-02T02:00:19.6220874Z [--disable-divisible-by-16]
2025-12-02T02:00:19.6221324Z [--inductor-compile-mode INDUCTOR_COMPILE_MODE]
2025-12-02T02:00:19.6221782Z [--print-graph-breaks]
2025-12-02T02:00:19.6222146Z [--log-graph-breaks]
2025-12-02T02:00:19.6222495Z [--trace-on-xla]
2025-12-02T02:00:19.6222842Z [--xla-tolerance XLA_TOLERANCE]
2025-12-02T02:00:19.6223230Z [--collect-outputs]
2025-12-02T02:00:19.6223614Z [--enable-activation-checkpointing]
2025-12-02T02:00:19.6224005Z [--timing]
2025-12-02T02:00:19.6224298Z [--progress]
2025-12-02T02:00:19.6224607Z [--timeout TIMEOUT]
2025-12-02T02:00:19.6225046Z [--per_process_memory_fraction PER_PROCESS_MEMORY_FRACTION]
2025-12-02T02:00:19.6225545Z [--no-translation-validation]
2025-12-02T02:00:19.6225913Z [--minify]
2025-12-02T02:00:19.6226225Z [--compiled-autograd]
2025-12-02T02:00:19.6226595Z [--profile_dynamo_cache_lookup]
2025-12-02T02:00:19.6226979Z [--snapshot-memory]
2025-12-02T02:00:19.6227313Z [--retain-output]
2025-12-02T02:00:19.6227656Z [--caching-precompile]
2025-12-02T02:00:19.6228230Z [--save-model-outputs-to SAVE_MODEL_OUTPUTS_TO]
2025-12-02T02:00:19.6228782Z [--compare-model-outputs-with COMPARE_MODEL_OUTPUTS_WITH]
2025-12-02T02:00:19.6229340Z
|
https://github.com/pytorch/pytorch/issues/169663
|
closed
|
[
"oncall: pt2",
"module: inductor"
] | 2025-12-05T11:10:10Z
| 2025-12-08T01:34:22Z
| 0
|
jataylo
|
vllm-project/vllm
| 30,129
|
[Feature]: About video input for qwen3vl
|
### 🚀 The feature, motivation and pitch
I tried using base64 encoding to provide video input for vllm inference, but it seems this input method is not yet supported by Qwen3VL (I've seen similar issues reported elsewhere). Currently, I can only specify parameters like fps/maximum frames and then pass the local path or URL of the video.
However, in my scenario, my videos are not uniformly sampled; I need to manually sample them first and then input multiple frames. Is there a way to achieve this input method now?
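As a possible interim workaround (a sketch only; whether multi-image input approximates your video use case for Qwen3-VL is an assumption), pre-sampled frames can be sent as individual base64 images through the OpenAI-compatible chat API:
```python
import base64
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

def to_data_url(path):
    with open(path, "rb") as f:
        return "data:image/jpeg;base64," + base64.b64encode(f.read()).decode()

# Frame paths come from your own non-uniform sampling step.
frames = [to_data_url(p) for p in ["frame_0.jpg", "frame_7.jpg", "frame_31.jpg"]]
content = [{"type": "image_url", "image_url": {"url": u}} for u in frames]
content.append({"type": "text", "text": "Describe what happens across these frames."})

resp = client.chat.completions.create(
    model="Qwen/Qwen3-VL-8B-Instruct",
    messages=[{"role": "user", "content": content}],
)
print(resp.choices[0].message.content)
```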
### Alternatives
_No response_
### Additional context
_No response_
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/30129
|
open
|
[
"feature request"
] | 2025-12-05T10:32:06Z
| 2025-12-19T03:32:30Z
| 4
|
lingcco
|
huggingface/sentence-transformers
| 3,585
|
How to choose negative instance when using MultipleNegativesRankingLoss train embedding model?
|
Firstly, I am still confused about how the negative instances are chosen when using MultipleNegativesRankingLoss, in https://github.com/huggingface/sentence-transformers/blob/main/sentence_transformers/losses/MultipleNegativesRankingLoss.py#L113:
`embeddings = [self.model(sentence_feature)["sentence_embedding"] for sentence_feature in sentence_features]`
I guess `embeddings` should include three parts (anchor, positive, and negative from the in-batch data); however, no matter how I change `batch_size`, I still find `len(embeddings) == 2`. Does this mean that `embeddings` only includes two parts?
Here is my simple training script; I didn't add a negative part to the dataset:
```
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0,1,2,3"
import json
import torch
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
    InputExample,
)
from sentence_transformers.losses import MultipleNegativesRankingLoss
from sentence_transformers.training_args import BatchSamplers
from datasets import load_dataset, Dataset


def train_embedding_model():
    train_epo = 3
    save_path = f"/app/raw_model/tmp"
    data_path = "/app/emb_train_1205.json"
    model = SentenceTransformer(
        "/app/download_models/Qwen3-Embedding-0.6B",
        model_kwargs={
            "attn_implementation": "flash_attention_2",
            "torch_dtype": "auto"
        }
    )
    model.tokenizer.padding_side = "left"
    model.tokenizer.pad_token = model.tokenizer.eos_token
    model.tokenizer.model_max_length = 2048
    dataset = load_dataset("json", data_files=data_path)
    '''
    DatasetDict({
        train: Dataset({
            features: ['question', 'positive'],
            num_rows: 4000
        })
    })
    '''
    loss = MultipleNegativesRankingLoss(model)
    args = SentenceTransformerTrainingArguments(
        output_dir=save_path,
        num_train_epochs=train_epo,
        per_device_train_batch_size=8,
        per_device_eval_batch_size=1,
        learning_rate=5e-5,
        warmup_ratio=0.1,
        fp16=True,  # Set to False if you get an error that your GPU can't run on FP16
        bf16=False,  # Set to True if you have a GPU that supports BF16
        batch_sampler=BatchSamplers.NO_DUPLICATES,  # MultipleNegativesRankingLoss benefits from no duplicate samples in a batch
        optim='adamw_torch_fused',
        logging_steps=5,
    )
    trainer = SentenceTransformerTrainer(
        model=model,
        args=args,
        train_dataset=dataset['train'],  # dataset['train'], train_dataset
        eval_dataset=dataset['train'],  # dataset['train'], train_dataset
        loss=loss,
    )
    trainer.train()
    model.save_pretrained(save_path)
```
Besides, can I manually add a list of negatives directly into the dataset while still using the MultipleNegativesRankingLoss?
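On the last question, a hedged sketch (column names are illustrative; as I understand it, the loss treats extra dataset columns as explicit negatives in addition to in-batch negatives, which is also why `len(embeddings)` equals the number of sentence columns):
```python
from datasets import Dataset

# With only (question, positive) columns there are two sentence_features,
# hence len(embeddings) == 2; negatives then come purely from the batch.
# Adding a third column provides explicit hard negatives on top of that.
train_dataset = Dataset.from_dict({
    "question": ["what is the capital of france?"],
    "positive": ["Paris is the capital of France."],
    "negative": ["Berlin is the capital of Germany."],  # optional hard negative
})
```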
|
https://github.com/huggingface/sentence-transformers/issues/3585
|
open
|
[] | 2025-12-05T09:50:26Z
| 2025-12-09T11:49:26Z
| null |
4daJKong
|
vllm-project/vllm
| 30,124
|
[Bug]: How to run DeepSeek-V3.2 on 2 H100 nodes?
|
### 🐛 Describe the bug
How to run DeepSeek-V3.2 on 2 H100 nodes?
I only found the cmd for H200/B200:
vllm serve deepseek-ai/DeepSeek-V3.2 -tp 8
but it does not work in multi-node scenarios (e.g., 2 H100 nodes).
So what should the cmd be for two H100 nodes?
How should the params --tp/--dp/--pp be configured?
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/30124
|
open
|
[
"bug"
] | 2025-12-05T09:40:45Z
| 2025-12-14T08:57:52Z
| 2
|
XQZ1120
|
pytorch/pytorch
| 169,659
|
[Export] Inconsistent input validation when re-importing a .pt2 model on Linux vs. Windows
|
### 🐛 Describe the bug
## Summary:
Importing the same .pt2 model on Windows and Linux yields a GraphModule() instance containing a guard function for input validation on Windows and a GraphModule _without_ that guard function on Linux (same device, Ubuntu running in WSL2).
**Why is this an issue?**
When trying to pass each model through `prepare_pt2e` for quantization, the one containing the guard function on Windows fails with:
```
[ ... stack trace ommitted ... ]
executorch.exir.pass_base.ExportPassBaseError: call_module is not supported.
While executing %_guards_fn : [num_users=0] = call_module[target=_guards_fn](args = (%x,), kwargs = {})
```
while the same model can be quantized with no issues on Linux.
Ultimately what I'm looking for is being able to consistently import .pt2 files and lower them to ExecuTorch with quantization, both on Windows and Linux hosts.
## Steps to reproduce
### 1. Create Model
I am creating and exporting a minimal `torch.nn.Module` instance like this:
```python
import torch

class DoubleModel(torch.nn.Module):
    def __init__(self) -> None:
        super().__init__()

    def forward(self, x):
        return x * 2

example_input = torch.tensor([1.0, 2.0, 3.0, 4.0])
exported_model = torch.export.export(DoubleModel(), (example_input,))
torch.export.save(exported_model, "double.pt2")
```
### 2. Re-Import .pt2 file
when I re-import the model on **Windows**, I get this result:
```python
import torch
model = torch.export.load('double.pt2').module()
model.print_readable()
```
```
class GraphModule(torch.nn.Module):
    def forward(self, x):
        x: "f32[4]";
        x, = fx_pytree.tree_flatten_spec(([x], {}), self._in_spec)
        # No stacktrace found for following nodes
        _guards_fn = self._guards_fn(x); _guards_fn = None
        # File: /tmp/ipykernel_257892/1111071196.py:8 in forward, code: return x * 2
        mul: "f32[4]" = torch.ops.aten.mul.Tensor(x, 2); x = None
        return pytree.tree_unflatten((mul,), self._out_spec)
```
notice the `_guards_fn` member.
When I run the same code on **Linux**, I get:
```
class GraphModule(torch.nn.Module):
    def forward(self, x):
        x: "f32[4]";
        x, = fx_pytree.tree_flatten_spec(([x], {}), self._in_spec)
        # File: /tmp/ipykernel_257892/1111071196.py:8 in forward, code: return x * 2
        mul: "f32[4]" = torch.ops.aten.mul.Tensor(x, 2); x = None
        return pytree.tree_unflatten((mul,), self._out_spec)
```
### 3. Check input validation implementation
When I try a forward pass with an invalid input, the input validation also fails in different ways:
```python
model(torch.ones(3))
```
on **Windows**:
```
[ ... ]
AssertionError: Guard failed: x.size()[0] == 4
```
on **Linux**:
```
[ ... ]
RuntimeError: Expected input at *args[0].shape[0] to be equal to 4, but got 3. If you meant for this dimension to be dynamic, please re-export and specify dynamic_shapes (e.g. with Dim.DYNAMIC)
```
### Versions
# Windows Environment
```
Collecting environment information...
PyTorch version: 2.9.1+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 11 Enterprise (10.0.22631 64-bit)
GCC version: (MinGW-W64 x86_64-ucrt-posix-seh, built by Brecht Sanders, r8) 13.2.0
Clang version: Could not collect
CMake version: version 3.29.2
Libc version: N/A
Python version: 3.12.10 (tags/v3.12.10:0cc8128, Apr 8 2025, 12:21:36) [MSC v.1943 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-11-10.0.22631-SP0
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
Is XPU available: False
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Name: 13th Gen Intel(R) Core(TM) i7-1365U
Manufacturer: GenuineIntel
Family: 198
Architecture: 9
ProcessorType: 3
DeviceID: CPU0
CurrentClockSpeed: 1800
MaxClockSpeed: 1800
L2CacheSize: 6656
L2CacheSpeed: None
Revision: None
Versions of relevant libraries:
[pip3] executorch==1.0.1
[pip3] numpy==2.3.5
[pip3] pytorch_tokenizers==1.0.1
[pip3] torch==2.9.1
[pip3] torchao==0.14.0
[pip3] torchvision==0.24.1
[conda] Could not collect
```
# Linux Environment
```
Collecting environment information...
PyTorch version: 2.9.1+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.3 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: 18.1.8 (++20240731025043+3b5b5c1ec4a3-1~exp1~20240731145144.92)
CMake version: version 4.1.2
Libc version: glibc-2.39
Python version: 3.12.3 (main, Nov 6 2025, 13:44:16) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-6.6.87.2-microsoft-standard-WSL2-x86_64-with-glibc2.39
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
Is XPU avai
|
https://github.com/pytorch/pytorch/issues/169659
|
closed
|
[
"oncall: pt2",
"oncall: export"
] | 2025-12-05T08:50:31Z
| 2025-12-09T09:22:45Z
| 3
|
etrommer
|
vllm-project/vllm
| 30,121
|
[Feature]: Could you please provide Chinese documentation for vLLM? 😊
|
### 🚀 The feature, motivation and pitch
Could you please provide Chinese documentation for vLLM? 😊
### Alternatives
Could you please provide Chinese documentation for vLLM? 😊
### Additional context
Could you please provide Chinese documentation for vLLM? 😊
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/30121
|
open
|
[
"feature request"
] | 2025-12-05T08:13:46Z
| 2025-12-08T04:31:05Z
| 4
|
moshilangzi
|
huggingface/transformers
| 42,641
|
Cannot run inference with llava-next on transformers==4.57.1 with dtype="auto" (bug)
|
### System Info
```
- `transformers` version: 4.57.1
- Platform: Linux-5.15.0-161-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.35.3
- Safetensors version: 0.6.2
- Accelerate version: 1.10.1
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (accelerator?): 2.8.0+cpu (NA)
- Tensorflow version (GPU?): 2.18.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
```
### Who can help?
@zucchini-nlp
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
from transformers import LlavaNextProcessor, LlavaNextForConditionalGeneration
import torch
from PIL import Image
import requests
processor = LlavaNextProcessor.from_pretrained("llava-hf/llava-v1.6-mistral-7b-hf")
model = LlavaNextForConditionalGeneration.from_pretrained("llava-hf/llava-v1.6-mistral-7b-hf", dtype="auto", low_cpu_mem_usage=True)
# prepare image and text prompt, using the appropriate prompt template
url = "https://github.com/haotian-liu/LLaVA/blob/1a91fc274d7c35a9b50b3cb29c4247ae5837ce39/images/llava_v1_5_radar.jpg?raw=true"
image = Image.open(requests.get(url, stream=True).raw)
# Define a chat history and use `apply_chat_template` to get correctly formatted prompt
# Each value in "content" has to be a list of dicts with types ("text", "image")
conversation = [
    {
        "role": "user",
        "content": [
            {"type": "text", "text": "What is shown in this image?"},
            {"type": "image"},
        ],
    },
]
prompt = processor.apply_chat_template(conversation, add_generation_prompt=True)
inputs = processor(images=image, text=prompt, return_tensors="pt")
# autoregressively complete prompt
output = model.generate(**inputs, max_new_tokens=100)
print(processor.decode(output[0], skip_special_tokens=True))
```
### Expected behavior
I am encountering an issue when attempting to run inference on LLaVA-Next models (e.g., `llava-hf/llava-v1.6-mistral-7b-hf`) using `transformers==4.57.1 ` and setting `dtype="auto"` when loading the model.
The issue stems from the model's `config.json` having different `torch_dtype` values for the overall model and the text configuration:
```
"text_config": {
"_name_or_path": "mistralai/Mistral-7B-Instruct-v0.2",
// ... other config values
"torch_dtype": "bfloat16",
"vocab_size": 32064
},
"torch_dtype": "float16",
```
When the model is loaded with `dtype="auto"`, each submodule (the visual model and the text model) seems to load with its respective `torch_dtype` (`"float16"` and `"bfloat16"`).
This difference in data types then causes an error during inference, specifically within the `forward` pass of the `LlavaNextForConditionalGeneration` model:
```
File "MY_ENV/.venv/lib/python3.10/site-packages/transformers/models/llava_next/modeling_llava_next.py", line 687, in forward
logits = self.lm_head(hidden_states[:, slice_indices, :])
File "MY_ENV/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1773, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "MY_ENV/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1784, in _call_impl
return forward_call(*args, **kwargs)
File "MY_ENV/.venv/lib/python3.10/site-packages/torch/nn/modules/linear.py", line 125, in forward
return F.linear(input, self.weight, self.bias)
RuntimeError: expected m1 and m2 to have the same dtype, but got: c10::BFloat16 != c10::Half
```
This `RuntimeError` indicates a dtype mismatch, likely between the linear layer's weight (from `self.lm_head`) and the input tensor (`hidden_states`), which results from the different dtypes loaded by `dtype="auto"` for `self.lm_head` and `self.model`.
Is there a plan to support loading LLaVA-Next models with `dtype="auto"` given their current configuration structure?
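In the meantime, a possible interim workaround is sketched below. This is only a sketch, under the assumption that forcing one explicit dtype for all submodules avoids the float16/bfloat16 mismatch described above; it is not an official recommendation.
```python
# Hedged workaround sketch: load with a single explicit dtype instead of "auto",
# so the vision tower, language model, and lm_head all share the same dtype.
import torch
from transformers import LlavaNextForConditionalGeneration

model = LlavaNextForConditionalGeneration.from_pretrained(
    "llava-hf/llava-v1.6-mistral-7b-hf",
    dtype=torch.float16,
    low_cpu_mem_usage=True,
)
```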
|
https://github.com/huggingface/transformers/issues/42641
|
open
|
[
"bug"
] | 2025-12-05T04:39:35Z
| 2025-12-23T11:08:56Z
| 5
|
rebel-seinpark
|
vllm-project/vllm
| 30,098
|
[Doc]: Misleading Logic & Docstring in `block_quant_to_tensor_quant` (Block FP8)
|
### 📚 The doc issue
The docstring and implementation of the `block_quant_to_tensor_quant` function have a critical mismatch regarding the dequantization process, leading to numerical errors when used outside of specific fused kernel backends.
### Problematic Function
The function is currently implemented as:
```python
def block_quant_to_tensor_quant(
x_q_block: torch.Tensor,
x_s: torch.Tensor,
) -> tuple[torch.Tensor, torch.Tensor]:
"""This function converts block-wise quantization to tensor-wise
quantization. The inputs are block-wise quantization tensor `x_q_block`,
block-wise quantization scale and the block size.
The outputs are tensor-wise quantization tensor and tensor-wise
quantization scale. Note only float8 is supported for now.
"""
x_dq_block = group_broadcast(x_q_block, x_s)
x_q_tensor, scale = input_to_float8(x_dq_block, dtype=x_q_block.dtype)
return x_q_tensor, scale
```
### Observation and Impact
- vLLM migrated the actual 'block quant to tensor quant' operation into the kernel but kept this method. The docstring is misleading because the scale is never actually applied inside this method.
- Misleading Docstring: The docstring claims the function performs "conversion" and takes the "scale," implying a complete process. However, the output `x_dq_block` is an un-dequantized value with a broadcasted shape.
### Suggest a potential alternative/fix
The function should be either documented clearly as a kernel preparation helper OR refactored to ensure numerical correctness when used as a conversion API.
**1. Fix Documentation/Name (If intent is kernel prep):**
* Rename the function to something like `_prepare_block_quant_for_fused_kernel`.
* Add a warning that this function does not perform dequantization.
**2. Implement Safe Logic Dispatch (If intent is a robust conversion API):**
The function should dynamically dispatch to the known-good, safe path if the specific fused kernel (that handles the $X_q \times X_s$ multiplication) is not guaranteed to be active.
The safe logic, as it existed in v0.9.2, is:
```python
# Safe path required for correctness on general backends
x_dq_block = scaled_dequantize(x_q_block, x_s)
x_q_tensor, scale = input_to_float8(x_dq_block, dtype=x_q_block.dtype)
```
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/30098
|
closed
|
[
"documentation"
] | 2025-12-05T02:12:07Z
| 2025-12-24T17:22:50Z
| 0
|
xqoasis
|
huggingface/transformers
| 42,638
|
Routing Replay for MoEs
|
### Feature request
Recent RL approaches for training MoE models increasingly rely on **Routing Replay**, as described in the following papers:
- https://huggingface.co/papers/2507.18071
- https://huggingface.co/papers/2510.11370
- https://huggingface.co/papers/2512.01374
Without going into the training details, Routing Replay requires the ability to override the router during the forward pass, that is, to force the model to use a predefined set of router logits rather than computing new ones. This enables deterministic reproduction of expert selection.
AFAICT, Transformers currently does not expose a way to override router logits or manually control expert selection at inference/training time.
I imagine something along the following lines (minimal example):
```python
from transformers import AutoModelForCausalLM
import torch
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-30B-A3B-Instruct-2507", device_map="auto", dtype="auto")
input_ids = torch.tensor([[1, 2, 3, 4]], device="cuda")
# Standard forward pass, retrieving router logits
outputs = model(input_ids, output_router_logits=True)
# Forward pass with router logits injected (enabling Routing Replay)
model(input_ids, router_logits=outputs.router_logits)
```
## Alternative
If we decide not to implement this feature, it would be nice to provide an example showing how to _patch_ a MoE to enable this.
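As a reference point for such a patch, here is a minimal standalone sketch using a forward hook, assuming the router is an `nn.Linear` gate whose output is the router logits (as in Qwen-style MoE blocks). None of this is an existing transformers API.
```python
# Minimal sketch of routing replay via a forward hook on a stand-in router.
import torch
import torch.nn as nn

gate = nn.Linear(16, 4)              # stand-in for a MoE router ("gate")
x = torch.randn(2, 16)

saved_logits = gate(x).detach()      # router logits recorded on the first pass

def replay_hook(module, inputs, output):
    # Force the router to reuse the recorded logits instead of recomputing them.
    return saved_logits

handle = gate.register_forward_hook(replay_hook)
replayed = gate(x)                   # now returns `saved_logits`
handle.remove()

assert torch.equal(replayed, saved_logits)
```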
### Motivation
See above.
### Your contribution
I think I can do it.
|
https://github.com/huggingface/transformers/issues/42638
|
open
|
[
"Feature request"
] | 2025-12-04T23:58:14Z
| 2025-12-05T16:29:05Z
| 2
|
qgallouedec
|
vllm-project/vllm
| 30,084
|
[Performance]: Should I expect linear scaling with pure DP?
|
### Proposal to improve performance
_No response_
### Report of performance regression
_No response_
### Misc discussion on performance
I decided to benchmark vLLM 0.11.2 with a pure-DP deployment of Qwen/Qwen2.5-32B-Instruct (before benchmarking DP+EP with Qwen/Qwen3-30B-A3B-Instruct-2507), comparing DP1 vs DP8 (H200):
DP1 deployment:
```
vllm serve ${MODEL_NAME} \
--port 8000 \
--trust-remote-code
```
DP8 deployment:
```
vllm serve ${MODEL_NAME} \
--port 8000 \
--trust-remote-code \
--data-parallel-size 8 \
--data-parallel-size-local 8
```
My benchmark roughly looks like this:
```
for rate in [10, 20, ... 100, 200, ... 1000, 2000, ... 100000]:
vllm bench serve \
--host "$HOST" \
--model Qwen/Qwen2.5-32B-Instruct \
--dataset-name random \
--random-input-len 128 \
--random-output-len 128 \
--num-prompts 10000 \
--request-rate "$rate" \
--ignore-eos
```
Should I expect ~8x scaling? Results show only ~4x (duration, request throughput, token throughput, etc...)
<img width="1789" height="3490" alt="Image" src="https://github.com/user-attachments/assets/81feb936-73d6-49c3-949e-dfbd6d7ba7d7" />
cc @KeitaW @amanshanbhag
### Your current environment (if you think it is necessary)
```text
The output of `python collect_env.py`
```
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/30084
|
open
|
[
"performance"
] | 2025-12-04T19:52:45Z
| 2025-12-16T04:09:24Z
| 7
|
pbelevich
|
vllm-project/vllm
| 30,082
|
[Usage]: Turn off reasoning for Kimi-K2-Thinking?
|
### Your current environment
```text
Output of collect_env.py-
Collecting environment information...
==============================
System Info
==============================
OS : Ubuntu 22.04.5 LTS (x86_64)
GCC version : (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version : Could not collect
CMake version : version 4.1.3
Libc version : glibc-2.35
==============================
PyTorch Info
==============================
PyTorch version : 2.9.0+cu129
Is debug build : False
CUDA used to build PyTorch : 12.9
ROCM used to build PyTorch : N/A
==============================
Python Environment
==============================
Python version : 3.12.12 (main, Oct 10 2025, 08:52:57) [GCC 11.4.0] (64-bit runtime)
Python platform : Linux-4.18.0-553.56.1.el8_10.x86_64-x86_64-with-glibc2.35
==============================
CUDA / GPU Info
==============================
Is CUDA available : True
CUDA runtime version : 12.9.86
CUDA_MODULE_LOADING set to :
GPU models and configuration :
GPU 0: NVIDIA H200
GPU 1: NVIDIA H200
GPU 2: NVIDIA H200
GPU 3: NVIDIA H200
GPU 4: NVIDIA H200
GPU 5: NVIDIA H200
GPU 6: NVIDIA H200
GPU 7: NVIDIA H200
Nvidia driver version : 550.163.01
cuDNN version : Could not collect
HIP runtime version : N/A
MIOpen runtime version : N/A
Is XNNPACK available : True
==============================
CPU Info
==============================
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 192
On-line CPU(s) list: 0-191
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8468
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 48
Socket(s): 2
Stepping: 8
CPU max MHz: 3800.0000
CPU min MHz: 800.0000
BogoMIPS: 4200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
L1d cache: 4.5 MiB (96 instances)
L1i cache: 3 MiB (96 instances)
L2 cache: 192 MiB (96 instances)
L3 cache: 210 MiB (2 instances)
NUMA node(s): 8
NUMA node0 CPU(s): 0-11,96-107
NUMA node1 CPU(s): 12-23,108-119
NUMA node2 CPU(s): 24-35,120-131
NUMA node3 CPU(s): 36-47,132-143
NUMA node4 CPU(s): 48-59,144-155
NUMA node5 CPU(s): 60-71,156-167
NUMA node6 CPU(s): 72-83,168-179
NUMA node7 CPU(s): 84-95,180-191
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spect
|
https://github.com/vllm-project/vllm/issues/30082
|
open
|
[
"usage"
] | 2025-12-04T19:32:13Z
| 2025-12-08T23:02:58Z
| 2
|
vikrantdeshpande09876
|
pytorch/pytorch
| 169,597
|
Standardize Testing in OpenReg
|
### 🚀 The feature, motivation and pitch
Described in the following [issue](https://github.com/pytorch/pytorch/issues/158917):
OpenReg aims to:
- Track the evolution of community features and provide up-to-date standardized integration implementations, serving as the official reference and code example for integration documentation.
- The goal is to cover all functional points of new device integration into PyTorch, ensuring that the integration mechanisms themselves are robust and complete.
As such, this requires a standardized set of tests that follow the same process as current in-tree devices in Pytorch.
The following are proposed additions to OpenReg to improve testing as well as documentation for future PrivateUse1 users:
- Working example of [DeviceTypeTestBase](https://github.com/pytorch/pytorch/blob/31987d0eda56179bfbed565b8cbb937844cd300c/torch/testing/_internal/common_device_type.py#L317) included in OpenReg
- Working example of an `OpInfo`-based test in OpenReg (see the sketch below)
- Documentation alongside standard tests
Including this in OpenReg provides the following benefits:
- A clear documented reference on how to emulate this for new backends
- Ensures the stability of these APIs
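To make the `OpInfo` item above concrete, here is a rough sketch of what such a test could look like. It assumes OpenReg registers a device-type test base for an "openreg" device; the device name and that registration are assumptions, not something that exists today.
```python
# Sketch only: assumes an "openreg" device-type test base has been registered.
import torch
from torch.testing._internal.common_device_type import instantiate_device_type_tests, ops
from torch.testing._internal.common_methods_invocations import op_db
from torch.testing._internal.common_utils import TestCase, run_tests

class TestOpenRegOps(TestCase):
    @ops(op_db, allowed_dtypes=(torch.float32,))
    def test_op_runs(self, device, dtype, op):
        # Smoke-test each sample input of the operator on the openreg device.
        for sample in op.sample_inputs(device, dtype):
            op.op(sample.input, *sample.args, **sample.kwargs)

instantiate_device_type_tests(TestOpenRegOps, globals(), only_for="openreg")

if __name__ == "__main__":
    run_tests()
```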
### Alternatives
Given that OpenReg strives to maintain a standard for PyTorch device backends, I believe keeping to the standard that in-tree devices follow requires the above solution. Feel free to comment if an alternative is preferred.
### Additional context
cc: @fffrog @albanD @zeshengzong
cc @NmomoN @mengpenghui @fwenguang @cdzhan @1274085042 @PHLens @albanD
|
https://github.com/pytorch/pytorch/issues/169597
|
open
|
[
"triaged",
"module: PrivateUse1",
"module: openreg"
] | 2025-12-04T19:22:19Z
| 2025-12-04T19:34:26Z
| 0
|
JRosenkranz
|
pytorch/ao
| 3,436
|
Int4WeightOnly torch.bmm semantics
|
Currently, int4 weight only quantization does not work out of the box for llama4 scout.
```python
from transformers import AutoModelForCausalLM, TorchAoConfig
from torchao.quantization import FqnToConfig, Int4WeightOnlyConfig

fqn_to_config = FqnToConfig(
    {
        r"re:.*\.feed_forward\.experts\.gate_up_proj": Int4WeightOnlyConfig(),
        r"re:.*\.feed_forward\.experts\.down_proj": Int4WeightOnlyConfig(),
    }
)
# Wrap the torchao config so it can be passed to from_pretrained
# (model_name / device_map defined elsewhere in the script).
quantization_config = TorchAoConfig(quant_type=fqn_to_config)
quantized_model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="bfloat16",
    device_map=device_map,
    quantization_config=quantization_config,
)
```
This fails because the int4 torch.bmm implementation expects the weights to be transposed already, while the dense version does not.
The dense torch.bmm(inputs, weights) expects an input of shape (B, M, K) and weights of shape (B, K, N), but the quantized version expects the weights to be of shape (B, N, K).
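For reference, a tiny check of the dense shape contract described above (purely illustrative, not torchao code):
```python
# Dense torch.bmm expects (B, M, K) x (B, K, N) -> (B, M, N).
import torch

B, M, K, N = 4, 8, 16, 32
inputs = torch.randn(B, M, K)
weights = torch.randn(B, K, N)
out = torch.bmm(inputs, weights)
assert out.shape == (B, M, N)
```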
Adding a line, `down_proj = down_proj.transpose(-2, -1).contiguous().transpose(-2, -1)`, after loading the model will fix this issue, but this is hacky and also means that we can't pass the config in as part of quantization_config.
Vasiliy ran into a similar issue with Float8Tensor bmm semantics in https://github.com/pytorch/ao/pull/3296, which solves this problem by transposing qdata and scale as part of the bmm op.
However for int4 we pack to int8, so it's a bit more work, as we would need to unpack int4 -> int8, transpose, contiguous, repack -> int4.
I think the easiest way is to add a flag to transpose the weight or not before the quantized data is computed, but open to any suggestions on how to best fix this
|
https://github.com/pytorch/ao/issues/3436
|
open
|
[
"triaged"
] | 2025-12-04T18:35:44Z
| 2025-12-04T23:12:15Z
| 0
|
jcaip
|
vllm-project/vllm
| 30,075
|
[Feature]: Default eplb num_redundant_experts to the lowest valid value if unspecified
|
### 🚀 The feature, motivation and pitch
EPLB requires the number of experts to be chosen up front, and there is a known minimum valid value that can be derived from the vLLM startup configuration. Since extra EPLB experts trade KV-cache memory for potential performance improvements that are not guaranteed to pay off, having the EPLB value default to the minimum valid value would reduce friction when enabling EPLB for the first time, until users are ready to tune.
As a consequence, it would also streamline templating the same config to work across multiple EP sizes for the default case.
### Alternatives
_No response_
### Additional context
_No response_
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/30075
|
open
|
[
"help wanted",
"good first issue",
"feature request"
] | 2025-12-04T18:19:03Z
| 2025-12-20T21:00:23Z
| 4
|
smarterclayton
|
pytorch/torchtitan
| 2,109
|
Knowledge Distillation template
|
Hi, I want to use torchtitan for knowledge distillation. What is the right way to do it? Should I hold both models inside the main model (and if so, how can I exclude the teacher from being saved, from `.train()`, and from the optimizer)? Or is there a way to have two separate models, with parallelism handled correctly, especially PP?
If I have to hold both models in the same Model, will the following be a correct forward()? (For now, the assumption that both models have the same number of layers is OK.)
```python
def forward(
self,
tokens: torch.Tensor,
tokens_t: torch.Tensor,
attention_masks: AttentionMasksType | None = None,
):
"""
Perform a forward pass through the Transformer model.
"""
# passthrough for nonexistent layers, allows easy configuration of pipeline parallel stages
h = self.tok_embeddings(tokens) if self.tok_embeddings else tokens
h_t = self.tok_embeddings_t(tokens_t) if self.tok_embeddings_t else tokens_t
for layer, layer_t in zip(self.layers.values(), self.layers_t.values()):
h = layer(h, self.freqs_cis, attention_masks=attention_masks)
h_t = layer_t(h_t, self.freqs_cis_t, attention_masks=attention_masks)
h = self.norm(h) if self.norm else h
h_t = self.norm_t(h_t) if self.norm_t else h_t
output = self.output(h) if self.output else h
output_t = self.output_t(h_t) if self.output_t else h_t
return output, output_t
```
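Separately from the PP question, here is a general PyTorch sketch (not torchtitan-specific; the modules are placeholders) for keeping the teacher frozen, in eval mode, and out of the optimizer:
```python
# Freeze the teacher so it is never updated and never enters the optimizer.
import torch
import torch.nn as nn

student = nn.Linear(16, 16)   # placeholder student
teacher = nn.Linear(16, 16)   # placeholder teacher

teacher.eval()
for p in teacher.parameters():
    p.requires_grad_(False)

# Only trainable (student) parameters are handed to the optimizer.
trainable = [p for p in list(student.parameters()) + list(teacher.parameters()) if p.requires_grad]
optimizer = torch.optim.AdamW(trainable)
```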
|
https://github.com/pytorch/torchtitan/issues/2109
|
open
|
[
"question"
] | 2025-12-04T17:37:51Z
| 2025-12-04T23:35:26Z
| null |
Separius
|
vllm-project/vllm
| 30,058
|
[Feature]: Multi-Adapter Support for Embed Qwen3 8B Embedding Model
|
### 🚀 The feature, motivation and pitch
Hi team, do we currently have multi-adapter (LoRA) support for embedding models, specifically the Qwen3 8B Embedding model? If not, when can we expect this support? Thanks :)
### Alternatives
_No response_
### Additional context
_No response_
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/30058
|
open
|
[
"feature request"
] | 2025-12-04T12:05:15Z
| 2025-12-04T19:42:04Z
| 4
|
dawnik17
|
huggingface/accelerate
| 3,873
|
How to specify accelerate launch yaml config item when running with torchrun
|
I've read the doc [Launching Accelerate scripts](https://huggingface.co/docs/accelerate/basic_tutorials/launch), and would like to launch with torchrun. However, the doc does not mention how to specify configs like `distribute_type` when using torchrun.
What are the equivalents of these configurations when using torchrun?
|
https://github.com/huggingface/accelerate/issues/3873
|
open
|
[] | 2025-12-04T07:27:43Z
| 2026-01-03T15:07:19Z
| null |
WhoisZihan
|
huggingface/lerobot
| 2,580
|
How can the leader arm be synchronized to follow the follower arm during inference?
|
https://github.com/huggingface/lerobot/issues/2580
|
open
|
[] | 2025-12-04T07:22:07Z
| 2025-12-11T02:53:11Z
| null |
zhoushaoxiang
|
|
vllm-project/vllm
| 30,023
|
[Feature]: Support qwen3next with GGUF?
|
### 🚀 The feature, motivation and pitch
With v0.11.0, `vllm` report:
```
vllm | (APIServer pid=1) ValueError: GGUF model with architecture qwen3next is not supported yet.
```
https://huggingface.co/Qwen/Qwen3-Next-80B-A3B-Thinking-GGUF
I did some quick digging: vLLM seems to support `Qwen3-Next` under the architecture name `qwen3_next`, but the Qwen GGUF sets it as `qwen3next`.
### Alternatives
_No response_
### Additional context
_No response_
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/30023
|
open
|
[
"feature request"
] | 2025-12-04T03:40:26Z
| 2025-12-18T05:31:57Z
| 0
|
zeerd
|
pytorch/ao
| 3,452
|
Any plans to support `USE_DISTRIBUTED=0` pytorch?
|
**Dec 6th EDIT:** simplified & expanded error and reproduction example from conversation below.
If not, then please document this in the README/requirements somewhere; the error below was cryptic.
Error that led me to this conception:
<details>
```
Traceback (most recent call last):
File "/data/data/com.termux/files/home/dev/llm/sd/test/./to.py", line 3, in <module>
import torchao
File "/data/data/com.termux/files/home/dev/llm/sd/ao/torchao/__init__.py", line 127, in <module>
from torchao.quantization import (
File "/data/data/com.termux/files/home/dev/llm/sd/ao/torchao/quantization/__init__.py", line 6, in <module>
from .autoquant import (
File "/data/data/com.termux/files/home/dev/llm/sd/ao/torchao/quantization/autoquant.py", line 11, in <module>
from torchao.dtypes import (
File "/data/data/com.termux/files/home/dev/llm/sd/ao/torchao/dtypes/__init__.py", line 1, in <module>
from . import affine_quantized_tensor_ops
File "/data/data/com.termux/files/home/dev/llm/sd/ao/torchao/dtypes/affine_quantized_tensor_ops.py", line 14, in <module>
from torchao.dtypes.floatx.cutlass_semi_sparse_layout import (
File "/data/data/com.termux/files/home/dev/llm/sd/ao/torchao/dtypes/floatx/__init__.py", line 4, in <module>
from .float8_layout import Float8Layout
File "/data/data/com.termux/files/home/dev/llm/sd/ao/torchao/dtypes/floatx/float8_layout.py", line 21, in <module>
from torchao.float8.inference import (
File "/data/data/com.termux/files/home/dev/llm/sd/ao/torchao/float8/__init__.py", line 12, in <module>
from torchao.float8.float8_linear_utils import (
File "/data/data/com.termux/files/home/dev/llm/sd/ao/torchao/float8/float8_linear_utils.py", line 14, in <module>
from torchao.float8.float8_linear import Float8Linear
File "/data/data/com.termux/files/home/dev/llm/sd/ao/torchao/float8/float8_linear.py", line 15, in <module>
from torchao.float8.distributed_utils import tensor_already_casted_to_fp8
File "/data/data/com.termux/files/home/dev/llm/sd/ao/torchao/float8/distributed_utils.py", line 14, in <module>
from torchao.float8.float8_training_tensor import Float8TrainingTensor
File "/data/data/com.termux/files/home/dev/llm/sd/ao/torchao/float8/float8_training_tensor.py", line 10, in <module>
from torch.distributed._tensor import DTensor
File "/data/data/com.termux/files/usr/lib/python3.12/site-packages/torch/distributed/_tensor/__init__.py", line 25, in <module>
sys.modules[f"torch.distributed._tensor.{submodule}"] = import_module(
^^^^^^^^^^^^^^
File "/data/data/com.termux/files/usr/lib/python3.12/importlib/__init__.py", line 90, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/data/com.termux/files/usr/lib/python3.12/site-packages/torch/distributed/tensor/__init__.py", line 4, in <module>
import torch.distributed.tensor._ops # force import all built-in dtensor ops
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/data/com.termux/files/usr/lib/python3.12/site-packages/torch/distributed/tensor/_ops/__init__.py", line 2, in <module>
from ._conv_ops import * # noqa: F403
^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/data/com.termux/files/usr/lib/python3.12/site-packages/torch/distributed/tensor/_ops/_conv_ops.py", line 5, in <module>
from torch.distributed.tensor._dtensor_spec import DTensorSpec, TensorMeta
File "/data/data/com.termux/files/usr/lib/python3.12/site-packages/torch/distributed/tensor/_dtensor_spec.py", line 6, in <module>
from torch.distributed.tensor.placement_types import (
File "/data/data/com.termux/files/usr/lib/python3.12/site-packages/torch/distributed/tensor/placement_types.py", line 8, in <module>
import torch.distributed._functional_collectives as funcol
File "/data/data/com.termux/files/usr/lib/python3.12/site-packages/torch/distributed/_functional_collectives.py", line 9, in <module>
import torch.distributed.distributed_c10d as c10d
File "/data/data/com.termux/files/usr/lib/python3.12/site-packages/torch/distributed/distributed_c10d.py", line 23, in <module>
from torch._C._distributed_c10d import (
ModuleNotFoundError: No module named 'torch._C._distributed_c10d'; 'torch._C' is not a package
```
</details>
Reproduction example leading to the error above when used with `USE_DISTRIBUTED=0 USE_CUDA=0` built pytorch:
```
import torchao
```
I tried guarding the imports/usage of `DTensor` with something like:
```
import torch.distributed
is_torch_distributed_available = torch.distributed.is_available()
if is_torch_distributed_available:
from torch.distributed._tensor import DTensor
```
But DTensor usage turned out to be prolific and well integrated. Maybe there is some sort of subclass solution to guard anything DTensor-specific? I'm out of my depth here.
I don't know of anything else to try with torchao and i
|
https://github.com/pytorch/ao/issues/3452
|
open
|
[] | 2025-12-04T00:25:19Z
| 2025-12-07T20:20:59Z
| 7
|
rene-descartes2021
|
vllm-project/vllm
| 29,998
|
[Bug]: cannot send two POST to /v1/chat/completions endpoint with identic tool function name with model GPT-OSS-120B
|
### Your current environment
<details>
<summary>The bug is reproducible with docker image vllm/vllm-openai:v0.12.0</summary>
```yaml
services:
vllm-gptoss-large:
image: vllm/vllm-openai:v0.12.0
restart: always
shm_size: '64gb'
deploy:
resources:
reservations:
devices:
- driver: nvidia
device_ids: ['0', '1']
capabilities: [gpu]
volumes:
- ./data/hf:/data
environment:
- HF_TOKEN=${HF_TOKEN}
ports:
- 8000:8000
command: ["openai/gpt-oss-120b",
"--tool-call-parser","openai",
"--enable-auto-tool-choice",
"--reasoning-parser","openai_gptoss",
"--tensor-parallel-size","2",
"--port","8000",
"--api-key", "${VLLM_API_KEY}",
"--download_dir", "/data"]
```
</details>
### 🐛 Describe the bug
This bash script cannot be executed a second time unless the name of the function is changed to a value which was not yet sent. Without a tool definition, the POST can be sent as often as you like.
```bash
#!/bin/bash
curl -X POST http://localhost:8000/v1/chat/completions \
-H "Authorization: Bearer ${VLLM_API_KEY}" \
-H "Content-Type: application/json" \
-d '{
"model": "openai/gpt-oss-120b",
"stream": false,
"messages": [
{
"role": "system",
"content": "Be a helpful assistant."
},
{
"role": "user",
"content": "Hi"
},
{
"role": "assistant",
"content": "How can I help you?"
},
{
"role": "user",
"content": "Do you like Monty Python?"
}
],
"tools": [
{
"type": "function",
"function": {
"name": "CHANGE-NAME-BEFORE-SENDING",
"description": "Use this tool if you need to extract information from a website.",
"parameters": {
"type": "object",
"properties": {
"url": {
"type": "string",
"description": "The URL to search or extract information from."
}
},
"required": ["url"]
}
}
}
]
}'
```
The script doesn't finish waiting for a response and `nvidia-smi` shows the cards consuming max power. The vllm logs show that there are tokens generated, so from an external point of view the LLM seems to generate tokens without stopping.
<img width="2962" height="274" alt="Image" src="https://github.com/user-attachments/assets/115672b2-f85f-43ec-b89c-d3a0daae7d81" />
This is quite weird, because when you call it with the Python SDK, it works fine, e.g.:
```python
from openai import OpenAI
from dotenv import load_dotenv
import os
load_dotenv()
client = OpenAI(
api_key=os.getenv("API_KEY"),
base_url="http://localhost:8000/v1",
)
tools = [{
"type": "function",
"function": {
"name": "get_weather",
"description": "Get the current weather in a given location",
"parameters": {
"type": "object",
"properties": {
"location": {"type": "string"},
"description": "Location and state, e.g., 'San Francisco, CA'"
},
"required": ["location"]
},
},
}
]
response = client.chat.completions.create(
model="openai/gpt-oss-120b",
messages=[{"role": "user", "content": "How is the weather in Berlin? use the tool get_weather."}],
tools=tools,
tool_choice="auto",
stream=False
)
print(response.choices[0].message)
```
In fact, this can also be reproduced using n8n AI Agent nodes, which are based on the TypeScript LangGraph implementation: https://github.com/n8n-io/n8n/blob/master/packages/%40n8n/nodes-langchain/nodes/agents/Agent/agents/ToolsAgent/V1/execute.ts#L34
Here you can also see that chat windows freeze when a tool is attached and a user asks the second question.
The bug really seems to be related to this model, because I tested Mistral and Qwen models and couldn't reproduce it. When I tried to debug the issue, there was a sensitivity to the description field in the parameters list of the tool. To make it clear, the following can also only be sent once using the OpenAI Python SDK, but works again when the function name is changed:
```python
from openai import OpenAI
from dotenv import load_dotenv
import os
load_dotenv()
client = OpenAI(
api_key=os.getenv("API_KEY"),
base_url=f"https://{os.getenv('API_DOMAIN')}/v1",
)
tools = [{
"type": "function",
"function": {
"name": "get_weather",
"description": "Get the current weather in a given location",
"parameters": {
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "Location and state, e.g., 'San Francisco, CA'"
},
},
"required": ["locatio
|
https://github.com/vllm-project/vllm/issues/29998
|
open
|
[
"bug"
] | 2025-12-03T21:41:35Z
| 2025-12-19T15:53:43Z
| 14
|
pd-t
|
huggingface/transformers
| 42,589
|
Incorrect tokenization `tokenizers` for escaped strings / Mismatch with `mistral_common`
|
### System Info
```
In [3]: mistral_common.__version__
Out[3]: '1.8.6'
```
```
In [4]: import transformers; transformers.__version__
Out[4]: '5.0.0.dev0'
```
```
In [5]: import tokenizers; tokenizers.__version__
Out[5]: '0.22.1'
```
### Who can help?
@ArthurZucker @itazap
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```py
from transformers import AutoTokenizer
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.protocol.instruct.request import ChatCompletionRequest
req = ChatCompletionRequest(messages=[
{'role': 'system', 'content': ''},
{'role': 'user', 'content': 'hey'},
{'role': 'assistant', 'content': 'ju\x16'},
{'role': 'user', 'content': 'hey'},
])
tokenizer_orig = MistralTokenizer.from_hf_hub("mistralai/Ministral-3-3B-Instruct-2512")
tokenizer_hf = AutoTokenizer.from_pretrained("mistralai/Ministral-3-3B-Instruct-2512")
orig_tokens = tokenizer_orig.encode_chat_completion(req).tokens
orig_text = tokenizer_orig.encode_chat_completion(req).text
print("Expected")
print(orig_text)
print(orig_tokens)
hf_tokens = tokenizer_hf.apply_chat_template(req.to_openai()["messages"])
hf_text = tokenizer_hf.convert_ids_to_tokens(hf_tokens)
print("HF")
print(hf_tokens)
print(hf_text)
```
gives:
```
Expected
<s>[SYSTEM_PROMPT][/SYSTEM_PROMPT][INST]hey[/INST]ju</s>[INST]hey[/INST]
[1, 17, 18, 3, 74058, 4, 5517, 1022, 2, 3, 74058, 4]
HF
[1, 17, 18, 3, 74058, 4, 5517, 1022, 1032, 2, 3, 74058, 4]
['<s>', '[SYSTEM_PROMPT]', '[/SYSTEM_PROMPT]', '[INST]', 'hey', '[/INST]', 'ju', 'Ė', 'Ġ', '</s>', '[INST]', 'hey', '[/INST]']
```
As you can see the token `1032` should not be there. I'm not sure exactly what is happening and it could very well be that the behavior of `tokenizers` makes sense here.
**However**, this is a mismatch with `mistral_common` which means that any such tokenization will give slightly different token ids leading to slightly incorrect results since all Mistral models are trained with `mistral_common`.
This is especially important for "long-log" parsing tasks that often have escaped strings.
It's def an edge case, but would still be very nice to fix.
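For what it's worth, a small isolation sketch is below. It is just a guess at where to look: it only checks how the HF tokenizer encodes the trailing control character on its own.
```python
# Encode only the problematic assistant content to see where token 1032 shows up.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("mistralai/Ministral-3-3B-Instruct-2512")
ids = tok.encode("ju\x16", add_special_tokens=False)
print(ids, tok.convert_ids_to_tokens(ids))
```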
### Expected behavior
Align encoding.
|
https://github.com/huggingface/transformers/issues/42589
|
closed
|
[
"bug"
] | 2025-12-03T10:57:35Z
| 2025-12-16T10:45:35Z
| 5
|
patrickvonplaten
|
huggingface/diffusers
| 12,781
|
Impossible to log into Huggingface/Diffusers Discord
|
### Describe the bug
When trying to verify my Discord/Huggingface account, no matter what I do, I end up with this message:
<img width="512" height="217" alt="Image" src="https://github.com/user-attachments/assets/d1d0f18b-c80f-4862-abde-fb49ee505ddd" />
Has the HF Discord died? If that is the case, what alternatives are there?
I feel that there is a strong need for some kind of forum where users of Diffusers can collaborate to figure out how to make newly supported, huge models run on consumer hardware. The Diffusers discussion on GitHub is dead. So, where do we go?
### Reproduction
Try to log-in in to Discord.
### Logs
```shell
-
```
### System Info
-
### Who can help?
_No response_
|
https://github.com/huggingface/diffusers/issues/12781
|
closed
|
[
"bug"
] | 2025-12-03T09:42:55Z
| 2025-12-04T15:11:42Z
| 4
|
tin2tin
|
pytorch/pytorch
| 169,461
|
torch compile + replicate, compute and communication not overlap
|
### 🐛 Describe the bug
When I use a combination of composable.replicate and torch.compile, I observe that all backward allreduce operations are executed only after the entire backward pass computation is complete.
This behavior prevents the overlap of computation and communication, which is typically achieved in DDP (DistributedDataParallel) + torch.compile by inserting a graph break during the backward pass (e.g., after the gradient calculation for a specific layer).
I am looking for any potential workarounds or suggested methods to enable computation/communication overlap when using composable.replicate with torch.compile.
```
import os
import time
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.distributed._composable.replicate import replicate
from torch.profiler import profile, record_function, ProfilerActivity
def setup():
rank = int(os.environ["RANK"])
local_rank = int(os.environ["LOCAL_RANK"])
world_size = int(os.environ["WORLD_SIZE"])
dist.init_process_group("nccl")
torch.cuda.set_device(local_rank)
return rank, local_rank
def cleanup():
dist.destroy_process_group()
class ToyModel(nn.Module):
def __init__(self):
super().__init__()
self.layers = nn.Sequential(
nn.Linear(2048, 4096),
nn.ReLU(),
nn.Linear(4096, 4096),
nn.ReLU(),
nn.Linear(4096, 4096),
nn.ReLU(),
nn.Linear(4096, 2048),
)
def forward(self, x):
return self.layers(x)
def main():
rank, local_rank = setup()
model = ToyModel().to(local_rank)
replicate(
model,
device_ids=[local_rank],
bucket_cap_mb=25
)
opt_model = torch.compile(model, backend="inductor")
optimizer = torch.optim.SGD(opt_model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()
input_tensor = torch.randn(32, 2048).to(local_rank)
target_tensor = torch.randn(32, 2048).to(local_rank)
log_dir = './profiler_logs_composable'
with profile(
activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA],
schedule=torch.profiler.schedule(
wait=5,
warmup=2,
active=3,
repeat=1
),
on_trace_ready=torch.profiler.tensorboard_trace_handler(log_dir),
record_shapes=True,
profile_memory=True,
with_stack=True
) as prof:
for step in range(15):
step_start = time.time()
with record_function("model_training_step"):
optimizer.zero_grad()
output = opt_model(input_tensor)
loss = loss_fn(output, target_tensor)
loss.backward()
optimizer.step()
torch.cuda.synchronize()
step_end = time.time()
if rank == 0:
print(f"Step {step}: Loss={loss.item():.4f}, Time={step_end - step_start:.4f}s")
prof.step()
cleanup()
if __name__ == "__main__":
main()
```

### Versions
PyTorch version: 2.10.0a0+b558c986e8.nv25.11
Is debug build: False
CUDA used to build PyTorch: 13.0
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.3 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: Could not collect
CMake version: version 3.31.6
Libc version: glibc-2.39
Python version: 3.12.3 (main, Aug 14 2025, 17:47:21) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-5.4.0-150-generic-x86_64-with-glibc2.39
Is CUDA available: True
CUDA runtime version: 13.0.88
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-PCIE-40GB
GPU 1: NVIDIA A100-PCIE-40GB
Nvidia driver version: 570.133.20
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.15.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.15.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.15.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.15.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.15.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.15.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.15.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.15.0
Is XPU available: False
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 112
On-line CPU(s) list: 0-111
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 6330 CPU @ 2.00GHz
CPU family: 6
Model: 106
Thread(s) per core: 2
Core(s) per socket:
|
https://github.com/pytorch/pytorch/issues/169461
|
open
|
[
"oncall: distributed",
"triaged",
"oncall: pt2"
] | 2025-12-03T08:33:05Z
| 2025-12-09T17:50:45Z
| 6
|
peaceorwell
|
vllm-project/vllm
| 29,944
|
[Usage]: It seems that the prefix cache has not brought about any performance benefits.
|
### Your current environment
```
root@ubuntu:/vllm-workspace# python3 collect_env.py
Collecting environment information...
==============================
System Info
==============================
OS : Ubuntu 22.04.5 LTS (x86_64)
GCC version : (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version : Could not collect
CMake version : version 4.1.0
Libc version : glibc-2.35
==============================
PyTorch Info
==============================
PyTorch version : 2.8.0+cu128
Is debug build : False
CUDA used to build PyTorch : 12.8
ROCM used to build PyTorch : N/A
==============================
Python Environment
==============================
Python version : 3.12.11 (main, Jun 4 2025, 08:56:18) [GCC 11.4.0] (64-bit runtime)
Python platform : Linux-5.15.0-25-generic-x86_64-with-glibc2.35
==============================
CUDA / GPU Info
==============================
Is CUDA available : True
CUDA runtime version : 12.8.93
CUDA_MODULE_LOADING set to : LAZY
GPU models and configuration :
GPU 0: NVIDIA H20
GPU 1: NVIDIA H20
GPU 2: NVIDIA H20
GPU 3: NVIDIA H20
GPU 4: NVIDIA H20
GPU 5: NVIDIA H20
GPU 6: NVIDIA H20
GPU 7: NVIDIA H20
Nvidia driver version : 550.127.08
cuDNN version : Could not collect
HIP runtime version : N/A
MIOpen runtime version : N/A
Is XNNPACK available : True
==============================
CPU Info
==============================
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 224
On-line CPU(s) list: 0-223
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8480+
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 56
Socket(s): 2
Stepping: 8
Frequency boost: enabled
CPU max MHz: 2001.0000
CPU min MHz: 800.0000
BogoMIPS: 4000.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr avx512_fp16 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 5.3 MiB (112 instances)
L1i cache: 3.5 MiB (112 instances)
L2 cache: 224 MiB (112 instances)
L3 cache: 210 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-55,112-167
NUMA node1 CPU(s): 56-111,168-223
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
==============================
Versions of relevant libraries
==============================
[pip3] flashinfer-python==0.3.1
[pip3] numpy==2.2.6
[pip3] nvidia-cublas-cu12==12.8.4.1
[pip3] nvidia-cuda-cupti-cu12==12.8.90
[pip3] nvidia-cuda-nvrtc-cu12==12.8.93
[pip3] nvidia-cuda-runtime-cu12==12.8.90
[pip3] nvidia-cudnn-cu12==9.10.2.21
[pip3] nvidia-cudnn-frontend==1.14.1
[pip3] nvidia-cufft-cu12==11.3.3.83
[pip3] nvidia-cufile-cu
|
https://github.com/vllm-project/vllm/issues/29944
|
open
|
[
"usage"
] | 2025-12-03T07:03:49Z
| 2025-12-03T07:04:37Z
| 0
|
wenba0
|
vllm-project/vllm
| 29,940
|
[Usage]: QWen2-Audio-7B support
|
### Your current environment
We encountered numerous peculiar issues during the Qwen2-Audio-7B conversion process. Do we currently support Qwen2-Audio-7B? If so, could you provide a demo?
Thank you very much!
### 🐛 Describe the bug
Refer to Whisper's demo
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/29940
|
closed
|
[
"usage"
] | 2025-12-03T06:04:07Z
| 2025-12-04T14:23:05Z
| 1
|
freedom-cui
|
huggingface/datasets
| 7,893
|
push_to_hub OOM: _push_parquet_shards_to_hub accumulates all shard bytes in memory
|
## Summary
Large dataset uploads crash or hang due to memory exhaustion. This appears to be the root cause of several long-standing issues.
### Related Issues
This is the root cause of:
- #5990 - Pushing a large dataset on the hub consistently hangs (46 comments, open since 2023)
- #7400 - 504 Gateway Timeout when uploading large dataset
- #6686 - Question: Is there any way for uploading a large image dataset?
### Context
Discovered while uploading the [Aphasia Recovery Cohort (ARC)](https://openneuro.org/datasets/ds004884) neuroimaging dataset (~270GB, 902 sessions) to HuggingFace Hub using the `Nifti()` feature.
Working implementation with workaround: [arc-aphasia-bids](https://github.com/The-Obstacle-Is-The-Way/arc-aphasia-bids)
## Root Cause
In `_push_parquet_shards_to_hub` (arrow_dataset.py), the `additions` list accumulates every `CommitOperationAdd` with full Parquet bytes in memory:
```python
additions = []
for shard in shards:
parquet_content = shard.to_parquet_bytes() # ~300 MB per shard
shard_addition = CommitOperationAdd(path_or_fileobj=parquet_content)
api.preupload_lfs_files(additions=[shard_addition])
additions.append(shard_addition) # THE BUG: bytes stay in memory forever
```
For a 902-shard dataset: **902 × 300 MB = ~270 GB RAM requested → OOM/hang**.
The bytes are held until the final `create_commit()` call, preventing garbage collection.
## Reproduction
```python
from datasets import load_dataset
# Any large dataset with embedded files (Image, Audio, Nifti, etc.)
ds = load_dataset("imagefolder", data_dir="path/to/large/dataset")
ds.push_to_hub("repo-id", num_shards=500) # Watch memory grow until crash
```
## Workaround
Process one shard at a time, upload via `HfApi.upload_file(path=...)`, delete before next iteration:
```python
from huggingface_hub import HfApi
from pathlib import Path
api = HfApi()
for i in range(num_shards):
shard = ds.shard(num_shards=num_shards, index=i, contiguous=True)
# Write each shard to disk, not memory
local_path = Path(f"train-{i:05d}-of-{num_shards:05d}.parquet")
shard.to_parquet(local_path)
# Upload from file path (streams from disk)
api.upload_file(
path_or_fileobj=str(local_path),
path_in_repo=f"data/train-{i:05d}-of-{num_shards:05d}.parquet",
repo_id=repo_id,
repo_type="dataset",
)
# Clean up before next iteration
local_path.unlink()
del shard
```
Memory usage stays constant (~1-2 GB) instead of growing linearly.
## Suggested Fix
After `preupload_lfs_files` succeeds for each shard, release the bytes:
1. Clear `path_or_fileobj` from the `CommitOperationAdd` after preupload
2. Or write to temp file and pass file path instead of bytes
3. Or commit incrementally instead of batching all additions
## Environment
- datasets version: main branch (post-0.22.0)
- Platform: macOS 14.x ARM64
- Python: 3.13
- PyArrow: 18.1.0
- Dataset: 902 shards, ~270 GB total embedded NIfTI files
|
https://github.com/huggingface/datasets/issues/7893
|
closed
|
[] | 2025-12-03T04:19:34Z
| 2025-12-05T22:45:59Z
| 2
|
The-Obstacle-Is-The-Way
|
pytorch/torchtitan
| 2,101
|
EP in latest main is slow
|
Hi team,
I tried to duplicate the EP implementation in my model, but I find it runs much more slowly with EP.
I find there is an explicit CPU-GPU synchronization at the beginning of the all2all in token dispatch (for input_split and output_split), which is a blocker. Is it possible to avoid it without symmetric-memory all2all?
Besides, could you help to share which part of EP workflow needs torch.compile? I noticed the usage of torch.gather and torch.scatter_add may not be optimal. I guess they may need to be optimized by torch.compile.
Thanks!
|
https://github.com/pytorch/torchtitan/issues/2101
|
open
|
[] | 2025-12-03T00:10:46Z
| 2025-12-03T00:10:46Z
| 0
|
goldhuang
|
pytorch/torchtitan
| 2,100
|
symmetric memory all2all integration for EP
|
Hi team,
I found https://github.com/pytorch/torchtitan/tree/main/torchtitan/experiments/moe_symm_mem_kernels, but it seems there has been no progress update for a while, according to:
Experiment | Test Status | Owners
-- | -- | --
[moe_symm_mem_kernels](https://github.com/pytorch/torchtitan/blob/main/torchtitan/experiments/moe_symm_mem_kernels)|TBA|[@kwen2501](https://github.com/kwen2501)
Is there a plan for the integration? Is there any known issue that stops the release?
Thanks!
|
https://github.com/pytorch/torchtitan/issues/2100
|
open
|
[] | 2025-12-03T00:02:55Z
| 2025-12-03T00:02:55Z
| 0
|
goldhuang
|
vllm-project/vllm
| 29,920
|
[Feature]: Add support for fused fp8 output to FlashAttention 3
|
### 🚀 The feature, motivation and pitch
On Hopper, we use FlashAttention as the default attention backend. When o-proj is quantized to fp8, we are leaving performance on the table as FA3 does not support fused output fp8 quant. With Triton/ROCm/AITER backends we saw up to 8% speedups with attention+quant fusion.
vLLM already maintains our own fork of FA, adding output quant support should be pretty non-intrusive. Subtasks:
- vllm-flash-attn:
- add `output_scale` parameter to attention forward functions
- plumb parameter through all layers of the interface
- compare branching at runtime/compile-time for performance and binary size (Hopper)
- vllm:
- integrate new FA version
- add support for attention+quant fusion to FA attention backend
- check FA version, hardware version
- should be as easy as modifying the `supports_fused_output_quant` method and plumbing `output_scale` from `FlashAttentionImpl.forward()` to the kernel call
### Additional context
cc @LucasWilkinson
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/29920
|
open
|
[
"help wanted",
"performance",
"feature request",
"torch.compile"
] | 2025-12-02T20:16:31Z
| 2026-01-05T20:53:11Z
| 4
|
ProExpertProg
|
vllm-project/vllm
| 29,917
|
[Feature]: VLLM_DISABLE_COMPILE_CACHE should be a config flag
|
### 🚀 The feature, motivation and pitch
`vllm serve` does a nice printout of non-default config flags. VLLM_DISABLE_COMPILE_CACHE gets used enough that it should have an equivalent config flag.
Offline, @ProExpertProg mentioned we can treat it like VLLM_DEBUG_DUMP_PATH, where we have both and the env var overrides the config option by overwriting it directly.
### Alternatives
none
### Additional context
n/a
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/29917
|
open
|
[
"help wanted",
"feature request",
"torch.compile"
] | 2025-12-02T20:06:01Z
| 2025-12-05T05:19:12Z
| 6
|
zou3519
|
pytorch/xla
| 9,726
|
How is GetOutputShardings supposed to work for PJRT Implementers?
|
We have a custom shardy + stablehlo pipeline managing shard propagation inside our compiler stack. We're having trouble **communicating the correct output sharding back to the framework**, cannot find any obvious interface to do so, and wanted to ask what the intended path for this looks like.
To be clear, this is the path our compiler takes:
1. We get the SHLO in Shardy dialect from the torch-xla framework
2. We run Shardy to solve the SHLO graph
3. We lower it to our own custom dialect and execute from there.
We do _not_ convert the SHLO graph back to HLO (as Jax does). After the graph is solved in step 2, we would like to tell torch-xla what the correct output shardings are.
## Observed Behavior
In torch_xla, we observe that output shardings are retrieved during compilation in this path:
torch_xla::XLAGraphExecutor::Compile -> torch_xla::runtime::PjRtComputationClient::Compile -> [PjRtComputation constructor](https://github.com/tenstorrent/pytorch-xla/blob/a5be1f82e7906e09aa004cb99b08e29d3c102478/torch_xla/csrc/runtime/pjrt_computation_client.h#L329-L336) -> `output_shardings_ = this->executable->GetOutputShardings();`
This eventually calls into the base PJRTExecutable implementation of [GetOutputShardings](https://github.com/openxla/xla/blob/4ae2ec6f162569750c76dbdbe12071d7091f1988/xla/pjrt/pjrt_executable.cc#L350-L361).
The mechanism by which output shardings seem to be extracted from the implementer side is by calling `PJRT_Executable_OptimizedProgram` to retrieve the post-compile MLIR from our PJRT implementation in [xla::PjRtCApiExecutable::GetHloModules()](https://github.com/openxla/xla/blob/main/xla/pjrt/c_api_client/pjrt_c_api_client.cc#L2001-L2061).
The MLIR is then converted to an xla-internal HLO module construct and output shardings are [eventually extracted from that construct inside PjRtExecutable::GetOutputShardings()](https://github.com/openxla/xla/blob/4ae2ec6f162569750c76dbdbe12071d7091f1988/xla/pjrt/pjrt_executable.cc#L340-L361)
## How should this work?
This existing path would suggest that the way a PJRT implementer "communicates" output shardings back to the framework post-compilation is by generating IR with output shardings in some format compatible with how they are ingested in XLA. This seems both complex and unidiomatic, because other paths to return data from compilation to the framework involve well defined interfaces in PJRT (like PJRT_Executable_OutputDimensions) and [PjRtCApi overrides](https://github.com/openxla/xla/blob/main/xla/pjrt/c_api_client/pjrt_c_api_client.cc#L1898-L1921) to use those interfaces and cast the result to xla internal types.
What is the recommended way to communicate output shardings to the framework from a lower-level compiler?
|
https://github.com/pytorch/xla/issues/9726
|
open
|
[
"question",
"runtime",
"stablehlo"
] | 2025-12-02T19:26:20Z
| 2025-12-15T13:47:09Z
| null |
jameszianxuTT
|
huggingface/inference-playground
| 102
|
How to know when a model is outdated ?
|
I'm testing https://huggingface.co/chat/models/openai/gpt-oss-20b and there I asked this:
```
do you know any github repository created in 2025?
<p>Sure! Here are a few GitHub repositories that were created in 2025 (all with their public “created date” and a short description):</p>
Repository | Created | Short description | Link
open‑thoughts/open‑thoughts | 2025‑01‑27 | An open‑source platform for curating datasets that can be used to train and evaluate reasoning‑oriented language models. | https://github.com/open-thoughts/open-thoughts
...
```
But on https://huggingface.co/playground:
```
do you know any github repository created in 2025?
I don’t have any information about repositories that were created in 2025. My training data only goes up to September 2023, so I can’t see or reference anything that was added to GitHub after that date. If you’re looking for recent projects, you could search GitHub directly or use the GitHub API to filter repositories by creation date.
```
I'm asking it here because I don't know where else to ask; I also opened an issue here: https://github.com/ggml-org/llama.cpp/discussions/15396#discussioncomment-15136920 .
I've also downloaded the https://huggingface.co/openai/gpt-oss-20b and running locally it doesn't know anything from 2025.
**Based on this I suspect that the model running here https://huggingface.co/chat/models/openai/gpt-oss-20b is not the one that's here https://huggingface.co/openai/gpt-oss-20b .**
**How/Where can we get the version running here https://huggingface.co/chat/models/openai/gpt-oss-20b ?**
|
https://github.com/huggingface/inference-playground/issues/102
|
open
|
[] | 2025-12-02T17:10:51Z
| 2025-12-02T17:10:51Z
| null |
mingodad
|
pytorch/executorch
| 16,041
|
CORTEX_M: Memory optimization
|
No work has been done looking into optimizing memory of the runtime. This ticket covers a broad investigation into what can be done in this space:
1. Can we optimize scratch buffer allocation (e.g. is it reused between kernels currently?)
2. Can we strip away anything from the elf to minimize runtime size?
3. Any other ideas to optimize performance related to memory
|
https://github.com/pytorch/executorch/issues/16041
|
open
|
[] | 2025-12-02T14:24:20Z
| 2025-12-15T12:01:21Z
| 0
|
AdrianLundell
|
pytorch/executorch
| 16,039
|
CORTEX_M: Target configuration
|
CMSIS-NN requires slightly different lowerings for different architecture extensions (scalar/DSP/vector). Currently
vector extension is assumed, so we might need to add a way to configure this and do modifications in the pass lowering where required.
For example, the linear operator currently only pass the kernel_sum scratch buffer and no bias to the operator call, which only works for the MVE implementation of the operator. To run this on another Cortex-M would involve passing the target CPU to the ConvertToCortexMPass which lowers the operator and add the bias as an argument if it does not have MVE support.
Alternatively, it might not be worth the effort to do this in the lowering, and it may be better to do the target configuration in the runtime flow only; then the scratch buffer would need to be computed in the runtime. Deciding how best to do this is part of this ticket.
|
https://github.com/pytorch/executorch/issues/16039
|
open
|
[] | 2025-12-02T14:19:34Z
| 2025-12-03T15:34:04Z
| 0
|
AdrianLundell
|
pytorch/pytorch
| 169,371
|
C++ Generator API is platform dependent
|
When creating a tensor with the C++ API, one can do something like this:
```
try {
Tensor t = torch::ones({200, 1, 28, 28});
t.to(torch::DeviceType::MPS);
} catch(const std::exception& e) {
...
}
```
This code is going to compile and run on all platforms, obviously going into the `catch` block if not on macOS. The same thing happens for `t.to(torch::DeviceType::CUDA)`
On the other hand, Generators offer the utilities `at::cuda::detail::createCUDAGenerator` and `at::mps::detail::createMPSGenerator` which are not defined in libtorch unless the library was built with CUDA support and on macOS respectively, which needs special care at both compile time and run time (using macros to exclude code, force compilation with unresolved external symbols, checking `torch::cuda::is_available()`/`torch::mps::is_available()` before making the calls, ...).
Is there a platform-independent way to deal with Generators just like there is with Tensors?
cc @jbschlosser @albanD @guangyey @EikanWang
|
https://github.com/pytorch/pytorch/issues/169371
|
open
|
[
"module: cpp",
"triaged",
"module: accelerator"
] | 2025-12-02T12:53:52Z
| 2025-12-04T02:07:26Z
| 1
|
matteosal
|
vllm-project/vllm
| 29,875
|
[Usage]: Is there a way to inject the grammar into the docker directly
|
### Your current environment
```text
==============================
System Info
==============================
OS : Ubuntu 22.04.5 LTS (x86_64)
GCC version : (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version : Could not collect
CMake version : version 3.28.0
Libc version : glibc-2.35
==============================
PyTorch Info
==============================
PyTorch version : 2.9.0+cu128
Is debug build : False
CUDA used to build PyTorch : 12.8
ROCM used to build PyTorch : N/A
==============================
Python Environment
==============================
Python version : 3.10.19 | packaged by conda-forge | (main, Oct 22 2025, 22:29:10) [GCC 14.3.0] (64-bit runtime)
Python platform : Linux-6.8.0-1030-azure-x86_64-with-glibc2.35
==============================
CUDA / GPU Info
==============================
Is CUDA available : True
CUDA runtime version : Could not collect
CUDA_MODULE_LOADING set to :
GPU models and configuration : GPU 0: NVIDIA H100 NVL
Nvidia driver version : 535.247.01
cuDNN version : Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.10.2
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.10.2
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.10.2
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.10.2
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.10.2
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.10.2
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.10.2
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.10.2
HIP runtime version : N/A
MIOpen runtime version : N/A
Is XNNPACK available : True
==============================
CPU Info
==============================
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 40
On-line CPU(s) list: 0-39
Vendor ID: AuthenticAMD
Model name: AMD EPYC 9V84 96-Core Processor
CPU family: 25
Model: 17
Thread(s) per core: 1
Core(s) per socket: 40
Socket(s): 1
Stepping: 1
BogoMIPS: 4800.05
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl tsc_reliable nonstop_tsc cpuid extd_apicid aperfmperf tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves user_shstk avx512_bf16 clzero xsaveerptr rdpru arat avx512vbmi umip avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid fsrm
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 1.3 MiB (40 instances)
L1i cache: 1.3 MiB (40 instances)
L2 cache: 40 MiB (40 instances)
L3 cache: 160 MiB (5 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-39
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Vulnerable: Safe RET, no microcode
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
==============================
Versions of relevant libraries
==============================
[pip3] flashinfer-python==0.5.2
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.8.4.1
[pip3] nvidia-cuda-cupti-cu12==12.8.90
[pip3] nvidia-cuda-nvrtc-cu12==12.8.93
[pip3] nvidia-cuda-runtime-cu12==12.8.90
|
https://github.com/vllm-project/vllm/issues/29875
|
open
|
[
"usage"
] | 2025-12-02T12:30:56Z
| 2025-12-03T11:53:43Z
| 1
|
chwundermsft
|
vllm-project/vllm
| 29,871
|
[Usage]: Extremly low token input speed for DeepSeek-R1-Distill-Llama-70B
|
### Your current environment
<details>
<summary>The output of <code>python collect_env.py</code></summary>
```text
Collecting environment information...
==============================
System Info
==============================
OS : Ubuntu 22.04.5 LTS (x86_64)
GCC version : (GCC) 14.2.0
Clang version : Could not collect
CMake version : Could not collect
Libc version : glibc-2.35
==============================
PyTorch Info
==============================
PyTorch version : 2.9.0+cu128
Is debug build : False
CUDA used to build PyTorch : 12.8
ROCM used to build PyTorch : N/A
==============================
Python Environment
==============================
Python version : 3.12.4 (main, Aug 29 2025, 09:21:27) [GCC 14.2.0] (64-bit runtime)
Python platform : Linux-5.15.0-118-generic-x86_64-with-glibc2.35
==============================
CUDA / GPU Info
==============================
Is CUDA available : True
CUDA runtime version : 12.8.61
CUDA_MODULE_LOADING set to :
GPU models and configuration :
GPU 0: NVIDIA H100 80GB HBM3
GPU 1: NVIDIA H100 80GB HBM3
GPU 2: NVIDIA H100 80GB HBM3
GPU 3: NVIDIA H100 80GB HBM3
GPU 4: NVIDIA H100 80GB HBM3
GPU 5: NVIDIA H100 80GB HBM3
GPU 6: NVIDIA H100 80GB HBM3
GPU 7: NVIDIA H100 80GB HBM3
Nvidia driver version : 570.158.01
cuDNN version : Could not collect
HIP runtime version : N/A
MIOpen runtime version : N/A
Is XNNPACK available : True
==============================
CPU Info
==============================
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 192
On-line CPU(s) list: 0-191
Vendor ID: AuthenticAMD
Model name: AMD EPYC 9654 96-Core Processor
CPU family: 25
Model: 17
Thread(s) per core: 1
Core(s) per socket: 96
Socket(s): 2
Stepping: 1
Frequency boost: enabled
CPU max MHz: 3707.8120
CPU min MHz: 1500.0000
BogoMIPS: 4793.01
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local avx512_bf16 clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq la57 rdpid overflow_recov succor smca fsrm flush_l1d
Virtualization: AMD-V
L1d cache: 6 MiB (192 instances)
L1i cache: 6 MiB (192 instances)
L2 cache: 192 MiB (192 instances)
L3 cache: 768 MiB (24 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-95
NUMA node1 CPU(s): 96-191
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; STIBP disabled; RSB filling; PBRSB-eIB
|
https://github.com/vllm-project/vllm/issues/29871
|
open
|
[
"usage"
] | 2025-12-02T11:25:25Z
| 2025-12-02T15:30:53Z
| 2
|
muelphil
|
vllm-project/vllm
| 29,866
|
[Doc]:
|
### 📚 The doc issue
# Installing the XAI libraries
!pip install shap
!pip install lime
!pip install alibi
!pip install interpret
!pip install dalex
!pip install eli5
### Suggest a potential alternative/fix
# Installing the XAI libraries
!pip install shap
!pip install lime
!pip install alibi
!pip install interpret
!pip install dalex
!pip install eli5
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/29866
|
closed
|
[
"documentation"
] | 2025-12-02T10:43:04Z
| 2025-12-02T10:50:10Z
| 0
|
hassaballahmahamatahmat5-cpu
|
vllm-project/vllm
| 29,865
|
[Doc]:
|
### 📚 The doc issue
# Installing the XAI libraries
!pip install shap
!pip install lime
!pip install alibi
!pip install interpret
!pip install dalex
!pip install eli5
### Suggest a potential alternative/fix
# Installing the XAI libraries
!pip install shap
!pip install lime
!pip install alibi
!pip install interpret
!pip install dalex
!pip install eli5
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/29865
|
closed
|
[
"documentation"
] | 2025-12-02T10:43:01Z
| 2025-12-02T10:50:00Z
| 0
|
hassaballahmahamatahmat5-cpu
|
vllm-project/vllm
| 29,864
|
[Usage]: I am unable to run the GLM-4.5-Air-REAP-82B-A12B-nvfp4 model on an RTX 5090.
|
### Your current environment
I am unable to run the GLM-4.5-Air-REAP-82B-A12B-nvfp4 model on an RTX 5090.
```text
Collecting environment information...
==============================
System Info
==============================
OS : Ubuntu 22.04.5 LTS (x86_64)
GCC version : (Ubuntu 12.3.0-1ubuntu1~22.04.2) 12.3.0
Clang version : Could not collect
CMake version : version 4.2.0
Libc version : glibc-2.35
==============================
PyTorch Info
==============================
PyTorch version : 2.10.0.dev20251124+cu128
Is debug build : False
CUDA used to build PyTorch : 12.8
ROCM used to build PyTorch : N/A
==============================
Python Environment
==============================
Python version : 3.10.12 (main, Nov 4 2025, 08:48:33) [GCC 11.4.0] (64-bit runtime)
Python platform : Linux-6.8.0-87-generic-x86_64-with-glibc2.35
==============================
CUDA / GPU Info
==============================
Is CUDA available : True
CUDA runtime version : 12.8.93
CUDA_MODULE_LOADING set to :
GPU models and configuration :
GPU 0: NVIDIA GeForce RTX 5090
GPU 1: NVIDIA GeForce RTX 5090
GPU 2: NVIDIA GeForce RTX 5090
GPU 3: NVIDIA GeForce RTX 5090
Nvidia driver version : 570.172.08
cuDNN version : Could not collect
HIP runtime version : N/A
MIOpen runtime version : N/A
Is XNNPACK available : True
==============================
CPU Info
==============================
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 112
On-line CPU(s) list: 0-111
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 6330 CPU @ 2.00GHz
CPU family: 6
Model: 106
Thread(s) per core: 2
Core(s) per socket: 28
Socket(s): 2
Stepping: 6
CPU max MHz: 3100.0000
CPU min MHz: 800.0000
BogoMIPS: 4000.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req vnmi avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid fsrm md_clear pconfig flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 2.6 MiB (56 instances)
L1i cache: 1.8 MiB (56 instances)
L2 cache: 70 MiB (56 instances)
L3 cache: 84 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-27,56-83
NUMA node1 CPU(s): 28-55,84-111
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Vulnerability Vmscape: Not affected
|
https://github.com/vllm-project/vllm/issues/29864
|
open
|
[
"usage"
] | 2025-12-02T10:13:31Z
| 2025-12-05T17:06:30Z
| 2
|
east612-ai
|
huggingface/diffusers
| 12,772
|
How to convert diffusers model to wan2.2 format
|
I see convert_wan_to_diffusers.py in the diffusers repo, but no convert_diffusers_to_wan.py. Do you have plans to upload a conversion script?
|
https://github.com/huggingface/diffusers/issues/12772
|
open
|
[] | 2025-12-02T09:19:29Z
| 2025-12-02T09:19:29Z
| null |
wikiwen
|
pytorch/executorch
| 16,034
|
How to add a new backend?
|
### 🚀 The feature, motivation and pitch
Hi, I've already seen that some backends are already supported, listed [here](https://docs.pytorch.org/executorch/main/backends-overview.html). Is there a convenient way to add a new backend, like [CANN](https://developer.huawei.com/consumer/en/doc/hiai-guides/introduction-0000001051486804) or OpenCL, to executorch? BTW, will executorch support an OpenCL backend in the future?
|
https://github.com/pytorch/executorch/issues/16034
|
open
|
[] | 2025-12-02T03:13:18Z
| 2025-12-02T18:50:18Z
| null |
JingliangGao
|
huggingface/diffusers
| 12,764
|
When will the img2img pipeline of FLUX.2-dev be released?
|
I see that the current version (0.36.0-dev) only updated the text-to-image pipeline for Flux2. We are looking forward to the image-to-image pipeline update!
|
https://github.com/huggingface/diffusers/issues/12764
|
open
|
[] | 2025-12-01T11:25:35Z
| 2025-12-01T11:41:56Z
| 1
|
guanxyu
|
huggingface/smolagents
| 1,890
|
Question: how to use server-side tools provided by Google Gemini or OpenAI GPT?
|
Gemini has some server-side tools like google_search (https://ai.google.dev/gemini-api/docs/google-search) or google_map. OpenAI also has server-side tools like web_search. Does Smolagents support using such server-side tools from agents? If so, how?
|
https://github.com/huggingface/smolagents/issues/1890
|
open
|
[] | 2025-12-01T05:16:01Z
| 2025-12-23T10:49:45Z
| null |
victorx-deckard
|
huggingface/agents-course
| 623
|
Message: Submission received, but no valid/matching task IDs were found in the 1 answers provided. Score did not improve previous record, leaderboard not updated.
|
I am correctly downloading the GAIA 2023 Level 1 validation dataset using snapshot_download and load_dataset. This submission is for the Unit 4 Agent Course.
```python
from huggingface_hub import snapshot_download
from datasets import load_dataset

data_dir = snapshot_download(
    repo_id="gaia-benchmark/GAIA",
    repo_type="dataset",
)
dataset = load_dataset(data_dir, "2023_level1", split="validation")
subset = dataset.select(range(20))
for item in subset:
    task_id = item.get("task_id")
    question_text = item.get("Question")
    file_name = item.get("file_name")
```
I experience failures when trying to run the first 20 questions: only 5 of the task IDs I received are reported as valid. When I specifically tried to isolate and run the task ID '935e2cff-ae78-4218-b3f5-115589b19dae' using the filtering method, the evaluation system reported:
<img width="1388" height="668" alt="Image" src="https://github.com/user-attachments/assets/f4e9fe1b-8608-4ab4-84cc-1b196a601694" />
'Submission received, but no valid/matching task IDs were found in the 1 answers provided.' This occurred even though I was confident the answer was correct.
|
https://github.com/huggingface/agents-course/issues/623
|
open
|
[
"question"
] | 2025-12-01T02:09:21Z
| 2025-12-01T02:09:21Z
| null |
ShwetaBorole
|
huggingface/tokenizers
| 1,902
|
Guide: Compiling `tokenizers` on Android/Termux
|
Hello Hugging Face team and fellow developers,
This is a guide for anyone trying to install `tokenizers` (or packages that depend on it, like `transformers` or `docling`) on an Android device using [Termux](https://termux.dev/). Currently, there are no other issues mentioning Termux, so hopefully, this guide can help others.
### The Problem
When running `pip install tokenizers` in a standard Termux environment, the installation fails during the compilation of a C++ dependency with an error similar to this:
```
error: use of undeclared identifier 'pthread_cond_clockwait'
```
This happens because the build system is targeting an Android API level where this function is not available in the C library headers.
### The Solution
The solution is to force the compilation from source and pass specific flags to the C++ compiler to set the correct Android API level and link the required libraries.
Here is a step-by-step guide:
#### Step 1: Install Build Dependencies
You will need the Rust toolchain and other build essentials. You can install them in Termux using `pkg`:
```bash
pkg update && pkg install rust clang make maturin
```
#### Step 2: Find Your Android API Level
The fix requires telling the compiler which Android API level you are using. You can get this number by running the following command in your Termux shell:
```bash
getprop ro.build.version.sdk
```
This will return a number, for example `29`, `30`, `33`, etc. `pthread_cond_clockwait` was only added to Android's libc in relatively recent API levels, so this fix relies on your device's level being new enough to expose it.
#### Step 3: Compile and Install `tokenizers`
Now, you can install the package using `pip`. The command below will automatically use the API level from the previous step.
```bash
# This command automatically gets your API level and uses it to compile tokenizers
ANDROID_API_LEVEL=$(getprop ro.build.version.sdk)
CXXFLAGS="-lpthread -D__ANDROID_API__=${ANDROID_API_LEVEL}" pip install tokenizers --no-binary :all:
```
After this, `pip install tokenizers` (and packages that depend on it) should succeed.
#### Explanation of the Flags:
* `CXXFLAGS="..."`: This sets environment variables to pass flags to the C++ compiler.
* `-lpthread`: This flag explicitly tells the linker to link against the POSIX threads library.
* `-D__ANDROID_API__=${ANDROID_API_LEVEL}`: This is the critical part. It defines a macro that tells the C++ headers to expose functions available for your specific Android version, making `pthread_cond_clockwait` visible to the compiler.
* `--no-binary :all:`: This forces `pip` to ignore pre-compiled wheels and build the package from the source code, which is necessary for the flags to be applied.
Hope this helps other developers working in the Termux environment!
|
https://github.com/huggingface/tokenizers/issues/1902
|
open
|
[] | 2025-12-01T00:46:42Z
| 2025-12-01T00:46:42Z
| 0
|
Manamama-Gemini-Cloud-AI-01
|
pytorch/pytorch
| 169,269
|
cannot import name 'get_num_sms' from 'torch._inductor.utils'
|
### 🐛 Describe the bug
I'm trying to run [nano-vllm](https://github.com/GeeeekExplorer/nano-vllm), and there is an error:
```
File "/mnt/petrelfs/fengyuan/anaconda3/envs/qwen_copy/lib/python3.12/site-packages/torch/_inductor/kernel/mm_grouped.py", line 20, in <module>
from ..utils import (
ImportError: cannot import name 'get_num_sms' from 'torch._inductor.utils'
```
I reviewed the source code of torch-2.5.1, and there is a reference to `get_num_sms` in `mm_grouped.py`, but this function is not defined in `_inductor/utils.py`.
For some reason, I can't update my torch-2.5.1 to the latest version. How can I fix it without updating? Can I simply copy the `get_num_sms()` and related functions from torch-2.9.1?
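For reference, a minimal shim along the lines of that last question — this assumes the newer `get_num_sms` simply reports the SM count of the current CUDA device, so please verify against the actual torch-2.9.x source before relying on it:
```python
# Hypothetical backport sketch: patch a missing helper into the old
# torch._inductor.utils module before importing code that expects it.
import functools

import torch
import torch._inductor.utils as inductor_utils

if not hasattr(inductor_utils, "get_num_sms"):

    @functools.lru_cache(maxsize=1)
    def get_num_sms() -> int:
        # Number of streaming multiprocessors on the current CUDA device.
        props = torch.cuda.get_device_properties(torch.cuda.current_device())
        return props.multi_processor_count

    inductor_utils.get_num_sms = get_num_sms
```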
By the way, the CUDA and GPU information is missing from the environment output below because I'm running my code on a cluster, but `nvcc -V` shows:
```
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2023 NVIDIA Corporation
Built on Tue_Feb__7_19:32:13_PST_2023
Cuda compilation tools, release 12.1, V12.1.66
Build cuda_12.1.r12.1/compiler.32415258_0
```
Thanks a lot!
### Versions
```
PyTorch version: 2.5.1
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: CentOS Linux 7 (Core) (x86_64)
GCC version: (Anaconda gcc) 11.2.0
Clang version: Could not collect
CMake version: version 2.8.12.2
Libc version: glibc-2.32
Python version: 3.12.11 | packaged by Anaconda, Inc. | (main, Jun 5 2025, 13:09:17) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-3.10.0-957.el7.x86_64-x86_64-with-glibc2.32
Is CUDA available: False
CUDA runtime version: 12.1.66
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
Is XPU available: False
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 256
On-line CPU(s) list: 0-255
Thread(s) per core: 2
Core(s) per socket: 64
Socket(s): 2
NUMA node(s): 8
Vendor ID: AuthenticAMD
CPU family: 23
Model: 49
Model name: AMD EPYC 7H12 64-Core Processor
Stepping: 0
CPU MHz: 2600.000
CPU max MHz: 2600.0000
CPU min MHz: 1500.0000
BogoMIPS: 5200.14
Virtualization: AMD-V
L1d cache: 32K
L1i cache: 32K
L2 cache: 512K
L3 cache: 16384K
NUMA node0 CPU(s): 0-15,128-143
NUMA node1 CPU(s): 16-31,144-159
NUMA node2 CPU(s): 32-47,160-175
NUMA node3 CPU(s): 48-63,176-191
NUMA node4 CPU(s): 64-79,192-207
NUMA node5 CPU(s): 80-95,208-223
NUMA node6 CPU(s): 96-111,224-239
NUMA node7 CPU(s): 112-127,240-255
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc art rep_good nopl xtopology nonstop_tsc extd_apicid aperfmperf eagerfpu pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_l2 cpb cat_l3 cdp_l3 hw_pstate sme retpoline_amd ssbd ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif umip overflow_recov succor smca
Versions of relevant libraries:
[pip3] mkl_fft==1.3.11
[pip3] mkl_random==1.2.8
[pip3] mkl-service==2.4.0
[pip3] numpy==2.2.6
[pip3] nvidia-cublas-cu12==12.8.4.1
[pip3] nvidia-cuda-cupti-cu12==12.8.90
[pip3] nvidia-cuda-nvrtc-cu12==12.8.93
[pip3] nvidia-cuda-runtime-cu12==12.8.90
[pip3] nvidia-cudnn-cu12==9.10.2.21
[pip3] nvidia-cufft-cu12==11.3.3.83
[pip3] nvidia-curand-cu12==10.3.9.90
[pip3] nvidia-cusolver-cu12==11.7.3.90
[pip3] nvidia-cusparse-cu12==12.5.8.93
[pip3] nvidia-cusparselt-cu12==0.7.1
[pip3] nvidia-nccl-cu12==2.27.5
[pip3] nvidia-nvjitlink-cu12==12.8.93
[pip3] nvidia-nvtx-cu12==12.8.90
[pip3] torch==2.5.1
[pip3] torchaudio==2.5.1
[pip3] torchvision==0.20.1
[pip3] triton==3.1.0
[conda] blas 1.0 mkl defaults
[conda] cuda-cudart 12.1.105 0 nvidia
[conda] cuda-cupti 12.1.105 0 nvidia
[conda] cuda-libraries 12.1.0 0 nvidia
[conda] cuda-nvrtc 12.1.105 0 nvidia
[conda] cuda-nvtx 12.1.105 0 nvidia
[conda] cuda-opencl
|
https://github.com/pytorch/pytorch/issues/169269
|
closed
|
[] | 2025-11-30T19:57:48Z
| 2025-12-02T20:54:05Z
| 2
|
WangHaoZhe
|
vllm-project/vllm
| 29,747
|
[Bug]: --scheduling-policy=priority & n>1 crashes engine
|
### Your current environment
<details>
<summary>The output of <code>python collect_env.py</code></summary>
```text
Your output of `python collect_env.py` here
```
</details>
### 🐛 Describe the bug
When running with priority scheduling, e.g.:
```bash
vllm serve Qwen/Qwen3-0.6B --scheduling-policy=priority
```
and using `n` > 1 in the request, like:
```python
from openai import OpenAI
client = OpenAI(base_url="http://localhost:8000/v1", api_key="dummy")
res = client.chat.completions.create(
model=client.models.list().data[0].id,
messages=[{"role": "user", "content": "What is the meaning of life?"}],
n=2
)
print(res)
```
vllm crashes with:
```python
(EngineCore_DP0 pid=207394) ERROR 11-30 15:14:29 [core.py:844] EngineCore encountered a fatal error.
(EngineCore_DP0 pid=207394) ERROR 11-30 15:14:29 [core.py:844] Traceback (most recent call last):
(EngineCore_DP0 pid=207394) ERROR 11-30 15:14:29 [core.py:844] File "/home/user/code/debug/.venv/lib/python3.10/site-packages/vllm/v1/engine/core.py", line 835, in run_engine_core
(EngineCore_DP0 pid=207394) ERROR 11-30 15:14:29 [core.py:844] engine_core.run_busy_loop()
(EngineCore_DP0 pid=207394) ERROR 11-30 15:14:29 [core.py:844] File "/home/user/code/debug/.venv/lib/python3.10/site-packages/vllm/v1/engine/core.py", line 860, in run_busy_loop
(EngineCore_DP0 pid=207394) ERROR 11-30 15:14:29 [core.py:844] self._process_input_queue()
(EngineCore_DP0 pid=207394) ERROR 11-30 15:14:29 [core.py:844] File "/home/user/code/debug/.venv/lib/python3.10/site-packages/vllm/v1/engine/core.py", line 885, in _process_input_queue
(EngineCore_DP0 pid=207394) ERROR 11-30 15:14:29 [core.py:844] self._handle_client_request(*req)
(EngineCore_DP0 pid=207394) ERROR 11-30 15:14:29 [core.py:844] File "/home/user/code/debug/.venv/lib/python3.10/site-packages/vllm/v1/engine/core.py", line 907, in _handle_client_request
(EngineCore_DP0 pid=207394) ERROR 11-30 15:14:29 [core.py:844] self.add_request(req, request_wave)
(EngineCore_DP0 pid=207394) ERROR 11-30 15:14:29 [core.py:844] File "/home/user/code/debug/.venv/lib/python3.10/site-packages/vllm/v1/engine/core.py", line 291, in add_request
(EngineCore_DP0 pid=207394) ERROR 11-30 15:14:29 [core.py:844] self.scheduler.add_request(request)
(EngineCore_DP0 pid=207394) ERROR 11-30 15:14:29 [core.py:844] File "/home/user/code/debug/.venv/lib/python3.10/site-packages/vllm/v1/core/sched/scheduler.py", line 1242, in add_request
(EngineCore_DP0 pid=207394) ERROR 11-30 15:14:29 [core.py:844] self.waiting.add_request(request)
(EngineCore_DP0 pid=207394) ERROR 11-30 15:14:29 [core.py:844] File "/home/user/code/debug/.venv/lib/python3.10/site-packages/vllm/v1/core/sched/request_queue.py", line 150, in add_request
(EngineCore_DP0 pid=207394) ERROR 11-30 15:14:29 [core.py:844] heapq.heappush(self._heap, (request.priority, request.arrival_time, request))
(EngineCore_DP0 pid=207394) ERROR 11-30 15:14:29 [core.py:844] TypeError: '<' not supported between instances of 'Request' and 'Request'
(EngineCore_DP0 pid=207394) Process EngineCore_DP0:
(APIServer pid=207278) ERROR 11-30 15:14:29 [async_llm.py:525] AsyncLLM output_handler failed.
(APIServer pid=207278) ERROR 11-30 15:14:29 [async_llm.py:525] Traceback (most recent call last):
(APIServer pid=207278) ERROR 11-30 15:14:29 [async_llm.py:525] File "/home/user/code/debug/.venv/lib/python3.10/site-packages/vllm/v1/engine/async_llm.py", line 477, in output_handler
(APIServer pid=207278) ERROR 11-30 15:14:29 [async_llm.py:525] outputs = await engine_core.get_output_async()
(APIServer pid=207278) ERROR 11-30 15:14:29 [async_llm.py:525] File "/home/user/code/debug/.venv/lib/python3.10/site-packages/vllm/v1/engine/core_client.py", line 883, in get_output_async
(APIServer pid=207278) ERROR 11-30 15:14:29 [async_llm.py:525] raise self._format_exception(outputs) from None
(APIServer pid=207278) ERROR 11-30 15:14:29 [async_llm.py:525] vllm.v1.engine.exceptions.EngineDeadError: EngineCore encountered an issue. See stack trace (above) for the root cause.
(EngineCore_DP0 pid=207394) Traceback (most recent call last):
(EngineCore_DP0 pid=207394) File "/usr/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
(EngineCore_DP0 pid=207394) self.run()
(EngineCore_DP0 pid=207394) File "/usr/lib/python3.10/multiprocessing/process.py", line 108, in run
(EngineCore_DP0 pid=207394) self._target(*self._args, **self._kwargs)
(EngineCore_DP0 pid=207394) File "/home/user/code/debug/.venv/lib/python3.10/site-packages/vllm/v1/engine/core.py", line 846, in run_engine_core
(EngineCore_DP0 pid=207394) raise e
(EngineCore_DP0 pid=207394) File "/home/user/code/debug/.venv/lib/python3.10/site-packages/vllm/v1/engine/core.py", line 835, in run_engine_core
(EngineCore_DP0 pid=207394) engine_core.run_busy_loop()
(EngineCore_DP0 pid=207394) File "/home/user/code/debug/.venv/lib/python3.10/site-packages/vllm/v1/engine/core.py", line 860, in
|
https://github.com/vllm-project/vllm/issues/29747
|
closed
|
[
"bug"
] | 2025-11-30T13:20:23Z
| 2025-12-02T22:42:30Z
| 3
|
hibukipanim
|
pytorch/executorch
| 16,010
|
How to run add operator in executorch ?
|
The result of the following code is "Segmentation fault: 11" ...
```
using executorch::aten::ScalarType;
using executorch::aten::Tensor;
using executorch::aten::TensorImpl;
int main() {
executorch::runtime::runtime_init();
// Create our input tensor.
float data[14465 * 3] = { 1 };
TensorImpl::SizesType sizes[] = { 14465, 3 };
TensorImpl impl(
ScalarType::Float, // dtype
2, // number of dimensions
sizes,
data);
Tensor input_tensor(&impl);
Tensor output_tensor(&impl);
torch::executor::KernelRuntimeContext context_;
torch::executor::native::add_out(context_, input_tensor, input_tensor, 1.0, output_tensor);
return 0;
}
```
cc @larryliu0820 @JacobSzwejbka @lucylq
|
https://github.com/pytorch/executorch/issues/16010
|
open
|
[
"module: runtime"
] | 2025-11-30T10:49:22Z
| 2025-12-01T17:50:32Z
| null |
rscguo
|
vllm-project/vllm
| 29,735
|
[Usage]: Accessing free_blocks count from LLMEngine or LLM?
|
### Your current environment
```text
None
```
### How would you like to use vllm
I'm doing research on key-value cache optimization. I want to know how to determine the number of free KV-cache blocks at runtime. I tried manually creating the engine, but I couldn't find a method for this after searching through the code.
AI keeps providing methods that have already been abandoned.
I would be very grateful for any help, as this has been puzzling me for hours.
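A rough sketch of the closest workaround I know of — reading the KV-cache usage from the Prometheus `/metrics` endpoint of a running `vllm serve` instance; the metric name used here (`vllm:gpu_cache_usage_perc`) is an assumption and may differ between vLLM versions:
```python
# Rough sketch: scrape KV-cache usage from a running `vllm serve` instance.
# The metric name below is an assumption; check your server's /metrics output.
from typing import Optional

import requests


def kv_cache_usage(base_url: str = "http://localhost:8000") -> Optional[float]:
    text = requests.get(f"{base_url}/metrics", timeout=5).text
    for line in text.splitlines():
        # Example line: vllm:gpu_cache_usage_perc{model_name="..."} 0.42
        if line.startswith("vllm:gpu_cache_usage_perc"):
            return float(line.rsplit(" ", 1)[-1])
    return None


if __name__ == "__main__":
    print("GPU KV-cache usage fraction:", kv_cache_usage())
```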
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/29735
|
closed
|
[
"usage"
] | 2025-11-29T19:21:50Z
| 2025-12-05T14:01:42Z
| 4
|
H-T-H
|
vllm-project/vllm
| 29,722
|
[RFC]: Add Balance Scheduling
|
### Motivation.
**Limitations of the current vLLM v1 scheduling strategy**
vLLM v1 scheduling currently enables chunked prefill by default, which processes prefill and decode requests together in a single scheduling step. This can hurt overall system throughput and performance in some scenarios.
Balance scheduling addresses this by synchronizing the running-queue sizes across all schedulers to delay the scheduling of new requests, thereby extending the system's steady-state decoding time. This achieves:
✅ Adding `balance_gather` to the scheduler synchronizes the number of requests in the running queues across DP ranks.
✅ Balance scheduling extends the steady-state decode time, thereby increasing the overall output throughput of the inference system.
### Proposed Change.
**1.Feature Overview**
In the vLLM scheduler, running requests (i.e., requests whose prefill has already been scheduled) have the highest priority, followed by waiting requests (i.e., requests that have not yet started computation).
When the inference system drops out of a steady state, the scheduler schedules a batch of new requests for prefill and then synchronizes across the data-parallel (DP) ranks. This forces DP ranks that are doing pure decode to step in lockstep with ranks that are prefilling, and frequent prefill scheduling on some ranks degrades the overall output throughput of the system.
Balance scheduling synchronizes the running-queue sizes across the DP ranks and only schedules new requests for prefill once every scheduler's running queue is below the configured maximum number of running requests.
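A conceptual sketch of that gating condition (illustrative only, not the implementation in the demo PR; `running_counts` and `max_num_reqs` are placeholder names):
```python
# Conceptual sketch of the balance-scheduling gate. Placeholder names only,
# not vLLM APIs: running_counts comes from a balance_gather across DP ranks.
from typing import Sequence


def may_schedule_new_prefills(
    running_counts: Sequence[int],  # running-queue size of every DP scheduler
    max_num_reqs: int,              # per-scheduler cap on running requests
) -> bool:
    # Admit new (waiting) requests for prefill only while every scheduler
    # still has headroom; otherwise keep all ranks in steady-state decode.
    return all(count < max_num_reqs for count in running_counts)
```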
**2.Implementation Design**
**3.Experiment Results**
- Fixed-length input scenario: In the performance test scenario with 3.5K fixed-length input and 1.5K fixed-length output, the throughput performance was improved by approximately **18%** after adding balance scheduling.
| Method | Model | Input Len | Request Count | Output Len | BatchSize | Average TTFT | Average TPOT | e2e duration | Input Token Throughput | Output Token Throughput | Request Throughput
| ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- |
| Baseline | DeepSeekV3.1 | 3500 | 512 | 1500 | 128 | 6600 | 86.85 | 591.9s | 3030.5 | 1297.3 | 0.86 |
| Balance scheduling | DeepSeekV3.1 | 3500 | 512 | 1500 | 128 | 7012 | 70.63 | 501.7s | 3575.7 | 1530.7 | 1.02 |
**4.Demo PR**
[#29721 ](https://github.com/vllm-project/vllm/pull/29721)
### Feedback Period.
No response
### CC List.
No response
### Any Other Things.
No response
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/29722
|
open
|
[
"RFC"
] | 2025-11-29T09:28:43Z
| 2025-12-02T08:23:33Z
| 0
|
GDzhu01
|
vllm-project/vllm
| 29,707
|
[Usage]: Workaround to run model on GPUs with Compute Capability < 8.0?
|
### Your current environment
Problem:
I am unable to run the Qwen3-VL-32B-Instruct-AWQ-4bit model due to a CUDA compute capability requirement. My hardware consists of two NVIDIA QUADRO RTX 5000 cards (16GB each, 32GB total) with a compute capability of 7.5. The software framework (likely a recent version of PyTorch or a specific library) raises an error:
"GPUs with compute capability < 8.0 are not supported."
Question:
Are there any workarounds to run this model on my older QUADRO RTX 5000 GPUs? Thanks in advance.
```
vllm collect-env
INFO 11-29 20:49:15 [__init__.py:216] Automatically detected platform cuda.
Collecting environment information...
==============================
System Info
==============================
OS : Ubuntu 24.04.3 LTS (x86_64)
GCC version : (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version : Could not collect
CMake version : version 3.30.3
Libc version : glibc-2.39
==============================
PyTorch Info
==============================
PyTorch version : 2.8.0+cu128
Is debug build : False
CUDA used to build PyTorch : 12.8
ROCM used to build PyTorch : N/A
==============================
Python Environment
==============================
Python version : 3.12.11 | packaged by Anaconda, Inc. | (main, Jun 5 2025, 13:09:17) [GCC 11.2.0] (64-bit runtime)
Python platform : Linux-6.14.0-27-generic-x86_64-with-glibc2.39
==============================
CUDA / GPU Info
==============================
Is CUDA available : True
CUDA runtime version : 12.0.140
CUDA_MODULE_LOADING set to : LAZY
GPU models and configuration :
GPU 0: Quadro RTX 5000
GPU 1: Quadro RTX 5000
Nvidia driver version : 580.65.06
cuDNN version : Could not collect
HIP runtime version : N/A
MIOpen runtime version : N/A
Is XNNPACK available : True
==============================
CPU Info
==============================
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 20
On-line CPU(s) list: 0-19
Vendor ID: GenuineIntel
Model name: Intel(R) Core(TM) i9-10900X CPU @ 3.70GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 10
Socket(s): 1
Stepping: 7
CPU(s) scaling MHz: 28%
CPU max MHz: 4700.0000
CPU min MHz: 1200.0000
BogoMIPS: 7399.70
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 ssbd mba ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512_vnni md_clear flush_l1d arch_capabilities
L1d cache: 320 KiB (10 instances)
L1i cache: 320 KiB (10 instances)
L2 cache: 10 MiB (10 instances)
L3 cache: 19.3 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-19
Vulnerability Gather data sampling: Vulnerable
Vulnerability Ghostwrite: Not affected
Vulnerability Itlb multihit: KVM: Mitigation: VMX unsupported
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Aut
|
https://github.com/vllm-project/vllm/issues/29707
|
closed
|
[
"usage"
] | 2025-11-29T00:47:39Z
| 2025-11-30T06:04:29Z
| 5
|
seasoncool
|
pytorch/torchtitan
| 2,091
|
question of `_op_sac_save_list` for op-sac
|
Hi, I have a noob question: is there any particular reason we don't put `torch.ops.aten._scaled_dot_product_cudnn_attention.default` (and maybe some other SDPA variants) into `_op_sac_save_list` to avoid recompute?
|
https://github.com/pytorch/torchtitan/issues/2091
|
closed
|
[] | 2025-11-28T23:29:02Z
| 2025-12-02T20:52:33Z
| 4
|
rakkit
|
pytorch/FBGEMM
| 5,176
|
How to apply gradient clip in fused optimizer?
|
I noticed that my embedding bag parameters exploded. Is there a way I could apply gradient clipping?
I'm using `EmbOptimType.EXACT_ROWWISE_ADAGRAD`.
Here is the code:
```
sharder_with_optim_params = EmbeddingBagCollectionSharder(
fused_params={
'optimizer': EmbOptimType.EXACT_ROWWISE_ADAGRAD,
'learning_rate': 0.01,
'eps': 1e-8,
},
)
```
|
https://github.com/pytorch/FBGEMM/issues/5176
|
open
|
[] | 2025-11-28T16:07:57Z
| 2025-11-28T16:07:57Z
| null |
acmilannesta
|
vllm-project/vllm
| 29,679
|
[Usage]: Get request total time
|
### Your current environment
```text
==============================
System Info
==============================
OS : Ubuntu 22.04.5 LTS (x86_64)
GCC version : (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version : Could not collect
CMake version : version 3.28.0
Libc version : glibc-2.35
==============================
PyTorch Info
==============================
PyTorch version : 2.9.0+cu128
Is debug build : False
CUDA used to build PyTorch : 12.8
ROCM used to build PyTorch : N/A
==============================
Python Environment
==============================
Python version : 3.10.19 | packaged by conda-forge | (main, Oct 22 2025, 22:29:10) [GCC 14.3.0] (64-bit runtime)
Python platform : Linux-6.8.0-1030-azure-x86_64-with-glibc2.35
==============================
CUDA / GPU Info
==============================
Is CUDA available : True
CUDA runtime version : Could not collect
CUDA_MODULE_LOADING set to :
GPU models and configuration : GPU 0: NVIDIA H100 NVL
Nvidia driver version : 535.247.01
cuDNN version : Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.10.2
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.10.2
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.10.2
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.10.2
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.10.2
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.10.2
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.10.2
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.10.2
HIP runtime version : N/A
MIOpen runtime version : N/A
Is XNNPACK available : True
==============================
CPU Info
==============================
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 40
On-line CPU(s) list: 0-39
Vendor ID: AuthenticAMD
Model name: AMD EPYC 9V84 96-Core Processor
CPU family: 25
Model: 17
Thread(s) per core: 1
Core(s) per socket: 40
Socket(s): 1
Stepping: 1
BogoMIPS: 4800.05
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl tsc_reliable nonstop_tsc cpuid extd_apicid aperfmperf tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves user_shstk avx512_bf16 clzero xsaveerptr rdpru arat avx512vbmi umip avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid fsrm
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 1.3 MiB (40 instances)
L1i cache: 1.3 MiB (40 instances)
L2 cache: 40 MiB (40 instances)
L3 cache: 160 MiB (5 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-39
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Vulnerable: Safe RET, no microcode
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
==============================
Versions of relevant libraries
==============================
[pip3] flashinfer-python==0.5.2
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.8.4.1
[pip3] nvidia-cuda-cupti-cu12==12.8.90
[pip3] nvidia-cuda-nvrtc-cu12==12.8.93
[pip3] nvidia-cuda-runtime-cu12==12.8.90
|
https://github.com/vllm-project/vllm/issues/29679
|
closed
|
[
"usage"
] | 2025-11-28T14:03:16Z
| 2025-12-01T09:34:12Z
| 5
|
chwundermsft
|