| repo | number | title | body | url | state | labels | created_at | updated_at | comments | user |
|---|---|---|---|---|---|---|---|---|---|---|
huggingface/lerobot
| 2,259
|
Clarifications on fine-tuning on different envs and embodiments
|
Hi everyone,
I’m currently working on fine-tuning SmolVLA and π₀ using **[RLBench](https://github.com/stepjam/RLBench)**. The robot setup is a Franka Emika Panda (7DoF + gripper), and I’ve already collected custom LeRobot datasets for a pick-and-place task ([available on my Hugging Face](https://huggingface.co/RonPlusSign)) with 500 demo episodes.
I’ve successfully fine-tuned [OpenVLA](https://github.com/openvla/openvla) using its official repository, where the action space is defined as ΔEEF pose (Euler rotation) + gripper, and the state as ΔEEF pose (quaternion rotation) + gripper, using a single observation image (left shoulder), reaching around 22% success rate.
However, when trying to fine-tune SmolVLA, despite the training running without issues (loss converges and wandb plots look fine), the evaluation yields 0% success. I suspect I’m misunderstanding how to correctly define the state and action spaces for SmolVLA in this context.
Since RLBench is not one of the officially supported envs, I created an evaluation script (you can find it [here](https://github.com/RonPlusSign/RLBench/blob/master/test_smolvla.py)), similar to the examples provided in [Robot Learning: A Tutorial](https://github.com/fracapuano/robot-learning-tutorial/blob/main/snippets/ch5/02_using_smolvla.py) (thanks @fracapuano for the amazing work!).
<img width="1207" height="393" alt="Image" src="https://github.com/user-attachments/assets/59175cf4-c458-49f3-96b0-e96bc414e333" />
For example, I started the finetuning using:
```sh
python src/lerobot/scripts/lerobot_train.py \
--policy.path=HuggingFaceVLA/smolvla_libero \
--policy.repo_id=RonPlusSign/smolvla_PutRubbishInBin \
--dataset.repo_id=RonPlusSign/RLBench-LeRobot-v3-PutRubbishInBin \
--batch_size=32 \
--output_dir=outputs/train/smolvla_finetuned_rubbish \
--policy.device=cuda \
--wandb.enable=true \
--save_freq=10000 \
--steps=60000
```
I also tested smaller finetunings (e.g. 5k, 10k, 20k steps).
Here are some specific points I’d like to clarify:
1. What are the exact action and state spaces used in SmolVLA and π₀ pretraining? (ΔEEF pose, absolute EEF pose, joint positions, joint velocities, ... and angle representations e.g. quaternion or Euler).
2. Regarding camera inputs: does the naming or number of cameras affect model performance? Should I stick to the _exact_ names provided in the `config.json` file, such as `observation.images.image` and `observation.images.image2` (front/wrist), similar to pretraining? Or is it fine to use different camera names and/or add extra views? Is there a way to override the existing input and output features, or does this mean the pretraining would be wasted? (See the inspection sketch after this list.)
3. The base model [lerobot/smolvla_base](https://huggingface.co/lerobot/smolvla_base) is pretrained on the SO100/SO101 robot, so I assume it might not transfer well to Franka Panda tasks — is that correct?
4. Would it make more sense to start from a model trained on Franka, e.g. [HuggingFaceVLA/smolvla_libero](https://huggingface.co/HuggingFaceVLA/smolvla_libero), or is it still a different type of embodiment (it seems to use 6DoF + gripper, which is not my case)?
5. Are the datasets [HuggingFaceVLA/libero](https://huggingface.co/datasets/HuggingFaceVLA/libero) and/or [HuggingFaceVLA/smol-libero](https://huggingface.co/datasets/HuggingFaceVLA/smol-libero) the ones used for pretraining [HuggingFaceVLA/smolvla_libero](https://huggingface.co/HuggingFaceVLA/smolvla_libero)?
6. In [HuggingFaceVLA/smol-libero](https://huggingface.co/datasets/HuggingFaceVLA/smol-libero) the actions have dimension 7, which doesn’t clearly map to 7 joint angles + gripper. Are these absolute joint positions, EEF poses, or something else? Does LIBERO use a 6DoF or 7DoF Franka setup? If 6DoF, which joint is excluded?
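A quick way to check point 2 is to inspect the feature definitions the pretrained checkpoint ships with. Below is a minimal sketch; the `input_features`/`output_features` key names are an assumption to verify against the actual file:
```python
import json
from huggingface_hub import hf_hub_download

# Download the policy's config.json and print the camera/state keys and shapes it declares.
# Assumption: lerobot policy configs expose these under "input_features"/"output_features".
cfg_path = hf_hub_download("lerobot/smolvla_base", "config.json")
with open(cfg_path) as f:
    cfg = json.load(f)
print(json.dumps(cfg.get("input_features", {}), indent=2))
print(json.dumps(cfg.get("output_features", {}), indent=2))
```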
Any guidance on these points (or pointers to where this information is documented) would be very helpful — I’ve been trying to align my setup with the pretrained models but haven’t found clear references for these details.
Thanks a lot for your time and for maintaining this project!
|
https://github.com/huggingface/lerobot/issues/2259
|
open
|
[
"question",
"policies",
"simulation"
] | 2025-10-20T13:24:22Z
| 2025-12-23T10:37:31Z
| null |
RonPlusSign
|
pytorch/pytorch
| 165,902
|
torchcodec in pytorch url
|
### 🚀 The feature, motivation and pitch
Is it possible to have torchcodec in the PyTorch wheel index URL?
pip3 install torch torchvision torchaudio torchcodec --index-url https://download.pytorch.org/whl/cu130
### Alternatives
_No response_
### Additional context
_No response_
cc @seemethere @malfet @atalman
|
https://github.com/pytorch/pytorch/issues/165902
|
open
|
[
"module: binaries",
"triaged"
] | 2025-10-20T12:11:01Z
| 2025-10-20T14:27:16Z
| 0
|
johnnynunez
|
pytorch/pytorch
| 165,900
|
Converting weights `.pt` content between `dict` and `RecursiveScriptModule`
|
When using PyTorch inside Isaac Lab to train RL policies, the program saves the weights `.pt` file as a Python dict (with policy, value, and optimizer keys). It can then be loaded with the `torch.load` function.
However, Isaac Sim's policy loader expects a `torch.jit._script.RecursiveScriptModule` object to be loaded with `torch.jit.load` and attempting `torch.jit.load` leads to errors like:
`RuntimeError: PytorchStreamReader failed locating file constants.pkl: file not found`
Is there any way to convert between these file content formats? This may be the crucial issue regarding usage of PyTorch inside Isaac Lab / Sim, so I posted the original thread also on their repo if you find this useful: https://github.com/isaac-sim/IsaacLab/issues/3697
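For what it's worth, here is a minimal conversion sketch. It assumes the dict checkpoint stores a state_dict under a "policy" key and that the matching module can be rebuilt in Python; `PolicyNet` below is a hypothetical placeholder for whatever architecture was actually trained:
```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the trained policy network; it must match the module
# whose state_dict was saved by the RL library.
class PolicyNet(nn.Module):
    def __init__(self, obs_dim: int = 48, act_dim: int = 12):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 256), nn.ELU(), nn.Linear(256, act_dim))

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)

policy = PolicyNet()
ckpt = torch.load("policy.pt", map_location="cpu")   # dict with policy/value/optimizer keys
policy.load_state_dict(ckpt["policy"])               # assumption: "policy" holds the state_dict
policy.eval()

scripted = torch.jit.script(policy)                  # produces a RecursiveScriptModule
scripted.save("policy_jit.pt")                       # loadable later with torch.jit.load
```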
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
|
https://github.com/pytorch/pytorch/issues/165900
|
open
|
[
"oncall: jit"
] | 2025-10-20T11:25:01Z
| 2025-10-20T14:27:26Z
| 0
|
PsorTheDoctor
|
vllm-project/vllm
| 27,184
|
[Doc]: Multi-Modal Benchmark is too simple
|
### 📚 The doc issue
The latest doc about the Multi-Modal Benchmark shows:
1. download `sharegpt4v_instruct_gpt4-vision_cap100k.json` and COCO's 2017 Train images
2. run `vllm serve` and `vllm bench serve`
But there are many details to take care of:
1. delete all entries in `sharegpt4v_instruct_gpt4-vision_cap100k.json` whose images are not from COCO
2. place COCO's 2017 Train images under the root directory, e.g. `/train2017/`
3. run `vllm serve --allowed-local-media-path /train2017/`, because vLLM uses the condition:
```
if allowed_local_media_path not in filepath.resolve().parents
```
Here `filepath.resolve().parents` is `["/train2017", "/"]`, so the easiest way is to place the images in `/train2017/` and set `--allowed-local-media-path /train2017/`.
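For reference, a small filtering sketch; it assumes the file is a JSON list of records, each with an `image` path such as `coco/train2017/000000xxxxxx.jpg` (adjust the prefix check if the layout differs):
```python
import json

# Keep only the records whose images come from COCO.
with open("sharegpt4v_instruct_gpt4-vision_cap100k.json") as f:
    records = json.load(f)

coco_only = [r for r in records if r.get("image", "").startswith("coco/")]

with open("sharegpt4v_coco_only.json", "w") as f:
    json.dump(coco_only, f)

print(f"kept {len(coco_only)} of {len(records)} records")
```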
### Suggest a potential alternative/fix
_No response_
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/27184
|
open
|
[
"documentation"
] | 2025-10-20T06:24:18Z
| 2025-10-20T16:44:17Z
| 2
|
BigFaceBoy
|
vllm-project/vllm
| 27,182
|
[Feature]: INT8 Support in Blackwell Arch
|
### 🚀 The feature, motivation and pitch
Hello, I want to use W8A8 (INT8) on Blackwell GPUs. When I read the source code, it says INT8 is not supported by sm120. According to the NVIDIA PTX instructions, Blackwell-series GPUs still have INT8 tensor cores. Is there another way to use W8A8 INT8 on an RTX 5090 with vLLM now?
<img width="1165" height="1109" alt="Image" src="https://github.com/user-attachments/assets/42546583-4124-4d3c-a1ad-ea3fb19d70cf" />
### Alternatives
_No response_
### Additional context
_No response_
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/27182
|
open
|
[
"feature request"
] | 2025-10-20T06:04:03Z
| 2025-10-20T06:04:03Z
| 0
|
nhanngoc94245
|
huggingface/optimum
| 2,376
|
Support qwen2_5_vl for ONNX export
|
### Feature request
I would like to be able to convert [this model](https://huggingface.co/prithivMLmods/DeepCaption-VLA-V2.0-7B) which is based on Qwen 2.5 VL architecture using optimum. Right now, I get the error:
```
ValueError: Trying to export a qwen2_5_vl model, that is a custom or unsupported architecture, but no custom onnx configuration was passed as `custom_onnx_configs`. Please refer to https://huggingface.co/docs/optimum/main/en/exporters/onnx/usage_guides/export_a_model#custom-export-of-transformers-models for an example on how to export custom models. Please open an issue at https://github.com/huggingface/optimum/issues if you would like the model type qwen2_5_vl to be supported natively in the ONNX export.
```
I read the documentation but I have no idea how I'd go about setting the custom onnx config up.
### Motivation
Qwen 2.5 VL is a SOTA architecture that is already being used in downstream models (see my example), so it is worth supporting.
### Your contribution
I can do research but I don't have enough experience with this codebase and ML code to contribute a PR.
|
https://github.com/huggingface/optimum/issues/2376
|
open
|
[] | 2025-10-19T22:08:28Z
| 2026-01-06T08:03:39Z
| 8
|
ayan4m1
|
pytorch/pytorch
| 165,861
|
Reflect padding: CUDA error when one of the batch dimensions is larger than uint16 max value (2**16)
|
### 🐛 Describe the bug
Reflect padding breaks when one of the batch dimensions is larger than uint16 max value (2**16).
The total memory footprint is not the issue: tensors holding more elements overall are fine, as long as every dimension except the last stays within the uint16 range.
Other padding modes behave fine, the problem is only with the reflection one.
## why is this important?
`torch.stft` only accepts 2D tensors (B, L), requiring flattening of higher dimensions into the batch dimension. This commonly produces batch sizes > 65536 for large batches or multi-dimensional audio/signal data.
## reproduce
```python
import torch
import torch.nn.functional as F
# these break cuda
x = torch.rand(2**16, 2, device="cuda")
# x = torch.rand(1, 2**16, 2, device="cuda")
# x = torch.rand(2**16, 1, 2, device="cuda")
# these are fine even if the total number of samples is more than 2**16, but not along a single dimension
# x = torch.rand(2**16 - 1, 200, device="cuda") # everything ok
# x = torch.rand(8, 2**16 - 1, 200, device="cuda") # everything ok
# x = torch.rand(2**16 - 1, 8, 200, device="cuda") # everything ok
# x = torch.rand(2, 2**18, device="cuda") # everything ok
F.pad(x, (1, 1), mode="constant")
print("constant pad ok")
F.pad(x, (1, 1), mode="circular")
print("circular pad ok")
F.pad(x, (1, 1), mode="replicate")
print("replicate pad ok")
F.pad(x, (1, 1), mode="reflect")
print("this won't print")
```
output (error message):
```
constant pad ok
circular pad ok
replicate pad ok
Traceback (most recent call last):
File "/home/milu10/src/temp/torch-pad-cuda-bug.py", line 23, in <module>
F.pad(x, (1, 1), mode="reflect")
File "/home/milu10/src/temp/.pixi/envs/default/lib/python3.12/site-packages/torch/nn/functional.py", line 5294, in pad
return torch._C._nn.pad(input, pad, mode, value)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
torch.AcceleratorError: CUDA error: invalid configuration argument
Search for `cudaErrorInvalidConfiguration' in https://docs.nvidia.com/cuda/cuda-runtime-api/group__CUDART__TYPES.html for more information.
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
```
### Versions
`python collect_env.py`:
Collecting environment information...
PyTorch version: 2.9.0+cu128
Is debug build: False
CUDA used to build PyTorch: 12.8
ROCM used to build PyTorch: N/A
OS: Rocky Linux 9.5 (Blue Onyx) (x86_64)
GCC version: (GCC) 11.5.0 20240719 (Red Hat 11.5.0-5)
Clang version: Could not collect
CMake version: version 3.26.5
Libc version: glibc-2.34
Python version: 3.12.12 | packaged by conda-forge | (main, Oct 13 2025, 14:34:15) [GCC 14.3.0] (64-bit runtime)
Python platform: Linux-5.14.0-503.14.1.el9_5.x86_64-x86_64-with-glibc2.34
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to:
GPU models and configuration: GPU 0: NVIDIA A100-PCIE-40GB
Nvidia driver version: 565.57.01
cuDNN version: Could not collect
Is XPU available: False
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 0-63
Vendor ID: AuthenticAMD
Model name: AMD EPYC 7502 32-Core Processor
CPU family: 23
Model: 49
Thread(s) per core: 1
Core(s) per socket: 32
Socket(s): 2
Stepping: 0
Frequency boost: enabled
CPU(s) scaling MHz: 97%
CPU max MHz: 2500.0000
CPU min MHz: 1500.0000
BogoMIPS: 4990.34
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthres
|
https://github.com/pytorch/pytorch/issues/165861
|
closed
|
[
"module: cuda",
"triaged",
"module: edge cases"
] | 2025-10-19T12:07:23Z
| 2025-10-22T21:53:53Z
| 2
|
michal-lukomski
|
huggingface/transformers
| 41,731
|
transformers CLI documentation issue
|
### System Info
- `transformers` version: 5.0.0.dev0
- Platform: Linux-6.6.87.2-microsoft-standard-WSL2-x86_64-with-glibc2.39
- Python version: 3.12.9
- Huggingface_hub version: 1.0.0.rc6
- Safetensors version: 0.6.2
- Accelerate version: 1.10.1
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (accelerator?): 2.8.0+cu128 (CUDA)
- Using distributed or parallel set-up in script?: no
- Using GPU in script?: yes
- GPU type: NVIDIA GeForce RTX 3050 Laptop GPU
### Who can help?
@stevhliu
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] Update the documentation for the transformers-cli
- [ ] Set the default --format flag to "pipe" in place of "infer"
### Reproduction
echo -e "Plants create [MASK] through a process known as photosynthesis." | transformers run --task fill-mask --model google-bert/bert-base-uncased --device 0
(as shown in documentation)
**output:-**
<img width="1089" height="354" alt="Image" src="https://github.com/user-attachments/assets/0722d782-748b-4ecf-afa9-e4e6dbe67126" />
### Expected behavior
**output:**
<img width="1087" height="287" alt="Image" src="https://github.com/user-attachments/assets/1169bfea-8473-48c4-bdd1-f623d16e2f28" />
**Fix/updated command:**
echo -e "Plants create [MASK] through a process known as photosynthesis." | transformers run fill-mask --model google-bert/bert-base-uncased --device 0 --format pipe
This indicates the current working format is:-
transformers run <task_name> --model <model_name> --format <format_name> [options]
**update**
We could let the default --format flag be "pipe" instead of "infer", which is deprecated, so we could also write the command as follows for most models:-
transformers run <task_name> --model <model_name>
**Action Needed:** (documentation change)
All documentation for similar models should be updated for the transformer CLI inference
I would like to confirm if my understanding is correct: should I go ahead and raise a PR to update the documentation and set the default as "pipe" for --format flag? I am relatively new to open source and would greatly appreciate any guidance or tips you could provide to ensure my contribution is appropriate and follows best practices.
|
https://github.com/huggingface/transformers/issues/41731
|
closed
|
[
"bug"
] | 2025-10-19T09:31:46Z
| 2025-12-22T08:03:09Z
| 14
|
ArjunPimpale
|
huggingface/chat-ui
| 1,947
|
HuggingChat MoM (Mixture-of-Models) Integration Proposal 🤗
|
# **HuggingChat MoM (Mixture-of-Models) Integration Proposal 🤗**
**Status:** Proposal
**Date:** 2025-10-19
**Version:** 1.0
**Authors**: vLLM-SR Team
---
## Executive Summary
This proposal outlines the integration of **vLLM Semantic Router** into HuggingChat as a new **MoM (Mixture-of-Models)** routing option. The integration will enable advanced intelligent routing capabilities including semantic caching, PII detection, and chain-of-thought (CoT) transparency, while maintaining full backward compatibility with the existing Omni (Arch router) implementation.
---
## 1. Motivation
### Current State
- HuggingChat currently supports **Omni** routing via the Arch router (`src/lib/server/router/arch.ts`)
- Arch router provides basic route selection using LLM-based decision-making
- Limited visibility into routing decisions and no semantic caching capabilities
### Desired State
- Support **MoM (Mixture-of-Models)** routing via vLLM Semantic Router
- Enable advanced features: semantic caching, PII detection, intelligent routing
- Provide transparent chain-of-thought (CoT) information for routing decisions
- Maintain coexistence of both Omni and MoM routers for gradual rollout
### Business Value
1. **Performance**: Semantic caching reduces latency for repeated queries
2. **Security**: PII detection protects user privacy
3. **Transparency**: CoT information builds user trust
4. **Flexibility**: Users can choose between Omni and MoM routing strategies
5. **Dashboard Integration**: vLLM-SR dashboard provides monitoring and analytics
### About vLLM Semantic Router
**vLLM Semantic Router** is an intelligent routing system that embodies the **Mixture-of-Models (MoM)** philosophy and is exposed under the model name **MoM**:
```shell
curl -X POST http://localhost:8801/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "MoM",
"messages": [
{"role": "user", "content": "What is the derivative of x^2?"}
]
}'
```
- **Intelligent Routing**: Routes requests to the optimal model based on semantic understanding of the query, not just keyword matching
- **Semantic Caching**: Leverages semantic similarity to cache responses, dramatically reducing latency for similar queries (not just exact matches)
- **Semantic Chain Architecture**: Evolving toward a composable semantic chain where all stages are orchestrated in an extensible pipeline, enabling future enhancements and custom stage integration in work-in-progress "SemanticChain".
- **Three-Stage Pipeline** (Extensible & Composable):
- **Stage 1 - Prompt Guard**: Security-first approach with jailbreak detection and PII protection
- **Stage 2 - Router Memory**: Intelligent semantic caching for performance optimization
- **Stage 3 - Smart Routing**: Multi-level intelligent routing combining three complementary strategies:
- **Domain Understanding**: Semantic classification of queries into domains (math, coding, general, etc.)
- **Similarity-Based Routing**: Semantic similarity matching to route similar queries to optimal models
- **Keyword-Based Routing**: Keyword pattern matching for explicit intent detection
- These three routing strategies work together to provide comprehensive query understanding and optimal model selection
- Future stages can be added to the pipeline without disrupting existing functionality
- **Mixture-of-Models Philosophy**: Recognizes that no single model is optimal for all tasks. By intelligently routing different types of queries to different specialized models, it achieves:
- Better accuracy through task-specific model selection
- Cost optimization by using smaller models for simple tasks
- Performance improvement through semantic understanding
- Transparency via chain-of-thought visibility
- **Production-Ready**: Battle-tested with comprehensive error handling, monitoring, and dashboard support
- **Open Source**: vLLM Community-driven development with active maintenance and feature additions
---
## 2. Goals
### Primary Goals
- ✅ Integrate vLLM Semantic Router as a new MoM routing option
- ✅ Extract and store chain-of-thought (CoT) metadata from vLLM-SR responses
- ✅ Support both Omni and MoM routers coexisting in the same system
- ✅ Expose CoT information to frontend for visualization
### Secondary Goals
- ✅ Support A/B testing between Omni and MoM routers
- ✅ Integrate with vLLM-SR dashboard for monitoring
---
## 3. Non-Goals
- ❌ Replace Omni router entirely (maintain coexistence)
- ❌ Modify vLLM Semantic Router codebase
- ❌ Implement custom semantic caching in HuggingChat (use vLLM-SR's caching)
- ❌ Create new dashboard (integrate with existing vLLM-SR dashboard)
- ❌ Support non-OpenAI-compatible endpoints for MoM
---
## 4. Design Principles
### 1. **Backward Compatibility**
- Existing Omni router functionality remains unchanged
- No breaking changes to current APIs or configurations
- Both routers can be configured independently
### 2. **Transparency**
- CoT inf
|
https://github.com/huggingface/chat-ui/issues/1947
|
open
|
[
"enhancement"
] | 2025-10-19T08:17:14Z
| 2025-10-20T11:12:30Z
| 3
|
Xunzhuo
|
pytorch/xla
| 9,681
|
Improve PyTorch/XLA Documentation and Clarify SPMD Usage
|
## 📚 Documentation
### [Feature Request / Documentation Improvement] Improve PyTorch/XLA Documentation and Clarify SPMD Usage
Hello PyTorch/XLA team,
During my TPU grant I encountered many undocumented pitfalls and unclear behaviors, which made the setup process very time-consuming and confusing.
I’d like to ask for clarification and improvement on several key points that caused me significant confusion and wasted time.
Perhaps the documentation seems clear to experienced users, but when reading it for the first time, there are many implicit assumptions and missing explanations.
---
### General Request
Please improve the documentation — make it more **explicit** and **practical**, especially for multi-host and SPMD setups.
For example, while it’s indeed mentioned in the [*Running on TPU Pods*](https://docs.pytorch.org/xla/master/learn/pytorch-on-xla-devices.html#running-on-tpu-pods) section that the code must be launched on all hosts, this information is **buried too deep** and is **not referenced** in other critical sections like “Troubleshooting Basics.”
It would be much clearer if you placed a visible note near the top of documentation saying something like:
> ⚠️ For multi-host TPU setups, you must launch the code on all hosts simultaneously.
> See [Running on TPU Pods (multi-host)](...) for details.
This would help avoid confusion, since right now it’s easy to miss and leads to situations where the code just hangs with no clear reason.
---
### Specific Questions and Issues
1. What is recommended to use — `.launch` or `spmd`?
2. Should SPMD be started on all hosts as well?
3. In SPMD, is the batch size **global** or **per-host**?
- How is data distributed if each process sees all devices and I have 4 hosts with 4 devices each?
- If the batch size is global, what is the purpose of having multiple hosts? Only for data loading?
- How does XLA decide what data goes to which device — does it shard across all devices globally or only locally per host?
4. How to correctly use `scan/scan_layers` if the transformer block takes multiple arguments and one of them is of type `torch.bool`?
5. `assume_pure` seems to break if the model contains `nn.Parameter`. Is it even correct to use it like that?
- Can I reuse “params and buffers” between steps, or should I retrieve them every time before a training pass?
6. `syncfree.AdamW(model.parameters(), lr=lr, betas=(0.9, 0.95), weight_decay=0)` seems to trigger recompilation around step ~323 (possibly due to `beta2`, not sure).
7. In SPMD, how to correctly get the process ID? `world_size` and `global_ordinal` don’t work. Should I use `process_index`? `is_master_ordinal(local=False)` also doesn’t work.
8. Please add a note to the docs: when logging, it’s better to use `flush=True`, otherwise logs might not appear (which is confusing). Also, wrap training code in `try/except`, since exceptions sometimes don’t log either.
9. How can I perform **sampling and logging** in SPMD mode if I want **only one host** to handle these tasks (not all hosts)?
10. Please provide **fully explicit examples** — with comments, no abstractions, step-by-step explanations of what each part does and how it can be modified.
11. Compilation caching seems broken — when trying to load, it says “not implemented.”
12. Can I pass only one `input_sharding=xs.ShardingSpec(mesh, ('fsdp', None))` to `MpDeviceLoader` if my dataset returns a tuple of 10 tensors with different shapes?
13. `xm.rendezvous` seems to do nothing in SPMD mode (at least before the training loop).
14. How to verify that all hosts are actually training **one shared model**, and not each training separately?
15. In the docs, `HybridMesh(ici_mesh_shape, dcn_mesh_shape, ('data','fsdp','tensor'))` is shown,
but in practice it only works if you pass named arguments like `ici_mesh_shape=ici_mesh_shape`, otherwise it errors out.
16. How to correctly do **gradient checkpointing** per layer with FSDP?
17. How to correctly do **gradient clipping**?
18. If model weights are expected to remain in FP32 when using `autocast`, please **explicitly state that in the training docs** — it would help avoid second-guessing.
19. What is a **reasonable compilation time** during training? Mine can take **20–30 minutes**.
20. What are the actual intended purposes of `torch_xla.step()` and `torch_xla.compile()`?
- Since PyTorch/XLA already compiles and executes lazily, it’s unclear when and why these should be used explicitly.
---
All of this was tested on `v4-32 TPU`.
Maybe some of it is covered somewhere in the docs and I just missed it, but I hope you can clarify and improve the documentation.
Thank you for your time and support.
|
https://github.com/pytorch/xla/issues/9681
|
open
|
[
"distributed",
"documentation"
] | 2025-10-19T04:58:44Z
| 2025-10-20T13:27:34Z
| 1
|
Muinez
|
huggingface/tokenizers
| 1,877
|
encode bytes directly
|
Is there a way to directly encode bytes with a BPE-based HF tokenizer without having to decode the bytes to a string first?
|
https://github.com/huggingface/tokenizers/issues/1877
|
open
|
[] | 2025-10-19T03:30:39Z
| 2025-11-28T07:43:18Z
| 2
|
tsengalb99
|
vllm-project/vllm
| 27,154
|
[Installation]: How to reduce the vllm image
|
### Your current environment
Hi,
I looked at docker pull vllm/vllm-openai:latest — the image is around 12 GB. I’m exploring ways to reduce the vLLM image size, specifically for NVIDIA L40S (I use Linux amd64). Any ideas?
Does building vLLM from source help reduce the image size?
Here’s what I’ve tried so far (but I'm not sure how to install FlashInfer):
```
FROM nvidia/cuda:12.1.0-runtime-ubuntu22.04
# Install Python and pip
RUN apt-get update && apt-get install -y python3 python3-pip && \
apt-get clean && rm -rf /var/lib/apt/lists/*
# Install only vLLM and production dependencies
RUN pip3 install --no-cache-dir vllm
# Set CUDA arch for L40S (8.9)
ENV TORCH_CUDA_ARCH_LIST="8.9+PTX"
# Expose API port
EXPOSE 8000
ENTRYPOINT ["python3", "-m", "vllm.entrypoints.openai.api_server"]
```
more infos:
https://discuss.vllm.ai/t/current-vllm-docker-image-size-is-12-64gb-how-to-reduce-it/1204/4
https://docs.vllm.ai/en/latest/deployment/docker.html#building-vllm-s-docker-image-from-source
pr: https://github.com/vllm-project/vllm/pull/22377
### How you are installing vllm
```sh
pip install -vvv vllm
```
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/27154
|
open
|
[
"installation"
] | 2025-10-18T17:52:07Z
| 2025-10-20T17:45:39Z
| 4
|
geraldstanje
|
vllm-project/vllm
| 27,153
|
[Feature]: Allow vllm bench serve in non-streaming mode with /completions API
|
### 🚀 The feature, motivation and pitch
vLLM’s bench serve currently supports recording benchmark results only in streaming mode, capturing metrics like TTFT, TPOT, ITL, etc. For my use case of benchmarking [llm-d](https://github.com/llm-d/llm-d), which uses vLLM, I would like to enable vllm bench serve in non-streaming mode for the openai backend, recording only non-streaming latency metrics like E2E latency. Overall, the changes required would be as follows:
* Add a new Async Request Function - `async_request_openai_completions_non_streaming()` function in [`vllm/vllm/benchmarks/lib/endpoint_request_func.py`](https://github.com/vllm-project/vllm/blob/main/vllm/benchmarks/lib/endpoint_request_func.py) to support parsing of non-streaming vllm outputs.
* Add a new benchmark argument: `benchmark_streaming`. If `benchmark_streaming` is set to False for the `openai` backend, then the above function `async_request_openai_completions_non_streaming()` is called instead of `async_request_openai_completions`.
* Either modify [`vllm/benchmarks/serve.py`](https://github.com/vllm-project/vllm/blob/main/vllm/benchmarks/serve.py) or design a new benchmark script to calculate and save metrics, excluding streaming-only metrics like TTFT, TPOT and ITL.
Happy to discuss and create PRs for the above implementation. Looking forward to thoughts and feedback.
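For illustration, a minimal sketch (not tied to vLLM's internal request dataclasses) of timing a non-streaming `/v1/completions` call end to end with aiohttp:
```python
import asyncio
import time

import aiohttp


async def completions_e2e_latency(base_url: str, model: str, prompt: str, max_tokens: int = 128) -> float:
    """Send one non-streaming /v1/completions request and return its end-to-end latency in seconds."""
    payload = {"model": model, "prompt": prompt, "max_tokens": max_tokens, "stream": False}
    start = time.perf_counter()
    async with aiohttp.ClientSession() as session:
        async with session.post(f"{base_url}/v1/completions", json=payload) as resp:
            await resp.json()  # wait for the complete (non-streamed) body
    return time.perf_counter() - start


if __name__ == "__main__":
    # Hypothetical server address and model name.
    latency = asyncio.run(completions_e2e_latency("http://localhost:8000", "my-model", "Hello"))
    print(f"E2E latency: {latency:.3f}s")
```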
### Alternatives
Another option I'm considering is using [benchmark_throughput.py](https://github.com/vllm-project/vllm/blob/main/benchmarks/benchmark_throughput.py). However, it relies on the offline LLM library which does not serve my use-case of benchmarking the vllm server in non-streaming mode.
### Additional context
_No response_
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/27153
|
open
|
[
"feature request"
] | 2025-10-18T17:47:44Z
| 2025-10-18T20:50:49Z
| 0
|
susiejojo
|
huggingface/candle
| 3,137
|
Strategic Discussion: Flicker's Hybrid Architecture for Lightweight Inference + Advanced Training
|
# Strategic Discussion: Flicker's Hybrid Architecture Evolution
## Overview
This issue proposes a comprehensive strategic discussion about flicker's positioning and architecture evolution. The detailed proposal is documented in `STRATEGIC_DISCUSSION_PROPOSAL.md`.
## Context
During analysis of flicker's capabilities vs PyTorch, a critical strategic question emerged: Should flicker be primarily a **lightweight inference engine** or evolve into a **comprehensive training framework**?
## Proposed Solution: Hybrid Architecture
Instead of choosing one direction, we propose a dual-track approach:
- **flicker-core**: Lightweight inference (current focus)
- **flicker-train**: Advanced training features
- **Feature Gates**: Granular control for specific capabilities
## Key Strategic Questions
### 1. Technical Feasibility
- Is zero-copy gradient system feasible with Rust ownership?
- How do we implement compile-time training validation?
- What's the best approach for async-distributed training?
### 2. Market Positioning
- Does hybrid approach make sense for flicker's goals?
- How do we balance inference vs training development resources?
- Will this attract both inference and training users?
### 3. Implementation Priority
- Which advanced training features should we implement first?
- How do we ensure seamless transition from inference to training?
- What performance targets should we set vs PyTorch?
## Revolutionary Differentiators
The proposal identifies 4 major areas where Rust could revolutionize ML:
1. **Zero-Copy Gradient Systems** - Gradients as views, not copies
2. **Compile-Time Training Validation** - Catch training errors at compile time
3. **Async-First Training Infrastructure** - True concurrency without GIL
4. **SIMD-Optimized Research Features** - Hand-optimized kernels impossible in Python
## Benefits
✅ Preserves current lightweight inference advantages
✅ Enables advanced training capabilities unique to Rust
✅ Creates natural upgrade path for users
✅ Positions flicker as both practical tool and research platform
## Next Steps
1. **Enable GitHub Discussions** to facilitate community input
2. **Review detailed proposal** in `STRATEGIC_DISCUSSION_PROPOSAL.md`
3. **Gather feedback** from community on strategic direction
4. **Validate technical feasibility** of proposed features
5. **Create implementation roadmap** based on consensus
## Discussion Document
📋 **Full Proposal**: See `STRATEGIC_DISCUSSION_PROPOSAL.md` for comprehensive analysis including:
- Current state analysis
- PyTorch comparison
- Technical implementation details
- Code examples of revolutionary features
- Trade-offs and considerations
- Community input questions
## Call for Input
This represents a potential major evolution for flicker. Community input is essential to validate:
- Strategic direction alignment with user needs
- Technical feasibility of proposed features
- Implementation priority and resource allocation
- Market positioning effectiveness
**Please review the detailed proposal and share your thoughts on flicker's strategic future.**
---
*This issue will be converted to a GitHub Discussion once discussions are enabled on the repository.*
|
https://github.com/huggingface/candle/issues/3137
|
closed
|
[] | 2025-10-18T17:27:24Z
| 2025-10-21T16:18:51Z
| 1
|
jagan-nuvai
|
huggingface/lerobot
| 2,245
|
release 0.4.0 and torch 2.8.0
|
Hello Lerobot Team! :)
Quick question, do you have a time estimate for:
- lerobot release 0.4.0 (i.e. the next stable release using the new v30 data format)
- bumping torch to 2.8
Thanks a lot in advance!
|
https://github.com/huggingface/lerobot/issues/2245
|
closed
|
[
"question",
"dependencies"
] | 2025-10-18T16:57:07Z
| 2025-10-19T18:34:47Z
| null |
antoinedandi
|
pytorch/torchtitan
| 1,920
|
Potentially incorrect attention flop calculation due to wrong head_dim?
|
### Bug description
https://github.com/pytorch/torchtitan/blob/a8899e4b2cab74eadbe4b9a2ca2776ceb8829db3/torchtitan/models/utils.py#L432-L437
However, `head_dim` is not necessarily equal to `dim / n_heads`.
e.g. Qwen3-4B: dim=2560, n_heads=32, so dim / n_heads = 80, while the actual head_dim is 128.
### Versions
latest main
|
https://github.com/pytorch/torchtitan/issues/1920
|
closed
|
[
"high priority",
"triage review"
] | 2025-10-18T15:56:57Z
| 2025-10-29T22:03:17Z
| 4
|
gau-nernst
|
pytorch/pytorch
| 165,836
|
[ROCm][CI] Machines under the label linux.rocm.gpu.2 are undergoing maintenance.
|
> NOTE: Remember to label this issue with "`ci: sev`"
> If you want autorevert to be disabled, keep the ci: disable-autorevert label
<!-- Add the `merge blocking` label to this PR to prevent PRs from being merged while this issue is open -->
## Current Status
*Status could be: preemptive, ongoing, mitigated, closed. Also tell people if they need to take action to fix it (i.e. rebase)*.
ongoing
## Error looks like
*Provide some way users can tell that this SEV is causing their issue.*
We may expect higher queue times for PyTorch ROCm linux.rocm.gpu.2 workflows.
## Incident timeline (all times pacific)
*Include when the incident began, when it was detected, mitigated, root caused, and finally closed.*
10/18/2025
## User impact
*How does this affect users of PyTorch CI?*
We may expect higher queue times for PyTorch ROCm linux.rocm.gpu.2 workflows.
## Root cause
*What was the root cause of this issue?*
Maintenance
## Mitigation
*How did we mitigate the issue?*
Will be resolved by EOD 10/19/2025.
## Prevention/followups
*How do we prevent issues like this in the future?*
N/A
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
|
https://github.com/pytorch/pytorch/issues/165836
|
closed
|
[
"module: rocm",
"ci: sev"
] | 2025-10-18T12:54:28Z
| 2025-10-20T16:09:25Z
| 0
|
amdfaa
|
huggingface/lerobot
| 2,242
|
Is it no longer possible to fine-tune the previously used π0 model?
|
I previously trained a model using the following command for fine-tuning:
`lerobot-train --dataset.repo_id=parkgyuhyeon/slice-clay --policy.path=lerobot/pi0 --output_dir=outputs/train/pi0_slice-clay --job_name=pi0_slice-clay --policy.device=cuda --wandb.enable=false --wandb.project=lerobot --log_freq=10 --steps=50000 --policy.repo_id=parkgyuhyeon/pi0_slice-clay --policy.push_to_hub=false`
However, after the release of π0.5, I noticed that the new example command includes additional arguments like:
```
--policy.repo_id=your_repo_id \
--policy.compile_model=true \
--policy.gradient_checkpointing=true \
--policy.dtype=bfloat16 \
```
It seems that some new options have been added.
Does this mean the model I fine-tuned earlier using π0 can no longer be used?
|
https://github.com/huggingface/lerobot/issues/2242
|
closed
|
[
"question",
"policies"
] | 2025-10-18T08:42:35Z
| 2025-10-20T00:18:03Z
| null |
pparkgyuhyeon
|
huggingface/lerobot
| 2,239
|
Models trained using openpi pi0.5 on Lerobot's pi0.5
|
Hi, can I check if models trained using the [pytorch port of openpi's pi0.5](https://github.com/Physical-Intelligence/openpi?tab=readme-ov-file#pytorch-support) are compatible with lerobot's definition of pi0.5?
Thanks!
|
https://github.com/huggingface/lerobot/issues/2239
|
open
|
[
"question",
"policies"
] | 2025-10-18T02:01:45Z
| 2025-10-18T10:54:06Z
| null |
brycegoh
|
pytorch/pytorch
| 165,811
|
[RFC] A Python backend registration API
|
In this dev post (https://dev-discuss.pytorch.org/t/embrace-tensor-subclass-as-a-python-device-registration-api/2771) I have talked about creating a PyTorch backend purely in Python. After chatting with a few folks (@FFFrog @gabrieldemarmiesse), we decided that it's a good idea to formalize APIs around registering a backend in Python.
Please take a look; looking forward to any feedback.
https://github.com/pytorch/rfcs/pull/83
Thanks!
cc @bdhirsh @albanD
|
https://github.com/pytorch/pytorch/issues/165811
|
open
|
[
"triaged",
"module: backend",
"module: python frontend"
] | 2025-10-18T00:46:37Z
| 2025-10-27T17:28:37Z
| 1
|
qihqi
|
pytorch/pytorch
| 165,799
|
`torch.where` does not accept scalar argument when `out=` is passed
|
### 🐛 Describe the bug
`torch.where` accepts scalar arguments as per the documentation. This works fine for the most part, but when the `out` argument is provided, a `TypeError` is raised complaining that scalar arguments are not accepted.
To reproduce the error, run
```
import torch
x = torch.tensor([1.0, 2.0])
cond = torch.tensor([True, False])
print(torch.where(cond, x, 3.0)) # works fine, prints `tensor([1., 3.])`
print(torch.where(cond, x, 3.0, out=x))
```
which raises error
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: where(): argument 'other' (position 3) must be Tensor, not float
```
```
I have tested this both on Linux and MacOS.
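A workaround sketch in the meantime is to wrap the scalar in a 0-dim tensor, which the `out=` overload accepts:
```python
import torch

x = torch.tensor([1.0, 2.0])
cond = torch.tensor([True, False])
out = torch.empty_like(x)
# Passing the scalar as a tensor satisfies the "must be Tensor" check.
torch.where(cond, x, torch.tensor(3.0), out=out)
print(out)  # tensor([1., 3.])
```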
### Versions
```
PyTorch version: 2.9.0
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 15.6.1 (arm64)
GCC version: Could not collect
Clang version: 12.0.0 (clang-1200.0.32.28)
CMake version: version 3.31.3
Libc version: N/A
Python version: 3.11.11 | packaged by conda-forge | (main, Mar 3 2025, 20:44:07) [Clang 18.1.8 ] (64-bit runtime)
Python platform: macOS-15.6.1-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M1 Pro
Versions of relevant libraries:
[pip3] numpy==2.2.4
[pip3] torch==2.9.0
[conda] numpy 2.2.4 pypi_0 pypi
[conda] torch 2.9.0 pypi_0 pypi
```
cc @albanD
|
https://github.com/pytorch/pytorch/issues/165799
|
open
|
[
"triaged",
"module: python frontend"
] | 2025-10-17T22:30:10Z
| 2025-10-19T19:21:34Z
| null |
hchau630
|
pytorch/executorch
| 15,222
|
How to support custom LLMs with qualcomm backend?
|
``examples/qualcomm/oss_scripts/llama/llama.py`` gives an example on how to export LLMs.
I would like to know if there are any guidelines for supporting custom LLMs with architectures similar to LLaMA. Specifically, I have a huggingface-style checkpoint folder.
cc @cccclai @winskuo-quic @shewu-quic @haowhsu-quic @DannyYuyang-quic @cbilgin
|
https://github.com/pytorch/executorch/issues/15222
|
closed
|
[
"partner: qualcomm",
"module: qnn"
] | 2025-10-17T15:22:28Z
| 2025-10-30T21:20:11Z
| null |
xiaoxiaosuaxuan
|
huggingface/lerobot
| 2,228
|
Trossen WidowX AI model, depth cameras and tests
|
Hi,
Would you be open to receiving pull requests to support more recent Trossen Robotics setups as well as depth cameras? For the robot part the pattern is quite well established. For depth cameras we solved it by tweaking the dataset utils a bit.
Our implementation is fairly tested.
|
https://github.com/huggingface/lerobot/issues/2228
|
closed
|
[
"question",
"robots"
] | 2025-10-17T09:32:22Z
| 2025-10-31T19:15:25Z
| null |
lromor
|
vllm-project/vllm
| 27,090
|
[Usage]: Does vLLM support a data-parallel group spanning multiple nodes when starting an online service?
|
### Your current environment
```text
The output of `python collect_env.py`
```
### How would you like to use vllm
Does vLLM support a data-parallel group spanning multiple nodes when starting an online service?
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/27090
|
open
|
[
"usage"
] | 2025-10-17T09:15:04Z
| 2025-10-20T02:37:19Z
| 2
|
KrisLu999
|
vllm-project/vllm
| 27,086
|
[Bug]: After enabling P-D Disaggregation, the final output results are not entirely identical.
|
### Your current environment
vllm VERSION: 0.10.1
### 🐛 Describe the bug
When I fixed the random seed and ensured all environment variables were consistent, I noticed that launching PD separation with the same configuration produced inconsistent final outputs. This phenomenon may require multiple attempts to fully manifest. I have a question: Is this behavior normal? (under temperature=0 conditions)
vLLM startup script (D); the startup process for P nodes is almost identical, except that it uses "kv_producer".
```
VLLM_CFG=(
--trust-remote-code
--data-parallel-size 1
--tensor-parallel-size 8
--no-enable-prefix-caching
--no-enable-chunked-prefill
--kv-transfer-config '{"kv_connector":"NixlConnector","kv_role":"kv_consumer"}'
)
```
When requested, temperature=0; the prompt is identical for every request (shown as "xxxx" below):
```
curl -X POST -s http://${HOST_PORT}/v1/completions \
-H "Content-Type: application/json" \
-d '{
"model": "base_model",
"prompt": "xxxx", # The prompt is identical for every request, and this prompt will also appear.
"max_tokens": 1000,
"temperature": 0,
"stream": true
}'
printf "\n"
```
My question is: Does the PD also have a probability of producing non-identical outputs at every step when temperature=0? If this is a normal phenomenon, what causes it? If this is a bug, what might be causing it?
Looking forward to your responses. Thank you.
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/27086
|
open
|
[
"bug"
] | 2025-10-17T07:56:41Z
| 2025-10-20T09:16:21Z
| 4
|
freedom-cui
|
huggingface/lerobot
| 2,227
|
How to easily run inference with a trained model
|
Hello, and thank you for sharing such an inspiring project!
I’m currently working with a 7-DoF robotic arm (6 joint axes + 1 gripper) and generating datasets through video recordings for training on smolVLA. Since there’s still some ongoing engineering work related to dataset generation, I’d like to start by understanding how the inference pipeline is implemented.
I have successfully verified the training workflow using the [lerobot/svla_so100_pickplace](https://huggingface.co/datasets/lerobot/svla_so100_pickplace) dataset and produced a trained model. Now, I’m wondering if there is a way to quickly load the trained model and perform inference, similar to how OpenVLA provides a simple demo on Hugging Face — where the model can be loaded and tested with just a few lines of code.
For OpenVLA example:
```
from transformers import AutoModelForVision2Seq, AutoProcessor
from PIL import Image
import torch
# Load Processor & VLA
processor = AutoProcessor.from_pretrained("openvla/openvla-7b", trust_remote_code=True)
vla = AutoModelForVision2Seq.from_pretrained(
"openvla/openvla-7b",
attn_implementation="flash_attention_2", # [Optional] Requires `flash_attn`
torch_dtype=torch.bfloat16,
low_cpu_mem_usage=True,
trust_remote_code=True
).to("cuda:0")
# Grab image input & format prompt
image: Image.Image = get_from_camera(...)
prompt = "In: What action should the robot take to {<INSTRUCTION>}?\nOut:"
# Predict Action (7-DoF; un-normalize for BridgeData V2)
inputs = processor(prompt, image).to("cuda:0", dtype=torch.bfloat16)
action = vla.predict_action(**inputs, unnorm_key="bridge_orig", do_sample=False)
# Execute...
robot.act(action, ...)
```
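For comparison, a rough lerobot-style sketch; the import path, checkpoint path, observation keys, and shapes below are assumptions that vary with the lerobot version and your dataset, so check them against the checkpoint's `config.json`:
```python
import torch

# Assumption: older releases expose this under lerobot.common.policies instead.
from lerobot.policies.smolvla.modeling_smolvla import SmolVLAPolicy

# Hypothetical local path to a training output.
policy = SmolVLAPolicy.from_pretrained("outputs/train/my_run/checkpoints/last/pretrained_model")
policy.eval()

# One observation batch; keys and shapes must match the policy's input features.
batch = {
    "observation.state": torch.zeros(1, 7),                   # assumption: 6 joints + gripper
    "observation.images.front": torch.zeros(1, 3, 256, 256),  # assumption: camera key and resolution
    "task": ["pick up the object and place it in the box"],   # language instruction
}
with torch.no_grad():
    action = policy.select_action(batch)
print(action)
```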
I would be very grateful if you could share any related information or references.
|
https://github.com/huggingface/lerobot/issues/2227
|
open
|
[
"question"
] | 2025-10-17T05:41:15Z
| 2025-12-16T02:57:00Z
| null |
Biz-Joe
|
pytorch/torchtitan
| 1,903
|
Problem with converting DCP checkpoint to Hugging Face format
|
Hi ! I started a run with Llama_3_8b and saved the DCP checkpoint of step 0 (the original model). Then I used https://github.com/pytorch/torchtitan/blob/main/scripts/checkpoint_conversion/convert_to_hf.py
to convert the step-0 DCP checkpoint into .safetensors files, and copied the config.json and tokenizer from meta-llama/Llama-3.1-8B. Then I used the converted checkpoint for a simple generation test but got unreadable results.
The result and the code for generation:
<img width="824" height="167" alt="Image" src="https://github.com/user-attachments/assets/322bbf60-79e1-43ed-b037-ce1d7af10d2e" />
<img width="425" height="489" alt="Image" src="https://github.com/user-attachments/assets/4406ea79-96c6-4320-9792-5dc9f52f6063" />
I wonder if the issue is caused by an incorrect config.json? Thanks a lot!
|
https://github.com/pytorch/torchtitan/issues/1903
|
closed
|
[
"question"
] | 2025-10-17T03:00:52Z
| 2025-10-17T05:04:22Z
| null |
kv-wang
|
huggingface/lerobot
| 2,224
|
Can I just modify the JSON of the pretrained policy to adapt it to my own robot?
|
I just want to know if I can just modify the config JSON (shape of state, size of image, etc.) to adapt the model for inference on my modified robot (which has a different number of feet, etc., and a different image resolution)?
|
https://github.com/huggingface/lerobot/issues/2224
|
open
|
[
"question",
"policies"
] | 2025-10-17T01:33:32Z
| 2025-10-20T16:40:26Z
| null |
shs822
|
pytorch/torchtitan
| 1,900
|
checkpoint.initial_load_in_hf should overwrite everything and load from hf weights.
|
### Bug description
I have a `checkpoint` folder and I set `initial_load_in_hf: true` in yaml config like [this](https://github.com/meta-pytorch/forge/blob/main/apps/grpo/qwen3_1_7b.yaml#L78), when running `python -m apps.grpo.main --config apps/grpo/qwen3_1_7b.yaml`, I will get the error `step-1` not found. From the log I saw the warning :
```
[0] WARNING checkpoint.initial_load_path is provided but the checkpoint.folder exists. Checkpointer will use the checkpoints from the checkpoint.folder checkpoint.
[0] WARNING checkpoint.initial_load_in_hf is True but the checkpoint.folder exists. Checkpointer will not load from HF safetensors
```
Looking closer, I noticed `If the checkpoint folder for the current run is not empty, located at {--job.dump_folder}/{--checkpoint.folder}` at this [line](https://github.com/pytorch/torchtitan/blob/main/torchtitan/config/job_config.py#L464). Since `checkpoint.folder` defaults to `checkpoints`, it checks whether the `checkpoints` folder exists and tries to load from it, totally ignoring the setting `initial_load_in_hf: true`.
I hope we can change it so that when `initial_load_in_hf=True`, it loads from HF weights no matter whether `checkpoint.folder` exists or not. This is more user-friendly, as the user has already explicitly configured `initial_load_in_hf=True` and expects the program to load from HF weights.
### Versions
Latest main
|
https://github.com/pytorch/torchtitan/issues/1900
|
open
|
[
"question"
] | 2025-10-16T21:08:59Z
| 2025-10-16T21:33:32Z
| null |
wukaixingxp
|
pytorch/xla
| 9,679
|
PJRT Computation Client Teardown Function
|
## ❓ Questions and Help
Is there a teardown function that can be hooked from PJRT Plugin implementers for system teardown purposes? For example, graceful device closure at session termination?
It seems like the PJRT Computation Client is instantiated with a [leaky singleton](https://github.com/pytorch/xla/blob/d291621f583574f575888da33eaabe866056592c/torch_xla/csrc/runtime/runtime.cpp#L58-L60) pattern, so its destructor is not called, and we cannot leverage our PJRT Client's destructor.
Is there some client shutdown hook that can be used? It seems like [PJRT_Client_Destroy](https://github.com/openxla/xla/blob/71a4e6e6e4e9f0f8b8f25c07a32ad489aff19239/xla/pjrt/c/pjrt_c_api.h#L374-L375C21) would be a suitable candidate, except that I don't see it ever being called from pytorch/xla.
The reason for this is that we would like to have some automatic device cleanup / other system resource teardown implemented in our plugin that triggers at the end of a session. It would also be nice to have a user-accessible API that permits session teardown within PJRT, for example to reset devices between pytests within the same process.
|
https://github.com/pytorch/xla/issues/9679
|
open
|
[
"question"
] | 2025-10-16T20:21:27Z
| 2025-10-17T16:52:08Z
| null |
jameszianxuTT
|
huggingface/lerobot
| 2,221
|
Question about pre-trained weights usability and performance on Hugging Face models
|
Hello,
I would like to ask whether the weights provided on Hugging Face (for example, under the lerobot author page) can be directly downloaded and used for inference, or if they must be fine-tuned before achieving reasonable performance.
When I directly load and evaluate the models (e.g., lerobot/smolvla_base or lerobot/pi05_libero_base), the performance appears extremely poor, almost random. I’m wondering if this is expected behavior or if I might have made a mistake in my setup.
Here’s the list of models I found on Hugging Face:
lerobot/smolvla_base
lerobot/pi05_base
lerobot/diffusion_pusht
lerobot/pi0_base
lerobot/pi05_libero_base
lerobot/act_aloha_sim_transfer_cube_human
lerobot/vqbet_pusht
lerobot/diffusion_pusht_keypoints
lerobot/act_aloha_sim_insertion_human
lerobot/pi0_libero_base
lerobot/pi05_libero_finetuned
lerobot/pi05_libero_finetuned_quantiles
lerobot/pi0_libero_finetuned
Are the *_base models supposed to be general pre-trained checkpoints that require downstream fine-tuning (e.g., on LIBERO), while the *_finetuned ones are ready for evaluation?
Thank you in advance for your clarification!
|
https://github.com/huggingface/lerobot/issues/2221
|
closed
|
[
"question"
] | 2025-10-16T14:14:39Z
| 2025-10-31T16:26:45Z
| null |
MichaelWu99-lab
|
vllm-project/vllm
| 27,021
|
[Usage]: Need guidance reproducing benchmark results from PR #25337 — results differ significantly from reported data
|
## Background
Recently, we have been working on optimizing the position computation for multimodal models in vLLM.
During benchmarking, we noticed that our results were not as expected.
To investigate, we decided to reproduce the benchmark results from [PR #25337](https://github.com/vllm-project/vllm/pull/25337), comparing the performance before and after that PR was merged into the main branch.
- Before PR commit: cf56cf78b47e5f9b6a81ce0d50a94f9291922315
- After PR commit: 30d08911f7cf78287f8da003ddcc99f6ef196f9f
<img width="1380" height="712" alt="Image" src="https://github.com/user-attachments/assets/afca55db-c443-4c98-ba6b-f656b070af5f" />
However, our reproduced results differ **significantly** from the performance data reported in the PR.
We’d like to understand whether this discrepancy may be caused by hardware differences, model choice, or benchmark setup.
**Who can help guide me?**
## Model and Environment
- Model used: Qwen/Qwen3-VL-30B-A3B-Instruct-FP8 (the Qwen3-VL-4B model used in the PR could not be found on Hugging Face)
- GPU: NVIDIA A100 PCIe
- vLLM startup command:
```bash
vllm serve "Qwen/Qwen3-VL-30B-A3B-Instruct-FP8" \
--trust-remote-code \
--gpu-memory-utilization 0.9 \
--max-model-len 16384
```
## Benchmark Command
```bash
vllm bench serve \
--backend openai-chat \
--model "Qwen/Qwen3-VL-30B-A3B-Instruct-FP8" \
--base-url "http://localhost:8000" \
--endpoint "/v1/chat/completions" \
--dataset-name "hf" \
--dataset-path "lmarena-ai/VisionArena-Chat" \
--num-prompts 100 \
--request-rate 10 \
--save-result \
--result-dir benchmarks_results \
--result-filename test.json
```
## Our Benchmark Results
### Before PR #25337
```text
============ Serving Benchmark Result ============
Successful requests: 100
Request rate configured (RPS): 10.00
Benchmark duration (s): 16.91
Total input tokens: 5280
Total generated tokens: 11522
Request throughput (req/s): 5.91
Output token throughput (tok/s): 681.42
Peak output token throughput (tok/s): 2225.00
Peak concurrent requests: 97.00
Total Token throughput (tok/s): 993.68
---------------Time to First Token----------------
Mean TTFT (ms): 1176.13
Median TTFT (ms): 1185.79
P99 TTFT (ms): 2178.91
-----Time per Output Token (excl. 1st token)------
Mean TPOT (ms): 88.39
Median TPOT (ms): 78.68
P99 TPOT (ms): 392.01
---------------Inter-token Latency----------------
Mean ITL (ms): 77.30
Median ITL (ms): 42.31
P99 ITL (ms): 581.15
==================================================
```
### After PR #25337
```text
============ Serving Benchmark Result ============
Successful requests: 100
Request rate configured (RPS): 10.00
Benchmark duration (s): 16.89
Total input tokens: 5280
Total generated tokens: 11640
Request throughput (req/s): 5.92
Output token throughput (tok/s): 689.02
Peak output token throughput (tok/s): 2178.00
Peak concurrent requests: 97.00
Total Token throughput (tok/s): 1001.57
---------------Time to First Token----------------
Mean TTFT (ms): 1193.52
Median TTFT (ms): 1285.23
P99 TTFT (ms): 2111.41
-----Time per Output Token (excl. 1st token)------
Mean TPOT (ms): 88.84
Median TPOT (ms): 78.00
P99 TPOT (ms): 344.25
---------------Inter-token Latency----------------
Mean ITL (ms): 76.89
Median ITL (ms): 42.30
P99 ITL (ms): 597.42
==================================================
```
## Reference: Benchmark Results from PR #25337
### Main branch
```text
============ Serving Benchmark Result ============
Successful requests: 1000
Request rate configured (RPS): 10.00
Benchmark duration (s): 101.85
Total input tokens: 94327
Total generated tokens: 120882
Request throughput (req/s): 9.82
Output token throughput (tok/s): 1186.81
Peak output token throughput (tok/s): 2862.00
Peak concurrent requests: 133.00
Total Token throughput (tok/s): 2112.91
---------------Time to First Token----------------
Mean TTFT (ms): 229.53
Median TTFT (ms): 180.19
P99 TTFT (ms): 928.83
-----Time per Output Token (excl. 1st token)------
Mean TPOT (ms):
|
https://github.com/vllm-project/vllm/issues/27021
|
open
|
[
"usage"
] | 2025-10-16T12:31:03Z
| 2025-10-17T05:46:32Z
| 5
|
deitxfge
|
vllm-project/vllm
| 27,017
|
[Doc]: KV Cache Memory allocations
|
### 📚 The doc issue
Hello,
When serving a model via vLLM for text(token) generation:
1. Before a new request gets scheduled, does vLLM check if KV cache for a sequence length of `max_model_len` is available for that new request or does it check if KV cache for a sequence length of `input prompt + max_tokens` (if it's less than _max_model_length_) is available for the request? In case the request does not specify a _max_tokens_ does it default to 16?
2. In case the required KV cache memory is not available, does the server wait until it is available to schedule that new request?
3. When exactly is the KV cache allocated for a particular request? Do the KV cache blocks get allocated after computing the number of new blocks required for all current requests after each generation step of the model, as mentioned in this [blog post](https://www.aleksagordic.com/blog/vllm)? That is, the KV cache is not fully allocated upfront based on the calculation in point [1], but allocated incrementally, since the request could finish before it reaches the _max_tokens_ or _max_model_length_ limit?
4. I am trying to understand if the server concurrency can be more than the one specified in the server startup logs (based on the _max_model_len_) and get a clearer understanding of request scheduling.
example logs:
```
GPU KV cache size: {X} tokens
Maximum concurrency for {max_model_len} tokens per request: Y
```
5. The KV cache token and concurrency estimates vLLM gives in the startup logs for the **_Qwen-235B MoE_** model do not match the formula below for `tensor_parallel_size` of 8. They do match for `tensor_parallel_size` of 4, and in general for a different model like **_Llama-70B_**. Is the formula below missing something specifically for the Qwen-235B models at `tensor_parallel_size` of 8?
```
number of layers * number of KV heads * head dimension * precision/8 * 2 (for K & V) * seq_len bytes
OR
(number of layers * number of KV heads * head dimension * precision/8 * 2 (for K & V) * seq_len)/tensor_parallel_size bytes per GPU
i.e. for Qwen-235B MoE
(94 * 4 * 128 * 16/8 * 2 * seq_len)/8 bytes per GPU
```
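For concreteness, a minimal Python sketch of the formula exactly as written above (the Qwen-235B numbers and the 32k sequence length are just example inputs; whether this formula is complete for that model at `tensor_parallel_size` of 8 is exactly the question):
```python
def kv_cache_bytes_per_gpu(num_layers, num_kv_heads, head_dim,
                           dtype_bits, seq_len, tensor_parallel_size):
    # layers * kv_heads * head_dim * bytes-per-element * 2 (K and V) per token,
    # split evenly across tensor-parallel ranks.
    per_token = num_layers * num_kv_heads * head_dim * (dtype_bits / 8) * 2
    return per_token * seq_len / tensor_parallel_size

# Qwen-235B numbers quoted above, TP=8, 32k-token sequence:
print(kv_cache_bytes_per_gpu(94, 4, 128, 16, 32_768, 8) / 2**30, "GiB")  # ~0.73 GiB
```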
Thanks!
### Suggest a potential alternative/fix
_No response_
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/27017
|
closed
|
[
"documentation"
] | 2025-10-16T11:43:43Z
| 2025-11-04T11:08:02Z
| 7
|
sneha5gsm
|
vllm-project/vllm
| 27,011
|
[Usage]: Running GLM4.5-Air with Speculative Decoding
|
### Your current environment
```
The output of `python collect_env.py`
```
### How would you like to use vllm
I want to run inference of [GLM-4.5-Air](https://huggingface.co/zai-org/GLM-4.5-Air-FP8) with speculative decoding. The [GLM 4.5](https://huggingface.co/zai-org/GLM-4.5) page mentions `All models use MTP layers and specify --speculative-num-steps 3 --speculative-eagle-topk 1 --speculative-num-draft-tokens 4 to ensure competitive inference speed.`
They give examples of how to use speculative decoding in sglang, but not in vLLM. I was wondering whether this is supported in vLLM.
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/27011
|
open
|
[
"usage"
] | 2025-10-16T10:17:54Z
| 2025-10-16T10:23:01Z
| 0
|
aqx95
|
vllm-project/vllm
| 27,006
|
[Usage]: In vLLM version 0.8.5, when I send an HTTP image URL directly, the model cannot recognize the image content, but it works correctly when I use a base64-encoded image. I’d like to understand why this happens.
|
### Your current environment
```text
The output of `python collect_env.py`
```
### How would you like to use vllm
I want to run inference of a [specific model](put link here). I don't know how to integrate it with vllm.
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/27006
|
open
|
[
"usage"
] | 2025-10-16T08:09:29Z
| 2025-10-16T10:33:49Z
| 4
|
Lislttt
|
huggingface/lerobot
| 2,218
|
image pad value in pi0/pi05
|
### System Info
```Shell
the latest lerobot version
```
### Information
- [ ] One of the scripts in the examples/ folder of LeRobot
- [ ] My own task or dataset (give details below)
### Reproduction
```python
import torch
import torch.nn.functional as F


def resize_with_pad_torch(  # see openpi `resize_with_pad_torch` (exact copy)
    images: torch.Tensor,
    height: int,
    width: int,
    mode: str = "bilinear",
) -> torch.Tensor:
    """PyTorch version of resize_with_pad. Resizes an image to a target height and width without distortion
    by padding with black. If the image is float32, it must be in the range [-1, 1].
    Args:
        images: Tensor of shape [*b, h, w, c] or [*b, c, h, w]
        height: Target height
        width: Target width
        mode: Interpolation mode ('bilinear', 'nearest', etc.)
    Returns:
        Resized and padded tensor with same shape format as input
    """
    # Check if input is in channels-last format [*b, h, w, c] or channels-first [*b, c, h, w]
    if images.shape[-1] <= 4:  # Assume channels-last format
        channels_last = True
        if images.dim() == 3:
            images = images.unsqueeze(0)  # Add batch dimension
        images = images.permute(0, 3, 1, 2)  # [b, h, w, c] -> [b, c, h, w]
    else:
        channels_last = False
        if images.dim() == 3:
            images = images.unsqueeze(0)  # Add batch dimension
    batch_size, channels, cur_height, cur_width = images.shape
    # Calculate resize ratio
    ratio = max(cur_width / width, cur_height / height)
    resized_height = int(cur_height / ratio)
    resized_width = int(cur_width / ratio)
    # Resize
    resized_images = F.interpolate(
        images,
        size=(resized_height, resized_width),
        mode=mode,
        align_corners=False if mode == "bilinear" else None,
    )
    # Handle dtype-specific clipping
    if images.dtype == torch.uint8:
        resized_images = torch.round(resized_images).clamp(0, 255).to(torch.uint8)
    elif images.dtype == torch.float32:
        resized_images = resized_images.clamp(-1.0, 1.0)
    else:
        raise ValueError(f"Unsupported image dtype: {images.dtype}")
    # Calculate padding
    pad_h0, remainder_h = divmod(height - resized_height, 2)
    pad_h1 = pad_h0 + remainder_h
    pad_w0, remainder_w = divmod(width - resized_width, 2)
    pad_w1 = pad_w0 + remainder_w
    # Pad
    constant_value = 0 if images.dtype == torch.uint8 else -1.0
    padded_images = F.pad(
        resized_images,
        (pad_w0, pad_w1, pad_h0, pad_h1),  # left, right, top, bottom
        mode="constant",
        value=constant_value,
    )
    # Convert back to original format if needed
    if channels_last:
        padded_images = padded_images.permute(0, 2, 3, 1)  # [b, c, h, w] -> [b, h, w, c]
    return padded_images
```
### Expected behavior
Images from LeRobot are float32 in the range [0, 1], so `constant_value` in this code is -1, not 0. After the `x * 2 - 1` normalization, the padded pixels become -1 * 2 - 1 = -3, so values of -3 end up in the input of the SigLIP embedding.
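A minimal sketch of the concern, assuming the rest of the pipeline pads with -1 and then maps [0, 1] images to [-1, 1] via `x * 2 - 1` (the shapes and padding amounts below are arbitrary examples):
```python
import torch
import torch.nn.functional as F

img = torch.rand(1, 3, 200, 224)                 # LeRobot-style float32 image in [0, 1]
padded = F.pad(img, (0, 0, 12, 12), value=-1.0)  # pad height to 224 with constant_value=-1
normalized = padded * 2 - 1                      # normalization to [-1, 1] before SigLIP
print(normalized.min())                          # tensor(-3.) in the padded rows
```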
|
https://github.com/huggingface/lerobot/issues/2218
|
open
|
[
"bug",
"question",
"policies"
] | 2025-10-16T06:48:13Z
| 2025-10-17T09:58:49Z
| null |
Tgzz666
|
huggingface/transformers
| 41,640
|
AttributeError: BartTokenizerFast has no attribute image_token. Did you mean: 'mask_token'?
|
### System Info
Ubuntu
### Who can help?
_No response_
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
import torch
import requests
from PIL import Image
from transformers import AutoProcessor, Florence2ForConditionalGeneration
model = Florence2ForConditionalGeneration.from_pretrained(
"microsoft/Florence-2-large",
dtype=torch.bfloat16,
)
processor = AutoProcessor.from_pretrained("microsoft/Florence-2-large")
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg?download=true"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")
task_prompt = "<OD>"
inputs = processor(text=task_prompt, images=image, return_tensors="pt").to(model.device, torch.bfloat16)
generated_ids = model.generate(
**inputs,
max_new_tokens=1024,
num_beams=3,
)
generated_text = processor.batch_decode(generated_ids, skip_special_tokens=False)[0]
image_size = image.size
parsed_answer = processor.post_process_generation(generated_text, task=task_prompt, image_size=image_size)
print(parsed_answer)
```
### Expected behavior
```
raise AttributeError(f"{self.__class__.__name__} has no attribute {key}")
AttributeError: BartTokenizerFast has no attribute image_token. Did you mean: 'mask_token'?
```
|
https://github.com/huggingface/transformers/issues/41640
|
closed
|
[
"bug"
] | 2025-10-16T06:34:02Z
| 2025-10-17T09:00:36Z
| 5
|
conceptofmind
|
huggingface/transformers.js
| 1,439
|
Integration to a CLI application created using PKG
|
### Question
I'm trying to bundle a Node.js CLI tool that uses `@xenova/transformers` into a single executable using [pkg](https://github.com/vercel/pkg).
The build works fine, but when I run the packaged executable, I get this error:
```
Error: Cannot find module '../bin/napi-v3/linux/x64/onnxruntime_binding.node'
Require stack:
- /snapshot/custom-cli/node_modules/onnxruntime-node/dist/binding.js
- /snapshot/custom-cli/node_modules/onnxruntime-node/dist/backend.js
- /snapshot/custom-cli/node_modules/onnxruntime-node/dist/index.js
- /snapshot/custom-cli/dist/custom-cli.cjs
```
**Build command:**
`webpack && pkg -t node18-linux -o custom-cli dist/custom-cli.cjs`
**pkg config:**
```
"pkg": {
"assets": [
"node_modules/onnxruntime-node/bin/napi-v3/**/onnxruntime_binding.node"
]
}
```
**Is it possible to give a custom absolute path for ONNX native bindings (something like this):**
```
import { env } from "@xenova/transformers";
env.backends.onnx.customBindingPath = "/custom-cli/onnxruntime_binding.node";
```
then the tool could:
- Extract prebuilt binaries (onnxruntime_binding.node) from a known location (or GitHub ZIP)
- Pass that custom path to @xenova/transformers / onnxruntime-node
- Load correctly even when packaged by pkg
|
https://github.com/huggingface/transformers.js/issues/1439
|
open
|
[
"question"
] | 2025-10-16T05:30:32Z
| 2025-10-26T23:32:41Z
| null |
JosephJibi
|
huggingface/lerobot
| 2,216
|
gpu memory required to finetune pi05
|
I tried to fine-tune pi05 on an RTX A6000 (48 GB) and got an insufficient-memory error. Does anyone know how much GPU memory is needed to fine-tune a pi05 policy?
Thanks,
|
https://github.com/huggingface/lerobot/issues/2216
|
open
|
[
"question",
"policies",
"performance"
] | 2025-10-16T04:46:21Z
| 2025-12-22T07:42:45Z
| null |
jcl2023
|
pytorch/pytorch
| 165,612
|
RFC: Optionally accept NumPy dtypes in all APIs where torch dtypes are accepted
|
### 🚀 The feature, motivation and pitch
On behalf of the Python Data API Consortium / Python array API standard, to follow up with the conclusion we reached in the September 18 meeting I am filing this RFC for PyTorch stakeholders to consider 🙂
The Python array API standard currently specifies that each array library should make the supported dtypes available under the array library's namespace: https://data-apis.org/array-api/latest/API_specification/data_types.html. It does not specify how the dtype objects should be implemented, however, and in theory each library can have its own dtype object implementation. As a result, questions such as
- How to translate `libA.float32` to `libB.float32`? ([example](https://github.com/data-apis/array-api/issues/972))
- How to write portable library code without constantly checking which/whose dtype object to use?
do not have a definite answer today.
There exist workarounds, of course. For example, one could extract the string name of `libA.float32`, do a module lookup through [`__array_namespace__`](https://data-apis.org/array-api/latest/API_specification/generated/array_api.array.__array_namespace__.html), and then `getattr` to map it to `libB.float32`. But generally speaking the current state remains challenging for writing array-library-agnostic code. This is [one example](https://github.com/NVIDIA/nvmath-python/blob/6bddfa71c39c07804127adeb23f5b0d2168ae38c/nvmath/internal/ndbuffer/package_utils.pyx#L25-L44) from `nvmath-python`, NVIDIA's Python math library.
After examining the Python ecosystem, however, we found that PyTorch is by far the only major Python array/tensor library that does not already use (alias) NumPy dtype objects; NumPy, CuPy, Jax, Dask, ndonnx, dpctl, ... all already do so.
As a result, one arguably "simple" solution to solve such interoperability/portability problems is to simply recognize NumPy dtype objects wherever a PyTorch dtype is accepted, including but not limited to `empty()`, `zeros()`, `.to()`, `.type_as()`, ...
A further step we should evaluate is whether to return `Tensor.dtype` as a NumPy dtype object, if a tensor was created with a NumPy dtype. This might require extra efforts to keep track of the input state, so based on discussions for this RFC we can decide whether we want to include this extra step.
The proposal seeks **optional**, **backward compatible** support for NumPy dtype types (ex: `np.float32`) and objects (ex: `np.dtype(np.float32)`). PyTorch need not introduce NumPy as a required dependency, unless there are other strong reasons; such optional support can be easily hidden behind a try-import-except guard, and this RFC does not mean to introduce any new dependency to PyTorch.
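For concreteness, a hedged sketch of what this would change for library-agnostic code; the helper below is hypothetical and only illustrates today's name-based workaround, and the commented-out line shows the proposed (not current) behavior:
```python
import numpy as np
import torch

def to_torch_dtype(np_dtype):
    # Hypothetical helper illustrating the current workaround: map by dtype name.
    return getattr(torch, np.dtype(np_dtype).name)  # e.g. np.float32 -> torch.float32

x = torch.zeros(4, dtype=to_torch_dtype(np.float32))  # works today

# Under this RFC the mapping would be unnecessary (proposed, not current, behavior):
# y = torch.zeros(4, dtype=np.float32)
```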
The benefits of adding this optional support include:
- Avoid ecosystem fragmentation
- Help the array API from having to standardize yet another protocol for dtype exchange (DLPack is not and should not be the solution)
- Allow writing array-library-agnostic code
- Centralize all efforts in hardware-accelerated exotic (narrow precision) dtypes behind [ml_dtypes](https://github.com/jax-ml/ml_dtypes), a dtype extension based on NumPy's dtype registration system (and therefore provides proper NumPy dtype types and objects)
Related past discussions: https://github.com/pytorch/pytorch/issues/40471, https://github.com/pytorch/pytorch/issues/40568
cc @albanD @rgommers (NumPy/SciPy) @lucascolley @ev-br (SciPy) @kmaehashi (CuPy) @aterrel @rparolin (CUDA Python) @jrhemstad (CUDA C++, aka CCCL) @kkraus14 (CUDA C++/Python) @seberg (NumPy) @brycelelbach (cuTile Python) @samaid (nvmath-python) @ptrblck (NVIDIA) @tqchen (DLPack) @jacobtomlinson (Dask) @jakevdp @hawkinsp (Jax/ml_dtype) @betatim (sklearn) @tomwhite (cubed) @kgryte @asmeurer (array API) for vis.
### Alternatives
_No response_
### Additional context
_No response_
|
https://github.com/pytorch/pytorch/issues/165612
|
open
|
[
"triaged",
"enhancement",
"module: python frontend",
"module: floatx (formerly float8)"
] | 2025-10-16T04:24:52Z
| 2025-11-27T00:42:21Z
| 1
|
leofang
|
vllm-project/vllm
| 26,981
|
[Usage]: Does vllm support use TokensPrompt for Qwen3VL model
|
### Your current environment
```text
The output of `python collect_env.py`
```
### How would you like to use vllm
My truncation strategy differs slightly from the standard approach (I wish to preserve the system prompt and the final suffix, only truncating the middle portion). It seems that the current version of vLLM does not support this, so I attempted to pass pre-processed token IDs along with mm_data as input, for example: TokensPrompt(prompt_token_ids=text[:self.max_model_length] + self.suffix_tokens, multi_modal_data=mm_data, mm_processor_kwargs=video_kwargs).
However, I encountered an error. Could you please advise on the correct way to use this?
<img width="1555" height="351" alt="Image" src="https://github.com/user-attachments/assets/935cdcf5-59ff-480b-bbc5-a6426e48a12c" />
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/26981
|
open
|
[
"usage"
] | 2025-10-16T03:22:09Z
| 2025-10-27T03:33:53Z
| 10
|
afalf
|
huggingface/lerobot
| 2,214
|
Potential Scale Imbalance in smolVLA Embedding Pipeline
|
Hi, I noticed a potential scale inconsistency in the embedding pipeline.
Specifically, state_emb is not normalized, while both img_emb and lang_emb are explicitly scaled by math.sqrt(emb_dim):
https://github.com/huggingface/lerobot/blob/a6ff3cfebb0304f2c378515dd30ea06fff8f473f/src/lerobot/policies/smolvla/modeling_smolvla.py#L591-L601
In practice, the numerical magnitude of img_emb tends to be much higher (often in the hundreds), while lang_emb and state_emb remain in the single-digit range. This discrepancy might cause the image features to dominate during multimodal fusion or attention.
Related code:
https://github.com/huggingface/lerobot/blob/a6ff3cfebb0304f2c378515dd30ea06fff8f473f/src/lerobot/policies/smolvla/modeling_smolvla.py#L561-L566
Suggestion:
Consider adding a LayerNorm after img_emb (or before the multimodal fusion stage) to align the scale across modalities. This could improve stability during training and quantization.
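A schematic sketch of the suggestion (the embedding dimension, shapes, and magnitudes below are illustrative assumptions, not the actual smolVLA values):
```python
import torch
import torch.nn as nn

emb_dim = 960
img_norm = nn.LayerNorm(emb_dim)

img_emb = 100 * torch.randn(2, 64, emb_dim)  # stand-in for large-magnitude image features
print(img_emb.abs().mean())                  # ~80: dominates the other modalities
print(img_norm(img_emb).abs().mean())        # ~0.8: back on a scale comparable to lang/state
```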
—
Reported by Tank @ iMotion AI
|
https://github.com/huggingface/lerobot/issues/2214
|
open
|
[
"question",
"policies"
] | 2025-10-16T02:11:24Z
| 2025-10-17T11:29:36Z
| null |
kkTkk012
|
vllm-project/vllm
| 26,964
|
[Bug]: Issue with Deepseek Reasoning parser with Qwen3 2507 chat templates
|
### Your current environment
<details>
<summary>The output of <code>python collect_env.py</code></summary>
```text
# wget https://raw.githubusercontent.com/vllm-project/vllm/main/vllm/collect_env.py
# For security purposes, please feel free to check the contents of collect_env.py before running it.
python collect_env.py
--2025-10-15 17:33:01-- https://raw.githubusercontent.com/vllm-project/vllm/main/vllm/collect_env.py
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.110.133, 185.199.108.133, 185.199.109.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.110.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 28050 (27K) [text/plain]
Saving to: ‘collect_env.py.2’
collect_env.py.2 100%[===================================>] 27.39K --.-KB/s in 0s
2025-10-15 17:33:01 (65.0 MB/s) - ‘collect_env.py.2’ saved [28050/28050]
# # sh: 8: python: not found
```
</details>
### 🐛 Describe the bug
I'm running vLLM as a Docker container on an Unraid server, as a backend for the Open WebUI chat interface. The issue I see is that the reasoning block in Open WebUI closes too early. Based on this discussion on the Open WebUI GitHub, I think it is caused by the DeepSeek parser that the model card recommends. See this link: https://github.com/open-webui/open-webui/pull/16687
Here is an example of the issue that I face:
<img width="1936" height="807" alt="Image" src="https://github.com/user-attachments/assets/eb2f6452-3df0-49f0-a1c5-5b99b56f578a" />
I think this is the place to raise this issue. Thanks so much!
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/26964
|
open
|
[
"bug"
] | 2025-10-16T00:39:12Z
| 2025-10-20T17:47:02Z
| 1
|
MikeNatC
|
pytorch/pytorch
| 165,590
|
RuntimeError: non-positive groups is not supported
|
### 🐛 Describe the bug
torch==2.7.1
I got a RuntimeError: non-positive groups is not supported while using conv1d in my model. I tried adding more logs and asserts to find out what is going wrong, but it didn't help. Even when I set the groups parameter to 128, the error remains.
From the output I got the sizes of the input tensors:
```
torch.Size([4, 1, 128]) torch.Size([128, 1, 4]) torch.Size([128]) 4
```
code below
```
assert hasattr(self, 'd_inner'), "self.d_inner is not defined!"
assert self.d_inner > 0, f"self.d_inner must be positive, got {self.d_inner}"
if conv_weight.dim() == 3:
    print(f'{x_proj_out.shape} {conv_weight.shape} {conv_bias.shape} {self.d_conv}')
    assert 128 > 0, "groups must be > 0"
    x_conv = F.conv1d(
        x_proj_out.transpose(1, 2),
        conv_weight,  # (d_inner, 1, d_conv)
        bias=conv_bias,  # (d_inner,)
        padding=self.d_conv - 1,
        groups=128  # self.d_inner
    )
```
### Versions
Collecting environment information...
PyTorch version: 2.7.1+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Debian GNU/Linux 12 (bookworm) (x86_64)
GCC version: (Debian 12.2.0-14) 12.2.0
Clang version: Could not collect
CMake version: version 3.25.1
Libc version: glibc-2.36
Python version: 3.13.1 | packaged by Anaconda, Inc. | (main, Dec 11 2024, 16:29:23) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.1.0-34-amd64-x86_64-with-glibc2.36
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-PCIE-40GB
GPU 1: NVIDIA A100-PCIE-40GB
GPU 2: NVIDIA A100-PCIE-40GB
GPU 3: NVIDIA A100-PCIE-40GB
GPU 4: NVIDIA A100-PCIE-40GB
GPU 5: NVIDIA A100-PCIE-40GB
GPU 6: NVIDIA A100-PCIE-40GB
Nvidia driver version: 570.133.20
cuDNN version: Could not collect
Is XPU available: False
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 256
On-line CPU(s) list: 0-255
Vendor ID: AuthenticAMD
Model name: AMD EPYC 7742 64-Core Processor
CPU family: 23
Model: 49
Thread(s) per core: 2
Core(s) per socket: 64
Socket(s): 2
Stepping: 0
Frequency boost: enabled
|
https://github.com/pytorch/pytorch/issues/165590
|
open
|
[
"needs reproduction",
"module: nn",
"triaged"
] | 2025-10-15T22:44:54Z
| 2025-10-17T18:59:35Z
| 1
|
st085318
|
vllm-project/vllm
| 26,949
|
[Bug]: RuntimeError: CUDA driver error: invalid device ordinal when symmetric memory (symm_mem) is enabled in multi-GPU vLLM setup with 4×H100 PCIe
|
### My current environment
Environment:
Model: RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic
vLLM Version: latest main (installed via pip)
Hardware: 4× NVIDIA H100 PCIe (80GB)
Driver: 550.xx
CUDA: 12.2
PyTorch: 2.4.0
OS: Ubuntu 22.04
Launch Command:
```
python3 -m vllm.entrypoints.api_server \
  --model /ephemeral/huggingface/models--RedHatAI--Llama-4-Scout-17B-16E-Instruct-FP8-dynamic/snapshots/... \
  --tensor-parallel-size 4 \
  --gpu-memory-utilization 0.85 \
  --kv-cache-dtype fp8_e4m3 \
  --max-model-len 4000000 \
  --max-num-seqs 16 \
  --enable-prefix-caching \
  --kv-events-config '{"enable_kv_cache_events": true, "publisher": "zmq", "endpoint": "tcp://*:5557"}'
```
### bug
```
RuntimeError: CUDA driver error: invalid device ordinal
(EngineCore_DP0 pid=11546) ERROR [symm_mem.py:88] handle = torch_symm_mem.rendezvous(self.buffer, self.group.group_name)
(EngineCore_DP0 pid=11546) ERROR WorkerProc failed to start
RuntimeError: Engine core initialization failed. See root cause above. Failed core proc(s): {'EngineCore_DP0': 1}
```
Behavior:
- When symm_mem is enabled (default) → fails with invalid device ordinal
- When symm_mem is disabled via --disable-symm-mem:
  - ✅ vLLM engine starts
  - ❌ No KV cache event logs (BlockStored, BlockRemoved, etc.)
  - ❌ No prefix cache hit metrics

What I've tried:
- Verified all 4 GPUs visible via nvidia-smi
- Confirmed correct CUDA device indexing
- Reduced tensor-parallel-size to 2 → same error
- Checked for NCCL initialization issues — none
- Manually set CUDA_VISIBLE_DEVICES=0,1,2,3
- Rebuilt PyTorch + vLLM from source with USE_SYMMETRIC_MEMORY=1 — same result
Question:
Is there a known compatibility issue between symmetric memory (torch_symm_mem) and H100 PCIe devices in multi-GPU setups?
If so, is there a fallback mechanism to preserve KV event publishing (--kv-events-config) when symmetric memory is disabled?
Thanks for looking into it.
|
https://github.com/vllm-project/vllm/issues/26949
|
open
|
[
"bug"
] | 2025-10-15T22:08:34Z
| 2025-12-25T03:42:49Z
| 2
|
vadapallij
|
pytorch/pytorch
| 165,578
|
Out of tree backend documentation does not seem accurate
|
### 📚 The doc issue
Looking at the "How does this mechanism apply to out-of-tree extensions" section of [the autoloading tutorial](https://docs.pytorch.org/tutorials/unstable/python_extension_autoload.html#how-to-apply-this-mechanism-to-out-of-tree-extensions), it looks to me like importing setting a backend `torch_foo = torch_foo:_autoload` is going to automagically attach either the `torch_foo` or `torch_foo.foo` module to `torch` directly, since there is no code in there that manipulates the `torch` namespace, but when I try to do this, like in [this MWE](https://github.com/pganssle-google/torch-backend-mwe), it doesn't work.
### Suggest a potential alternative/fix
Either this documentation is inaccurate and should be made accurate to show how to attach your backend to the `torch` namespace or maybe the problem is that the namespace attachment is smuggled in under the assumption that `foo` is "a backend" (and the assumption is that backends show up in that namespace). Preferably the relevant code would be extracted into this tutorial to show how it works, but failing that it would be nice to get a link to something showing the essential elements of "a backend" that make this code example work.
cc @svekars @sekyondaMeta @AlannaBurke @NmomoN @mengpenghui @fwenguang @cdzhan @1274085042 @PHLens @albanD
|
https://github.com/pytorch/pytorch/issues/165578
|
open
|
[
"module: docs",
"triaged",
"module: PrivateUse1"
] | 2025-10-15T20:49:12Z
| 2025-10-17T04:30:01Z
| 1
|
pganssle-google
|
pytorch/pytorch
| 165,577
|
CI: What is the purpose of `slow.yml`
|
### 🐛 Describe the bug
What is the purpose of the `slow.yml` job, when we can shard more and can probably rely on TD to skip slow tests if they are not needed?
In the past, the `slow.yml` job was a way of keeping time-to-signal low while still running some tests post-commit, but now that we have TD we can probably get rid of the concept of slow tests and just run them conditionally on TD's decision.
At the very least, I think this job should be viable/strict blocking, as it just runs the subset of tests from pull requests that are decorated with `@slowTest` or take more than 90 seconds to finish.
### Versions
CI
cc @seemethere @pytorch/pytorch-dev-infra
|
https://github.com/pytorch/pytorch/issues/165577
|
open
|
[
"module: ci",
"triaged",
"needs research"
] | 2025-10-15T20:33:34Z
| 2025-10-16T09:24:45Z
| null |
malfet
|
vllm-project/vllm
| 26,940
|
[Feature]: Support `inf` value for burstiness in benchmarks
|
### 🚀 The feature, motivation and pitch
In the benchmarks, the burstiness value is used in a gamma distribution to sample the delays between consecutive requests.
```
theta = 1.0 / (current_request_rate * burstiness)
delay_ts.append(np.random.gamma(shape=burstiness, scale=theta))
```
[Theoretically](https://en.wikipedia.org/wiki/Gamma_distribution) (and this is also what is observed in practice), the generated delays have mean `1.0 / current_request_rate`, and the spread is controlled by the burstiness. When the burstiness is high, we observe lower variance in the delay values, all values being closer to the mean `1.0 / current_request_rate`. When burstiness tends to infinity, we should observe a single deterministic delay of `1.0 / current_request_rate`. In practice, the `np.random.gamma` function generates `nan` as a result, so we need to manually condition on the `burstiness` value and append `1.0 / current_request_rate` to the list of delays when burstiness becomes infinite.
See attached image as mathematical proof
<img width="1323" height="1672" alt="Image" src="https://github.com/user-attachments/assets/455cfd00-ea8f-44c8-874f-7fdac4faae6d" />
### Alternatives
_No response_
### Additional context
_No response_
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/26940
|
closed
|
[
"feature request"
] | 2025-10-15T19:39:03Z
| 2025-11-03T18:33:19Z
| 0
|
sducouedic
|
vllm-project/vllm
| 26,914
|
[Usage]: Why are no communication operators visible in the collected profiling?
|
### Your current environment
```text
The output of `python collect_env.py`
```
Using `llm.start_profile` and `llm.stop_profile`, I collected a profile, but I cannot see any communication operators in `kernel_details`.
### How would you like to use vllm
I want to run inference of a [specific model](put link here). I don't know how to integrate it with vllm.
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/26914
|
open
|
[
"usage"
] | 2025-10-15T13:38:14Z
| 2025-10-15T13:38:14Z
| 0
|
sheep94lion
|
pytorch/rl
| 3,197
|
[Question] How to handle MultiDiscrete action spaces in TorchRL
|
I have created a custom Parallel API PettingZoo environment with **MultiDiscrete action spaces**. The _env.action_spec()_ function succeeds.
I am using the **Multi-Agent PPO tutorial of TorchRL**, but I’m struggling to understand how to modify the architecture so it supports **MultiDiscrete action spaces**. Specifically, I’d like to know how to correctly adapt the `MultiAgentMLP`, `TensorDictModule`, and `ProbabilisticActor` so that the policy network outputs a `MultiDiscrete` (or equivalently, `MultiCategorical`) action distribution for each agent.
Should I create as many `ProbabilisticActor` modules as the length of the MultiDiscrete action space? In the case where a single `ProbabilisticActor` module is used, which distribution class should replace `Categorical` to support a MultiDiscrete action space? Is there an existing script or tutorial in TorchRL that demonstrates how to handle `MultiDiscrete` action spaces (or `MultiCategorical` distributions) in a multi-agent setup?
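For illustration, a framework-agnostic sketch of what a multi-categorical head amounts to (plain PyTorch, not the TorchRL API; `nvec`, shapes, and sizes are made-up examples):
```python
import torch
from torch import nn
from torch.distributions import Categorical

nvec = [3, 5, 2]                      # assumed MultiDiscrete([3, 5, 2]) action space
obs_dim, hidden = 16, 64

head = nn.Sequential(
    nn.Linear(obs_dim, hidden), nn.Tanh(),
    nn.Linear(hidden, sum(nvec)),     # all sub-action logits concatenated
)

obs = torch.randn(4, obs_dim)         # batch of 4 observations
logits = head(obs).split(nvec, dim=-1)
dists = [Categorical(logits=l) for l in logits]                           # one Categorical per sub-action
action = torch.stack([d.sample() for d in dists], dim=-1)                 # shape [4, 3]
log_prob = sum(d.log_prob(a) for d, a in zip(dists, action.unbind(-1)))   # shape [4]
```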
|
https://github.com/pytorch/rl/issues/3197
|
open
|
[] | 2025-10-15T11:56:00Z
| 2025-10-16T19:38:12Z
| null |
AnastasiaPsarou
|
vllm-project/vllm
| 26,903
|
[Usage]: vLLM for video input
|
### Your current environment
```text
The output of `python collect_env.py`
```
### How would you like to use vllm
I want to run inference of qwen2.5-vl or qwen2.5-omni.
When I convert the video to base64 for API calls (e.g., in OpenAI format), I found that vLLM seems to use all of the video frames, judging by the number of prompt tokens.
Is there any parameter similar to fps to control the sampling rate?
Or do I need to subsample the video externally in advance, save it as a video, and then convert it to base64?
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/26903
|
open
|
[
"usage"
] | 2025-10-15T09:29:23Z
| 2025-12-11T03:26:33Z
| 6
|
King-king424
|
huggingface/diffusers
| 12,492
|
module transformers has no attribute CLIPFeatureExtractor
|
### System Info
latest main
### Who can help?
@SunMarc
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
from diffusers import AnimateDiffPipeline
pipe = AnimateDiffPipeline.from_pretrained("emilianJR/epiCRealism")
```
error:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/opt/venv/lib/python3.12/site-packages/huggingface_hub/utils/_validators.py", line 89, in _inner_fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/jiqing/diffusers/src/diffusers/pipelines/pipeline_utils.py", line 1024, in from_pretrained
loaded_sub_model = load_sub_model(
^^^^^^^^^^^^^^^
File "/home/jiqing/diffusers/src/diffusers/pipelines/pipeline_loading_utils.py", line 752, in load_sub_model
class_obj, class_candidates = get_class_obj_and_candidates(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jiqing/diffusers/src/diffusers/pipelines/pipeline_loading_utils.py", line 419, in get_class_obj_and_candidates
class_obj = getattr(library, class_name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jiqing/transformers/src/transformers/utils/import_utils.py", line 1920, in __getattr__
raise AttributeError(f"module {self.__name__} has no attribute {name}")
AttributeError: module transformers has no attribute CLIPFeatureExtractor
```
### Expected behavior
Transformers has deprecated the FeatureExtractor classes in favor of ImageProcessor classes for image preprocessing. How should we handle models that already set a FeatureExtractor on the model hub, like [emilianJR/epiCRealism](https://huggingface.co/emilianJR/epiCRealism/blob/main/feature_extractor/preprocessor_config.json#L11)?
|
https://github.com/huggingface/diffusers/issues/12492
|
closed
|
[
"bug"
] | 2025-10-15T08:26:05Z
| 2025-11-03T05:02:54Z
| 3
|
jiqing-feng
|
pytorch/xla
| 9,678
|
Heterogeneous execution across multiple PJRT clients (GPU + custom accelerator)
|
## ❓ Questions and Help
Hi, I’m developing a PJRT plugin for a custom accelerator, and I’m exploring whether PyTorch/XLA can support heterogeneous execution across multiple PJRT clients — for example, splitting a model or HLO module between GPU, CPU, and the custom accelerator.
Concretely, I’d like to enable availability-aware, cost-driven partitioning so that:
1. If only CPU + accelerator are available, the model runs using those.
2. If a GPU is also available, certain subgraphs can automatically offload to the accelerator when it’s beneficial.
I have a few questions:
Does PyTorch/XLA or its PJRT integration layer support running a single model using multiple PJRT clients/devices (e.g., GPU + custom accelerator) at the same time?
If not, is there any supported or recommended way to partition the computation graph manually and execute subgraphs on different PJRT backends?
Would implementing this orchestration externally (via multiple PJRT clients) be more realistic today, or can PyTorch/XLA’s runtime be extended to handle multi-client coordination?
Any pointers to examples, design discussions, or relevant code paths would be really helpful.
Thanks!
|
https://github.com/pytorch/xla/issues/9678
|
closed
|
[
"question"
] | 2025-10-15T02:56:43Z
| 2025-10-16T14:52:40Z
| null |
milinbhade1214
|
vllm-project/vllm
| 26,858
|
[RFC]: Top-level CLI interface for KV cache offloading
|
### Motivation.
CPU (and tier-2 storage) offloading is an important feature in many cases (multi-round QA, document analysis, agent workflow, and reinforcement learning). With the recent advancement in the offloading connector, we already have the vLLM native CPU offloading implemented via the connector API. Also, there are multiple community efforts to provide other offloading implementations (e.g., LMCache, Nixl storage, mooncake) via the same set of APIs.
However, there is no clear documentation about how to configure the CPU offloading from the user's perspective. Right now, in order to enable CPU offloading, the user needs to pass a JSON string to `--kv-transfer-config`, which may create a huge mental barrier for new users. Therefore, it would be better to have a simple & clear user interface for users to enable CPU offloading.
### Proposed Change.
This proposal contains two new command-line arguments:
- `--kv-offloading-size`: a numeric value to control a global offloading buffer size (in GB). When TP > 1, this number should be the total size summed across all the TP ranks. (An alternative is the buffer size for each TP rank.)
- `--kv-offloading-backend`: a string that specifies which offloading backend to use, such as "native", "lmcache", "mooncake", "3fs", or "nixl".
This will give enough clarity to most of the users who want to use the offloading feature, and should be extensible enough to new offloading backends and tier-2 storage.
## Required changes
To implement this proposal, the following things are needed:
- Add logic to parse the new CLI argument and store it into vllm config.
- Add a new module to translate the `--kv-offloading-size` and `--kv-offloading-backend` to the corresponding KV connector config.
- Add the documentation to the vLLM user guide.
### Feedback Period.
1~2 weeks
### CC List.
@simon-mo @orozery @njhill
### Any Other Things.
_No response_
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/26858
|
closed
|
[
"RFC"
] | 2025-10-15T00:11:15Z
| 2025-11-01T07:17:08Z
| 8
|
ApostaC
|
huggingface/diffusers
| 12,485
|
How to enable Context Parallelism for training
|
Hi @a-r-r-o-w , I would like to ask you for tips on using Context Parallelism for distributed training.
**Is your feature request related to a problem? Please describe.**
Here is the minimal code for adapting Context Parallelism to diffusion model training:
```python
# Diffusers Version: 0.36.0.dev0
from diffusers.models._modeling_parallel import ContextParallelConfig
# I have 8 GPUs in total
cp_config = ContextParallelConfig(ring_degree=1, ulysses_degree=8)
flux_transformer.enable_parallelism(config=cp_config)
loss = train(flux_transformer)
accelerator.backward(loss)
grad_norm = accelerator.clip_grad_norm_(flux_transformer.parameters(), args.max_grad_norm)
```
However, there is a bug:
```bash
[rank5]: Traceback (most recent call last):
[rank5]: File "/home/code/diffusers/flux/sft_flux.py", line 1494, in <module>
[rank5]: main_with_cleanup(args)
[rank5]: File "/home/code/diffusers/flux/sft_flux.py", line 1460, in main_with_cleanup
[rank5]: main(args)
[rank5]: File "/home/code/diffusers/flux/sft_flux.py", line 1216, in main
[rank5]: grad_norm = accelerator.clip_grad_norm_(flux_transformer.parameters(), args.max_grad_norm)
[rank5]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank5]: File "/home/.local/lib/python3.11/site-packages/accelerate/accelerator.py", line 2863, in clip_grad_norm_
[rank5]: return torch.nn.utils.clip_grad_norm_(
[rank5]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank5]: File "/home/.local/lib/python3.11/site-packages/torch/nn/utils/clip_grad.py", line 36, in _no_grad_wrapper
[rank5]: return func(*args, **kwargs)
[rank5]: ^^^^^^^^^^^^^^^^^^^^^
[rank5]: File "/home/.local/lib/python3.11/site-packages/torch/nn/utils/clip_grad.py", line 222, in clip_grad_norm_
[rank5]: _clip_grads_with_norm_(parameters, max_norm, total_norm, foreach)
[rank5]: File "/home/.local/lib/python3.11/site-packages/torch/nn/utils/clip_grad.py", line 36, in _no_grad_wrapper
[rank5]: return func(*args, **kwargs)
[rank5]: ^^^^^^^^^^^^^^^^^^^^^
[rank5]: File "/home/.local/lib/python3.11/site-packages/torch/nn/utils/clip_grad.py", line 155, in _clip_grads_with_norm_
[rank5]: clip_coef = max_norm / (total_norm + 1e-6)
[rank5]: ~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~
[rank5]: File "/home/.local/lib/python3.11/site-packages/torch/_tensor.py", line 39, in wrapped
[rank5]: return f(*args, **kwargs)
[rank5]: ^^^^^^^^^^^^^^^^^^
[rank5]: File "/home/.local/lib/python3.11/site-packages/torch/_tensor.py", line 1101, in __rdiv__
[rank5]: return self.reciprocal() * other
[rank5]: ^^^^^^^^^^^^^^^^^
[rank5]: File "/home/.local/lib/python3.11/site-packages/torch/_compile.py", line 53, in inner
[rank5]: return disable_fn(*args, **kwargs)
[rank5]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank5]: File "/home/.local/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py", line 929, in _fn
[rank5]: return fn(*args, **kwargs)
[rank5]: ^^^^^^^^^^^^^^^^^^^
[rank5]: File "/home/.local/lib/python3.11/site-packages/torch/distributed/tensor/_api.py", line 350, in __torch_dispatch__
[rank5]: return DTensor._op_dispatcher.dispatch(
[rank5]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank5]: File "/home/.local/lib/python3.11/site-packages/torch/distributed/tensor/_dispatch.py", line 166, in dispatch
[rank5]: self.redistribute_local_args(
[rank5]: File "/home/.local/lib/python3.11/site-packages/torch/distributed/tensor/_dispatch.py", line 303, in redistribute_local_args
[rank5]: resharded_local_tensor = redistribute_local_tensor(
[rank5]: ^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank5]: File "/home/.local/lib/python3.11/site-packages/torch/distributed/tensor/_redistribute.py", line 208, in redistribute_local_tensor
[rank5]: new_local_tensor = partial_spec._reduce_value(
[rank5]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank5]: File "/home/.local/lib/python3.11/site-packages/torch/distributed/tensor/_ops/_math_ops.py", line 126, in _reduce_value
[rank5]: reduced_tensor = super()._reduce_value(tensor, mesh, mesh_dim)
[rank5]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank5]: File "/home/.local/lib/python3.11/site-packages/torch/distributed/tensor/placement_types.py", line 679, in _reduce_value
[rank5]: return funcol.all_reduce(
[rank5]: ^^^^^^^^^^^^^^^^^^
[rank5]: File "/home/.local/lib/python3.11/site-packages/torch/distributed/_functional_collectives.py", line 175, in all_reduce
[rank5]: group_name = _resolve_group_name(group, tag)
[rank5]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank5]: File "/home/.local/lib/python3.11/site-packages/torch/distributed/_functional_collectives.py", line 783, in _resolve_group_name
[rank5]: return dmesh._dim_group_names[dim]
[rank5]: ^^^^^^^^^^^^^^^^^^^^^^
[rank5]: AttributeError: 'DeviceMesh' obj
|
https://github.com/huggingface/diffusers/issues/12485
|
closed
|
[] | 2025-10-14T21:48:35Z
| 2025-10-15T20:33:30Z
| null |
liming-ai
|
vllm-project/vllm
| 26,840
|
[Doc]: Update AWQ Guide
|
### 📚 The doc issue
Situation: AutoAWQ functionality was adopted by llm-compressor, but the vLLM [docs](https://docs.vllm.ai/en/latest/features/quantization/auto_awq.html) still point to AutoAWQ, which is deprecated.
### Suggest a potential alternative/fix
1) Update the [AutoAWQ guide](https://github.com/vllm-project/vllm/blob/main/docs/features/quantization/auto_awq.md) to use the [llm-compressor](https://github.com/vllm-project/llm-compressor/tree/2a6a0a34c8a57b6090b5fbac9c0659edf982185c/examples/awq) apis/flow
2) Make sure to also update links in [quantization doc](https://github.com/vllm-project/vllm/blob/main/docs/features/quantization/README.md)
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/26840
|
closed
|
[
"documentation"
] | 2025-10-14T20:02:21Z
| 2025-11-03T15:39:12Z
| 0
|
HDCharles
|
vllm-project/vllm
| 26,838
|
[Performance]: RTX 6000 PRO - FP8 in sglang is faster
|
### Proposal to improve performance
Can we have a discussion about sglang FP8 performance vs. vLLM performance?
I'm able to get 133 tokens/sec with sglang on GLM-4.5-Air-FP8 vs. 78 tokens/sec in vLLM.
```PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True USE_TRITON_W8A8_FP8_KERNEL=1 SGL_ENABLE_JIT_DEEPGEMM=0 python -m sglang.launch_server --model /mnt/GLM-4.5-FP8/ --tp 4 --host 0.0.0.0 --port 5000 --mem-fraction-static 0.93 --context-length 128000 --enable-metrics --attention-backend flashinfer --tool-call-parser glm45 --reasoning-parser glm45 --served-model-name glm-4.5-air --chunked-prefill-size 8092 --enable-mixed-chunk --cuda-graph-max-bs 32 --kv-cache-dtype fp8_e5m2```
It is using Triton.
I'm not able to achieve the same speed with vLLM with any method (neither FlashInfer nor Triton, etc.); the maximum is always around 78 tokens/sec.
1) Any idea how to achieve the same 133 tokens/sec in vLLM using Triton and the same configuration as in sglang?
2) Is it by design that the CUTLASS path is not as fast as Triton?
### Report of performance regression
_No response_
### Misc discussion on performance
_No response_
### Your current environment (if you think it is necessary)
_No response_
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/26838
|
open
|
[
"performance"
] | 2025-10-14T19:41:14Z
| 2025-12-29T14:52:57Z
| 10
|
voipmonitor
|
pytorch/pytorch
| 165,444
|
AOTInductor not updating buffers inplace
|
Hey all,
I'd like to double-check whether updating buffers in place is currently supported with AOTInductor. Based on the answers on this issue https://github.com/pytorch/pytorch/issues/159124 I think it should be, but it does not seem to work when I load the module from a file. If not, is there any workaround we can use at this time (short of making the function pure)? I'm currently on libtorch 2.8.0.
```
import torch

class DummyModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.register_buffer("counter", torch.ones(1))

    def forward(self):
        self.counter = self.counter + 1.0
        return self.counter

ep = torch.export.export(DummyModel(), tuple())
so_path = torch._inductor.aoti_compile_and_package(
    ep,
    inductor_configs={"always_keep_tensor_constants": True},
)
loaded_module = torch._inductor.aoti_load_package(so_path)

print(ep.module()())
print(ep.module()())
print(ep.module()())
print(loaded_module())
print(loaded_module())
print(loaded_module())
```
```
tensor([2.])
tensor([3.])
tensor([4.])
tensor([2.])
tensor([2.])
tensor([2.])
```
@desertfire @ezyang
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 @desertfire @chenyang78 @yushangdi @benjaminglass1
|
https://github.com/pytorch/pytorch/issues/165444
|
open
|
[
"oncall: pt2",
"oncall: export",
"module: aotinductor"
] | 2025-10-14T16:48:34Z
| 2025-10-19T23:42:43Z
| 2
|
olarucatalin
|
vllm-project/vllm
| 26,817
|
[Feature]: Add process_weights_after_loading to AttentionImpl
|
### 🚀 The feature, motivation and pitch
Currently, in the `Attention` layer, we check if `process_weights_after_loading` exists and then call it conditionally, and after that we apply flashinfer-specific logic.
Instead, we should just add a `process_weights_after_loading` method to AttentionImpl (no-op) by default, call it from `Attention.process_weights_after_loading`, and override it in `FlashInferAttentionImpl`.
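A schematic sketch of the proposed structure (class and method names follow the description above; the exact signatures are assumptions, and this is not the actual vLLM code):
```python
class AttentionImpl:
    def process_weights_after_loading(self) -> None:
        # Default implementation: no-op.
        pass


class FlashInferAttentionImpl(AttentionImpl):
    def process_weights_after_loading(self) -> None:
        # FlashInfer-specific post-load processing would live here.
        ...


class Attention:
    def __init__(self, impl: AttentionImpl) -> None:
        self.impl = impl

    def process_weights_after_loading(self) -> None:
        # No hasattr check needed: the base class guarantees the method exists.
        self.impl.process_weights_after_loading()
```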
### Alternatives
_No response_
### Additional context
https://github.com/vllm-project/vllm/pull/23016#discussion_r2414787224
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/26817
|
closed
|
[
"help wanted",
"good first issue",
"feature request"
] | 2025-10-14T15:59:54Z
| 2025-10-16T15:02:31Z
| 2
|
ProExpertProg
|
vllm-project/vllm
| 26,806
|
[Usage]: MCP-USE with VLLM gpt-oss:20b via ChatOpenAI
|
### Your current environment
```text
The output of `python collect_env.py`
```
### How would you like to use vllm
I am trying to create an agent using gpt-oss-20B with mcp-use.
Most of the time the model returns "Agent completed the task successfully.", and only sometimes the actual output that is required.
### code
`vllm serve openai/gpt-oss-20b --max-model-len 100000 --gpu-memory-utilization 0.9 --port 8000 --tool-call-parser openai --enable-auto-tool-choice`
```python
client = MCPClient.from_dict(config)
llm = ChatOpenAI(
    model="openai/gpt-oss-20b",
    base_url="http://127.0.0.1:8000/v1",
    api_key="not-needed",
    temperature=0.8,
    max_tokens=2048,
)
agent = MCPAgent(llm=llm, client=client, max_steps=30)
```
I am also raising this on mcp-use.
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/26806
|
open
|
[
"usage"
] | 2025-10-14T13:00:38Z
| 2025-11-20T06:33:29Z
| 2
|
Tahirc1
|
pytorch/pytorch
| 165,428
|
Using NCCL for Global Group and MPI for Sub-Groups in torch.distributed
|
### 🚀 The feature, motivation and pitch
I want to mix NCCL and MPI backends in the `torch.distributed` package. Does torch.distributed support using NCCL as the backend when initializing the global process group with `torch.distributed.init_process_group()`, and then using MPI as the backend when creating a sub-process group with `torch.distributed.new_group()`? Or is the opposite operation supported? I encountered errors when I tried this myself.
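For reference, a sketch of the mixed-backend setup being asked about (assuming a standard `torchrun` launch; whether this combination is supported is exactly the question):
```python
import torch.distributed as dist

# Global process group on NCCL.
dist.init_process_group(backend="nccl")

# Sub-group on MPI (the step the question is about).
sub_group = dist.new_group(ranks=[0, 1], backend="mpi")
```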
### Alternatives
_No response_
### Additional context
_No response_
|
https://github.com/pytorch/pytorch/issues/165428
|
closed
|
[] | 2025-10-14T09:26:47Z
| 2025-10-15T13:44:42Z
| 11
|
cq-eng
|
vllm-project/vllm
| 26,786
|
[Usage]: cuda12.8 docker 0.11.0 Error occurs when launching the model, NCCL error: unhandled cuda error.
|
When I use only a single graphics card, the system can start up normally.
Below are Docker configuration files, logs, and environment information.
I encountered this issue when upgrading from version 10.1.1 to 10.2.
[The system generates an error when using dual graphics cards; version 10.1.1 functions correctly, but version 10.2 triggers an error upon execution.](https://github.com/vllm-project/vllm/issues/25813)
### Your current environment
```text
# vllm collect-env
INFO 10-14 19:07:58 [__init__.py:216] Automatically detected platform cuda.
Collecting environment information...
==============================
System Info
==============================
OS : Ubuntu 22.04.5 LTS (x86_64)
GCC version : (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version : Could not collect
CMake version : version 4.1.0
Libc version : glibc-2.35
==============================
PyTorch Info
==============================
PyTorch version : 2.8.0+cu128
Is debug build : False
CUDA used to build PyTorch : 12.8
ROCM used to build PyTorch : N/A
==============================
Python Environment
==============================
Python version : 3.12.11 (main, Jun 4 2025, 08:56:18) [GCC 11.4.0] (64-bit runtime)
Python platform : Linux-6.6.87.2-microsoft-standard-WSL2-x86_64-with-glibc2.35
==============================
CUDA / GPU Info
==============================
Is CUDA available : True
CUDA runtime version : 12.8.93
CUDA_MODULE_LOADING set to : LAZY
GPU models and configuration :
GPU 0: NVIDIA GeForce RTX 4090
GPU 1: NVIDIA GeForce RTX 4090
Nvidia driver version : 571.96
cuDNN version : Could not collect
HIP runtime version : N/A
MIOpen runtime version : N/A
Is XNNPACK available : True
==============================
CPU Info
==============================
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 40
On-line CPU(s) list: 0-39
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 20
Socket(s): 1
Stepping: 4
BogoMIPS: 4788.75
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pdcm pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch pti ssbd ibrs ibpb stibp fsgsbase bmi1 hle avx2 smep bmi2 erms invpcid rtm avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves md_clear flush_l1d arch_capabilities
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 640 KiB (20 instances)
L1i cache: 640 KiB (20 instances)
L2 cache: 20 MiB (20 instances)
L3 cache: 27.5 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-39
Vulnerability Gather data sampling: Unknown: Dependent on hypervisor status
Vulnerability Itlb multihit: KVM: Mitigation: VMX unsupported
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS; IBPB conditional; STIBP conditional; RSB filling; PBRSB-eIBRS Not affected; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; Clear CPU buffers; SMT Host state unknown
==============================
Versions of relevant libraries
==============================
[pip3] flashinfer-python==0.3.1
[pip3] numpy==
|
https://github.com/vllm-project/vllm/issues/26786
|
closed
|
[
"usage"
] | 2025-10-14T09:01:39Z
| 2025-11-07T17:17:32Z
| 3
|
ooodwbooo
|
pytorch/pytorch
| 165,419
|
[RFC] Make PyTorch Expandable Segments interoperate with CUDA VMM-based allocators (NCCL ncclMemAlloc)
|
## Summary
PyTorch’s expandable segments reduce fragmentation by using CUDA Virtual Memory Management (VMM) to grow/shrink virtual segments instead of relying on cudaMalloc blocks.
Separately, NCCL’s user buffer registration—including NVLS, General (intra-node) buffer registration, and Window Registration—expects buffers to come from VMM-backed allocators (e.g., ncclMemAlloc or any allocator that produces VMM handles with the documented properties). These registrations lower memory pressure and [can improve overlap/latency for collectives](https://docs.nvidia.com/deeplearning/nccl/user-guide/docs/usage/bufferreg.html).
Today these two don't compose in PyTorch (#147851): enabling expandable segments can prevent custom/VMM allocators from being used (breaking NCCL registration flows), and the current NCCL mem-pool registration path is brittle when expandable segments is on.
This RFC proposes incremental changes so expandable segments and VMM-based allocators interoperate cleanly, enabling users to (a) keep expandable segments on to reduce fragmentation and (b) opt into NCCL registration (NVLS / General / Window) for zero-copy and better communication–computation overlap.
Solving this problem could lead to other beneficial outcomes; #158029 may be related.
## Design overview
We propose two compatible tracks. Plan 2 is minimal risk and unblocks NCCL users quickly; Plan 1 is a deeper integration that generalizes expandable segments to any VMM source, including ncclMemAlloc.
### Plan 2 (near-term): Make NCCL registerMemPool fully work when expandable segments is on
When users register tensors with ncclCommRegister, it should just work with expandable segments enabled.
[NCCL’s docs](https://docs.nvidia.com/deeplearning/nccl/user-guide/docs/usage/bufferreg.html#mem-allocator) explicitly allow buffer registration with any VMM-based allocator so long as the allocation/handle/align rules are met (recommended granularity, shared handle types for NVLS, etc.). Expandable segments already use CUDA VMM under the hood; the missing piece is to ensure the buffers we hand to NCCL truly originate from CUDACachingAllocator that NCCL can recognize/retain, and that our allocator bookkeeping & snapshots remain consistent with expandable segments.
We currently depend on c10::cuda::CUDACachingAllocator::snapshot to dump segments for ncclCommRegister import. We must ensure this process functions correctly.
### Plan 1 (mid-term): Let expandable segments adopt external VMM allocations (generalize expandable segments to any VMM allocator)
If users (or plugins) allocate memory via ncclMemAlloc or another VMM allocator, expandable segments can “import” those physical allocations by retaining the underlying VMM handle and mapping them into expandable segments virtual address ranges. Then expandable segments can manage growth/shrink and all the usual segment lifecycle while keeping NCCL registration happy.
We can use cuMemRetainAllocationHandle to recover the CUmemGenericAllocationHandle from any mapped address the external allocator returned. The API guarantees the returned handle equals the one used for mapping; any address within the mapped range works. Once we have the handle, expandable segments can unmap/remap subranges into its own reserved VA space (cuMemAddressReserve, cuMemMap, cuMemSetAccess) and track page-level occupancy in expandable segments bookkeeping, enabling co-existence with expandable segments growth policies and freeing fully unused pages.
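As a rough sketch of the handle-recovery step (the ctypes binding and error handling here are my own assumptions; only the driver call itself is part of the plan):
```python
import ctypes

# Minimal sketch: recover the VMM allocation handle backing a device pointer that an
# external allocator (e.g. ncclMemAlloc) returned, using the CUDA driver API directly.
libcuda = ctypes.CDLL("libcuda.so.1")

def retain_allocation_handle(dev_ptr: int) -> int:
    # CUmemGenericAllocationHandle is an unsigned long long; any address inside the
    # mapped range works, and the returned handle equals the one used for the mapping.
    handle = ctypes.c_ulonglong(0)
    res = libcuda.cuMemRetainAllocationHandle(ctypes.byref(handle), ctypes.c_void_p(dev_ptr))
    if res != 0:  # CUDA_SUCCESS == 0
        raise RuntimeError(f"cuMemRetainAllocationHandle failed with CUresult={res}")
    return handle.value
```
Once the handle is recovered this way, expandable segments could cuMemMap sub-ranges into its own reserved VA space and take over page-level bookkeeping, as outlined above.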
cc @ptrblck @msaroufim @eqy @jerryzh168 @ngimel @syed-ahmed
|
https://github.com/pytorch/pytorch/issues/165419
|
closed
|
[
"module: cuda",
"triaged",
"module: nccl",
"module: CUDACachingAllocator"
] | 2025-10-14T07:53:01Z
| 2025-12-10T17:12:45Z
| 14
|
eee4017
|
vllm-project/vllm
| 26,774
|
[Usage]: how to use vllm on CUDA 12.9
|
### Your current environment
```text
Traceback (most recent call last):
File "/vllm-workspace/collect_env.py", line 825, in <module>
main()
File "/vllm-workspace/collect_env.py", line 804, in main
output = get_pretty_env_info()
^^^^^^^^^^^^^^^^^^^^^
File "/vllm-workspace/collect_env.py", line 799, in get_pretty_env_info
return pretty_str(get_env_info())
^^^^^^^^^^^^^^
File "/vllm-workspace/collect_env.py", line 619, in get_env_info
cuda_module_loading=get_cuda_module_loading_config(),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/vllm-workspace/collect_env.py", line 540, in get_cuda_module_loading_config
torch.cuda.init()
File "/usr/local/lib/python3.12/dist-packages/torch/cuda/__init__.py", line 339, in init
_lazy_init()
File "/usr/local/lib/python3.12/dist-packages/torch/cuda/__init__.py", line 372, in _lazy_init
torch._C._cuda_init()
RuntimeError: No CUDA GPUs are available
root@test2222-7dcd6b94b7-wl6w4:/vllm-workspace# python3 --version
Python 3.12.1
```
### How would you like to use vllm
My node's CUDA version is 12.9, and the pod image's CUDA version is 12.8. Will this cause the "No CUDA GPUs are available" error? Is 12.9 compatible with 12.8? Should we upgrade the vLLM version, or lower the node's CUDA version to 12.8?
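A quick way to narrow this down inside the pod is to compare the CUDA runtime the wheels were built against with what the driver exposes (a driver that reports CUDA 12.9 can run binaries built for 12.8, so "No CUDA GPUs are available" usually points at GPU visibility rather than the 12.8/12.9 mismatch). A minimal check:
```python
import torch

print("built for CUDA:", torch.version.cuda)       # runtime version the PyTorch wheel was compiled against
print("CUDA available:", torch.cuda.is_available())
print("device count:", torch.cuda.device_count())  # 0 usually means the pod was not granted a GPU
```
If the device count is 0, the pod most likely is not being scheduled with a GPU (e.g. a missing GPU resource request or device-plugin issue), independent of the CUDA minor-version difference.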
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/26774
|
open
|
[
"usage"
] | 2025-10-14T07:30:56Z
| 2025-10-14T07:40:08Z
| 1
|
Mrpingdan
|
vllm-project/vllm
| 26,772
|
[Feature]: Option kv_event default config
|
### 🚀 The feature, motivation and pitch
In the current kv_event config the default `publisher` is null while `endpoint` defaults to a ZMQ endpoint, so when no publisher config is set, vLLM cannot start and raises: `EventPublisher.__init__() got an unexpected keyword argument 'endpoint'`.
Can we change the default publisher to `zmq`, so that enabling `enable_kv_cache_events` works directly without extra configuration?
https://github.com/vllm-project/vllm/blob/d32c611f455766c9d67034b5e0f8e66f28f4a3ba/vllm/config/kv_events.py#L20-L24
### Alternatives
_No response_
### Additional context
_No response_
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/26772
|
closed
|
[
"feature request"
] | 2025-10-14T07:08:58Z
| 2025-10-22T19:19:34Z
| 5
|
lengrongfu
|
vllm-project/vllm
| 26,762
|
[Usage]: about curl http://ip:8000/metrics
|
### Your current environment
When I run this command, I get the following results:
```
# HELP python_gc_objects_collected_total Objects collected during gc
# TYPE python_gc_objects_collected_total counter
python_gc_objects_collected_total{generation="0"} 12286.0
python_gc_objects_collected_total{generation="1"} 1244.0
python_gc_objects_collected_total{generation="2"} 1326.0
# HELP python_gc_objects_uncollectable_total Uncollectable objects found during GC
# TYPE python_gc_objects_uncollectable_total counter
python_gc_objects_uncollectable_total{generation="0"} 0.0
python_gc_objects_uncollectable_total{generation="1"} 0.0
python_gc_objects_uncollectable_total{generation="2"} 0.0
# HELP python_gc_collections_total Number of times this generation was collected
# TYPE python_gc_collections_total counter
python_gc_collections_total{generation="0"} 1378.0
python_gc_collections_total{generation="1"} 124.0
python_gc_collections_total{generation="2"} 9.0
# HELP python_info Python platform information
# TYPE python_info gauge
python_info{implementation="CPython",major="3",minor="12",patchlevel="11",version="3.12.11"} 1.0
# HELP process_virtual_memory_bytes Virtual memory size in bytes.
# TYPE process_virtual_memory_bytes gauge
process_virtual_memory_bytes 1.1701968896e+010
# HELP process_resident_memory_bytes Resident memory size in bytes.
# TYPE process_resident_memory_bytes gauge
process_resident_memory_bytes 1.045848064e+09
# HELP process_start_time_seconds Start time of the process since unix epoch in seconds.
# TYPE process_start_time_seconds gauge
process_start_time_seconds 1.76036994809e+09
# HELP process_cpu_seconds_total Total user and system CPU time spent in seconds.
# TYPE process_cpu_seconds_total counter
process_cpu_seconds_total 148.44
# HELP process_open_fds Number of open file descriptors.
# TYPE process_open_fds gauge
process_open_fds 69.0
# HELP process_max_fds Maximum number of open file descriptors.
# TYPE process_max_fds gauge
process_max_fds 1.048576e+06
# HELP http_requests_total Total number of requests by method, status and handler.
# TYPE http_requests_total counter
http_requests_total{handler="none",method="GET",status="4xx"} 1.0
# HELP http_requests_created Total number of requests by method, status and handler.
# TYPE http_requests_created gauge
http_requests_created{handler="none",method="GET",status="4xx"} 1.7604160309440813e+09
# HELP http_request_size_bytes Content length of incoming requests by handler. Only value of header is respected. Otherwise ignored. No percentile calculated.
# TYPE http_request_size_bytes summary
http_request_size_bytes_count{handler="none"} 1.0
http_request_size_bytes_sum{handler="none"} 0.0
# HELP http_request_size_bytes_created Content length of incoming requests by handler. Only value of header is respected. Otherwise ignored. No percentile calculated.
# TYPE http_request_size_bytes_created gauge
http_request_size_bytes_created{handler="none"} 1.7604160309442668e+09
# HELP http_response_size_bytes Content length of outgoing responses by handler. Only value of header is respected. Otherwise ignored. No percentile calculated.
# TYPE http_response_size_bytes summary
http_response_size_bytes_count{handler="none"} 1.0
http_response_size_bytes_sum{handler="none"} 22.0
# HELP http_response_size_bytes_created Content length of outgoing responses by handler. Only value of header is respected. Otherwise ignored. No percentile calculated.
# TYPE http_response_size_bytes_created gauge
http_response_size_bytes_created{handler="none"} 1.7604160309445088e+09
# HELP http_request_duration_highr_seconds Latency with many buckets but no API specific labels. Made for more accurate percentile calculations.
# TYPE http_request_duration_highr_seconds histogram
http_request_duration_highr_seconds_bucket{le="0.01"} 1.0
http_request_duration_highr_seconds_bucket{le="0.025"} 1.0
http_request_duration_highr_seconds_bucket{le="0.05"} 1.0
http_request_duration_highr_seconds_bucket{le="0.075"} 1.0
http_request_duration_highr_seconds_bucket{le="0.1"} 1.0
http_request_duration_highr_seconds_bucket{le="0.25"} 1.0
http_request_duration_highr_seconds_bucket{le="0.5"} 1.0
http_request_duration_highr_seconds_bucket{le="0.75"} 1.0
http_request_duration_highr_seconds_bucket{le="1.0"} 1.0
http_request_duration_highr_seconds_bucket{le="1.5"} 1.0
http_request_duration_highr_seconds_bucket{le="2.0"} 1.0
http_request_duration_highr_seconds_bucket{le="2.5"} 1.0
http_request_duration_highr_seconds_bucket{le="3.0"} 1.0
http_request_duration_highr_seconds_bucket{le="3.5"} 1.0
http_request_duration_highr_seconds_bucket{le="4.0"} 1.0
http_request_duration_highr_seconds_bucket{le="4.5"} 1.0
http_request_duration_highr_seconds_bucket{le="5.0"} 1.0
http_request_duration_highr_seconds_bucket{le="7.5"} 1.0
http_request_duration_highr_seconds_bucket{le="10.0"} 1.0
http_request_duration_highr_seconds_bucket{le="30.0"} 1.0
http_request_duration_highr_seconds_bucket{le="60.0"} 1.0
http_request_duration_highr_se
```
|
https://github.com/vllm-project/vllm/issues/26762
|
open
|
[
"usage"
] | 2025-10-14T05:13:30Z
| 2025-10-14T05:13:30Z
| 0
|
Renoshen
|
huggingface/lerobot
| 2,194
|
During training with PI0, the loss is very low. Is this normal, and is the training proceeding correctly?
|
I am currently training with PI05.
<img width="1039" height="355" alt="Image" src="https://github.com/user-attachments/assets/5ab3f3e0-82bc-403c-8124-416b330dab14" />
```
INFO 2025-10-14 04:57:11 ot_train.py:299 step:10 smpl:320 ep:0 epch:0.00 loss:0.468 grdn:3.522 lr:1.6e-07 updt_s:4.906 data_s:4.874
INFO 2025-10-14 04:57:59 ot_train.py:299 step:20 smpl:640 ep:0 epch:0.00 loss:0.467 grdn:3.936 lr:4.1e-07 updt_s:4.807 data_s:0.008
INFO 2025-10-14 04:58:48 ot_train.py:299 step:30 smpl:960 ep:0 epch:0.01 loss:0.508 grdn:3.973 lr:6.6e-07 updt_s:4.815 data_s:0.009
INFO 2025-10-14 04:59:36 ot_train.py:299 step:40 smpl:1K ep:1 epch:0.01 loss:0.513 grdn:3.805 lr:9.1e-07 updt_s:4.841 data_s:0.009
```
The loss is very low right from the start of training. Is it training normally?
|
https://github.com/huggingface/lerobot/issues/2194
|
closed
|
[
"question",
"policies"
] | 2025-10-14T05:04:31Z
| 2025-10-14T08:19:29Z
| null |
pparkgyuhyeon
|
huggingface/peft
| 2,832
|
Gradient checkpoint with multiple adapters
|
I'm not sure if it can be considered as a bug since I might be using the library differently from how it's supposed to be used.
**Context:**
I have a PeftModel that needs to run inference on 2 different inputs.
For each input I have a pretrained adapter that is frozen and a new adapter for finetuning.
My forward does:
```python
for name, x in inputs:
    mypeft_model.base_model.set_adapter([name + 'pretrain', name + 'ft'])
    custom_set_pretrain_grad_false_ft_true()  # needed because set_adapter forces gradients to True, cf. #2759
    feature = mypeft_model(x)
```
(https://github.com/huggingface/peft/issues/2759#issue-3363985341)
**Issue:**
1) If mypeft_model contains cp.checkpoint(mymodule, x), backpropagation does not properly update the weights of the LoRA layers in that module, either because it did not 'see' the set_adapter call or because it did not 'see' the forced gradients.
2) A workaround I have found is to wrap the whole body of the loop in a single cp.checkpoint, but it is very heavy on memory since I have to keep everything on the GPU until the end of the backbone (a ViT-G transformer with 40 blocks).
**Question:**
Is there any way to 'provide' this context to backpropagation when using gradient checkpointing while switching adapters in the forward pass?
I have not explored Hugging Face transformers' enable_gradient_checkpointing(), since I'm using a custom model and I'm unsure whether it fits my problem.
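One variable worth ruling out in a setup like this is reentrant checkpointing: the non-reentrant implementation records the autograd graph at call time and can behave differently when requires_grad flags are toggled between forward passes. A minimal sketch, assuming `mymodule` and `x` as in the loop above:
```python
import torch.utils.checkpoint as cp

# Non-reentrant checkpointing; use_reentrant=False is the recommended mode and may
# interact better with adapters whose requires_grad flags change per forward pass.
feature = cp.checkpoint(mymodule, x, use_reentrant=False)
```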
|
https://github.com/huggingface/peft/issues/2832
|
closed
|
[] | 2025-10-14T03:53:10Z
| 2025-12-15T08:24:03Z
| 3
|
NguyenRichard
|
huggingface/lerobot
| 2,192
|
how to test PI0's output
|
I use this code to test PI0's output:
```python
def main():
    # Create a directory to store the training checkpoint.
    output_directory = Path("outputs/example_aloha_static_coffee")
    output_directory.mkdir(parents=True, exist_ok=True)

    # Select your device
    device = torch.device("cuda")

    # Number of offline training steps (we'll only do offline training for this example.)
    # Adjust as you prefer. 5000 steps are needed to get something worth evaluating.
    training_steps = 500
    log_freq = 1

    # When starting from scratch (i.e. not from a pretrained policy), we need to specify 2 things before
    # creating the policy:
    #   - input/output shapes: to properly size the policy
    #   - dataset stats: for normalization and denormalization of input/outputs
    dataset_metadata = LeRobotDatasetMetadata("lerobot/aloha_static_coffee")
    print(dataset_metadata.features.keys())
    features = dataset_to_policy_features(dataset_metadata.features)
    output_features = {key: ft for key, ft in features.items() if ft.type is FeatureType.ACTION}
    input_features = {key: ft for key, ft in features.items() if key not in output_features}

    # Policies are initialized with a configuration class, in this case `PI0Config`. For this example,
    # we'll just use the defaults and so no arguments other than input/output features need to be passed.
    cfg = PI0Config(input_features=input_features, output_features=output_features)
    print(cfg)

    # We can now instantiate our policy with this config and the dataset stats.
    policy = PI0Policy(cfg)
    policy.train()
    policy.to(device)
    preprocessor, postprocessor = make_pre_post_processors(cfg, dataset_stats=dataset_metadata.stats)

    # We can then instantiate the dataset with these delta_timestamps configuration.
    dataset = LeRobotDataset("lerobot/aloha_static_coffee")

    # Take a single sample for the experiment
    state = dataset[20]["observation.state"]
    image_cam_high = dataset[20]["observation.images.cam_high"]
    image_cam_left_wrist = dataset[20]["observation.images.cam_left_wrist"]
    image_cam_low = dataset[20]["observation.images.cam_low"]
    image_cam_right_wrist = dataset[20]["observation.images.cam_right_wrist"]
    effort = dataset[20]["observation.effort"]

    state = state.unsqueeze(0).to(device)
    image_cam_high = image_cam_high.unsqueeze(0).to(device)
    image_cam_left_wrist = image_cam_left_wrist.unsqueeze(0).to(device)
    image_cam_low = image_cam_low.unsqueeze(0).to(device)
    image_cam_right_wrist = image_cam_right_wrist.unsqueeze(0).to(device)
    effort = effort.unsqueeze(0).to(device)

    print("State size: ", state.size())
    print("Image size: ", image_cam_high.size())
    print("Effort size: ", effort.size())

    observation = {
        "observation.state": state,
        "observation.images.cam_high": image_cam_high,
        "observation.images.cam_left_wrist": image_cam_left_wrist,
        "observation.images.cam_low": image_cam_low,
        "observation.images.cam_right_wrist": image_cam_right_wrist,
        "observation.effort": effort,
    }

    # Output the action
    with torch.inference_mode():
        action = policy.select_action(observation)

    numpy_action = action.squeeze(0).to("cpu").numpy()
    print("Action: ", numpy_action)
```
but got an error:
```
Traceback (most recent call last):
  File "/home/wjg/trainpi0.py", line 140, in <module>
    main()
  File "/home/wjg/trainpi0.py", line 129, in main
    action = policy.select_action(observation)
  File "/data/wjg_files/anaconda3/envs/lerobot/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "/data/wjg_files/lerobot/src/lerobot/policies/pi0/modeling_pi0.py", line 1144, in select_action
    actions = self.predict_action_chunk(batch)[:, : self.config.n_action_steps]
  File "/data/wjg_files/anaconda3/envs/lerobot/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "/data/wjg_files/lerobot/src/lerobot/policies/pi0/modeling_pi0.py", line 1157, in predict_action_chunk
    lang_tokens, lang_masks = batch[f"{OBS_LANGUAGE_TOKENS}"], batch[f"{OBS_LANGUAGE_ATTENTION_MASK}"]
KeyError: 'observation.language.tokens'
```
How can I solve it?
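One hedged guess at a fix: the language tokens the error complains about are produced by the preprocessor pipeline returned by `make_pre_post_processors`, which the script builds but never applies. Assuming that pipeline can be called directly on the observation dict and that it expects a `task` string for tokenization, the inference step would look roughly like:
```python
# Hypothetical sketch: run the observation through the policy pre/post processors so that
# observation.language.tokens and the attention mask exist before select_action is called.
observation["task"] = "make a cup of coffee"  # natural-language instruction (placeholder text)
observation = preprocessor(observation)

with torch.inference_mode():
    action = policy.select_action(observation)

action = postprocessor(action)
```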
|
https://github.com/huggingface/lerobot/issues/2192
|
open
|
[
"question",
"policies"
] | 2025-10-14T03:36:43Z
| 2025-10-17T09:56:46Z
| null |
Addog666
|
vllm-project/vllm
| 26,749
|
[Bug]: InternVL: passing image embeddings triggers TypeError: can only concatenate tuple (not "Tensor") to tuple in get_multimodal_embeddings, and v1 sanity check then expects a sequence of 2D tensors
|
### Your current environment
<details>
<summary>The output of <code>python collect_env.py</code></summary>
```text
Your output of `python collect_env.py` here
```
</details>
### 🐛 Describe the bug
# Title
InternVL: passing image **embeddings** triggers `TypeError: can only concatenate tuple (not "Tensor") to tuple` in `get_multimodal_embeddings`, and v1 sanity check then expects a sequence of 2D tensors
## Environment
- vLLM: 0.10.2 (also reproducible on 0.10.1)
- Python: 3.11.x
- Model: `InternVL3_5-1B` (HF, `trust_remote_code=True`)
## Minimal Repro (image **embeddings** input)
```python
from vllm import LLM
import torch
llm = LLM(model="InternVL3_5-1B", trust_remote_code=True)
prompt = "USER: <image>\nWhat is this image?\nASSISTANT:"
# 3D embeddings: [B, T, H] just to illustrate the bug (B=1 here)
# H equals the LM hidden_size for the given weight; using 1024 to reproduce.
image_embeds = torch.randn(1, 16, 1024)
out = llm.generate({
"prompt": prompt,
"multi_modal_data": {"image": image_embeds}, # or {"images": image_embeds}
})
print(out[0].outputs[0].text)
```
## Actual Behavior / Stack
On 0.10.2:
```
File ".../vllm/model_executor/models/internvl.py", line 1328, in get_multimodal_embeddings
multimodal_embeddings += vision_embeddings
TypeError: can only concatenate tuple (not "Tensor") to tuple
```
If we monkey-patch around the above concat, the engine soon asserts:
```
vllm/v1/worker/utils.py", line 155, in sanity_check_mm_encoder_outputs
AssertionError: Expected multimodal embeddings to be a sequence of 2D tensors,
but got tensors with shapes [torch.Size([1, 16, 1024])] instead.
This is most likely due to incorrect implementation of the model's `get_multimodal_embeddings` method.
```
So there are **two inconsistencies**:
1) `get_multimodal_embeddings` sometimes returns a **Tensor** (3D) but the code path later concatenates assuming a **tuple** of tensors.
2) v1 expects a **sequence of 2D tensors `[T, H]`**, but the current image-embeddings path can yield a **3D** `[B, T, H]` tensor (batch dimension not flattened), which fails the sanity check.
## Expected Behavior
- Passing embeddings should **not crash**, whether provided as:
- a single 2D tensor `[T, H]` (one image), or
- a 3D tensor `[B, T, H]` (batch of images), or
- a list/tuple of 2D tensors.
- `get_multimodal_embeddings` should normalize its outputs to a **sequence of 2D tensors** to satisfy `sanity_check_mm_encoder_outputs`.
## Why this matters
InternVL supports both pixel inputs and precomputed **embeddings**. The embedding path is useful in production pipelines (pre-encode vision on different hardware, caching, etc.). Currently in 0.10.1/0.10.2 this path is broken due to type/shape inconsistencies, blocking these use-cases.
## Proposed Fix (minimal)
Normalize to a sequence of 2D tensors before concatenation. For example, in `vllm/model_executor/models/internvl.py` inside `get_multimodal_embeddings(...)`:
```diff
@@
- vision_embeddings = self._process_image_input(image_input)
- if torch.is_tensor(vision_embeddings):
- vision_embeddings = (vision_embeddings,)
- multimodal_embeddings += vision_embeddings
+ vision_embeddings = self._process_image_input(image_input)
+
+ # Normalize to tuple[Tensor[T,H], ...]
+ def _to_2d_seq(x):
+ import torch
+ if torch.is_tensor(x):
+ if x.ndim == 3: # [B, T, H] -> B * [T,H]
+ return tuple(x.unbind(0))
+ elif x.ndim == 2: # [T, H]
+ return (x,)
+ raise TypeError(f"vision embeddings must be 2D/3D, got shape {tuple(x.shape)}")
+ elif isinstance(x, (list, tuple)):
+ out = []
+ for e in x:
+ out.extend(_to_2d_seq(e))
+ return tuple(out)
+ else:
+ raise TypeError(f"unexpected type for vision embeddings: {type(x)}")
+
+ vision_embeddings = _to_2d_seq(vision_embeddings)
+ multimodal_embeddings += vision_embeddings
```
Additionally, consider accepting both `"image"` and `"images"` as modality keys (a few code paths assume `"images"`), or clarify in docs which key is canonical.
## Workarounds we tried
- Wrapping the returned tensor into a tuple (avoids the first `TypeError`), but the v1 sanity check still fails because the output remains 3D.
- Providing embeddings as a list of 2D tensors `[T, H]` works, but many upstream encoders naturally produce `[B, T, H]`, so normalizing in the model executor is safer.
- Pixel input path works and can be used as a temporary fallback, but defeats the purpose of passing precomputed embeddings.
## Version Matrix
- ✅ Pixel input: OK on 0.10.1 and 0.10.2
- ❌ Embedding input: crashe
|
https://github.com/vllm-project/vllm/issues/26749
|
closed
|
[
"bug"
] | 2025-10-14T03:01:33Z
| 2025-10-14T09:36:22Z
| 1
|
BlueBlueFF
|
huggingface/transformers
| 41,554
|
model.from_pretrained( . . . ) not loading needed weights/parameters
|
I am performing quantization of a PatchTSTForPrediction model and attempting to load a saved quantized model for testing. The model is saved using `model.save_pretrained( . . . )`. Testing works perfectly when performed immediately after QAT (the Hugging Face Trainer handles loading at the end of training); however, when attempting to load a saved quantized (trained) model, the error below occurs. I perform all the pre-quantization preparation so that the model contains all the necessary parameters (untrained) and then try to load the saved checkpoint. How can I force `from_pretrained( . . . )` to load ALL required weights?
`Some weights of the model checkpoint at ./checkpoints/ . . . were not used when initializing PatchTSTForPrediction: ['head.projection.calib_counter', 'head.projection.num_module_called', 'head.projection.obsrv_clipval', 'head.projection.obsrv_clipvaln', 'head.projection.obsrv_w_clipval', 'head.projection.quantize_feature.clip_val', 'head.projection.quantize_feature.clip_valn', 'head.projection.quantize_weight.clip_val', 'model.encoder.layers.0.ff.0.calib_counter', 'model.encoder.layers.0.ff.0.num_module_called', 'model.encoder.layers.0.ff.0.obsrv_clipval', 'model.encoder.layers.0.ff.0.obsrv_clipvaln', 'model.encoder.layers.0.ff.0.obsrv_w_clipval', 'model.encoder.layers.0.ff.0.quantize_feature.clip_val', 'model.encoder.layers.0.ff.0.quantize_feature.clip_valn', 'model.encoder.layers.0.ff.0.quantize_weight.clip_val', 'model.encoder.layers.0.ff.3.calib_counter', 'model.encoder.layers.0.ff.3.num_module_called', 'model.encoder.layers.0.ff.3.obsrv_clipval', 'model.encoder.layers.0.ff.3.obsrv_clipvaln', 'model.encoder.layers.0.ff.3.obsrv_w_clipval', 'model.encoder.layers.0.ff.3.quantize_feature.clip_val', 'model.encoder.layers.0.ff.3.quantize_feature.clip_valn', 'model.encoder.layers.0.ff.3.quantize_weight.clip_val', 'model.encoder.layers.0.self_attn.QBmm52.num_module_called', 'model.encoder.layers.0.self_attn.QBmm52.quantize_m1.clip_val', 'model.encoder.layers.0.self_attn.QBmm52.quantize_m1.clip_valn', 'model.encoder.layers.0.self_attn.QBmm52.quantize_m2.clip_val', 'model.encoder.layers.0.self_attn.QBmm52.quantize_m2.clip_valn', 'model.encoder.layers.0.self_attn.QBmm62.num_module_called', 'model.encoder.layers.0.self_attn.QBmm62.quantize_m1.clip_val', 'model.encoder.layers.0.self_attn.QBmm62.quantize_m1.clip_valn', 'model.encoder.layers.0.self_attn.QBmm62.quantize_m2.clip_val', 'model.encoder.layers.0.self_attn.QBmm62.quantize_m2.clip_valn', 'model.encoder.layers.0.self_attn.k_proj.calib_counter', 'model.encoder.layers.0.self_attn.k_proj.num_module_called', 'model.encoder.layers.0.self_attn.k_proj.obsrv_clipval', 'model.encoder.layers.0.self_attn.k_proj.obsrv_clipvaln', 'model.encoder.layers.0.self_attn.k_proj.obsrv_w_clipval', 'model.encoder.layers.0.self_attn.k_proj.quantize_feature.clip_val', 'model.encoder.layers.0.self_attn.k_proj.quantize_feature.clip_valn', 'model.encoder.layers.0.self_attn.k_proj.quantize_weight.clip_val', 'model.encoder.layers.0.self_attn.out_proj.calib_counter', . . .]
This IS expected if you are initializing PatchTSTForPrediction from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
This IS NOT expected if you are initializing PatchTSTForPrediction from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).`
NB: QAT is simulated. Additional parameters are added to the model after qmodel_prep is called and QAT proceeds as normal. I am using IBM's fms-model-optimizer.
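As a hedged workaround sketch: the extra QAT tensors can also be loaded manually from the checkpoint's safetensors file with `strict=False`, side-stepping the key filtering in `from_pretrained`. This assumes `model` is the PatchTSTForPrediction instance after qmodel_prep; the path and file name are placeholders:
```python
from safetensors.torch import load_file

# Load the raw checkpoint tensors and push them into the already-prepared (quantized) model.
state_dict = load_file("./checkpoints/your-run/model.safetensors")  # placeholder path
missing, unexpected = model.load_state_dict(state_dict, strict=False)
print("missing keys:", missing)
print("unexpected keys:", unexpected)
```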
|
https://github.com/huggingface/transformers/issues/41554
|
closed
|
[] | 2025-10-13T23:20:20Z
| 2025-11-24T08:03:05Z
| 5
|
lorsonblair
|
pytorch/pytorch
| 165,324
|
How to enable Bfloat16 when using torch.func.jvp
|
### 🐛 Describe the bug
```python
model_partial = partial(model_fn, **inputs)
jvp_args = (
lambda z, t, r: model_partial(latents=z, timestep=t, r_timestep=r),
(z, t, r),
(v_hat, torch.ones_like(t).to(x.dtype), torch.zeros_like(r).to(x.dtype)),
)
with torch.autocast(device_type="cuda", dtype=torch.bfloat16, enabled=False):
if self.create_graph:
u, dudt = self.jvp_fn(*jvp_args, create_graph=True)
else:
u, dudt = self.jvp_fn(*jvp_args)
```
I was training models in bfloat16 mixed precision, using DeepSpeed stage 3 with gradient checkpointing enabled. But when calling torch.func.jvp, the input seems to be automatically converted to float32. The error is as follows:
```
File "/datadrive/DiffSynth-Studio/diffsynth/models/qwen_image_dit.py", line 283, in forward
img_q, img_k, img_v = self.to_q(image), self.to_k(image), self.to_v(image)
File "/home/t2vg-a100-G4-42/.conda/envs/qwenimage/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/t2vg-a100-G4-42/.conda/envs/qwenimage/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
File "/home/t2vg-a100-G4-42/.conda/envs/qwenimage/lib/python3.10/site-packages/peft/tuners/lora/layer.py", line 758, in forward
result = self.base_layer(x, *args, **kwargs)
File "/home/t2vg-a100-G4-42/.conda/envs/qwenimage/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/t2vg-a100-G4-42/.conda/envs/qwenimage/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
File "/home/t2vg-a100-G4-42/.conda/envs/qwenimage/lib/python3.10/site-packages/torch/nn/modules/linear.py", line 127, in forward
return F.linear(input.to(self.weight.dtype), self.weight, self.bias)
RuntimeError: expected mat1 and mat2 to have the same dtype, but got: float != c10::BFloat16
```
How can I force the jvp function to use bfloat16 in its computation?
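For reference, a minimal standalone sketch showing that `torch.func.jvp` stays in bfloat16 when the primals and tangents are cast explicitly (note the snippet above passes `enabled=False` to autocast, which disables autocast entirely):
```python
import torch
from torch.func import jvp

def f(x):
    # toy function standing in for the model forward
    return (x @ x.transpose(-1, -2)).sum()

device = "cuda" if torch.cuda.is_available() else "cpu"
x = torch.randn(4, 4, device=device, dtype=torch.bfloat16)
v = torch.ones_like(x)  # tangent in the same dtype as the primal

out, dout = jvp(f, (x,), (v,))
print(out.dtype, dout.dtype)  # both torch.bfloat16
```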
### Versions
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: Could not collect
CMake version: version 3.16.3
Libc version: glibc-2.31
Python version: 3.10.18 (main, Jun 5 2025, 13:14:17) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-1017-azure-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 12.1.66
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100 80GB PCIe
GPU 1: NVIDIA A100 80GB PCIe
GPU 2: NVIDIA A100 80GB PCIe
GPU 3: NVIDIA A100 80GB PCIe
Nvidia driver version: 530.30.02
cuDNN version: Could not collect
Is XPU available: False
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 48 bits physical, 48 bits virtual
CPU(s): 96
On-line CPU(s) list: 0-95
Thread(s) per core: 1
Core(s) per socket: 48
Socket(s): 2
NUMA node(s): 4
Vendor ID: AuthenticAMD
CPU family: 25
Model: 1
Model name: AMD EPYC 7V13 64-Core Processor
Stepping: 1
CPU MHz: 2445.434
BogoMIPS: 4890.86
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 3 MiB
L1i cache: 3 MiB
L2 cache: 48 MiB
L3 cache: 384 MiB
NUMA node0 CPU(s): 0-23
NUMA node1 CPU(s): 24-47
NUMA node2 CPU(s): 48-71
NUMA node3 CPU(s): 72-95
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, STIBP disabled, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl tsc_reliable nonstop_tsc cpuid extd_apicid
|
https://github.com/pytorch/pytorch/issues/165324
|
open
|
[
"triaged",
"module: amp (automated mixed precision)",
"release notes: torch.func"
] | 2025-10-13T14:52:15Z
| 2025-10-27T15:23:35Z
| null |
pnotp
|
pytorch/pytorch
| 165,319
|
Memory leak when converting from numpy array
|
### 🐛 Describe the bug
Just faced a weird memory leak in my code that uses both numpy and pytorch on cpu (to exploit some scipy functionalities first, before using pytorch ones). Here is a minimal example that reproduces the leak on my laptop. I faced it on python 3.10 and then python 3.13 with pytorch 2.8.0.
```python
from typing import List, Tuple
import time

import numpy as np
import torch
import tqdm
import psutil


def minimal_leak(n: int, shape: Tuple[int, ...]) -> List:
    L = []
    for _ in tqdm.trange(n):
        time.sleep(0.001)  # Let's not be too fast, to see how memory explodes
        x = np.zeros(shape, dtype=np.float64)
        # x = f(x)  # Use some numpy fct
        # Convert to torch to use some torch fct
        x_pt = torch.from_numpy(x).to(torch.float32)
        L.append(x_pt)  # Add x_pt = f_pt(x_pt)
        # But in fact, let's remove x_pt and keep only something related to it
        # The memory for x/x_pt should be released at some point
        L[-1] = torch.ones(10, dtype=torch.int64)
    return L


def minimal_no_leak(n: int, shape: Tuple[int, ...]) -> List:
    """Same as minimal_leak, but store a float instead of a tensor in L"""
    L = []
    for _ in tqdm.trange(n):
        time.sleep(0.001)  # Let's not be too fast, to see how memory explodes
        x = np.zeros(shape, dtype=np.float64)
        # x = f(x)  # Use some numpy fct
        # Convert to torch to use some torch fct
        x_pt = torch.from_numpy(x).to(torch.float32)
        L.append(x_pt)  # Add x_pt = f_pt(x_pt)
        # But in fact, let's remove x_pt and keep only something related to it
        # The memory for x/x_pt should be released at some point
        L[-1] = 0.0
    return L


def minimal_no_leak_2(n: int, shape: Tuple[int, ...]) -> List:
    """Same as minimal_leak, but don't clone (or move to another dtype) x"""
    L = []
    for _ in tqdm.trange(n):
        time.sleep(0.001)  # Let's not be too fast, to see how memory explodes
        x = np.zeros(shape, dtype=np.float64)
        # x = f(x)  # Use some numpy fct
        # Convert to torch to use some torch fct
        x_pt = torch.from_numpy(x)
        L.append(x_pt)  # Add x_pt = f_pt(x_pt)
        # But in fact, let's remove x_pt and keep only something related to it
        # The memory for x/x_pt should be released at some point
        L[-1] = torch.ones(10, dtype=torch.int64)
    return L


process = psutil.Process()
results = []
for i in tqdm.trange(50):
    tqdm.tqdm.write(f"Memory used: {process.memory_info().rss / 1024**3}")
    results.extend(minimal_leak(1000, (1, 500, 500)))  # Leak
    # results.extend(minimal_leak(1000, (50, 500, 500)))  # With large tensors the leak vanishes (probably from specific reuse of "small" tensors by torch?)
    # results.extend(minimal_no_leak(1000, (1, 500, 500)))  # No leak
    # results.extend(minimal_no_leak_2(1000, (1, 500, 500)))  # No leak
```
Clearly minimal_leak should not leak memory (though I agree my example is a bit far-fetched, my code does something similar but goes through more complex structures and operations). I provided two similar versions of the code that do not leak memory, which clearly shows that something weird is happening.
### Versions
Collecting environment information...
PyTorch version: 2.8.0+cu128
Is debug build: False
CUDA used to build PyTorch: 12.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04.2) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 4.1.2
Libc version: glibc-2.35
Python version: 3.13.7 | packaged by Anaconda, Inc. | (main, Sep 9 2025, 19:59:03) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-153-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3070 Laptop GPU
Nvidia driver version: 535.247.01
cuDNN version: Could not collect
Is XPU available: False
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Vendor ID: GenuineIntel
Model name: 11th Gen Intel(R) Core(TM) i7-11800H @ 2.30GHz
CPU family: 6
Model: 141
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 1
Stepping: 1
CPU max MHz: 4600,0000
CPU min MHz: 800,0000
BogoMIPS: 4608.00
Flags:
|
https://github.com/pytorch/pytorch/issues/165319
|
open
|
[
"module: memory usage",
"triaged",
"module: numpy"
] | 2025-10-13T13:58:22Z
| 2025-10-14T08:06:37Z
| 4
|
raphaelreme
|
huggingface/lerobot
| 2,186
|
how to load pi0?
|
I use this code to load PI0:
```python
from lerobot.policies.pi0.modeling_pi0 import PI0Policy
import torch
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
pretrained_policy_path = "lerobot/pi0_libero_base"
policy = PI0Policy.from_pretrained(pretrained_policy_path).to(device)
```
but it throws an error:
```bash
Traceback (most recent call last):
File "/home/wjg/pi0.py", line 16, in <module>
policy = PI0Policy.from_pretrained(pretrained_policy_path).to(device)
File "/data/wjg_files/lerobot/src/lerobot/policies/pi0/modeling_pi0.py", line 923, in from_pretrained
model = cls(config, **kwargs)
File "/data/wjg_files/lerobot/src/lerobot/policies/pi0/modeling_pi0.py", line 872, in __init__
self.model = PI0Pytorch(config)
File "/data/wjg_files/lerobot/src/lerobot/policies/pi0/modeling_pi0.py", line 513, in __init__
self.paligemma_with_expert = PaliGemmaWithExpertModel(
File "/data/wjg_files/lerobot/src/lerobot/policies/pi0/modeling_pi0.py", line 337, in __init__
vlm_config_hf = CONFIG_MAPPING["paligemma"]()
TypeError: 'NoneType' object is not subscriptable
```
How can I load PI0?
|
https://github.com/huggingface/lerobot/issues/2186
|
closed
|
[
"question",
"policies",
"python"
] | 2025-10-13T12:24:32Z
| 2025-10-17T09:53:02Z
| null |
Addog666
|
huggingface/accelerate
| 3,812
|
RuntimeError during load_state
|
### System Info
This issue is related to [prior issue 3101](https://github.com/huggingface/accelerate/issues/3101), but it hasn’t been fully resolved yet. The current workaround is to avoid using `safetensors`.
@Narsil suggested using [`load_file/save_file`](https://github.com/huggingface/safetensors/issues/657#issuecomment-3396215002). However, I noticed that accelerate currently uses [save_file](https://github.com/huggingface/accelerate/blob/main/src/accelerate/utils/other.py#L373) for saving and [load_model](https://github.com/huggingface/accelerate/blob/main/src/accelerate/checkpointing.py#L238) for loading.
Is there any known workaround or recommended fix for this inconsistency?
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] One of the scripts in the examples/ folder of Accelerate or an officially supported `no_trainer` script in the `examples` folder of the `transformers` repo (such as `run_no_trainer_glue.py`)
- [x] My own task or dataset (give details below)
### Reproduction
Please see the [prior issue 3101](https://github.com/huggingface/accelerate/issues/3101).
### Expected behavior
Please see the [prior issue 3101](https://github.com/huggingface/accelerate/issues/3101).
|
https://github.com/huggingface/accelerate/issues/3812
|
closed
|
[] | 2025-10-13T11:25:17Z
| 2025-11-21T15:07:49Z
| 2
|
Silverster98
|
huggingface/lerobot
| 2,185
|
Has the lerobot data format been modified after June this year?
|
Has the lerobot data format been modified after June this year? The original data can no longer be used.
|
https://github.com/huggingface/lerobot/issues/2185
|
closed
|
[
"question",
"dataset"
] | 2025-10-13T10:07:41Z
| 2025-10-14T08:05:04Z
| null |
Addog666
|
huggingface/transformers
| 41,539
|
All POETRY operations fail on latest version 4.57.0
|
### System Info
I import transformers (always latest) in my poetry project.
I use poetry 2.1.2
After this transformers release (4.57.0) I regenerated the poetry lock with command: `poetry lock`
Then, when retrying to generate the lock again after other updates, it fails with the message:
`Could not parse constrains version: <emtpy>`
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Doing a simple search in the poetry.lock file I found out that transformers latest package needs `optax (<empty>)`
which produces this failure because poetry does not know how to parse this type of version.
Note: I am sure this is the problem because, with transformers commented out, the lock works fine; using 4.56.2 from September also works fine, and in that case `optax (<empty>)` cannot be found in the lock.
### Expected behavior
A developer should be able to use the latest transformers package version with poetry.
|
https://github.com/huggingface/transformers/issues/41539
|
closed
|
[
"bug"
] | 2025-10-13T08:40:49Z
| 2025-10-13T14:18:02Z
| 1
|
bfuia
|
vllm-project/vllm
| 26,692
|
[Usage]: How to release KVCache?
|
### Your current environment
```text
Collecting environment information...
==============================
System Info
==============================
OS : Ubuntu 22.04 LTS (x86_64)
GCC version : (Ubuntu 11.4.0-1ubuntu1~22.04.2) 11.4.0
Clang version : Could not collect
CMake version : Could not collect
Libc version : glibc-2.35
==============================
PyTorch Info
==============================
PyTorch version : 2.8.0+cu128
Is debug build : False
CUDA used to build PyTorch : 12.8
ROCM used to build PyTorch : N/A
==============================
Python Environment
==============================
Python version : 3.12.11 | packaged by conda-forge | (main, Jun 4 2025, 14:45:31) [GCC 13.3.0] (64-bit runtime)
Python platform : Linux-5.15.0-25-generic-x86_64-with-glibc2.35
==============================
CUDA / GPU Info
==============================
Is CUDA available : True
CUDA runtime version : Could not collect
CUDA_MODULE_LOADING set to : LAZY
GPU models and configuration :
GPU 0: NVIDIA L20
GPU 1: NVIDIA L20
GPU 2: NVIDIA L20
GPU 3: NVIDIA L20
Nvidia driver version : 550.127.05
cuDNN version : Could not collect
HIP runtime version : N/A
MIOpen runtime version : N/A
Is XNNPACK available : True
==============================
CPU Info
==============================
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Vendor ID: GenuineIntel
Model name: INTEL(R) XEON(R) GOLD 6530
CPU family: 6
Model: 207
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
Stepping: 2
CPU max MHz: 4000.0000
CPU min MHz: 800.0000
BogoMIPS: 4200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr avx512_fp16 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 3 MiB (64 instances)
L1i cache: 2 MiB (64 instances)
L2 cache: 128 MiB (64 instances)
L3 cache: 320 MiB (2 instances)
NUMA node(s): 4
NUMA node0 CPU(s): 0-15,64-79
NUMA node1 CPU(s): 16-31,80-95
NUMA node2 CPU(s): 32-47,96-111
NUMA node3 CPU(s): 48-63,112-127
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
==============================
Versions of relevant libraries
==============================
[pip3] numpy==2.2.6
[pip3] nvidia-cublas-cu12==12.8.4.1
[pip3] nvidia-cuda-cupti-cu12==12.8.90
[pip3] nvidia-cuda-nvrtc-cu12==12.8.93
[pip3] nvidia-cuda-runtime-cu12==12.8.90
[pip3] nvidia-cudnn-cu12==9.10.2.21
[pip3] nvidia-cufft-cu12==11.3.3.83
[pip3] nvidia-cufile-cu12==1.13.1.3
[pip3] nvidia-curand-cu12==10.3.9.90
[pip3] nvidia-cusolver-cu12==11.7.3.90
[pip3
```
|
https://github.com/vllm-project/vllm/issues/26692
|
open
|
[
"usage"
] | 2025-10-13T08:28:20Z
| 2025-10-13T08:28:20Z
| 0
|
shenxf1205
|
huggingface/lerobot
| 2,184
|
How to let an episode realize it has finished the task?
|
I have successfully trained my real-world lerobot to do several simple tasks from human demonstrations. Say, push an object from point A to point B. I noticed that after the robot arm has finished the task, it would return to its initial pose (same as the human demonstration) and stay idle for the remainder of the episode, until time finishes.
Of course, if I manually move the cup back to point A from point B before the time finishes, it would attempt to finish the job again. But I just wanted to know if there's any way the episode can finish itself, or at least yield a signal, after the first successful attempt?
I'm using lerobot_record.py with specified policy file path. The policy is act.
Thank you
|
https://github.com/huggingface/lerobot/issues/2184
|
open
|
[] | 2025-10-13T06:27:36Z
| 2025-12-22T07:56:00Z
| null |
genkv
|
pytorch/ao
| 3,157
|
Is there no tutorial for dynamic quantization of BERT model in torch.ao?
|
I saw that some quant related tutorials in [the PyTorch tutorials repo](https://github.com/pytorch/tutorials) have been deleted, and [the PR](https://github.com/pytorch/tutorials/pull/3432) stated that these tutorials will be moved to torchao. However, I can't find [the BERT dynamic quantization tutorial](https://github.com/pytorch/tutorials/pull/3432/files#diff-ffe2cf0ed3702611468c41af499f514e6fb0d4e5497a296df75e99422a200353) in the torchao repository. Where can I find it?
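For reference, an eager-mode dynamic quantization flow in the spirit of the old tutorial is still available through `torch.ao.quantization.quantize_dynamic`; a minimal sketch on a BERT model, where the model name and layer selection are just an example:
```python
import torch
from transformers import BertModel

model = BertModel.from_pretrained("bert-base-uncased")

# Dynamically quantize all nn.Linear modules to int8; weights are quantized ahead of
# time and activations are quantized on the fly at inference time.
quantized_model = torch.ao.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)
print(quantized_model)
```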
|
https://github.com/pytorch/ao/issues/3157
|
open
|
[
"triaged"
] | 2025-10-12T15:47:06Z
| 2026-01-03T14:43:56Z
| 6
|
Esttelle
|
vllm-project/vllm
| 26,660
|
[Usage]: Is there any way to enable beam search in online inference?
|
### Your current environment
Is there any way to enable beam search with the `vllm serve` command? Or is beam search only available in offline inference code?
### How would you like to use vllm
I want to run inference of a [specific model](put link here). I don't know how to integrate it with vllm.
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/26660
|
closed
|
[
"usage"
] | 2025-10-12T13:55:07Z
| 2025-10-17T17:12:45Z
| 1
|
tiesanguaixia
|
huggingface/transformers
| 41,533
|
Add_Specifical_tokens and resize_toked_embeddings result in an error
|
### System Info
I want to add a few special tokens to my Qwen2.5-VL model as separators, but after executing the following code I received the error message below. I don't know how to solve this problem.
``` bash
[rank1]: Traceback (most recent call last):
[rank1]: RuntimeError: shape '[-1, 151936]' is invalid for input of size 329273399
[rank0]: Traceback (most recent call last):
[rank0]: RuntimeError: shape '[-1, 151936]' is invalid for input of size 217038339
[rank3]: Traceback (most recent call last):
[rank3]: RuntimeError: shape '[-1, 151936]' is invalid for input of size 116936799
[rank2]: Traceback (most recent call last):
[rank2]: RuntimeError: shape '[-1, 151936]' is invalid for input of size 215673318
Traceback (most recent call last):
File "/home/hk-project-p0022189/tum_yvc3016/miniconda3/envs/qwen2_5-VL/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/init.py", line 355, in wrapper
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
qwenvl/train/train_livecc.py FAILED
Failures:
<NO_OTHER_FAILURES>
Root Cause (first observed failure):
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
```
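The `151936` in the reshape matches the model's original vocabulary size, which suggests the token embeddings / lm_head were not resized after the new special tokens were added. A minimal hedged sketch of the usual pattern, with `model` and `tokenizer` as in the reproduction script below and hypothetical separator tokens:
```python
new_tokens = ["<sep_a>", "<sep_b>"]  # hypothetical separators
num_added = tokenizer.add_special_tokens({"additional_special_tokens": new_tokens})

if num_added > 0:
    # Grow the input embeddings (and tied lm_head) to match the new vocabulary size.
    model.resize_token_embeddings(len(tokenizer))
```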
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
```python
import os
import logging
import pathlib
import torch
import transformers
import json
from typing import Dict
import shutil
import sys
from pathlib import Path

project_root = Path(__file__).parent.parent.parent
sys.path.append(str(project_root))

import qwenvl.train.trainer
from trainer import replace_qwen2_vl_attention_class
from transformers import (
    Qwen2VLForConditionalGeneration,
)
from model_code.modeling_qwen2_5_vl import Qwen2_5_VLForConditionalGeneration
# from qwenvl.data.data_qwen import make_supervised_data_module
from qwenvl.data.lmm_dataset_for_batch import make_supervised_data_module
from qwenvl.train.argument import (
    ModelArguments,
    DataArguments,
    TrainingArguments,
)
from transformers import AutoTokenizer, AutoProcessor, Qwen2VLImageProcessor, Trainer

local_rank = None
os.environ["TOKENIZERS_PARALLELISM"] = "false"


def rank0_print(*args):
    if local_rank == 0:
        print(*args)


def add_special_tokens_safely(tokenizer, new_tokens):
    """
    Safely add new special tokens to the tokenizer while preserving the existing additional_special_tokens.
    Args:
        tokenizer: Hugging Face tokenizer
        model: the corresponding language model
        new_tokens: list of str, the new tokens to add
    Returns:
        bool: whether any new tokens were added
    """
    # Get all tokens currently in the vocabulary
    current_vocab = set(tokenizer.get_vocab().keys())
    # Filter down to the tokens that actually need to be added
    tokens_to_add = [t for t in new_tokens if t not in current_vocab]
    if not tokens_to_add:
        rank0_print("🟢 All specified tokens already exist in the vocabulary; nothing to add.")
        return False
    # Get the existing additional_special_tokens (e.g. <image>, <ref>)
    orig_special_tokens = tokenizer.special_tokens_map.get(
        "additional_special_tokens", []
    )
    # Merge: keep the existing tokens and append the new ones
    updated_special_tokens = orig_special_tokens + [
        t for t in tokens_to_add if t not in orig_special_tokens
    ]
    rank0_print(f"📌 Adding new tokens: {tokens_to_add}")
    rank0_print(f"🔧 Total additional_special_tokens after update: {len(updated_special_tokens)}")
    # Use the add_special_tokens API (it deduplicates automatically)
    num_added = tokenizer.add_special_tokens(
        {"additional_special_tokens": updated_special_tokens}
    )
    if num_added > 0:
        rank0_print(f"✅ Successfully added {num_added} new tokens to the vocabulary")
    return num_added > 0


def safe_save_model_for_hf_trainer(trainer: transformers.Trainer, output_dir: str):
    """Collects the state dict and dump to disk."""
    if trainer.deepspeed:
        torch.cuda.synchronize()
        trainer.save_model(output_dir)
        return
    state_dict = trainer.model.state_dict()
    if trainer.args.should_save:
        cpu_state_dict = {key: value.cpu() for key, value in state_dict.items()}
        del state_dict
        trainer._save(output_dir, state_dict=cpu_state_dict)  # noqa


def set_model(model_args, model):
    if model_args.tune_mm_vision:
        for n, p in model.visual.named_parameters():
            p.requires_grad = True
    else:
        for n, p in model.visual.named_parameters():
            p.requires_grad = False
    if model_args.tune_mm_mlp:
        for n, p in model.visual.merger.named_parameters():
            p.requires_grad = True
    else:
        for n, p in model.visual.merger.named_parameters():
            p.requires_grad = False
    if model_args.tune_mm_llm:
        for n, p in model.model.named_parameters():
            p.requires_grad = True
        model.lm_head.requires_grad = True
    else:
        for n, p in model.model.named_parameters():
            p.requir
```
|
https://github.com/huggingface/transformers/issues/41533
|
closed
|
[
"bug"
] | 2025-10-12T13:50:40Z
| 2025-10-13T14:09:29Z
| 3
|
jialiangZ
|
huggingface/lerobot
| 2,181
|
How to chage SmolVLA action_chunk_size?
|
I want to change 'action_chunk_size' from 50 to 10. I ran the command like this:
```
python lerobot/scripts/train.py --policy.path=lerobot/smolvla_base --dataset.repo_id=Datasets/grasp_put --batch_size=16 --steps=40000 --output_dir=outputs/train/vla_chunk10 --job_name=smolvla_training --policy.device=cuda --policy.push_to_hub=false --policy.action_chunk_size=10
```
but it doesn't work:
`train.py: error: unrecognized arguments: --action_chunk_size=10`
And I found that the parameter is listed in the terminal usage:
`usage: train.py [-h] [--policy.action_chunk_size str]`
How should I resolve this problem?
|
https://github.com/huggingface/lerobot/issues/2181
|
closed
|
[
"question",
"policies",
"python"
] | 2025-10-12T13:29:35Z
| 2025-10-17T11:25:55Z
| null |
CCCY-0304
|
huggingface/transformers
| 41,532
|
where is examples/rag from original paper?
|
### System Info
https://arxiv.org/pdf/2005.11401 mentions https://github.com/huggingface/transformers/blob/main/examples/rag but it is not there. Add redirect if possible
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Go to https://github.com/huggingface/transformers/blob/main/examples/rag
### Expected behavior
some example instead of 404
|
https://github.com/huggingface/transformers/issues/41532
|
closed
|
[
"bug"
] | 2025-10-12T13:17:53Z
| 2025-10-17T09:34:15Z
| null |
IgorKasianenko
|
vllm-project/vllm
| 26,653
|
[Usage]: Qwen3VL image coordinates issue
|
### Your current environment
Hi, I found that with the same image and the same prompt, vLLM serving Qwen3-VL always returns wrong coordinates.
This is the vLLM response:
Response: "{\"click_type\": \"left_click\", \"coordinate\": [815, 961]}"
<img width="1093" height="549" alt="Image" src="https://github.com/user-attachments/assets/f55cb990-03a1-4ac7-912b-e2796c8b854a" />
As you can see, when visualized, the x offset returned by vLLM is far off.
Qwen3 official return. Same A3B model.
Was the input cropped or something?
My server side just used:
```
vllm serve checkpoints/Qwen3-VL-30B-A3B-Instruct \
--dtype auto --max-model-len 4096 \
--api-key token-abc123 \
--gpu_memory_utilization 0.9 \
--trust-remote-code \
--port 8000 \
--served-model-name 'qwen3-vl' \
--max-model-len 8k \
--limit-mm-per-prompt '{"video": 3}' \
--enable-auto-tool-choice \
--tool-call-parser hermes
```
**Note**: when visualizing, I have already mapped the coordinates to image space; here I just compare the raw output, and it is still heavily biased on the x-axis.
### How would you like to use vllm
I want to run inference of a [specific model](put link here). I don't know how to integrate it with vllm.
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/26653
|
closed
|
[
"usage"
] | 2025-10-12T07:02:29Z
| 2025-10-13T03:56:53Z
| 2
|
lucasjinreal
|
huggingface/accelerate
| 3,811
|
ValueError: Could not find the transformer layer class QwenImageTransformerBlock in the model.
|
Hi, I am trying to fine-tune qwen-image-edit using accelerate in FSDP mode. I want to wrap ``QwenImageTransformerBlock`` in the transformer and ``Qwen2_5_VLVisionBlock, Qwen2_5_VLDecoderLayer`` in the text_encoder. I set the environment params
```
def set_fsdp_env():
    os.environ["ACCELERATE_USE_FSDP"] = 'true'
    os.environ["FSDP_AUTO_WRAP_POLICY"] = 'TRANSFORMER_BASED_WRAP'
    os.environ["FSDP_BACKWARD_PREFETCH"] = 'BACKWARD_PRE'
    os.environ["FSDP_TRANSFORMER_CLS_TO_WRAP"] = 'QwenImageTransformerBlock,Qwen2_5_VLVisionBlock,Qwen2_5_VLDecoderLayer'
    os.environ["FSDP_CPU_RAM_EFFICIENT_LOADING"] = 'false'
```
and prepare the two models
```
transformer = accelerator.prepare(transformer)
text_encoder = accelerator.prepare(text_encoder)
```
Finally, I encountered the error raised from ``text_encoder = accelerator.prepare(text_encoder)``
```
ValueError: Could not find the transformer layer class QwenImageTransformerBlock in the model.
```
How can I resolve this problem? Thanks!
|
https://github.com/huggingface/accelerate/issues/3811
|
closed
|
[] | 2025-10-11T10:13:14Z
| 2025-11-22T15:06:54Z
| 2
|
garychan22
|
huggingface/lerobot
| 2,172
|
Add support for remote GPUs (with async inference!)
|
Hello,
I'm a student in a country that is not first-world, and unfortunately I don't own a PC with an NVIDIA GPU - a decent setup costs about $1200. On the other hand, it costs only $0.12-0.24/hr to rent RTX 4090 instances, so it's pretty cheap to simply rent a computer whenever I need to collect data or train.
But to my knowledge LeRobot - unlike e.g. most LLM or vision trainers - runs only locally. I haven't tried, but given async inference it should be very feasible to stream to a local browser from a remote instance, in particular for data collection.
This will make robotics dataset generation (significantly) more accessible.
I may be able to PR this one, it should be straightforward.
Cheers.
|
https://github.com/huggingface/lerobot/issues/2172
|
open
|
[
"enhancement",
"question"
] | 2025-10-11T08:49:32Z
| 2025-12-19T06:35:21Z
| null |
MRiabov
|
huggingface/transformers
| 41,518
|
Add Structured Prompt Templates Registry for LLM / VLM / Diffusion Tasks
|
### Feature request
Introduce transformers.prompt_templates — a YAML-based registry and accessor API:
```
from transformers import PromptTemplates
PromptTemplates.get("summarization") # "Summarize the following text:"
PromptTemplates.list_tasks() # ["summarization","vqa","ocr",...]
```
- Templates stored as yaml/json under src/transformers/prompt_templates/templates/.
- Accessor + validation in registry.py.
- Optional CLI command transformers-cli list-prompts.
- Pipelines can import a template by task name instead of hard-coding (a rough sketch of the registry follows below).
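A minimal sketch of what the accessor in registry.py could look like under this proposal (the file layout and YAML schema are assumptions, not an existing transformers API):
```python
from pathlib import Path

import yaml


class PromptTemplates:
    """Hypothetical YAML-backed registry matching the proposed API above."""

    _TEMPLATE_DIR = Path(__file__).parent / "templates"  # assumed layout
    _cache = None

    @classmethod
    def _load(cls):
        if cls._cache is None:
            cls._cache = {}
            for path in sorted(cls._TEMPLATE_DIR.glob("*.yaml")):
                # each YAML file maps task names to prompt strings
                cls._cache.update(yaml.safe_load(path.read_text()) or {})
        return cls._cache

    @classmethod
    def get(cls, task: str) -> str:
        return cls._load()[task]

    @classmethod
    def list_tasks(cls) -> list:
        return sorted(cls._load())
```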
### Motivation
Every pipeline and model today embeds its own prompt strings (e.g., summarization, OCR, VQA).
This duplication makes results inconsistent and hard to benchmark.
A central registry of task-specific prompt templates would unify defaults and enable easy community additions.
### Your contribution
I’ll implement the registry module, add unit tests and docs, and migrate 1–2 pipelines (summarization / captioning) to use it.
Contributor: [@Aki-07](https://github.com/Aki-07)
|
https://github.com/huggingface/transformers/issues/41518
|
open
|
[
"Feature request"
] | 2025-10-11T08:10:20Z
| 2025-10-13T15:06:20Z
| 2
|
Aki-07
|
vllm-project/vllm
| 26,616
|
[Usage]: How to enable MTP when using Qwen3-Next in local infer ( not vllm serve)
|
### Your current environment
```text
Collecting environment information...
==============================
System Info
==============================
OS : Ubuntu 22.04.2 LTS (x86_64)
GCC version : (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version : Could not collect
CMake version : Could not collect
Libc version : glibc-2.35
==============================
PyTorch Info
==============================
PyTorch version : 2.8.0+cu128
Is debug build : False
CUDA used to build PyTorch : 12.8
ROCM used to build PyTorch : N/A
==============================
Python Environment
==============================
Python version : 3.12.11 | packaged by Anaconda, Inc. | (main, Jun 5 2025, 13:09:17) [GCC 11.2.0] (64-bit runtime)
Python platform : Linux-4.18.0-2.6.8.kwai.x86_64-x86_64-with-glibc2.35
==============================
CUDA / GPU Info
==============================
Is CUDA available : True
CUDA runtime version : 11.8.89
CUDA_MODULE_LOADING set to : LAZY
GPU models and configuration :
GPU 0: NVIDIA A100 80GB PCIe
GPU 1: NVIDIA A100 80GB PCIe
GPU 2: NVIDIA A100 80GB PCIe
GPU 3: NVIDIA A100 80GB PCIe
Nvidia driver version : 550.54.14
cuDNN version : Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.0
HIP runtime version : N/A
MIOpen runtime version : N/A
Is XNNPACK available : True
==============================
CPU Info
==============================
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 96
On-line CPU(s) list: 0-95
Vendor ID: AuthenticAMD
Model name: AMD EPYC 7V13 64-Core Processor
CPU family: 25
Model: 1
Thread(s) per core: 1
Core(s) per socket: 48
Socket(s): 2
Stepping: 1
BogoMIPS: 4890.88
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl cpuid extd_apicid aperfmperf pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core invpcid_single vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat umip vaes vpclmulqdq rdpid
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 3 MiB (96 instances)
L1i cache: 3 MiB (96 instances)
L2 cache: 48 MiB (96 instances)
L3 cache: 384 MiB (12 instances)
NUMA node(s): 4
NUMA node0 CPU(s): 0-23
NUMA node1 CPU(s): 24-47
NUMA node2 CPU(s): 48-71
NUMA node3 CPU(s): 72-95
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable, STIBP: disabled
Vulnerability Tsx async abort: Not affected
==============================
Versions of relevant libraries
==============================
[pip3] numpy==2.2.6
[pip3] nvidia-cublas-cu12==12.8.4.1
[pip3] nvidia-cuda-cupti-cu12==12.8.90
[pip3] nvidia-cuda-nvrtc-cu12==12.8.93
[pip3] nvidia-cuda-runtime-cu12==12.8.90
[pip3] nvidia-cudnn-cu12==9.10.2.21
[pip3] nvidia-cudnn-frontend==1.14.1
[pip3] nvidia-cufft-cu12==11.3.3.83
[pip3] nvidia-cufile-cu12==1.13.1.3
[pip3] nvidia-curand-cu12==10.3.9.90
[pip3] nvidia-cusolver-cu12==11.7.3.90
[pip3] nvidia-cusparse-cu12==12.5.8.93
[pip3] nvidia-cusparselt-cu12==0.7.1
[pip3] nvidia-ml-py==13.580.82
[pip3] nvidia-nccl-cu12==2.27.3
[pip3] nvidia-nvjitlink-cu12==12.8.93
[pip3] nvidia-nvtx-cu12==12.8.90
[pip3] pyzmq==27.1.0
[pip3] torch==2.8.0
[pip3] torchaudio==2.8.0
[pip3] torchvision==0.23.0
[pip3] transformers==4.57.0
[pip3] triton==3.4.0
[conda] nu
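The environment dump above is truncated. As for the question in the title, below is a hedged sketch of what enabling MTP for offline (local) inference could look like. It assumes the `LLM` constructor accepts the same `speculative_config` dictionary that the Qwen3-Next recipes pass to `vllm serve` via `--speculative-config`; the method string, speculative token count, and model id are taken from those recipes and may need adjusting for your vLLM version.
```python
from vllm import LLM, SamplingParams

# Hedged sketch, not a verified configuration: mirrors the --speculative-config JSON
# from the published Qwen3-Next serving recipes, applied to the offline LLM entrypoint.
llm = LLM(
    model="Qwen/Qwen3-Next-80B-A3B-Instruct",  # assumed model id for illustration
    tensor_parallel_size=4,
    speculative_config={"method": "qwen3_next_mtp", "num_speculative_tokens": 2},
)
outputs = llm.generate(
    ["Give me a short introduction to multi-token prediction."],
    SamplingParams(max_tokens=64),
)
print(outputs[0].outputs[0].text)
```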
|
https://github.com/vllm-project/vllm/issues/26616
|
open
|
[
"usage"
] | 2025-10-11T03:58:14Z
| 2025-10-16T08:45:35Z
| 1
|
Kimagure7
|
vllm-project/vllm
| 26,614
|
[Usage]: attn_metadata.seq_lens is not equal to attn_metadata.num_actual_tokens
|
### Your current environment
```
Collecting environment information...
uv is set
==============================
System Info
==============================
OS : Ubuntu 20.04.6 LTS (x86_64)
GCC version : (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version : Could not collect
CMake version : version 3.16.3
Libc version : glibc-2.31
==============================
PyTorch Info
==============================
PyTorch version : 2.8.0+cu128
Is debug build : False
CUDA used to build PyTorch : 12.8
ROCM used to build PyTorch : N/A
==============================
Python Environment
==============================
Python version : 3.12.11 (main, Jul 23 2025, 00:34:44) [Clang 20.1.4 ] (64-bit runtime)
Python platform : Linux-5.4.0-216-generic-x86_64-with-glibc2.31
==============================
CUDA / GPU Info
==============================
Is CUDA available : True
CUDA runtime version : Could not collect
CUDA_MODULE_LOADING set to : LAZY
GPU models and configuration :
GPU 0: NVIDIA H20
GPU 1: NVIDIA H20
GPU 2: NVIDIA H20
GPU 3: NVIDIA H20
GPU 4: NVIDIA H20
GPU 5: NVIDIA H20
GPU 6: NVIDIA H20
GPU 7: NVIDIA H20
Nvidia driver version : 555.42.06
cuDNN version : Could not collect
HIP runtime version : N/A
MIOpen runtime version : N/A
Is XNNPACK available : True
==============================
CPU Info
==============================
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 52 bits physical, 57 bits virtual
CPU(s): 224
On-line CPU(s) list: 0-223
Thread(s) per core: 2
Core(s) per socket: 56
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 143
Model name: Intel(R) Xeon(R) Platinum 8480+
Stepping: 8
Frequency boost: enabled
CPU MHz: 900.000
CPU max MHz: 2001.0000
CPU min MHz: 800.0000
BogoMIPS: 4000.00
Virtualization: VT-x
L1d cache: 5.3 MiB
L1i cache: 3.5 MiB
L2 cache: 224 MiB
L3 cache: 210 MiB
NUMA node0 CPU(s): 0-55,112-167
NUMA node1 CPU(s): 56-111,168-223
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local avx512_bf16 wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq rdpid cldemote movdiri movdir64b md_clear pconfig flush_l1d arch_capabilities
==============================
Versions of relevant libraries
==============================
[pip3] numpy==2.2.0
[pip3] nvidia-cublas-cu12==12.8.4.1
[pip3] nvidia-cuda-cupti-cu12==12.8.90
[pip3] nvidia-cuda-nvrtc-cu12==12.8.93
[pip3] nvidia-cuda-runtime-cu12==12.8.90
[pip3] nvidia-
|
https://github.com/vllm-project/vllm/issues/26614
|
open
|
[
"usage"
] | 2025-10-11T03:35:38Z
| 2025-10-11T03:36:31Z
| 0
|
betacatZ
|
vllm-project/vllm
| 26,612
|
[Usage]: Error when starting the vLLM server with Qwen3-VL 30B A3B
|
### 📚 The doc issue
A_A800-SXM4-80GB.json']
(Worker pid=1939690) INFO 10-11 10:42:13 [monitor.py:34] torch.compile takes 85.33 s in total
(Worker pid=1939690) INFO 10-11 10:42:14 [gpu_worker.py:298] Available KV cache memory: 13.69 GiB
(EngineCore_DP0 pid=1937911) ERROR 10-11 10:42:14 [core.py:708] EngineCore failed to start.
(EngineCore_DP0 pid=1937911) ERROR 10-11 10:42:14 [core.py:708] Traceback (most recent call last):
(EngineCore_DP0 pid=1937911) ERROR 10-11 10:42:14 [core.py:708] File "/home/ma-user/work/renkexuan/.venv/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 699, in run_engine_core
(EngineCore_DP0 pid=1937911) ERROR 10-11 10:42:14 [core.py:708] engine_core = EngineCoreProc(*args, **kwargs)
(EngineCore_DP0 pid=1937911) ERROR 10-11 10:42:14 [core.py:708] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=1937911) ERROR 10-11 10:42:14 [core.py:708] File "/home/ma-user/work/renkexuan/.venv/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 498, in __init__
(EngineCore_DP0 pid=1937911) ERROR 10-11 10:42:14 [core.py:708] super().__init__(vllm_config, executor_class, log_stats,
(EngineCore_DP0 pid=1937911) ERROR 10-11 10:42:14 [core.py:708] File "/home/ma-user/work/renkexuan/.venv/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 92, in __init__
(EngineCore_DP0 pid=1937911) ERROR 10-11 10:42:14 [core.py:708] self._initialize_kv_caches(vllm_config)
(EngineCore_DP0 pid=1937911) ERROR 10-11 10:42:14 [core.py:708] File "/home/ma-user/work/renkexuan/.venv/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 199, in _initialize_kv_caches
(EngineCore_DP0 pid=1937911) ERROR 10-11 10:42:14 [core.py:708] kv_cache_configs = get_kv_cache_configs(vllm_config, kv_cache_specs,
(EngineCore_DP0 pid=1937911) ERROR 10-11 10:42:14 [core.py:708] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=1937911) ERROR 10-11 10:42:14 [core.py:708] File "/home/ma-user/work/renkexuan/.venv/lib/python3.12/site-packages/vllm/v1/core/kv_cache_utils.py", line 1243, in get_kv_cache_configs
(EngineCore_DP0 pid=1937911) ERROR 10-11 10:42:14 [core.py:708] check_enough_kv_cache_memory(vllm_config, kv_cache_spec_one_worker,
(EngineCore_DP0 pid=1937911) ERROR 10-11 10:42:14 [core.py:708] File "/home/ma-user/work/renkexuan/.venv/lib/python3.12/site-packages/vllm/v1/core/kv_cache_utils.py", line 716, in check_enough_kv_cache_memory
(EngineCore_DP0 pid=1937911) ERROR 10-11 10:42:14 [core.py:708] raise ValueError(
(EngineCore_DP0 pid=1937911) ERROR 10-11 10:42:14 [core.py:708] ValueError: To serve at least one request with the models's max seq len (262144), (24.00 GiB KV cache is needed, which is larger than the available KV cache memory (13.69 GiB). Based on the available memory, the estimated maximum model length is 149520. Try increasing `gpu_memory_utilization` or decreasing `max_model_len` when initializing the engine.
(EngineCore_DP0 pid=1937911) ERROR 10-11 10:42:17 [multiproc_executor.py:154] Worker proc VllmWorker-0 died unexpectedly, shutting down executor.
(EngineCore_DP0 pid=1937911) Process EngineCore_DP0:
(EngineCore_DP0 pid=1937911) Traceback (most recent call last):
(EngineCore_DP0 pid=1937911) File "/home/ma-user/work/anaconda3_flash_attn/envs/qwen3_vl/lib/python3.12/multiprocessing/process.py", line 314, in _bootstrap
(EngineCore_DP0 pid=1937911) self.run()
(EngineCore_DP0 pid=1937911) File "/home/ma-user/work/anaconda3_flash_attn/envs/qwen3_vl/lib/python3.12/multiprocessing/process.py", line 108, in run
(EngineCore_DP0 pid=1937911) self._target(*self._args, **self._kwargs)
(EngineCore_DP0 pid=1937911) File "/home/ma-user/work/renkexuan/.venv/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 712, in run_engine_core
(EngineCore_DP0 pid=1937911) raise e
(EngineCore_DP0 pid=1937911) File "/home/ma-user/work/renkexuan/.venv/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 699, in run_engine_core
(EngineCore_DP0 pid=1937911) engine_core = EngineCoreProc(*args, **kwargs)
(EngineCore_DP0 pid=1937911) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=1937911) File "/home/ma-user/work/renkexuan/.venv/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 498, in __init__
(EngineCore_DP0 pid=1937911) super().__init__(vllm_config, executor_class, log_stats,
(EngineCore_DP0 pid=1937911) File "/home/ma-user/work/renkexuan/.venv/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 92, in __init__
(EngineCore_DP0 pid=1937911) self._initialize_kv_caches(vllm_config)
(EngineCore_DP0 pid=1937911) File "/home/ma-user/work/renkexuan/.venv/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 199, in _initialize_kv_caches
(EngineCore_DP0 pid=1937911) kv_cache_configs = get_kv_cache_configs(vllm_config, kv_cache_specs,
(EngineCore_DP0 pid=1937911) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(Engin
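The traceback is cut off above, but the ValueError already names the two knobs to turn: raise `gpu_memory_utilization` or lower `max_model_len` (the error estimates a maximum length of about 149520 under the current memory budget). A minimal offline sketch with placeholder values is below; the same options are available as the `--gpu-memory-utilization` and `--max-model-len` flags of `vllm serve`.
```python
from vllm import LLM

# Minimal sketch of the mitigation named in the ValueError above; the model id and
# the numbers are placeholders, not tuned values.
llm = LLM(
    model="Qwen/Qwen3-VL-30B-A3B-Instruct",  # assumed model id for illustration
    max_model_len=131072,                    # below the ~149520 estimate in the error
    gpu_memory_utilization=0.95,             # or raise this instead of lowering max_model_len
)
```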
|
https://github.com/vllm-project/vllm/issues/26612
|
closed
|
[
"usage"
] | 2025-10-11T02:45:20Z
| 2025-10-16T23:00:39Z
| 1
|
renkexuan369
|
huggingface/lerobot
| 2,171
|
Dataset sharing and data format conversion
|
1. Can datasets collected in LeRobot format be shared/disseminated?
2. Can data formats be converted between different LeRobot versions? I noticed that the data format collected in version 0.2.0 is different from the latest data format.
Thank you!
|
https://github.com/huggingface/lerobot/issues/2171
|
open
|
[
"question",
"dataset"
] | 2025-10-11T02:16:55Z
| 2025-10-17T02:02:36Z
| null |
FALCONYU
|
vllm-project/vllm
| 26,607
|
[Bug]: Since version 0.9.2 vLLM comes with NCCL built-in, and using PCIe causes system errors. How can NCCL be disabled in vLLM for versions after 0.9.2?
|
### Your current environment
<details>
<summary>The output of <code>python collect_env.py</code></summary>
```text
Your output of `python collect_env.py` here
```
</details>
<img width="833" height="138" alt="Image" src="https://github.com/user-attachments/assets/a42c415b-8c5b-4698-aa6f-879edc44d512" />
### 🐛 Describe the bug
sh 06_startVllmAPI.sh
INFO 09-30 10:30:16 [__init__.py:216] Automatically detected platform cuda.
(APIServer pid=1599676) INFO 09-30 10:30:17 [api_server.py:1896] vLLM API server version 0.10.2
(APIServer pid=1599676) INFO 09-30 10:30:17 [utils.py:328] non-default args: {'port': 6006, 'model': './autodl-tmp/modelscope/models/GeoGPT/Qwen2.5-72B-GeoGPT', 'tokenizer': './autodl-tmp/modelscope/models/GeoGPT/Qwen2.5-72B-GeoGPT', 'trust_remote_code': True, 'dtype': 'bfloat16', 'served_model_name': ['Qwen2.5-72B-GeoGPT'], 'tensor_parallel_size': 8, 'gpu_memory_utilization': 0.5}
(APIServer pid=1599676) The argument `trust_remote_code` is to be used with Auto classes. It has no effect here and is ignored.
(APIServer pid=1599676) INFO 09-30 10:30:24 [__init__.py:742] Resolved architecture: Qwen2ForCausalLM
(APIServer pid=1599676) `torch_dtype` is deprecated! Use `dtype` instead!
(APIServer pid=1599676) INFO 09-30 10:30:24 [__init__.py:1815] Using max model len 131072
(APIServer pid=1599676) INFO 09-30 10:30:24 [scheduler.py:222] Chunked prefill is enabled with max_num_batched_tokens=2048.
INFO 09-30 10:30:29 [__init__.py:216] Automatically detected platform cuda.
(EngineCore_DP0 pid=1600151) INFO 09-30 10:30:31 [core.py:654] Waiting for init message from front-end.
(EngineCore_DP0 pid=1600151) INFO 09-30 10:30:31 [core.py:76] Initializing a V1 LLM engine (v0.10.2) with config: model='./autodl-tmp/modelscope/models/GeoGPT/Qwen2.5-72B-GeoGPT', speculative_config=None, tokenizer='./autodl-tmp/modelscope/models/GeoGPT/Qwen2.5-72B-GeoGPT', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, tokenizer_revision=None, trust_remote_code=True, dtype=torch.bfloat16, max_seq_len=131072, download_dir=None, load_format=auto, tensor_parallel_size=8, pipeline_parallel_size=1, data_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, kv_cache_dtype=auto, device_config=cuda, decoding_config=DecodingConfig(backend='auto', disable_fallback=False, disable_any_whitespace=False, disable_additional_properties=False, reasoning_backend=''), observability_config=ObservabilityConfig(show_hidden_metrics_for_version=None, otlp_traces_endpoint=None, collect_detailed_traces=None), seed=0, served_model_name=Qwen2.5-72B-GeoGPT, enable_prefix_caching=True, chunked_prefill_enabled=True, use_async_output_proc=True, pooler_config=None, compilation_config={"level":3,"debug_dump_path":"","cache_dir":"","backend":"","custom_ops":[],"splitting_ops":["vllm.unified_attention","vllm.unified_attention_with_output","vllm.mamba_mixer2","vllm.mamba_mixer","vllm.short_conv","vllm.linear_attention","vllm.plamo2_mamba_mixer","vllm.gdn_attention"],"use_inductor":true,"compile_sizes":[],"inductor_compile_config":{"enable_auto_functionalized_v2":false},"inductor_passes":{},"cudagraph_mode":1,"use_cudagraph":true,"cudagraph_num_of_warmups":1,"cudagraph_capture_sizes":[512,504,496,488,480,472,464,456,448,440,432,424,416,408,400,392,384,376,368,360,352,344,336,328,320,312,304,296,288,280,272,264,256,248,240,232,224,216,208,200,192,184,176,168,160,152,144,136,128,120,112,104,96,88,80,72,64,56,48,40,32,24,16,8,4,2,1],"cudagraph_copy_inputs":false,"full_cuda_graph":false,"pass_config":{},"max_capture_size":512,"local_cache_dir":null}
(EngineCore_DP0 pid=1600151) WARNING 09-30 10:30:31 [multiproc_worker_utils.py:273] Reducing Torch parallelism from 64 threads to 1 to avoid unnecessary CPU contention. Set OMP_NUM_THREADS in the external environment to tune this value as needed.
(EngineCore_DP0 pid=1600151) INFO 09-30 10:30:31 [shm_broadcast.py:289] vLLM message queue communication handle: Handle(local_reader_ranks=[0, 1, 2, 3, 4, 5, 6, 7], buffer_handle=(8, 16777216, 10, 'psm_7e0498ff'), local_subscribe_addr='ipc:///tmp/33a7ec3b-72b3-4984-9ed3-6fc1fb572c4a', remote_subscribe_addr=None, remote_addr_ipv6=False)
INFO 09-30 10:30:35 [__init__.py:216] Automatically detected platform cuda.
INFO 09-30 10:30:35 [__init__.py:216] Automatically detected platform cuda.
INFO 09-30 10:30:35 [__init__.py:216] Automatically detected platform cuda.
INFO 09-30 10:30:35 [__init__.py:216] Automatically detected platform cuda.
INFO 09-30 10:30:35 [__init__.py:216] Automatically detected platform cuda.
INFO 09-30 10:30:35 [__init__.py:216] Automatically detected platform cuda.
INFO 09-30 10:30:35 [__init__.py:216] Automatically detected platform cuda.
INFO 09-30 10:30:35 [__init__.py:216] Automatically detected platform cuda.
INFO 09-30 10:30:40 [shm_broadcast.py:289] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_1413bf45'), local_subscribe_addr='ipc:///tmp/a
|
https://github.com/vllm-project/vllm/issues/26607
|
open
|
[
"bug"
] | 2025-10-11T01:48:50Z
| 2025-10-17T01:09:03Z
| 0
|
tina0852
|
pytorch/pytorch
| 165,177
|
cryptic symbolic shape error with FSDP2 and torch.compile
|
### 🐛 Describe the bug
Using FSDP2 and torch.compile with Llama3 (and most other generative models on Hugging Face), I get the following error:
```
AssertionError: s52 (could be from ["L['position_ids']._base.size()[0]"]) not in {
s53: ["L['attention_mask'].size()[1]", "L['attention_mask'].stride()[0]"],
s58: ["L['cache_position'].size()[0]", "L['position_ids']._base.size()[0]"],
s55: ["L['input_embeds'].size()[1]"],
s9: ["L['position_ids'].size()[1]", "L['position_ids'].stride()[0]"],
s52: []
}. If this assert is failing, it could be due to the issue described in https://github.com/pytorch/pytorch/pull/90665
```
A reasonable suggestion would be that there is something in the code that isn't torch.compile-friendly. That is certainly possible. However, without `fully_shard` but with `torch.compile()` there is no error. Hence, I'm inclined to believe it's a bug in how torch.compile and FSDP2 interact. The link https://github.com/pytorch/pytorch/pull/90665 was not instructive as to the cause.
The following code reproduces the error. The error reproduces with 1 GPU, 2 GPUs, 2x8 GPUs, and possibly more settings.
```py
import os
import numpy as np
import torch
from torch.distributed.fsdp import fully_shard
import torch.nn.functional as F
from transformers import AutoModelForCausalLM
from transformers.models.llama.modeling_llama import LlamaDecoderLayer
def init_distributed() -> tuple[int,torch.device]:
world_size = int(os.environ.get("WORLD_SIZE", 1))
rank = int(os.environ.get("RANK", 0))
local_rank = int(os.environ.get("LOCAL_RANK", 0))
if "SLURM_NTASKS" in os.environ:
world_size = int(os.environ["SLURM_NTASKS"])
rank = int(os.environ.get("SLURM_PROCID", os.environ.get("SLURM_TASK_PID", 0)))
local_rank = int(os.environ.get("SLURM_LOCALID", int(os.environ.get("SLURM_PROCID", 0)) % (torch.cuda.device_count() or 1)))
torch.cuda.set_device(local_rank)
init_method = os.environ.get("INIT_METHOD", "env://")
torch.distributed.init_process_group(
backend="nccl",
init_method=init_method,
world_size=world_size,
rank=rank
)
return rank, torch.device(f"cuda:{local_rank}")
def main():
# Setup distributed + device
rank, device = init_distributed()
# Only rank 0 downloads/prepares model weights; others wait.
gen_model = AutoModelForCausalLM.from_pretrained(
'meta-llama/Meta-Llama-3-8B-Instruct',
use_safetensors=True,
dtype=torch.bfloat16,
pad_token_id=0,
use_cache=False
)
torch.distributed.barrier()
if gen_model is None:
gen_model = AutoModelForCausalLM.from_pretrained(
'meta-llama/Meta-Llama-3-8B-Instruct',
use_safetensors=True,
pad_token_id=0,
use_cache=False
)
gen_model.to(device)
for submodule in gen_model.model.layers:
if isinstance(submodule, LlamaDecoderLayer):
fully_shard(submodule)
fully_shard(gen_model)
gen_model = torch.compile(gen_model) # type: ignore
torch.distributed.barrier()
dataset = [
np.array([[1, 27, 91, 882, 91, 397]], dtype=np.int64),
np.array([[1, 27, 91, 882, 91, 397, 45, 45, 45, 45, 45, 45]], dtype=np.int64)
]
assert gen_model is not None
for input_ids in dataset:
batch = torch.from_numpy(input_ids).to(device)
# padding changes the issue to a crash with no error.
# batch = F.pad(torch.from_numpy(input_ids), (0, 200 - input_ids.shape[1]), value=0).to(device)
logits = gen_model(input_ids=batch,
attention_mask=torch.ones_like(batch)).logits # AssertionError
print('SUCCESS')
if __name__ == "__main__":
torch.set_float32_matmul_precision('high')
main()
```
Full error:
```
Running setup on gpu-54
Using CPython 3.13.7
Creating virtual environment at: /tmp/pyenv
Activate with: source /tmp/pyenv/bin/activate
Cloning into '/tmp/code'...
done.
/tmp/code ~/workspace/reward-based-ift
HEAD is now at 72d2b68 job submission
Using Python 3.13.7 environment at: /tmp/pyenv
Resolved 134 packages in 1.15s
Building gl @ file:///tmp/code
Built gl @ file:///tmp/code
Prepared 1 package in 1.48s
Installed 134 packages in 3.04s
+ ai2-olmo-eval==0.8.5
+ aiofiles==24.1.0
+ aiohappyeyeballs==2.6.1
+ aiohttp==3.11.18
+ aiosignal==1.4.0
+ annotated-types==0.7.0
+ attrs==25.4.0
+ boto3==1.40.49
+ botocore==1.40.49
+ cached-path==1.8.0
+ cachetools==6.2.0
+ certifi==2025.10.5
+ charset-normalizer==3.4.3
+ click==8.3.0
+ datasets==4.0.0
+ deepspeed==0.16.9
+ dill==0.3.8
+ distlib==0.4.0
+ docker-pycreds==0.4.0
+ einops==0.8.1
+ filelock==3.20.0
+ frozenlist==1.8.0
+ fsspec==2025.3.0
+ gitdb==4.0.12
+ gitpython==3.1.45
+ gl==0.1.0 (from file:///tmp/code)
+ google-api-core==2.26.0
+ google-auth==2.41.1
+ google-cloud-core==2.4.3
+ google-cloud-storage==2.19.0
+ google-crc32c==1.7.1
+ google-resumable-media==2.7.2
|
https://github.com/pytorch/pytorch/issues/165177
|
closed
|
[
"high priority",
"oncall: distributed",
"triaged",
"oncall: pt2",
"module: dynamic shapes"
] | 2025-10-10T19:53:04Z
| 2025-10-30T18:03:53Z
| 8
|
AndreasMadsen
|
pytorch/tutorials
| 3,611
|
Feedback about Quickstart
|
There is the following issue on this page: https://docs.pytorch.org/tutorials/beginner/basics/quickstart_tutorial.html#optimizing-the-model-parameters
System specs: Windows 11, python3.11, pytorch==2.8.0+xpu, Intel oneAPI 2025.2.
I've been following this tutorial, and I got this error raised from the test function:
```
correct += (pred.argmax(1) == y).type(torch.float).sum().item()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: UR error
```
I checked the compatibility of oneAPI, PyTorch, and intel_extension_for_pytorch:
```
print(torch.xpu._is_compiled())
print(torch.xpu.is_available())
```
Both print True.
I'm really new to ML and NN, but not to development, so I tried using torch.FloatTensor:
`correct += (pred.argmax(1) == y).type(torch.FloatTensor).sum().item()`
It works and the output almost matches what's given in the tutorial.
I hope what I did is correct in terms of ML.
If not, please suggest where I can look to understand this better.
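For reference, `torch.FloatTensor` is a CPU tensor type, so `.type(torch.FloatTensor)` silently copies the comparison result to the CPU before summing, which is why it sidesteps the failing XPU kernel. Below is a small sketch of two more explicit ways to write the same accumulation; the tensors stand in for the tutorial's `pred` and `y`, and whether the integer-sum variant also avoids the UR error on XPU is untested here.
```python
import torch

# Stand-ins for the tutorial's model output and labels; on an XPU run these would
# live on the xpu device.
pred = torch.randn(8, 10)
y = torch.randint(0, 10, (8,))

correct_a = (pred.argmax(1) == y).sum().item()                # bool -> integer count, no float cast
correct_b = (pred.argmax(1) == y).cpu().float().sum().item()  # explicit CPU fallback, same effect as FloatTensor
print(correct_a, correct_b)
```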
cc @albanD @jbschlosser @gujinghui @EikanWang @fengyuan14 @guangyey
|
https://github.com/pytorch/tutorials/issues/3611
|
open
|
[
"question",
"core",
"module: xpu",
"windows"
] | 2025-10-10T17:06:21Z
| 2025-10-20T03:25:53Z
| null |
BhavneetSingh7
|
huggingface/hf-hub
| 131
|
InvalidCertificate and how to fix it
|
I am trying to install a DuckDB extension written in Rust (https://github.com/martin-conur/quackformers) that uses the hf-hub library.
During the install, I am getting a
```
HfHub(RequestError(Transport(Transport { kind: ConnectionFailed, message: Some("tls connection init failed"), url: Some(Url { scheme: "https", cannot_be_a_base: false, username: "", password: None, host: Some(Domain("huggingface.co")), port: None, path: "/sentence-transformers/all-MiniLM-L6-v2/resolve/main/tokenizer.json", query: None, fragment: None }), source: Some(Custom { kind: InvalidData, error: InvalidCertificate(UnknownIssuer) }) })))
```
The file can be accessed from my environment via curl.
The file can be accessed from DuckDB using their `httpfs` extension which is written in C/C++.
I am working in an environment with a very strict enterprise proxy, and this is most likely what's causing the issue (I have zero issues when running the same commands at home).
1. can the behavior of HfHub with respect to proxy be modified using env variables?
2. can the behavior of HfHub with respect to TLS certificates be modified using env variables?
3. where can I find the default value(s) for the proxy settings and the location of the certs used by the library?
References:
- bug report for quackformer = https://github.com/martin-conur/quackformers/issues/7
|
https://github.com/huggingface/hf-hub/issues/131
|
open
|
[] | 2025-10-10T14:42:12Z
| 2025-10-10T18:18:28Z
| null |
sahuguet
|
vllm-project/vllm
| 26,585
|
[Usage]: use vllm embedding to extract last token hidden states?
|
### Your current environment
```/usr/local/lib/python3.12/dist-packages/torch/cuda/__init__.py:63: FutureWarning: The pynvml package is deprecated. Please install nvidia-ml-py instead. If you did not install pynvml directly, please report this to the maintainers of the package that installed pynvml for you.
import pynvml # type: ignore[import]
Collecting environment information...
==============================
System Info
==============================
OS : Ubuntu 22.04.5 LTS (x86_64)
GCC version : (Ubuntu 11.4.0-1ubuntu1~22.04.2) 11.4.0
Clang version : 14.0.0-1ubuntu1.1
CMake version : version 3.21.0
Libc version : glibc-2.35
==============================
PyTorch Info
==============================
PyTorch version : 2.8.0+cu128
Is debug build : False
CUDA used to build PyTorch : 12.8
ROCM used to build PyTorch : N/A
==============================
Python Environment
==============================
Python version : 3.12.11 (main, Jun 4 2025, 08:56:18) [GCC 11.4.0] (64-bit runtime)
Python platform : Linux-5.10.134-16.3.al8.x86_64-x86_64-with-glibc2.35
==============================
CUDA / GPU Info
==============================
Is CUDA available : True
CUDA runtime version : Could not collect
CUDA_MODULE_LOADING set to : LAZY
GPU models and configuration : GPU 0: NVIDIA H20-3e
Nvidia driver version : 570.133.20
cuDNN version : Could not collect
HIP runtime version : N/A
MIOpen runtime version : N/A
Is XNNPACK available : True
==============================
CPU Info
==============================
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 192
On-line CPU(s) list: 0-191
Vendor ID: GenuineIntel
Model name: INTEL(R) XEON(R) PLATINUM 8575C
CPU family: 6
Model: 207
Thread(s) per core: 2
Core(s) per socket: 48
Socket(s): 2
Stepping: 2
CPU max MHz: 4000.0000
CPU min MHz: 800.0000
BogoMIPS: 5600.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm uintr md_clear serialize tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 4.5 MiB (96 instances)
L1i cache: 3 MiB (96 instances)
L2 cache: 192 MiB (96 instances)
L3 cache: 640 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-47,96-143
NUMA node1 CPU(s): 48-95,144-191
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
==============================
Versions of relevant libraries
==============================
[pip3] flashinfer-pytho
|
https://github.com/vllm-project/vllm/issues/26585
|
closed
|
[
"usage"
] | 2025-10-10T13:01:42Z
| 2025-12-15T06:54:05Z
| 2
|
rxqy
|
vllm-project/vllm
| 26,582
|
[Bug]: which triton-kernels version for MXFP4 Triton backend?
|
### Your current environment
vllm v0.11.0 installed via `uv pip install vllm --torch-backend=auto`
triton + triton-kernels at different commits installed from source
### 🐛 Describe the bug
**Which triton + triton-kernels version does one have to install to run GPT-OSS with the MXFP4 Triton backend?**
No matter which version I try, I always get an error `Failed to import Triton kernels. Please make sure your triton version is compatible.`
Clearly, the latest triton-kernels will not work since the code in `vllm.model_executor.layers.fused_moe.gpt_oss_triton_kernels_moe` tries to import from `triton_kernels.routing`, but `triton_kernels.routing` has been deprecated (cf. https://github.com/triton-lang/triton/commit/30ede52aa2aecfd2ab3d6672ed21bbf4eb6438b3).
But also with older versions I get errors like `ImportError: cannot import name 'triton_key' from 'triton.compiler.compiler` or `Error: No module named 'triton.language.target_info`.
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/26582
|
closed
|
[
"bug"
] | 2025-10-10T11:51:59Z
| 2025-12-12T20:30:06Z
| 8
|
matkle
|
huggingface/lerobot
| 2,162
|
[Question] How to suppress verbose Svt[info] logs from video encoding during save_episode()?
|
Hi, thank you for this fantastic library!
I am currently using lerobot (Version: 0.3.3) to record and save robotics data. When I use the `dataset.save_episode()` method, I get a large number of verbose log messages prefixed with Svt[info]:
```shell
Svt[info]: ------------------------------------------- | 0/1 [00:00<?, ?ba/s]
Svt[info]: SVT [version]: SVT-AV1 Encoder Lib v3.0.0
Svt[info]: SVT [build] : GCC 14.2.1 20250110 (Red Hat 14.2.1-7) 64 bit
Svt[info]: LIB Build date: Jul 3 2025 03:14:07
Svt[info]: -------------------------------------------
Svt[info]: Level of Parallelism: 5
Svt[info]: Number of PPCS 140
Svt[info]: [asm level on system : up to avx2]
Svt[info]: [asm level selected : up to avx2]
Svt[info]: -------------------------------------------
Svt[info]: SVT [config]: main profile tier (auto) level (auto)
Svt[info]: SVT [config]: width / height / fps numerator / fps denominator : 256 / 256 / 30 / 1
Svt[info]: SVT [config]: bit-depth / color format : 8 / YUV420
Svt[info]: SVT [config]: preset / tune / pred struct : 8 / PSNR / random access
Svt[info]: SVT [config]: gop size / mini-gop size / key-frame type : 2 / 32 / key frame
Svt[info]: SVT [config]: BRC mode / rate factor : CRF / 30
Svt[info]: SVT [config]: AQ mode / variance boost : 2 / 0
Svt[info]: SVT [config]: sharpness / luminance-based QP bias : 0 / 0
Svt[info]: Svt[info]: -------------------------------------------
Map: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 712/712 [00:00<00:00, 4740.68 examples/s]
Creating parquet from Arrow format: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 738.56ba/s]
```
While these logs are informative, they clutter the console output, especially when saving a large number of episodes in a loop. I would like to find a way to suppress them.
I tried redirecting stdout and stderr:
```python
import os
from contextlib import redirect_stdout, redirect_stderr
with open(os.devnull, 'w') as f_null:
with redirect_stderr(f_null), redirect_stdout(f_null):
dataset.save_episode()
```
But it doesn't work.
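(The redirect above only swaps Python's `sys.stdout`/`sys.stderr` objects, while the SVT-AV1 encoder is native code writing straight to the OS-level file descriptors, so nothing changes. Below is a sketch of file-descriptor-level redirection with `os.dup2`; it is untested against `save_episode()`, and you may need to redirect fd 1 as well depending on where the Svt[info] lines actually go.)
```python
import os
from contextlib import contextmanager

@contextmanager
def suppress_native_stderr():
    """Temporarily point the OS-level stderr (fd 2) at /dev/null so that native
    libraries such as the SVT-AV1 encoder cannot write to the console."""
    devnull = os.open(os.devnull, os.O_WRONLY)
    saved = os.dup(2)            # keep a copy of the real stderr
    try:
        os.dup2(devnull, 2)      # fd 2 -> /dev/null
        yield
    finally:
        os.dup2(saved, 2)        # restore stderr
        os.close(saved)
        os.close(devnull)

# Hypothetical usage:
# with suppress_native_stderr():
#     dataset.save_episode()
```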
Any guidance on how to achieve a quieter output would be appreciated.
|
https://github.com/huggingface/lerobot/issues/2162
|
closed
|
[
"question",
"dataset"
] | 2025-10-10T08:56:52Z
| 2025-10-13T05:43:01Z
| null |
zxytql
|
huggingface/transformers
| 41,494
|
Incorrect tokenizer created for gemma gguf files
|
### System Info
- `transformers` version: 4.57.0
- Platform: Linux-5.15.0-144-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.34.4
- Safetensors version: 0.5.3
- Accelerate version: 0.34.2
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (accelerator?): 2.3.1+cu121 (NA)
- Tensorflow version (GPU?): 2.17.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: NA
### Who can help?
@yijun-lee
@Isotr0py
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
from transformers import AutoTokenizer
t1 = AutoTokenizer.from_pretrained("unsloth/gemma-3-4b-it-GGUF", gguf_file="gemma-3-4b-it-Q8_0.gguf")
x1 = t1.tokenize("<bos>What is eunoia?")
print(f"{x1=}")
t2 = AutoTokenizer.from_pretrained("google/gemma-3-4b-it")
x2 = t2.tokenize("<bos>What is eunoia?")
print(f"{x2=}")
```
### Expected behavior
The print out of the x1 and x2 should be the same. However,
```
x1=['<bos>', 'Wh', 'at', '▁is', '▁eu', 'no', 'ia', '?']
x2=['<bos>', 'What', '▁is', '▁e', 'uno', 'ia', '?']
```
Looking more into it, the tokenizer created for the HF model (t2) is BPE, while the tokenizer created for the GGUF model (t1) is Unigram.
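One hedged way to confirm that claim, continuing the reproduction above and assuming both are fast tokenizers exposing `backend_tokenizer`:
```python
# Prints the tokenizers-library model class behind each tokenizer; expected to show
# Unigram for the GGUF-converted tokenizer (t1) and BPE for the HF one (t2).
print(type(t1.backend_tokenizer.model).__name__)
print(type(t2.backend_tokenizer.model).__name__)
```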
|
https://github.com/huggingface/transformers/issues/41494
|
closed
|
[
"bug"
] | 2025-10-09T23:27:25Z
| 2025-11-29T08:02:57Z
| 4
|
amychen85
|