| repo (string, 147 classes) | number (int64, 1 to 172k) | title (string, 2 to 476 chars) | body (string, 0 to 5k chars) | url (string, 39 to 70 chars) | state (string, 2 classes) | labels (list, 0 to 9 items) | created_at (timestamp[ns, tz=UTC], 2017-01-18 18:50:08 to 2026-01-06 07:33:18) | updated_at (timestamp[ns, tz=UTC], 2017-01-18 19:20:07 to 2026-01-06 08:03:39) | comments (int64, 0 to 58, may be null) | user (string, 2 to 28 chars) |
|---|---|---|---|---|---|---|---|---|---|---|
huggingface/lerobot
| 2,464
|
Questions about Pi0.5 Model Training Details and High Level Planning Implementation
|
Hello, while studying the Pi0.5 model I have two questions about the implementation:
1. The paper mentions that the model uses two-stage pre-training and designs a comprehensive loss function. However, the `compute_loss` part of the open-source code currently computes only the action loss; the loss related to the VLM (Vision-Language Model) in the pre-training stage is not reflected. Is this part implemented elsewhere in the code, or are there other design considerations?
2. The ablation experiments in the paper show that the jointly trained Pi0.5 performs excellently at explicit and implicit high-level planning, even better than GPT-4 and manual high-level planning. However, I could not find the implementation of the high-level planning step in the open-source model code. How is this functionality reflected in the code?
Looking forward to your reply, thank you!
|
https://github.com/huggingface/lerobot/issues/2464
|
open
|
[
"question",
"training"
] | 2025-11-18T01:27:59Z
| 2025-11-20T10:45:34Z
| null |
Ginldaj
|
vllm-project/vllm
| 28,876
|
[CI Failure]: should test_cumem.py use spawn or fork in cuda?
|
### Name of failing test
tests/basic_correctness/test_cumem.py
### Basic information
- [ ] Flaky test
- [x] Can reproduce locally
- [ ] Caused by external libraries (e.g. bug in `transformers`)
### 🧪 Describe the failing test
The test only fails locally for me when I use the vllm main branch, and on the CI of my PR. I think the error is caused by the CUDA tests using `fork` instead of `spawn`. In the CI there is a line that tries to force spawn: https://github.com/vllm-project/vllm/blob/f2b8e1c5510cf3621dc4b910f0eba5289d9fee88/.buildkite/test-pipeline.yaml#L99-L100, but it does not seem to be effective. I also looked at the function that decides whether to use fork or spawn: https://github.com/vllm-project/vllm/blob/f8b19c0ffd65f7f6f01a0da4a39b6890f5db40cb/tests/utils.py#L1027, and it does not appear to look at the `VLLM_WORKER_MULTIPROC_METHOD` flag. The issue does not reproduce in the main vllm CI, though. How do we fix this?
```
FAILED basic_correctness/test_cumem.py::test_python_error - RuntimeError: Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method
FAILED basic_correctness/test_cumem.py::test_basic_cumem - RuntimeError: Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method
FAILED basic_correctness/test_cumem.py::test_cumem_with_cudagraph - RuntimeError: Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method
FAILED basic_correctness/test_cumem.py::test_end_to_end[hmellor/tiny-random-LlamaForCausalLM] - RuntimeError: Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method
FAILED basic_correctness/test_cumem.py::test_end_to_end[facebook/opt-125m] - RuntimeError: Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method
FAILED basic_correctness/test_cumem.py::test_deep_sleep - RuntimeError: Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method
FAILED basic_correctness/test_cumem.py::test_deep_sleep_async - RuntimeError: Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method
```
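For context, a minimal standalone sketch (not vLLM's test harness) of the failure mode: once the parent process has initialized CUDA, child processes must be created with the spawn start method, which is exactly what the failing tests trip over when fork is used.

```python
import torch
import torch.multiprocessing as mp


def worker(rank: int) -> None:
    # CUDA can only be (re-)initialized safely in a process created with
    # "spawn"; with "fork" this raises the RuntimeError shown above.
    torch.cuda.set_device(0)
    print(rank, torch.cuda.current_device())


if __name__ == "__main__":
    torch.cuda.init()  # parent initializes CUDA first, as the test process does
    ctx = mp.get_context("spawn")  # switching this to "fork" reproduces the failure
    p = ctx.Process(target=worker, args=(0,))
    p.start()
    p.join()
```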
### 📝 History of failing test
https://buildkite.com/vllm/ci/builds/39127/steps/canvas?jid=019a84f5-0fbf-46f3-859f-42c02a2d3de1
### CC List.
_No response_
|
https://github.com/vllm-project/vllm/issues/28876
|
open
|
[
"ci-failure"
] | 2025-11-17T18:58:08Z
| 2025-11-17T20:59:14Z
| 1
|
jerryzh168
|
vllm-project/vllm
| 28,868
|
[Bug]: When compiling with ranges, we should pass the range information to Inductor
|
### Your current environment
main
### 🐛 Describe the bug
Might be more of a feature request. Context is that https://github.com/vllm-project/vllm/pull/24248 adds a new compile ranges API, where a user can specify which ranges to compile on.
We should tell Inductor how to constrain the compilation on the symints of the compile ranges
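As a sketch of what "passing the range information" could mean at the graph level (an assumption on my part, not the actual mechanism of #24248): runtime assertions such as `torch._check` narrow the value range of a symbolic size so the compiler is allowed to assume the bound.

```python
import torch


@torch.compile(dynamic=True)
def f(x: torch.Tensor) -> torch.Tensor:
    n = x.size(0)
    # Hypothetical range hints for a compile range [256, 1024): with these,
    # Inductor could constrain codegen for the symint n to that interval.
    torch._check(n >= 256)
    torch._check(n < 1024)
    return x * 2


f(torch.randn(512, 8))
```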
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/28868
|
open
|
[
"bug",
"torch.compile"
] | 2025-11-17T15:41:50Z
| 2026-01-05T23:37:12Z
| 1
|
zou3519
|
pytorch/pytorch
| 167,994
|
CI Not Detecting Failing Tests in test/distributed/elastic/*
|
A significant number of tests under `test/distributed/elastic/` are failing, but CI does **not** surface these failures (possibly the same for `test/distributed/launcher`). Many of these tests appear to have been broken for a long time without detection. I opened a PR with fixes, but I believe this warrants an issue so the team can investigate why CI is not catching failures in this directory.
PR with fixes: https://github.com/pytorch/pytorch/pull/167993
### `test/distributed/elastic/rendezvous/c10d_rendezvous_backend_test.py`
**Issue:**
In `test_create_backend_returns_backend_if_is_host_is_false` and
`test_create_backend_returns_backend_if_is_not_specified_and_store_already_exists`, commit https://github.com/pytorch/pytorch/commit/d25e6e623fea0552d1a4b3124344d1b2c499f6f8 removed the unused `store` variable. This caused the `TCPStore` to be garbage-collected immediately, and the tests fail as a result.
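A minimal sketch of the failure mode (assuming a test-local TCPStore; not the exact helper the tests use): the store must stay referenced for as long as anything needs its listening socket.

```python
import torch.distributed as dist

# Keeping the returned store bound to a name keeps its listening socket open
# for the rendezvous backend under test.
store = dist.TCPStore("localhost", 29500, world_size=1, is_master=True)

# If the result is not bound to anything (as after the removal of the "unused"
# variable), the TCPStore is garbage-collected right away and later connections
# to it fail, which is what broke these tests.
```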
---
### `test/distributed/elastic/rendezvous/dynamic_rendezvous_test.py`
#### Issue 1
`datetime.utcnow` was replaced with `datetime.now` in the implementation PR https://github.com/pytorch/pytorch/pull/136141, but the tests were not updated.
#### Issue 2
PR https://github.com/pytorch/pytorch/pull/145228 changed `create_handler()` to expect `keep_alive_interval` as an `int`, but the test `test_redundancy_transition_to_wait_list_then_join_rendezvous` passes `timedelta(seconds=1)`.
#### Issue 3
`test_share_tcp_store_from_backend` mocks `dist.PrefixStore` but also calls
`CustomPrefixStore(spec=dist.PrefixStore)`. Since `dist.PrefixStore` is already patched, this results in:
> Cannot spec a Mock object
---
### `test/distributed/elastic/rendezvous/etcd_server_test.py`
**Issue:**
In `test_etcd_server_with_rendezvous`, the `EtcdRendezvous` prefix does not include a leading slash, but etcd v2 always stores keys with one. This causes a hang during the `rdzv_handler.next_rendezvous()` → `RendezvousStoreInfo.build` → (`store.set` → `store.get`), because the key is written as `test/run_1/rdzv/v_1/kv/TUFTVEVSX0FERFI=` but etcd stores it as `/test/run_1/rdzv/v_1/kv/TUFTVEVSX0FERFI=`. Since `store.get` (via `ETCDStore._try_wait_get`) looks for the non-slash-prefixed key, it never finds it and waits forever.
---
### `test/distributed/elastic/rendezvous/out_of_tree_rendezvous_test.py`
**Issue:**
`test_out_of_tree_handler_loading` attempts to test out-of-tree handler registration by adding a directory to `sys.path`.
However, the real mechanism uses Python entry points, which require pip installation.
The original PR https://github.com/pytorch/pytorch/pull/132633 used pip install, but after review it was replaced with `sys.path` modification — which probably only worked locally due to stale installations.
---
### `torch/distributed/elastic/rendezvous/etcd_rendezvous.py`
**Issue:**
PR https://github.com/pytorch/pytorch/pull/135262 added an optional `local_addr` parameter to `EtcdRendezvousHandler.__init__`, but did not define a default value.
This breaks `test_etcd_server_with_rendezvous` in `test/distributed/elastic/rendezvous/etcd_server_test.py`
---
Following test might actually be passing in some environments and configs.
### `test/distributed/launcher/test_run.py`
**Issue:**
`nproc_type="auto"` determines world size using `torch.accelerator.is_available()`, but the test incorrectly patches `torch.cuda.is_available()`.
cc @seemethere @malfet @pytorch/pytorch-dev-infra @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @pragupta @msaroufim @dcci
|
https://github.com/pytorch/pytorch/issues/167994
|
open
|
[
"oncall: distributed",
"module: ci"
] | 2025-11-17T15:39:56Z
| 2025-11-17T18:18:56Z
| 0
|
harikodali
|
pytorch/pytorch
| 167,991
|
Warnings from inside Dynamo should include at least one level of stack trace
|
We saw the following in vLLM:
```
(Worker_TP6_EP6 pid=3247488) /home/robertgshaw2-redhat/vllm/.venv/lib64/python3.12/site-packages/torch/_dynamo/variables/functions.py:1692: UserWarning: Dynamo detected a call to a `functools.lru_cache`-wrapped function. Dynamo ignores the cache wrapper and directly traces the wrapped function. Silent incorrectness is only a *potential* risk, not something we have observed. Enable TORCH_LOGS="+dynamo" for a DEBUG stack trace.
```
If we could see *just* one frame of the stack trace, we'd be able to tell which line of vLLM this is coming from.
NB: I don't know how to reproduce this yet (we didn't get a repro command). I assume TORCH_LOGS=+dynamo has that stack trace, but it would be nice to debug this in one step just by looking at the logs.
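A sketch of what such a warning could do (an assumption about a possible fix, not current Dynamo code): raising `stacklevel` makes Python attribute the warning to a caller's frame, so at least one user-level line shows up without `TORCH_LOGS="+dynamo"`.

```python
import warnings


def warn_lru_cache_traced() -> None:
    warnings.warn(
        "Dynamo detected a call to a functools.lru_cache-wrapped function. "
        "Dynamo ignores the cache wrapper and directly traces the wrapped function.",
        UserWarning,
        stacklevel=3,  # how far up the relevant user frame sits is an assumption
    )
```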
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @kadeng @amjames @Lucaskabela @jataylo @chenyang78
|
https://github.com/pytorch/pytorch/issues/167991
|
closed
|
[
"triaged",
"oncall: pt2",
"module: dynamo",
"vllm-compile",
"module: compile ux",
"module: vllm",
"dynamo-triage-dec2025"
] | 2025-11-17T15:31:02Z
| 2026-01-01T18:17:59Z
| 1
|
zou3519
|
vllm-project/vllm
| 28,866
|
[Usage]: When is going to be the next release?
|
Hi everyone,
Thank you for developing such a great tool!
I was wondering when the next release is scheduled. I'm interested in running GGUF-quantized models with a Gemma3-text-type architecture in vLLM. Are there any alternatives for doing this with the latest release (v0.11.0)?
I also noticed that you merged this PR with the working solution on October 9:
https://github.com/vllm-project/vllm/pull/26189
|
https://github.com/vllm-project/vllm/issues/28866
|
open
|
[
"usage"
] | 2025-11-17T15:24:47Z
| 2025-11-19T10:51:47Z
| 1
|
Invalid-coder
|
huggingface/transformers
| 42,241
|
How to use padding with Mistral?
|
I'm trying to understand how to use Mistral with `batch_size` > 1. One aspect of this is setting `padding="longest"` in, e.g., `MistralCommonTokenizer.encode()`. But I'm getting `TypeError: 'set' object is not callable` when I try this. Example:
```python
import torch
from transformers import MistralForCausalLM, MistralCommonTokenizer
tokenizer = MistralCommonTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.3")
model = MistralForCausalLM.from_pretrained(
"mistralai/Mistral-7B-Instruct-v0.3",
dtype=torch.bfloat16,
attn_implementation="sdpa",
device_map="auto",
)
messages = [
"You are a pirate chatbot who always responds in pirate speak!",
"Who are you?",
]
model_inputs = tokenizer.encode(messages, return_tensors="pt", padding="longest").to(
model.device
)
```
Output:
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[1], line 17
5 model = MistralForCausalLM.from_pretrained(
6 "mistralai/Mistral-7B-Instruct-v0.3",
7 dtype=torch.bfloat16,
8 attn_implementation="sdpa",
9 device_map="auto",
10 )
12 messages = [
13 "You are a pirate chatbot who always responds in pirate speak!",
14 "Who are you?",
15 ]
---> 17 model_inputs = tokenizer.encode(messages, return_tensors="pt", padding="longest").to(
18 model.device
19 )
21 generated_ids = model.generate(model_inputs, max_new_tokens=100, do_sample=True)
22 tokenizer.batch_decode(generated_ids)[0]
File ~/ad_hoc_analysis/src/asr_and_summarization/.venv/lib/python3.13/site-packages/transformers/tokenization_mistral_common.py:407, in MistralCommonTokenizer.encode(self, text, text_pair, add_special_tokens, padding, truncation, max_length, stride, pad_to_multiple_of, padding_side, return_tensors, verbose, **kwargs)
404 if text_pair:
405 raise ValueError("`MistralCommonTokenizer.encode` does not support `text_pair`.")
--> 407 padding_strategy, truncation_strategy, max_length, _ = self._get_padding_truncation_strategies(
408 padding=padding,
409 truncation=truncation,
410 max_length=max_length,
411 pad_to_multiple_of=pad_to_multiple_of,
412 verbose=verbose,
413 )
415 encoded_inputs = self._encode_plus(
416 text,
417 add_special_tokens=add_special_tokens,
(...) 429 verbose=verbose,
430 )
432 return encoded_inputs["input_ids"]
File ~/ad_hoc_analysis/src/asr_and_summarization/.venv/lib/python3.13/site-packages/transformers/tokenization_mistral_common.py:1034, in MistralCommonTokenizer._get_padding_truncation_strategies(self, padding, truncation, max_length, pad_to_multiple_of, verbose, **kwargs)
1031 max_length = self.model_max_length
1033 # Test if we have a padding token
-> 1034 if padding_strategy != PaddingStrategy.DO_NOT_PAD and (self.pad_token is None or self.pad_token_id < 0):
1035 raise ValueError(
1036 "Asking to pad but the tokenizer does not have a padding token. "
1037 "Please select a token to use as `pad_token` `(tokenizer.pad_token = tokenizer.eos_token e.g.)` "
1038 "or add a new pad token via `tokenizer.add_special_tokens({'pad_token': '[PAD]'})`."
1039 )
1041 # Check that we will truncate to a multiple of pad_to_multiple_of if both are provided
File ~/ad_hoc_analysis/src/asr_and_summarization/.venv/lib/python3.13/site-packages/transformers/tokenization_mistral_common.py:334, in MistralCommonTokenizer.pad_token(self)
329 @property
330 def pad_token(self) -> str:
331 """
332 String associated to the padding token in the vocabulary.
333 """
--> 334 return self.convert_ids_to_tokens(self.pad_token_id)
File ~/ad_hoc_analysis/src/asr_and_summarization/.venv/lib/python3.13/site-packages/transformers/tokenization_mistral_common.py:548, in MistralCommonTokenizer.convert_ids_to_tokens(self, ids, skip_special_tokens)
546 tokens: list[str] = []
547 for token_id in ids:
--> 548 if self._is_control_token(token_id) and skip_special_tokens:
549 continue
550 tokens.append(self.tokenizer.instruct_tokenizer.tokenizer.id_to_piece(token_id))
File ~/ad_hoc_analysis/src/asr_and_summarization/.venv/lib/python3.13/site-packages/transformers/tokenization_mistral_common.py:513, in MistralCommonTokenizer._is_control_token(self, token_id)
511 def _is_control_token(self, token_id: int) -> bool:
512 if self._tokenizer_type == MistralTokenizerType.spm:
--> 513 return token_id in self.tokenizer.instruct_tokenizer.tokenizer._control_tokens()
514 elif self._tokenizer_type == MistralTokenizerType.tekken:
515 return token_id < self.tokenizer.instruct_tokenizer.tokenizer.num_special_tokens
TypeError: 'set' object is not callable
```
Env:
```
- `transformers` version: 4.57.1
|
https://github.com/huggingface/transformers/issues/42241
|
closed
|
[] | 2025-11-17T12:54:21Z
| 2025-11-19T06:11:44Z
| null |
TopCoder2K
|
pytorch/audio
| 4,132
|
How can I use one streamwriter to write multiple videos?
|
### 🚀 The feature
Use one streamwriter to write multiple videos.
### Motivation, pitch
Can the StreamWriter support writing multiple videos with the same object, with each video corresponding to a different stream, when I use the GPU to encode? Currently this results in writing to the same buffer, ultimately producing one video. How can I do this? It would avoid the overhead of repeatedly initializing and destroying the StreamWriter.
### Alternatives
_No response_
### Additional context
_No response_
|
https://github.com/pytorch/audio/issues/4132
|
open
|
[] | 2025-11-17T11:55:56Z
| 2025-11-17T11:55:56Z
| null |
Z-NAVY
|
pytorch/pytorch
| 167,977
|
[DTensor]Sharding propagation failed for custom operation with Tensor in kwargs
|
### 🐛 Describe the bug
I'm trying to register a sharding strategy for my custom operation with `@register_sharding`; the op has Tensor parameters in its kwargs. My custom strategy function provides strategies for all DTensors in args and kwargs.
During sharding propagation, an AssertionError `assert len(input_specs) == len(input_args_strategy)` occurs in the function `expand_to_full_mesh_op_strategy`.
The cause is that `input_args_strategy` only considers DTensors in args, while `input_specs` contains sharding strategies for all DTensors in both args and kwargs.
Is this a limitation of DTensor? Or how can I adapt my operation with Tensor kwargs to DTensor?
Here is a simple demo using `aten.min.dim_min` to reproduce the problem:
```python
import torch
from torch.distributed.tensor import distribute_tensor, Replicate
from torch.distributed.tensor.experimental import register_sharding
from torch.testing._internal.common_utils import run_tests
from torch.testing._internal.distributed._tensor.common_dtensor import DTensorTestBase, with_comms
aten = torch.ops.aten
class TestRegisterSharding(DTensorTestBase):
    @with_comms
    def test_register_sharding_for_tensor_kwargs(self):
        mesh = self.build_device_mesh()
        x = torch.randn(4, 4, device="cuda")
        y = torch.randn(4, 4, device="cuda")
        x = distribute_tensor(x, mesh, [Replicate()])
        y = distribute_tensor(y, mesh, [Replicate()])

        # aten::min.dim_min(Tensor self, int dim, bool keepdim=False, *, Tensor(a!) min, Tensor(b!) min_indices) -> (Tensor(a!) values, Tensor(b!) indices)
        @register_sharding(aten.min.dim_min)
        def custom_strategy(x, dim, keepdim, min, min_indices):
            acceptable_shardings = []
            all_replicate = ([Replicate(), Replicate()], [Replicate(), None, None, Replicate(), Replicate()])
            acceptable_shardings.append(all_replicate)
            return acceptable_shardings

        value = torch.randn(4, 1, device="cuda")
        indices = torch.randn(4, 1, device="cuda").long()
        value = distribute_tensor(value, mesh, [Replicate()])
        indices = distribute_tensor(indices, mesh, [Replicate()])
        torch.min(x, dim=1, keepdim=True, out=(value, indices))

if __name__ == "__main__":
    run_tests()
```
The error message:
```
Traceback (most recent call last):
File "/opt/conda/envs/py310_pt29/lib/python3.10/site-packages/torch/distributed/tensor/_dispatch.py", line 156, in dispatch
self.sharding_propagator.propagate(op_info)
File "/opt/conda/envs/py310_pt29/lib/python3.10/site-packages/torch/distributed/tensor/_sharding_prop.py", line 327, in propagate
OutputSharding, self.propagate_op_sharding(op_info.schema)
File "/opt/conda/envs/py310_pt29/lib/python3.10/site-packages/torch/distributed/tensor/_sharding_prop.py", line 46, in __call__
return self.cache(*args, **kwargs)
File "/opt/conda/envs/py310_pt29/lib/python3.10/site-packages/torch/distributed/tensor/_sharding_prop.py", line 352, in propagate_op_sharding_non_cached
op_strategy = self.op_strategy_funcs[op_schema.op](strategy_schema)
File "/opt/conda/envs/py310_pt29/lib/python3.10/site-packages/torch/distributed/tensor/experimental/_register_sharding.py", line 98, in custom_strategy
return expand_to_full_mesh_op_strategy(
File "/opt/conda/envs/py310_pt29/lib/python3.10/site-packages/torch/distributed/tensor/_ops/utils.py", line 332, in expand_to_full_mesh_op_strategy
assert len(input_specs) == len(input_args_strategy)
AssertionError
```
### Versions
torch 2.9.0
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @pragupta @msaroufim @dcci @tianyu-l @XilunWu @SherlockNoMad
|
https://github.com/pytorch/pytorch/issues/167977
|
closed
|
[
"oncall: distributed",
"module: dtensor"
] | 2025-11-17T11:43:09Z
| 2025-11-24T05:24:21Z
| 3
|
qqq6op
|
huggingface/chat-ui
| 1,986
|
How can I use default_headers={ "X-HF-Bill-To": "org-name" } in my local chat-ui deployment?
|
Hi,
I want to bill my Inference usage to my organization, so I would like to pass the default_headers={
"X-HF-Bill-To": "org-name"
} parameter. How can I do that?
|
https://github.com/huggingface/chat-ui/issues/1986
|
open
|
[
"support"
] | 2025-11-17T08:33:41Z
| 2025-11-17T08:33:41Z
| null |
aditya-oss-prog
|
huggingface/diffusers
| 12,672
|
How to set pipe "requires_grad=true"?
|
I have set `requires_grad=True` on the variable and on the model with the following:
`pipe.transformer.requires_grad = True`
`pipe.vae.requires_grad = True`
`prev_sample = prev_sample.detach().requires_grad_(True)`
but the `requires_grad` of the result produced by the pipe is still not True:
`image_tar = pipe.vae.decode(prev_sample, return_dict=False)[0]`
`image_tar` still does not require grad, so how do I set `requires_grad=True` for the pipe? (All of this happens during the inference stage.)
|
https://github.com/huggingface/diffusers/issues/12672
|
closed
|
[] | 2025-11-17T03:36:43Z
| 2025-11-20T12:19:20Z
| null |
micklexqg
|
pytorch/pytorch
| 167,950
|
Insufficient documentation about the batching logic of `torch.linalg.solve`
|
### 📚 The doc issue
The documentation for `torch.linalg.solve` states that
>
> Letting _*_ be zero or more batch dimensions,
> If `A` has shape _(*, n, n)_ and `B` has shape _(*, n)_ (a batch of vectors) or shape _(*, n, k)_ (a batch of matrices or “multiple right-hand sides”), this function returns _X_ of shape _(*, n)_ or _(*, n, k)_ respectively.
However, from what I understand based on testing the code, the meaning of _*_ is different in these two cases. In the first case (batch of vectors), the batch dimensions _*_ of `A` and `B` must have the exact same shape, while in the second case (batch of matrices), the batch dimensions _*_ of `A` and `B` need only be broadcastable with each other. For example, an error is raised if `A` has shape _(2, 3, 4, 4)_ and `B` has shape _(3, 4)_ or _(1, 3, 4)_ (batch of vectors), while no error is raised if `A` has shape _(2, 3, 4, 4)_ and `B` has shape _(3, 4, 4)_ or _(1, 3, 4, 4)_ (batch of matrices). I think the documentation needs to be clear about how the meaning of _*_ is different for these two cases.
It should also be clarified that the interpretation of `B` as a batch of vectors should take precedence over the interpretation of `B` as a zero-dimensional batch of matrices. For example, if `A` has shape _(n, n, n)_ and `B` has shape _(n, n)_, then the output X has shape _(n, n)_, indicating that `B` is interpreted as a batch of vectors, even though `B` can be interpreted as a 'batch' of matrices with zero-dimensional batch shape _* = ()_.
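A small script illustrating the behavior described above, with the shapes taken from the examples in this issue:

```python
import torch

A = torch.randn(2, 3, 4, 4)

# Batch of matrices: batch dims of A and B only need to broadcast.
print(torch.linalg.solve(A, torch.randn(3, 4, 4)).shape)     # (2, 3, 4, 4)
print(torch.linalg.solve(A, torch.randn(1, 3, 4, 4)).shape)  # (2, 3, 4, 4)

# Batch of vectors: batch dims of A and B must match exactly.
print(torch.linalg.solve(A, torch.randn(2, 3, 4)).shape)     # (2, 3, 4)
# torch.linalg.solve(A, torch.randn(3, 4))     # RuntimeError
# torch.linalg.solve(A, torch.randn(1, 3, 4))  # RuntimeError
```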
### Suggest a potential alternative/fix
Update the quoted part of the documentation to
> Letting _*_ be zero or more batch dimensions, and _**_ be one or more batch dimensions, such that _*_ and _**_ are broadcastable with each other,
> If `A` has shape _(*, n, n)_ and `B` has shape _(*, n)_ (a batch of vectors) or shape _(**, n, k)_ (a batch of matrices or “multiple right-hand sides”), this function returns _X_ of shape _(*, n)_ or _(***, n, k)_ respectively, where _***_ is the shape obtained by broadcasting _*_ with _**_.
Note that this revision also automatically clarifies the ambiguity as stated in the case where `A` has shape _(n, n, n)_ and `B` has shape _(n, n)_. This is because _**_ is defined as one or more (instead of zero or more) batch dimensions, so `B` cannot be interpreted as having shape _(**, n, k)_.
cc @svekars @sekyondaMeta @AlannaBurke @jianyuh @nikitaved @mruberry @walterddr @xwang233 @Lezcano
|
https://github.com/pytorch/pytorch/issues/167950
|
open
|
[
"module: docs",
"triaged",
"module: linear algebra"
] | 2025-11-17T02:02:25Z
| 2025-11-19T16:44:07Z
| 5
|
hchau630
|
huggingface/diffusers
| 12,669
|
Flux1-Dev inference with single file ComfyUI/SD-Forge Safetensors
|
Is it possible to run inference with diffusers using a single-file safetensors created for ComfyUI/SD-Forge?
It looks like FluxPipeline.from_single_file() might be intended for this purpose, but I'm getting the following errors:
```
import torch
from diffusers import FluxPipeline
pipe = FluxPipeline.from_single_file("./flux1-dev-fp8.safetensors", torch_dtype=torch.float8_e4m3fn, use_safetensors=True)
```
```
Traceback (most recent call last):
File "/home/user/flux/imgen.py", line 9, in <module>
pipe = FluxPipeline.from_single_file("./flux1-dev-fp8.safetensors", torch_dtype=torch.float8_e4m3fn, use_safetensors=True)
File "/home/user/.local/lib/python3.13/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
File "/home/user/.local/lib/python3.13/site-packages/diffusers/loaders/single_file.py", line 509, in from_single_file
loaded_sub_model = load_single_file_sub_model(
library_name=library_name,
...<11 lines>...
**kwargs,
)
File "/home/user/.local/lib/python3.13/site-packages/diffusers/loaders/single_file.py", line 127, in load_single_file_sub_model
loaded_sub_model = create_diffusers_t5_model_from_checkpoint(
class_obj,
...<4 lines>...
local_files_only=local_files_only,
)
File "/home/user/.local/lib/python3.13/site-packages/diffusers/loaders/single_file_utils.py", line 2156, in create_diffusers_t5_model_from_checkpoint
model.load_state_dict(diffusers_format_checkpoint)
~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/.local/lib/python3.13/site-packages/torch/nn/modules/module.py", line 2641, in load_state_dict
raise RuntimeError(
...<3 lines>...
)
RuntimeError: Error(s) in loading state_dict for T5EncoderModel:
Missing key(s) in state_dict: "encoder.embed_tokens.weight".
```
I checked the safetensors file and the T5 encoder is present. However, it is named differently, which confuses diffusers.
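A possible workaround sketch (untested here, and assuming the single file contains the transformer weights; whether fp8 weights load this way is not verified): load only the transformer from the ComfyUI-style file and take the text encoders and VAE from the regular diffusers repo, so their key names are not an issue.

```python
import torch
from diffusers import FluxPipeline, FluxTransformer2DModel

# Load just the transformer from the single-file checkpoint...
transformer = FluxTransformer2DModel.from_single_file(
    "./flux1-dev-fp8.safetensors", torch_dtype=torch.bfloat16
)
# ...and everything else (T5/CLIP text encoders, VAE, scheduler) from the hub repo.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    transformer=transformer,
    torch_dtype=torch.bfloat16,
)
```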
|
https://github.com/huggingface/diffusers/issues/12669
|
open
|
[] | 2025-11-16T11:57:48Z
| 2025-12-03T16:53:58Z
| 12
|
ddpasa
|
pytorch/pytorch
| 167,906
|
Avoid Exception Refcycle Problems
|
### 🚀 The feature, motivation and pitch
https://github.com/pytorch/pytorch/blob/d01a7b0241ed1c4cded7e7ca097249feb343f072/torch/_utils.py#L720-L726
The traceback refcycle problem can happen whenever an exception is stored in a local variable. This happened in many places across pytorch:
```
$ grep ' = e$' torch -R | wc -l
22 # note: a few are false positives
```
There are likely some potential refcycles that could cause tensors to not get freed at the earliest possible time.
Taking one of the detected results, from `collective_utils.py`, as an example, we can create a repro of the refcycle:
```python
import sys
import torch
from torch.distributed.collective_utils import all_gather
def f(obj):
    def f():
        raise RuntimeError('hhh')
    try:
        all_gather(f)
    except Exception as e:
        pass

if __name__ == '__main__':
    torch.distributed.init_process_group(backend='gloo')
    rank = torch.distributed.get_rank()
    obj = object()
    for k in range(20):
        f(obj)
        if rank == 0:
            print(sys.getrefcount(obj))  # Refcount keep increasing!
```
run it with
```
torchrun --nproc_per_node=2 test.py
```
(Note: `collective_utils` seems not used anywhere, maybe a good idea to remove it. I'm not a user of it. )
PyTorch users' callstacks often have giant objects that better not get leaked. Ideas to avoid similar issues:
* Check if any of the 22 results are worth fixing.
* Apply a lint rule (perhaps with https://github.com/ast-grep/ast-grep/) to disable assignment of exception, unless explicitly bypassed.
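For reference, a self-contained sketch (independent of `torch.distributed`, names are mine) of how storing an exception in a local creates the cycle through the traceback and the frame, and one way to break it:

```python
import gc
import weakref


class Payload:
    """Stand-in for a big object (e.g. a tensor) living in the caller's frame."""


def handler(break_cycle: bool):
    payload = Payload()
    ref = weakref.ref(payload)
    try:
        raise RuntimeError("boom")
    except Exception as e:
        saved = e  # saved -> __traceback__ -> this frame -> saved: a reference cycle
        if break_cycle:
            saved.__traceback__ = None  # drop the frame reference; refcounting suffices
    return ref


gc.disable()
print(handler(break_cycle=False)() is None)  # False: payload lives until gc.collect()
print(handler(break_cycle=True)() is None)   # True: freed as soon as handler returns
gc.collect()
```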
### Alternatives
_No response_
### Additional context
_No response_
cc @albanD
|
https://github.com/pytorch/pytorch/issues/167906
|
open
|
[
"module: memory usage",
"triaged",
"better-engineering",
"module: python frontend"
] | 2025-11-15T16:47:16Z
| 2025-11-18T22:15:10Z
| 1
|
ppwwyyxx
|
pytorch/pytorch
| 167,901
|
Invalid __global__ write of size 16 bytes in torch.bmm with sparse tensors
|
### 🐛 Describe the bug
When using `torch.bmm` with sparse tensors, a CUDA `__global__` memory write out-of-bounds error occurs.
```python
import torch
m1 = torch.randn(2, 291105, 1).to_sparse().cuda()
m2 = torch.randn(2, 1, 1).cuda()
print([m1.size(), m2.size()])
torch.bmm(m1, m2)
```
### How to Reproduce
1. Save the code above as `poc.py`.
2. Run the script using `compute-sanitizer`. The `Invalid __global__ write` error will be reported.
```bash
compute-sanitizer python poc.py
```
### Observed Results
```
========= Invalid __global__ write of size 16 bytes
========= at void cusparse::vector_scalar_multiply_kernel<cusparse::VectorWiseMulPolicy<(bool)1, float>, long, float, float>(cusparse::KernelCoeff<T3>, T2, T4 *)+0x460
========= by thread (32,0,0) in block (4,0,0)
========= Address 0x7ffe9b0aa884 is misaligned
========= and is inside the nearest allocation at 0x7ffe9a000000 of size 20,971,520 bytes
========= Saved host backtrace up to driver entry point at kernel launch time
========= Host Frame: [0x93c3fa] in libcusparse.so.12
========= Host Frame: [0x99859a] in libcusparse.so.12
========= Host Frame: [0x89fbfc] in libcusparse.so.12
========= Host Frame: [0x17c999] in libcusparse.so.12
========= Host Frame: [0x196a6b] in libcusparse.so.12
========= Host Frame: cusparseSpMM [0xf3ed3] in libcusparse.so.12
========= Host Frame: at::native::bmm_out_sparse_cuda(at::Tensor const&, at::Tensor const&, at::Tensor&)::{lambda()#1}::operator()() const::{lambda()#2}::operator()() const [0x2e54a29] in libtorch_cuda.so
========= Host Frame: at::native::bmm_out_sparse_cuda(at::Tensor const&, at::Tensor const&, at::Tensor&) [0x2e573ff] in libtorch_cuda.so
========= Host Frame: at::native::bmm_sparse_cuda(at::Tensor const&, at::Tensor const&) [0x2e59137] in libtorch_cuda.so
========= Host Frame: at::(anonymous namespace)::(anonymous namespace)::wrapper_SparseCUDA__bmm(at::Tensor const&, at::Tensor const&) [0x3510e0a] in libtorch_cuda.so
========= Host Frame: c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor (at::Tensor const&, at::Tensor const&), &at::(anonymous namespace)::(anonymous namespace)::wrapper_SparseCUDA__bmm>, at::Tensor, c10::guts::typelist::typelist<at::Tensor const&, at::Tensor const&> >, at::Tensor (at::Tensor const&, at::Tensor const&)>::call(c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, at::Tensor const&) [0x3510ebf] in libtorch_cuda.so
========= Host Frame: at::_ops::bmm::redispatch(c10::DispatchKeySet, at::Tensor const&, at::Tensor const&) [0x29ca7fd] in libtorch_cpu.so
========= Host Frame: torch::autograd::VariableType::(anonymous namespace)::bmm(c10::DispatchKeySet, at::Tensor const&, at::Tensor const&) [0x47cbb1e] in libtorch_cpu.so
========= Host Frame: c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor (c10::DispatchKeySet, at::Tensor const&, at::Tensor const&), &torch::autograd::VariableType::(anonymous namespace)::bmm>, at::Tensor, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor const&, at::Tensor const&> >, at::Tensor (c10::DispatchKeySet, at::Tensor const&, at::Tensor const&)>::call(c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, at::Tensor const&) [0x47cc502] in libtorch_cpu.so
========= Host Frame: at::_ops::bmm::call(at::Tensor const&, at::Tensor const&) [0x2a1a6fd] in libtorch_cpu.so
========= Host Frame: torch::autograd::THPVariable_bmm(_object*, _object*, _object*) [0x6a8ad8] in libtorch_python.so
========= Host Frame: cfunction_call in methodobject.c:542 [0x128db6] in python
========= Host Frame: _PyObject_MakeTpCall in call.c:214 [0x10454b] in python
========= Host Frame: _PyEval_EvalFrameDefault in ceval.c:4769 [0x111cf5] in python
========= Host Frame: _PyEval_Vector in ceval.c:6434 [0x1cd0a9] in python
========= Host Frame: PyEval_EvalCode in ceval.c:1148 [0x1cc77e] in python
========= Host Frame: run_eval_code_obj in pythonrun.c:1741 [0x1ed556] in python
========= Host Frame: run_mod in pythonrun.c:1762 [0x1e907f] in python
========= Host Frame: pyrun_file in pythonrun.c:1657 [0x1fde71] in python
========= Host Frame: _PyRun_SimpleFileObject in pythonrun.c:440 [0x1fd28e] in python
========= Host Frame: _PyRun_AnyFileObject in pythonrun.c:79 [0x1fcfb2] in python
========= Host Frame: Py_RunMain in main.c:684 [0x1f7dad] in python
========= Host Frame: Py_BytesMain in main.c:738 [0x1bcdf8] in python
========= Host Frame: __libc_start_call_main in libc_start_call_main.h:58 [0x2a1c9] in libc.so.6
========= Host Frame: __libc_start_main in libc-start.c:360 [0x2a28a] in libc.so.6
========= Host Frame: [0x1bcc42]
|
https://github.com/pytorch/pytorch/issues/167901
|
open
|
[
"module: sparse",
"triaged",
"module: sanitizers"
] | 2025-11-15T03:25:41Z
| 2025-11-24T04:30:02Z
| 1
|
supermarkli
|
huggingface/ai-deadlines
| 41
|
How to indicate ARR deadlines
|
Right now the yaml format assumes conferences with locations and dates, but ACL ARR has rolling deadlines not tied to a physical conference. We are largely operating around these deadlines. How can we incorporate these into this system?
|
https://github.com/huggingface/ai-deadlines/issues/41
|
open
|
[] | 2025-11-15T00:26:33Z
| 2025-11-15T00:26:33Z
| null |
morrisalp
|
pytorch/torchtitan
| 2,046
|
Any interest in adding MLPerf Llama 3 8B to TorchTitan models ?
|
It would be great to have MLPerf Llama 3 pre-training working OOB with TorchTitan. Here are some references:
[MLPerf Training Adds Llama 3.1 8B Benchmark](https://mlcommons.org/2025/10/training-llama-3-1-8b/)
[small_llm_pretraining/nemo](https://github.com/mlcommons/training/tree/master/small_llm_pretraining/nemo)
|
https://github.com/pytorch/torchtitan/issues/2046
|
open
|
[] | 2025-11-14T18:38:59Z
| 2026-01-05T22:49:56Z
| 14
|
githubsgi
|
pytorch/pytorch
| 167,843
|
Some docs are outdated about how to access ctx object in forward function?
|
### 📚 The doc issue
I remember some docs saying that the forward function (originally in a torch.autograd.Function subclass) can pass anything to the setup_context function by saving data on the ctx object. I was away for a while: back in 2.6 the forward signature looked like (ctx, *inputs), but now it is (input_1, input_2, ...). I updated to 2.9 yesterday and found it is no longer possible to access the ctx object in the forward function. I need to modify the old code a bit, which is fine. But the problem is: if I want to save something for the backward pass that I don't want to return as an output, do I, in the extreme case, have to compute it twice, once in the forward function and once in setup_context?
I saw other docs saying that the two-function style (forward + setup_context) is closer to the vanilla torch implementation, so users are encouraged to use it. I believe those docs are new and up to date, but the docs I mentioned above are outdated.
Also, could you consider changing the type hint of the ctx object from *Any* to *[torch.autograd.function.]CtxFunction*, and adding all the optional data members somewhere to help auto-completion in VS Code? Thanks.
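For reference, a minimal sketch of the two-function style as I understand it (my own example, not taken from the docs): whatever backward needs must either be derivable from `inputs`/`output` inside `setup_context`, or be returned by `forward`, since `forward` no longer sees `ctx`.

```python
import torch


class ScaledExp(torch.autograd.Function):
    # New-style (2.x) Function: forward no longer receives ctx.
    @staticmethod
    def forward(x, scale):
        # Anything needed for backward must either be returned here or be
        # recomputable in setup_context from inputs/output.
        return torch.exp(x) * scale

    @staticmethod
    def setup_context(ctx, inputs, output):
        _, scale = inputs
        ctx.scale = scale
        ctx.save_for_backward(output)  # reuse the output instead of recomputing exp(x)

    @staticmethod
    def backward(ctx, grad_out):
        (out,) = ctx.saved_tensors
        return grad_out * out, None  # no grad for the python-number scale


x = torch.randn(3, requires_grad=True)
ScaledExp.apply(x, 2.0).sum().backward()
print(x.grad)
```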
### Suggest a potential alternative/fix
_No response_
cc @ezyang @albanD @gqchen @nikitaved @soulitzer @Varal7 @xmfan
|
https://github.com/pytorch/pytorch/issues/167843
|
closed
|
[
"module: autograd",
"triaged"
] | 2025-11-14T16:05:18Z
| 2025-11-28T05:06:40Z
| null |
YagaoDirac
|
pytorch/xla
| 9,712
|
Why isn't there a binding for clearing the XLAGraphExecutor::ComputationCache?
|
We have exposed this function in our [tenstorrent fork](https://github.com/tenstorrent/pytorch-xla/pull/16/files) and found that it works for post-test cleanup.
My assumption was that TPU runtime does not require such a feature because it does not bind scarce device resources to PJRTComputation lifetime. So, implementers did not find it necessary to implement such a function. Is that correct? Were there any other reasons to avoid exposing this to the user?
|
https://github.com/pytorch/xla/issues/9712
|
open
|
[
"question",
"runtime"
] | 2025-11-14T15:19:55Z
| 2025-11-24T18:55:09Z
| null |
jameszianxuTT
|
huggingface/diffusers
| 12,662
|
question on stable_audio_transformer.py
|
Excuse me, I am learning the code of `class StableAudioDiTModel` and I don't understand what the argument `global_states_input_dim` is used for. It seems to be a mandatory component that is prepended to the hidden_states sequence, and its default dim seems larger than the transformer inner_dim. What is that component for? If it is used to take in additional conditions, that seems like something that could be done in the encoder outside; and compared with concatenating along the sequence, I think it may be better to repeat the condition embedding to the sequence length and concatenate along the hidden dim.
Also, what is the `sample_size: int = 1024` parameter used for in model creation? It does not seem to be used during the `forward` call.
The docstring of `StableAudioDiTModel.forward` says ```encoder_attention_mask (`torch.Tensor` of shape `(batch_size, sequence_len)`, *optional*):```. Why is the shape of encoder_attention_mask batch_size x sequence_len instead of batch_size x encoder_sequence_len, to be consistent with the shape of the input `encoder_hidden_states`?
And why is the return value of this `forward` the direct `(hidden_states,)` rather than `(hidden_states * attention_mask,)`?
About `StableAudioDiTModel.forward`, what are the shapes of the parameters `rotary_embedding` and `timestep`?
Why is the global_embedding concatenated before the hidden_states? I think hidden_states is what we want to generate during the DiT pipeline, while encoder_hidden_states is the conditioning signal, so global_embedding should be used to enrich encoder_hidden_states. Concatenating the global_embedding before the input hidden_states sequence changes the input seq_length; according to [[1]](https://github.com/Stability-AI/stable-audio-tools/blob/main/docs/conditioning.md#input-concatenation), the concatenation should be done along the feature_dim direction, shouldn't it?
It also seems to use a normal LayerNorm layer instead of an adaLN layer?
|
https://github.com/huggingface/diffusers/issues/12662
|
open
|
[] | 2025-11-14T09:26:01Z
| 2025-11-25T08:53:39Z
| 1
|
JohnHerry
|
vllm-project/vllm
| 28,717
|
[Usage]: Errors running vLLM docker in a closed environment with gpt-oss-120b on RTX 6000 Pro
|
### Your current environment
Can't get vLLM to start with the below configuration. Seems to have issues loading in the model .safetensors. Any ideas on what could be causing it?
vllm version: 0.11.1
CPU: Intel Xeon w7-2595X
GPU: NVIDIA RTX PRO 6000 Blackwell Max-Q Workstation Edition
Model: https://huggingface.co/openai/gpt-oss-120b/tree/main
Command:
docker run --rm --name vllm --gpus=all --runtime=nvidia -p 8000:8000 -e HF_HUB_OFFLINE=1 --ipc=host -v opt/models/cache/:/root/.cache/huggingface/hub vllm/vllm-openai:latest --model openai/gpt-oss-120b
Also tried:
docker run --rm --name vllm --gpus=all --runtime=nvidia -p 8000:8000 -e HF_HUB_OFFLINE=1 --ipc=host -v opt/models/cache/:/root/.cache/huggingface/hub vllm/vllm-openai:latest --model openai/gpt-oss-120b
with the same output.
Output:
INFO 11-12 06:23:18 [__init__.py:216] Automatically detected platform cuda.
(APIServer pid=1) INFO 11-12 06:23:21 [api_server.py:1839] vLLM API server version 0.11.0
(APIServer pid=1) INFO 11-12 06:23:21 [utils.py:233] non-default args: {'model': 'openai/gpt-oss-120b'}
(APIServer pid=1) INFO 11-12 06:23:21 [arg_utils.py:504] HF_HUB_OFFLINE is True, replace model_id [openai/gpt-oss-120b] to model_path [/root/.cache/huggingface/hub/models--openai--gpt-oss-120b/snapshots/b5c939de8f754692c1647ca79fbf85e8c1e70f8a]
(APIServer pid=1) `torch_dtype` is deprecated! Use `dtype` instead!
(APIServer pid=1) INFO 11-12 06:23:26 [model.py:547] Resolved architecture: GptOssForCausalLM
(APIServer pid=1) ERROR 11-12 06:23:26 [config.py:278] Error retrieving safetensors: Repo id must be in the form 'repo_name' or 'namespace/repo_name': '/root/.cache/huggingface/hub/models--openai--gpt-oss-120b/snapshots/b5c939de8f754692c1647ca79fbf85e8c1e70f8a'. Use `repo_type` argument if needed., retrying 1 of 2
(APIServer pid=1) ERROR 11-12 06:23:28 [config.py:276] Error retrieving safetensors: Repo id must be in the form 'repo_name' or 'namespace/repo_name': '/root/.cache/huggingface/hub/models--openai--gpt-oss-120b/snapshots/b5c939de8f754692c1647ca79fbf85e8c1e70f8a'. Use `repo_type` argument if needed.
(APIServer pid=1) INFO 11-12 06:23:28 [model.py:1730] Downcasting torch.float32 to torch.bfloat16.
(APIServer pid=1) INFO 11-12 06:23:28 [model.py:1510] Using max model len 131072
(APIServer pid=1) INFO 11-12 06:23:29 [scheduler.py:205] Chunked prefill is enabled with max_num_batched_tokens=8192.
(APIServer pid=1) INFO 11-12 06:23:29 [config.py:271] Overriding max cuda graph capture size to 992 for performance.
INFO 11-12 06:23:31 [__init__.py:216] Automatically detected platform cuda.
(EngineCore_DP0 pid=308) INFO 11-12 06:23:33 [core.py:644] Waiting for init message from front-end.
[1;36m(EngineCore_DP0 pid=308)[0;0m INFO 11-12 06:23:33 [core.py:77] Initializing a V1 LLM engine (v0.11.0) with config: model='/root/.cache/huggingface/hub/models--openai--gpt-oss-120b/snapshots/b5c939de8f754692c1647ca79fbf85e8c1e70f8a', speculative_config=None, tokenizer='/root/.cache/huggingface/hub/models--openai--gpt-oss-120b/snapshots/b5c939de8f754692c1647ca79fbf85e8c1e70f8a', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=131072, download_dir=None, load_format=auto, tensor_parallel_size=1, pipeline_parallel_size=1, data_parallel_size=1, disable_custom_all_reduce=False, quantization=mxfp4, enforce_eager=False, kv_cache_dtype=auto, device_config=cuda, structured_outputs_config=StructuredOutputsConfig(backend='auto', disable_fallback=False, disable_any_whitespace=False, disable_additional_properties=False, reasoning_parser='openai_gptoss'), observability_config=ObservabilityConfig(show_hidden_metrics_for_version=None, otlp_traces_endpoint=None, collect_detailed_traces=None), seed=0, served_model_name=/root/.cache/huggingface/hub/models--openai--gpt-oss-120b/snapshots/b5c939de8f754692c1647ca79fbf85e8c1e70f8a, enable_prefix_caching=True, chunked_prefill_enabled=True, pooler_config=None, compilation_config={"level":3,"debug_dump_path":"","cache_dir":"","backend":"","custom_ops":[],"splitting_ops":["vllm.unified_attention","vllm.unified_attention_with_output","vllm.mamba_mixer2","vllm.mamba_mixer","vllm.short_conv","vllm.linear_attention","vllm.plamo2_mamba_mixer","vllm.gdn_attention","vllm.sparse_attn_indexer"],"use_inductor":true,"compile_sizes":[],"inductor_compile_config":{"enable_auto_functionalized_v2":false},"inductor_passes":{},"cudagraph_mode":[2,1],"use_cudagraph":true,"cudagraph_num_of_warmups":1,"cudagraph_capture_sizes":[992,976,960,944,928,912,896,880,864,848,832,816,800,784,768,752,736,720,704,688,672,656,640,624,608,592,576,560,544,528,512,496,480,464,448,432,416,400,384,368,352,336,320,304,288,272,256,248,240,232,224,216,208,200,192,184,176,168,160,152,144,136,128,120,112,104,96,88,80,72,64,56,48
|
https://github.com/vllm-project/vllm/issues/28717
|
open
|
[
"usage"
] | 2025-11-14T08:49:48Z
| 2025-11-20T15:45:21Z
| 3
|
antonkarlsson1
|
pytorch/pytorch
| 167,820
|
Why does torch==2.9 crash when compiling a Qwen3 model with block pointers?
|
### 🐛 Describe the bug
Compiling with `torch._inductor.config.triton.use_block_ptr = True` works on torch==2.8, but torch 2.9 crashes as shown in the figure.
<img width="1268" height="402" alt="Image" src="https://github.com/user-attachments/assets/9cc9742a-be5b-4754-a954-01aac02fb936" />
```python
import torch
from vllm import LLM, SamplingParams
from vllm.config import CompilationConfig
from torch._inductor.lowering import make_fallback
prompts = [
"Hello, my name is Hello, my name is Hello, my name is Hello, my name is Hello, my name is Hello, my name is Hello, my name is Hello, my name is Hello, my name is Hello, my name is Hello, my name is Hello, my name is Hello, my name" ,
]
sampling_params = SamplingParams(temperature=0.0, top_p=0.95,max_tokens=2)
def main():
    torch._inductor.config.implicit_fallbacks = False
    torch._inductor.config.layout_optimization = False
    torch._inductor.config.prologue_fusion = True
    torch._inductor.config.permute_fusion = True
    torch._inductor.config.online_softmax = True
    torch._inductor.config.memory_planning = False
    torch._inductor.config.memory_pool = "intermediates"
    torch._inductor.config.autotune_local_cache = True
    torch._inductor.config.autotune_fallback_to_aten = False
    torch._inductor.config.max_autotune_gemm = True
    torch._inductor.config.max_autotune_gemm_backends = "TRITON"
    torch._inductor.config.triton.use_block_ptr = True
    torch._inductor.config.triton.prefer_nd_tiling = True
    torch._inductor.config.triton.tile_reductions = True
    torch._inductor.config.triton.codegen_upcast_to_fp32 = False
    llm = LLM(model="models/Qwen3-0.6B",
              dtype=torch.float16,
              enforce_eager=False,
              compilation_config=CompilationConfig(
                  mode=3,
                  cache_dir="output/vllm/compile"))
    outputs = llm.generate(prompts, sampling_params)
    print("\nGenerated Outputs:\n" + "-" * 60)
    for output in outputs:
        prompt = output.prompt
        generated_text = output.outputs[0].text
        print(f"Prompt: {prompt!r}")
        print(f"Output: {generated_text!r}")
        print("-" * 60)

if __name__ == "__main__":
    main()
```
### Versions
torch >=2.9.0
no other requirements
cc @chauhang @penguinwu @zou3519
|
https://github.com/pytorch/pytorch/issues/167820
|
open
|
[
"needs reproduction",
"triaged",
"oncall: pt2",
"vllm-compile",
"module: vllm"
] | 2025-11-14T08:06:29Z
| 2025-11-18T06:07:13Z
| 2
|
TracyMac1
|
pytorch/pytorch
| 167,818
|
undefined symbol for `at::meta::_index_put_impl_` when running or compiling executable on my own torch-related project.
|
### 🐛 Describe the bug
I have a torch extended backend (PrivateUse1). Somewhere in my code I invoke the `at::meta::_index_put_impl_` API, and an undefined symbol error occurs when I try to build an executable or run Python.
`at::meta::_index_put_impl_` appears to be a LOCAL symbol in libtorch_cpu.so and is not present in dynsym. Why?
It is marked `TORCH_API`, just like `at::cpu::_index_put_impl_`, but I only find `at::cpu::_index_put_impl_` in the output of `nm -CD libtorch_cpu.so`, not `at::meta::_index_put_impl_`.
How can I use this API, or other APIs like it, in my own shared lib?
```bash
nm -C libtorch_cpu.so| grep -E "at::(cpu|meta)::_index_put_impl_"
0000000003403cc8 T at::cpu::_index_put_impl_(at::Tensor&, c10::List<std::optional<at::Tensor> > const&, at::Tensor const&, bool, bool)
00000000048592a3 t at::meta::_index_put_impl_(at::Tensor&, c10::List<std::optional<at::Tensor> > const&, at::Tensor const&, bool, bool)
```
```bash
nm -CD libtorch_cpu.so| grep -E "at::(cpu|meta)::_index_put_impl_"
0000000003403cc8 T at::cpu::_index_put_impl_(at::Tensor&, c10::List<std::optional<at::Tensor> > const&, at::Tensor const&, bool, bool)
```
```bash
readelf -CWs libtorch_cpu.so| grep -E "at::(cpu|meta)::_index_put_impl_"
32915: 0000000003403cc8 70 FUNC GLOBAL DEFAULT 12 at::cpu::_index_put_impl_(at::Tensor&, c10::List<std::optional<at::Tensor> > const&, at::Tensor const&, bool, bool)
4301074: 00000000048592a3 70 FUNC LOCAL DEFAULT 12 at::meta::_index_put_impl_(at::Tensor&, c10::List<std::optional<at::Tensor> > const&, at::Tensor const&, bool, bool)
4441498: 0000000003403cc8 70 FUNC GLOBAL DEFAULT 12 at::cpu::_index_put_impl_(at::Tensor&, c10::List<std::optional<at::Tensor> > const&, at::Tensor const&, bool, bool)
```
### Versions
PyTorch version: 2.9.0a0+git0fabc3b
Is debug build: True
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.1 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04.2) 11.4.0
Clang version: Could not collect
CMake version: version 4.1.0
Libc version: glibc-2.35
Python version: 3.12.12 | packaged by Anaconda, Inc. | (main, Oct 14 2025, 16:16:33) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-43-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
Is XPU available: False
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 224
On-line CPU(s) list: 0-223
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8480+
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 56
Socket(s): 2
Stepping: 8
CPU max MHz: 3800.0000
CPU min MHz: 800.0000
BogoMIPS: 4000.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 5.3 MiB (112 instances)
L1i cache: 3.5 MiB (112 instances)
L2 cache: 224 MiB (112 instances)
L3 cache: 210 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-55,112-167
NUMA node1 CPU(s): 56-111,168-223
Vulnerability Itlb multihit: Not affected
Vulnerabil
|
https://github.com/pytorch/pytorch/issues/167818
|
open
|
[
"module: binaries",
"triaged",
"actionable",
"module: PrivateUse1"
] | 2025-11-14T07:32:33Z
| 2025-12-08T06:43:48Z
| 4
|
sunjiabin17
|
huggingface/trl
| 4,525
|
How to modify the advantage computation in GRPOTrainer
|
I’m looking to customize the advantage computation used in the DAPO algorithm. Do I need to subclass the full GRPOTrainer to do this, or is it sufficient to overwrite the logic in _generate_and_score_completions, since that method appears to handle the advantage calculation?
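For what it's worth, a minimal sketch of the subclass route (hypothetical: the exact signature and return structure of `_generate_and_score_completions` may differ across trl versions, and the "advantages" key below is an assumption, so treat this as a pattern rather than working code):

```python
from trl import GRPOTrainer


class DAPOAdvantageTrainer(GRPOTrainer):
    def _generate_and_score_completions(self, inputs):
        # Reuse the parent logic for generation and reward scoring, then rewrite
        # only the advantage entries (assuming they are exposed in the output).
        out = super()._generate_and_score_completions(inputs)
        out["advantages"] = self._dapo_advantages(out["advantages"])
        return out

    def _dapo_advantages(self, advantages):
        # Placeholder for the DAPO-specific advantage computation.
        return advantages
```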
|
https://github.com/huggingface/trl/issues/4525
|
open
|
[
"❓ question",
"🏋 GRPO"
] | 2025-11-14T03:48:17Z
| 2025-11-14T11:37:18Z
| null |
Tuziking
|
huggingface/transformers
| 42,200
|
Request of rewriting implementation of prediction_step in trainer.py
|
### System Info
Any system. Because it's a problem coming from source code.
### Who can help?
@SunMarc
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
Hi, I am talking about an issue that was reported 5 years ago but still exists as of 13th Nov, 2025.
I quote one of the discussions from back then, which was ignored by sgugger. Please find the link below:
https://discuss.huggingface.co/t/cuda-out-of-memory-when-using-trainer-with-compute-metrics/2941
When I was about to fine-tune an LLM today, I ran into the same issue, but I was saved by a solution one user provided in that discussion.
How to reproduce (you should have a GPU, no quantization, just full fine tuning):
1. Find a random decoder-only text2text LLM, let's say Qwen3 0.6B.
2. Prepare a train dataset (>0 rows) and eval dataset (>850 rows).
3. Set eval_on_start = True, either TrainingArguments or SFTConfig could work.
4. Implement your own compute_metrics BUT DON'T implement preprocess_logits_for_metrics.
5. start training (don't need deepspeed or accelerate, just trainer.train())
What would happen?
First it goes through the evaluation dataset because I set eval_on_start=True. The model runs quickly at first but then becomes extremely slow. Finally, you get an error saying numpy is trying to allocate a ridiculously large array in memory.
<img width="1567" height="986" alt="Image" src="https://github.com/user-attachments/assets/e1885324-fb09-48b6-8bfd-d36306c2a156" />
One user, apparently inspired by the example code, provided an implementation of preprocess_logits_for_metrics, which solved the problem I encountered perfectly. The evaluation run finished within 2 minutes.
Why it would happen?
I briefly went over the source code of evaluation_loop and located prediction_step.
prediction_step says it would return a tuple of three optional torch.Tensor (loss, logits, label).
<img width="719" height="68" alt="Image" src="https://github.com/user-attachments/assets/537032b1-9371-4852-bed8-8f31cd6a0437" />
But most of the time, the returned logits are actually a tuple.
Why?
Look at the function that processes the logits before they are returned:
<img width="535" height="140" alt="Image" src="https://github.com/user-attachments/assets/d6f7f3b1-6c2a-4298-b4c2-f0ab85fa88cf" />
This function receives all kinds of "tensors". The type of "tensors" can be list, tuple, Mapping or torch.Tensor.
Does it convert the variable called "tensors" from those other data types to torch.Tensor?
No.
type(tensors)(........) preserves the original type of tensors. That means if the variable "tensors" (a name I find misleading and confusing) is a tuple, it is still a tuple after this function!
It's a recursive function, by the way; I love recursion in programming competitions, but not in the huggingface codebase. It also implies that the input of nested_detach can be arbitrarily nested, like ([], ()).
So this function does not guarantee that the logits are a torch.Tensor.
Neither does the implementation of prediction_step before nested_detach is called:
<img width="702" height="759" alt="Image" src="https://github.com/user-attachments/assets/451982b4-648b-4876-a2b7-c9d748899fd1" />
So the logits are not always a torch.Tensor, which contradicts the type hint. What did the developers do?
They developed preprocess_logits_for_metrics,
so that users could fix it ON THEIR OWN.
(preprocess_logits_for_metrics is called within evaluation_loop to clean up the mess, specifically the logits returned by prediction_step().)
<img width="803" height="772" alt="Image" src="https://github.com/user-attachments/assets/c7494018-e282-4577-b824-3db9c9e57609" />
It's such a lazy fix. Why is a regular user expected to implement their own preprocess_logits_for_metrics to deal with a poorly designed prediction_step?
It has been 5 years since this was first reported.
If a user-defined compute_metrics is not provided to Trainer or SFTTrainer, prediction_step returns (loss, None, None), which sidesteps the whole problem; this is why users say the "slow evaluation" issue disappears when they don't provide compute_metrics.
I would like to make a Pull Request to fix it, but I don't have enough time and energy for that amount of work.
A temporary fix is to let users know that when they write their own compute_metrics, they also have to implement preprocess_logits_for_metrics. Different models need different implementations, but for a text2text decoder-only LLM:
<img width="687" height="78" alt="Image" src="https://github.com/user-attachments/assets/125fffe3-d8cc-44c7-9a96-35a11500d975" />
(Another thing is that the variable called
|
https://github.com/huggingface/transformers/issues/42200
|
open
|
[
"Good Second Issue",
"bug"
] | 2025-11-14T00:13:40Z
| 2025-12-18T14:29:32Z
| 3
|
Yacklin
|
huggingface/transformers
| 42,197
|
Attempt to access socket despite HF_HUB_OFFLINE = 1 if cache warmed outside current process
|
### System Info
- `transformers` version: 4.57.1
- Platform: Linux-6.6.84.1-microsoft-standard-WSL2-x86_64-with-glibc2.35
- Python version: 3.13.0
- Huggingface_hub version: 0.36.0
- Safetensors version: 0.6.2
- Accelerate version: not installed
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (accelerator?): 2.9.1+cpu (NA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: No
### Who can help?
@ydshieh I have created a reproducible example of the issue I mentioned in https://github.com/huggingface/transformers/issues/41311#issuecomment-3508674325.
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
Reproducible example: https://github.com/fr1ll/HF_HUB_OFFLINE
Warming the cache in a subprocess, then disabling sockets, then loading the same model should work.
However, it fails with an attempt to access a socket and then "Can't load" errors.
The script named `subprocess-warm_then_offline-load.py` reproduces this error.
Interestingly, warming the cache in process, then disabling sockets, then loading the same model works.
This is reproduced in `inprocess-warm_then_offline-load.py` in the repo above.
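A minimal sketch of the failing scenario, assuming a small hub model id as a placeholder (the scripts in the linked repo may differ in detail, e.g. they also disable sockets explicitly):
```python
import subprocess, sys

MODEL_ID = "hf-internal-testing/tiny-random-bert"  # placeholder model id

# 1) Warm the cache in a separate process (network access is allowed here).
subprocess.run(
    [sys.executable, "-c",
     f"from transformers import AutoModel, AutoTokenizer; "
     f"AutoModel.from_pretrained('{MODEL_ID}'); AutoTokenizer.from_pretrained('{MODEL_ID}')"],
    check=True,
)

# 2) In the current process, switch to offline mode *before* importing transformers.
import os
os.environ["HF_HUB_OFFLINE"] = "1"

from transformers import AutoModel
# Expected: loads purely from the warm cache; observed: a socket is opened and loading fails.
model = AutoModel.from_pretrained(MODEL_ID)
```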
### Expected behavior
When a model has already been downloaded into the cache ("warm cache"), if `HF_HUB_OFFLINE` = `"1"`, a Transformers pipeline should be able to load the model without accessing any network sockets.
|
https://github.com/huggingface/transformers/issues/42197
|
closed
|
[
"Good Second Issue",
"bug"
] | 2025-11-13T21:38:29Z
| 2025-11-24T09:33:54Z
| 6
|
fr1ll
|
vllm-project/vllm
| 28,646
|
[Feature][P2]: Implement CI Build Time and Size Guards
|
### 🚀 The feature, motivation and pitch
### Description
Once we optimize the Docker build, we need to prevent regressions. Create CI checks that fail if build time exceeds thresholds or if image size grows beyond acceptable limits. Also set up monitoring dashboards.
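As a rough illustration only (the script name, environment variables, and thresholds below are hypothetical, not existing vLLM tooling), such a guard could look like:
```python
# check-image-size.py (hypothetical): fail the CI step if the built image
# exceeds a configurable size threshold.
import os
import sys

MAX_IMAGE_SIZE_GB = float(os.environ.get("VLLM_MAX_IMAGE_SIZE_GB", "10"))

def main() -> int:
    size_bytes = int(os.environ["IMAGE_SIZE_BYTES"])  # assumed to be exported by the build step
    size_gb = size_bytes / 1e9
    print(f"Image size: {size_gb:.2f} GB (limit {MAX_IMAGE_SIZE_GB:.2f} GB)")
    if size_gb > MAX_IMAGE_SIZE_GB:
        print("ERROR: image size exceeds threshold", file=sys.stderr)
        return 1
    return 0

if __name__ == "__main__":
    sys.exit(main())
```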
### What You'll Do
1. Create Python scripts to check image metrics:
- `check-image-size.py` (extend existing wheel size checker)
- `check-build-time.py`
- `check-image-layers.py`
2. Add these checks to CI pipeline after image build
3. Set appropriate thresholds (configurable)
4. Create Buildkite annotations for warnings
5. Set up CloudWatch dashboard for metrics (optional)
### Deliverables
- [ ] Python scripts for checking metrics
- [ ] Integration into test-template-ci.j2
- [ ] Configurable thresholds via environment variables
- [ ] Documentation on how to adjust thresholds
- [ ] CloudWatch dashboard (optional)
### Alternatives
_No response_
### Additional context
_No response_
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/28646
|
open
|
[
"feature request",
"ci/build"
] | 2025-11-13T12:50:34Z
| 2025-11-13T18:55:29Z
| 0
|
rzabarazesh
|
pytorch/pytorch
| 167,721
|
Minimal, comprehensive test suite
|
We are building PyTorch from source using, among others, the system installed CUDA.
Currently we are running the full test suite to ensure nothing got broken due to e.g. wrong dependency versions or missing dependencies. I.e. `python test/run_test.py --continue-through-error`
However, that takes up to 3 days on a GPU node of a HPC cluster and shows random failures due to small accuracy issues or "unlucky" random inputs.
Is there some smaller test suite that can be used to verify sufficiently large parts of PyTorch but runs much faster?
I noticed that requesting a PR merge has an anticipated delay of 3-4hrs for running some tests. What is exactly used there? Could that be enough for our use case too?
So what I request is some documentation next to the "Building PyTorch from source" section on how to verify the built package in a reasonable time frame.
cc @svekars @sekyondaMeta @AlannaBurke
|
https://github.com/pytorch/pytorch/issues/167721
|
open
|
[
"module: docs",
"feature",
"triaged",
"module: infra",
"module: testing"
] | 2025-11-13T12:18:42Z
| 2025-11-26T21:55:31Z
| 5
|
Flamefire
|
huggingface/diffusers
| 12,650
|
Question about the `# Copied from` system
|
Hi team! 👋
While working on improving docstrings and type hints across scheduler files (issue #9567), I've noticed the `# Copied from` pattern used extensively throughout the codebase.
Examples:
- Functions like `betas_for_alpha_bar` are duplicated across multiple schedulers
- Output classes like `DDPMSchedulerOutput` are copied with name replacements (e.g., DDPM->EulerDiscrete)
My question: What's the rationale behind this duplication system instead of:
1. Using a shared utils.py or common.py file for common functions
2. Using class inheritance for similar Output classes
I understand there might be good architectural reasons (module independence, API stability, avoiding circular dependencies, etc.), but this isn't documented anywhere that I could find.
Suggested action: Regardless of the answer, I think we should either:
- Option A: Refactor to use inheritance/shared utilities (if the current system is legacy)
- Option B: Document this design decision in:
- A CONTRIBUTING.md or architecture doc
- Comments in the utils/check_copies.py script itself
- Another README in the diffusers directory
This would help future contributors (like me! 😅) understand why this pattern exists and how to work with it properly when improving documentation. What do you think?
Thanks for maintaining such a great library! 🚀
|
https://github.com/huggingface/diffusers/issues/12650
|
open
|
[] | 2025-11-13T11:53:22Z
| 2025-12-21T22:44:03Z
| 3
|
delmalih
|
huggingface/transformers
| 42,179
|
Add TileLang Kernel Support
|
### Feature request
I would like to propose adding support for TileLang kernel in the transformers library. TileLang is a modular approach for writing attention kernels that could provide flexibility and performance benefits.
github link: https://github.com/tile-ai/tilelang
- Add TileLang as an optional attention backend
- Provide configuration options similar to existing attention mechanisms
- Ensure compatibility with existing model architectures
- Add proper multi-GPU support and synchronization
### Motivation
- Enhanced Modularity
TileLang offers a more modular approach to writing attention kernels, making it easier for researchers and developers to modify and optimize the implementation for specific use cases.
- Performance Comparison
Integrating TileLang would allow users to benchmark its performance directly against existing attention implementations, such as Flex Attention and Flash Attention. This would foster a better understanding of how different kernels can impact model performance and efficiency.
- Community Engagement
Supporting TileLang in the Transformers library would attract a broader community of developers interested in optimizing transformer models, thus enhancing collaboration and innovation.
- Flexibility
TileLang's architecture is designed for ease of modification, allowing users to experiment with and refine attention mechanisms more effectively.
### Your contribution
I've experimented with TileLang kernel on transformers models and found it works well in single-GPU scenarios. However, when enabling multi-GPU inference using `device_map='auto'`, I encounter NaN tensors. This may be related to tensor synchronization issues in distributed settings.
I'm willing to help with testing and potentially contributing to the implementation once the multi-GPU synchronization issue is understood and resolved.
I also have 3 questions:
1. Is there any existing plan or roadmap for TileLang integration?
2. Are there specific guidelines for adding new attention backends?
3. What would be the recommended approach for handling multi-GPU synchronization in custom kernels?
|
https://github.com/huggingface/transformers/issues/42179
|
open
|
[
"Feature request"
] | 2025-11-13T11:38:33Z
| 2025-11-13T11:38:33Z
| 0
|
crownz248
|
huggingface/tokenizers
| 1,885
|
Feature request: Characters delimiter argument
|
I wish to develop a k-mer-character-based BPE tokenizer using your beautiful Rust package, for genomic applications. Unfortunately, it doesn't seem to support defining a character delimiter. As I see it, it is a pretty straightforward change: instead of iterating over a word by character, first split it by the delimiter and then iterate. Also, when merges are computed, the character delimiter should be taken into account in the string representation. In that way, multi-character word splitting could be made feasible. Right now I am using a modified Python version of the BPE tokenizer made by the genius [Yikai-Liao](https://github.com/Yikai-Liao/efficient_bpe/blob/main/ebpe_v2.py), but it would be nice to see that happening in Rust as well, and natively supported by huggingface. Unfortunately, I am still a novice in Rust, otherwise I would make a pull request with the suggested changes. Is this something that can be worked out in the future? Or is there a way to do this with the current implementation? Thank you!
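A minimal sketch of the idea in Python (assuming k-mers are joined by a simple delimiter; this is illustrative, not the tokenizers API):
```python
DELIM = " "

def initial_symbols(word: str, delim: str = DELIM) -> list[str]:
    # Character-based BPE would start from list(word); a delimiter-aware BPE
    # would start from multi-character k-mer symbols instead.
    return word.split(delim)

print(initial_symbols("ATG CGT TAA"))  # ['ATG', 'CGT', 'TAA']
```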
|
https://github.com/huggingface/tokenizers/issues/1885
|
open
|
[] | 2025-11-13T10:40:29Z
| 2025-11-28T07:51:07Z
| 1
|
VasLem
|
vllm-project/vllm
| 28,629
|
[Usage]: TPOT per request information was not collected by vllm bench serve
|
### Your current environment
```text
The output of `python collect_env.py`
Collecting environment information...
==============================
System Info
==============================
OS : Ubuntu 24.04.2 LTS (x86_64)
GCC version : (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version : Could not collect
CMake version : version 4.1.0
Libc version : glibc-2.39
==============================
PyTorch Info
==============================
PyTorch version : 2.8.0+xpu
Is debug build : False
CUDA used to build PyTorch : None
ROCM used to build PyTorch : N/A
==============================
Python Environment
==============================
Python version : 3.12.3 (main, Aug 14 2025, 17:47:21) [GCC 13.3.0] (64-bit runtime)
Python platform : Linux-6.14.0-1006-intel-x86_64-with-glibc2.39
==============================
CUDA / GPU Info
==============================
Is CUDA available : False
CUDA runtime version : No CUDA
CUDA_MODULE_LOADING set to : N/A
GPU models and configuration : No CUDA
Nvidia driver version : No CUDA
cuDNN version : No CUDA
HIP runtime version : N/A
MIOpen runtime version : N/A
Is XNNPACK available : True
==============================
CPU Info
==============================
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Vendor ID: GenuineIntel
BIOS Vendor ID: Intel(R) Corporation
Model name: Intel(R) Xeon(R) w5-3435X
BIOS Model name: Intel(R) Xeon(R) w5-3435X CPU @ 3.1GHz
BIOS CPU family: 179
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 1
Stepping: 8
CPU(s) scaling MHz: 45%
CPU max MHz: 4700.0000
CPU min MHz: 800.0000
BogoMIPS: 6192.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect user_shstk avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req vnmi avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr ibt amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 768 KiB (16 instances)
L1i cache: 512 KiB (16 instances)
L2 cache: 32 MiB (16 instances)
L3 cache: 45 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-31
Vulnerability Gather data sampling: Not affected
Vulnerability Ghostwrite: Not affected
Vulnerability Indirect target selection: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and
|
https://github.com/vllm-project/vllm/issues/28629
|
open
|
[
"usage"
] | 2025-11-13T09:20:19Z
| 2025-11-13T09:20:19Z
| 0
|
jlwang1996
|
vllm-project/vllm
| 28,626
|
[Bug]:Qwen3-VL-32B-AWQ model memory usage: 8k context limit with 40GB VRAM?
|
### Your current environment
<details>
<summary>The output of <code>python collect_env.py</code></summary>
```text
Your output of `python collect_env.py` here
```
</details>
### 🐛 Describe the bug
Running models on the latest stable vLLM release: https://huggingface.co/QuantTrio/Qwen3-VL-32B-Instruct-AWQ
The model size is 20GB, and my GPU has 40GB VRAM total.
Using parameter: --gpu-memory-utilization 0.9
Why am I only getting around 8k max context length? Do VL models really hog that much VRAM?
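For context, a back-of-the-envelope way to estimate how many tokens of KV cache fit in the remaining memory (all architecture numbers below are placeholders, not the real Qwen3-VL-32B config):
```python
# Rough KV-cache sizing sketch; layer/head numbers are placeholders.
num_layers = 64       # placeholder
num_kv_heads = 8      # placeholder (GQA)
head_dim = 128        # placeholder
dtype_bytes = 2       # fp16/bf16 cache

bytes_per_token = 2 * num_layers * num_kv_heads * head_dim * dtype_bytes  # K and V
kv_budget_gb = 40 * 0.9 - 20  # GPU budget minus weights, ignoring activations/encoder overhead
max_tokens = kv_budget_gb * 1e9 / bytes_per_token
print(f"{bytes_per_token} bytes/token -> roughly {int(max_tokens)} tokens of KV cache")
```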
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/28626
|
open
|
[
"bug"
] | 2025-11-13T08:00:20Z
| 2025-11-17T07:08:47Z
| 3
|
maxin9966
|
vllm-project/vllm
| 28,622
|
[Bug]: Can we able to benchmark Quantized MOE models Either W8A8 or W8A16 ?
|
### Your current environment
<details>
<summary>The output of <code>python collect_env.py</code></summary>
```text
Collecting environment information...
==============================
System Info
==============================
OS : Ubuntu 24.04.2 LTS (x86_64)
GCC version : (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version : Could not collect
CMake version : version 3.22.1
Libc version : glibc-2.39
==============================
PyTorch Info
==============================
PyTorch version : 2.8.0+cu128
Is debug build : False
CUDA used to build PyTorch : 12.8
ROCM used to build PyTorch : N/A
==============================
Python Environment
==============================
Python version : 3.10.18 (main, Jun 4 2025, 08:56:00) [GCC 13.3.0] (64-bit runtime)
Python platform : Linux-6.14.0-33-generic-x86_64-with-glibc2.39
==============================
CUDA / GPU Info
==============================
Is CUDA available : True
CUDA runtime version : 12.0.140
CUDA_MODULE_LOADING set to : LAZY
GPU models and configuration : GPU 0: NVIDIA RTX 6000 Ada Generation
Nvidia driver version : 575.57.08
cuDNN version : Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.11.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.11.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.11.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.11.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.11.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.11.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.11.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.11.0
HIP runtime version : N/A
MIOpen runtime version : N/A
Is XNNPACK available : True
==============================
CPU Info
==============================
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 52
On-line CPU(s) list: 0-51
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) w7-2595X
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 26
Socket(s): 1
Stepping: 8
CPU(s) scaling MHz: 21%
CPU max MHz: 4800.0000
CPU min MHz: 800.0000
BogoMIPS: 5616.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect user_shstk avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req vnmi avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr ibt amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 1.2 MiB (26 instances)
L1i cache: 832 KiB (26 instances)
L2 cache: 52 MiB (26 instances)
L3 cache: 48.8 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-51
Vulnerability Gather data sampling: Not affected
Vulnerability Ghostwrite: Not affected
Vulnerability Indirect target selection: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not af
|
https://github.com/vllm-project/vllm/issues/28622
|
open
|
[
"bug"
] | 2025-11-13T07:26:56Z
| 2025-11-13T07:27:06Z
| 0
|
logesh13
|
pytorch/pytorch
| 167,716
|
`torch.sparse.mm` returns corrupted sparse tensor causing Segmentation fault in `to_dense()` on PyTorch 2.9.0
|
### 🐛 Describe the bug
I experienced a problem while using the "torch.sparse.mm()" function, which prompted me to consult the official documentation for clarification. The documentation includes sample code that executes successfully. According to the documentation, the second matrix parameter accepts both sparse and dense matrices. In the official example provided, the second matrix is implemented as a dense matrix. The example code runs as follows:
```python
import torch
a = torch.tensor([[1., 0, 2], [0, 3, 0]]).to_sparse().requires_grad_()
b = torch.tensor([[0, 1.], [2, 0], [0, 0]], requires_grad=True)
y = torch.sparse.mm(a, b)
z = y.to_dense()
```
The official example executes successfully. Out of curiosity about the behavior when the second matrix parameter is sparse, I created a custom sparse matrix to test the functionality. The sparse matrix multiplication operation itself completed without errors, but attempting to inspect the result using "to_dense()" caused a "Segmentation fault".
The code runs as follows:
```python
import torch
torch.manual_seed(42)
indices_A = torch.tensor([[0, 1, 2], [0, 2, 3]])
values_A = torch.tensor([1.0, 2.0, 3.0])
A = torch.sparse_coo_tensor(indices_A, values_A, size=(3, 4))
indices_B = torch.tensor([[0, 1, 2, 3], [0, 1, 1, 2]])
values_B = torch.tensor([4.0, 5.0, 6.0, 7.0])
B = torch.sparse_coo_tensor(indices_B, values_B, size=(4, 2))
C = torch.sparse.mm(A, B)
C = C.to_dense()
```
The output is:
```
test2.py:13: UserWarning: Sparse CSR tensor support is in beta state. If you miss a functionality in the sparse tensor support, please submit a feature request to https://github.com/pytorch/pytorch/issues. (Triggered internally at /pytorch/aten/src/ATen/SparseCsrTensorImpl.cpp:53.)
C = torch.sparse.mm(A, B)
Segmentation fault (core dumped)
```
To further investigate the root cause, I proceeded to debug the code. During debugging, I found that simply printing the two matrices prior to invoking torch.sparse.mm() resolved the segmentation fault. The specific modification is shown below:
```python
import torch
torch.manual_seed(42)
indices_A = torch.tensor([[0, 1, 2], [0, 2, 3]])
values_A = torch.tensor([1.0, 2.0, 3.0])
A = torch.sparse_coo_tensor(indices_A, values_A, size=(3, 4))
indices_B = torch.tensor([[0, 1, 2, 3], [0, 1, 1, 2]])
values_B = torch.tensor([4.0, 5.0, 6.0, 7.0])
B = torch.sparse_coo_tensor(indices_B, values_B, size=(4, 2))
print("A:", A)
print("B:", B)
C = torch.sparse.mm(A, B)
C = C.to_dense()
```
Based on this observation, I suspect that the print() function inadvertently triggers necessary initialization procedures that should occur within "torch.sparse.mm()", but due to an implementation flaw in the sparse matrix multiplication function, these critical initialization steps are not being properly executed, resulting in the "Segmentation fault".
To investigate the root cause of the issue, I proceeded to examine the internal data structures of the matrices by printing their properties. The diagnostic code is shown below:
```python
import torch
import os
torch.manual_seed(42)
indices_A = torch.tensor([[0, 1, 2], [0, 2, 3]])
values_A = torch.tensor([1.0, 2.0, 3.0])
A = torch.sparse_coo_tensor(indices_A, values_A, size=(3, 4))
indices_B = torch.tensor([[0, 1, 2, 3], [0, 1, 1, 2]])
values_B = torch.tensor([4.0, 5.0, 6.0, 7.0])
B = torch.sparse_coo_tensor(indices_B, values_B, size=(4, 2))
C = torch.sparse.mm(A, B)
indices = C.indices()
values = C.values()
print(f" Indices shape: {indices.shape}")
print(f" Values shape: {values.shape}")
print(f" Indices range: [{indices.min().item()}, {indices.max().item()}]")
print(f" Values range: [{values.min().item()}, {values.max().item()}]")
C = C.to_dense()
```
The output is:
<img width="326" height="90" alt="Image" src="https://github.com/user-attachments/assets/28078672-9831-4492-a6aa-0f640776ef44" />
As highlighted in the red box, the index data of the resulting matrix C appears to be corrupted, leading the "to_dense()" function to attempt accessing an invalid memory address. This concludes my current analysis of the issue. Since I'm not proficient in C++, I'm unable to conduct deeper source code investigation. I greatly appreciate any insights you can provide!
### Versions
Collecting environment information...
PyTorch version: 2.9.0+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.3 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.39
Python version: 3.13.9 (main, Oct 14 2025, 21:29:44) [Clang 20.1.4 ] (64-bit runtime)
Python platform: Linux-6.8.0-86-generic-x86_64-with-glibc2.39
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
Is XPU available: False
HIP runtime version: N/A
MIOpen runti
|
https://github.com/pytorch/pytorch/issues/167716
|
closed
|
[
"module: sparse",
"module: crash"
] | 2025-11-13T07:25:20Z
| 2025-11-13T16:57:47Z
| 2
|
David-YB
|
vllm-project/vllm
| 28,610
|
[Usage]: Does 0.11.0 suport tree attenton with eagle?
|
### Your current environment
Does 0.11.0 suport tree attenton with eagle? Do I need to enable it manually?
### How would you like to use vllm
I want to run inference of a [specific model](put link here). I don't know how to integrate it with vllm.
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/28610
|
open
|
[
"usage"
] | 2025-11-13T03:35:02Z
| 2025-12-03T17:08:16Z
| 1
|
wincle
|
huggingface/datasets
| 7,864
|
add_column and add_item erroneously(?) require new_fingerprint parameter
|
### Describe the bug
Contradicting their documentation (which doesn't mention the parameter at all), both Dataset.add_column and Dataset.add_item require a new_fingerprint string. This parameter is passed directly to the dataset constructor, which has the fingerprint parameter listed as optional; is there any reason it shouldn't be optional in these methods as well?
### Steps to reproduce the bug
Reproduction steps:
1. Look at the function signature for add_column: https://github.com/huggingface/datasets/blob/17f40a318a1f8c7d33c2a4dd17934f81d14a7f57/src/datasets/arrow_dataset.py#L6078
2. Repeat for add_item: https://github.com/huggingface/datasets/blob/17f40a318a1f8c7d33c2a4dd17934f81d14a7f57/src/datasets/arrow_dataset.py#L6336
### Expected behavior
add_column and add_item should either set the fingerprint parameter to optional or include it in their docstrings
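A hypothetical sketch of the first option (illustrative only, not the current `datasets` signature): make the parameter optional with a documented default.
```python
from typing import Optional

def add_column(self, name: str, column, new_fingerprint: Optional[str] = None):
    """Add a column to the dataset.

    If ``new_fingerprint`` is None, a fingerprint could be derived automatically,
    matching the optional ``fingerprint`` parameter of the Dataset constructor.
    """
    ...
```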
### Environment info
Not environment-dependent
|
https://github.com/huggingface/datasets/issues/7864
|
open
|
[] | 2025-11-13T02:56:49Z
| 2025-12-07T14:41:40Z
| 2
|
echthesia
|
vllm-project/vllm
| 28,566
|
[Usage]: pd disagg scenario , I discover in the decoder , also has the prefill operation, is it normal ?
|
### Your current environment
When num_computed_tokens is less than num_prompt_tokens, it enters the prefill operation.
<img width="633" height="149" alt="Image" src="https://github.com/user-attachments/assets/bab96187-37c8-4ea2-ba68-9f52dda07f6b" />
And I found that num_computed_tokens can be less than num_prompt_tokens, because num_prompt_tokens is len(block_ids) * self.block_size; even when num_computed_tokens is equal to num_prompt_tokens, it does num_computed_tokens -= 1. Why? This causes num_computed_tokens to never be equal to num_prompt_tokens.
<img width="980" height="762" alt="Image" src="https://github.com/user-attachments/assets/81eb6f4f-f0db-45f8-8934-64bd8ea21988" />
### How would you like to use vllm
I want to run inference of a [specific model](put link here). I don't know how to integrate it with vllm.
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/28566
|
open
|
[
"usage"
] | 2025-11-12T16:18:53Z
| 2025-11-12T16:18:53Z
| 0
|
yangshanjun
|
vllm-project/vllm
| 28,564
|
[Usage]: Can't get ModernBert models to run in vllm serve
|
### Your current environment
I am trying to download and use ModernBertModel with the vllm serve feature.
At first I thought it was an issue with the model so I switched from trying to use BertEmbed with Alibaba-NLP/gte-modernbert-base since it appears in the docs as a model that supports embedding.
Source: https://docs.vllm.ai/en/latest/models/supported_models/#pooling-models
I download and run it like this.
Download:
`huggingface-cli download Alibaba-NLP/gte-modernbert-base --local-dir models/bert --local-dir-use-symlinks False`
Serve (example, I have used many iterations):
`vllm serve models/bert2 --host 0.0.0.0 --port 8003 --task embed --trust-remote-code --gpu-memory-utilization 0.3`
No matter what I try, I get this: Assertion failed, The model should be a generative or pooling model when task is set to 'embedding'. [type=assertion_error, input_value=ArgsKwargs((), {'model': ...rocessor_plugin': None}), input_type=ArgsKwargs]
I tried setting runner but that didn't do a thing. I really have no clue why it says this model is supported in the docs. I have searched through other issues and documentation to try out a bunch of solutions but obviously none have worked so far. Been trying to figure this out for hours now and I am losing my mind (not relevant ig, need to vent).
### How would you like to use vllm
I want to run inference of a Alibaba-NLP/gte-modernbert-base or any ModernBertModel. I don't know how to integrate it with vllm.
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/28564
|
open
|
[
"usage"
] | 2025-11-12T15:51:18Z
| 2025-11-12T15:51:18Z
| 0
|
Logikschleifen
|
pytorch/pytorch
| 167,631
|
`jit.export` analoge for `torch.export`
|
### 🚀 The feature, motivation and pitch
According to the documentation, [`TorchScript` is deprecated in favor of `torch.export`](https://docs.pytorch.org/docs/stable/jit.html).
However, `torch.jit.script` offered some functionality that does not seem to be covered by `torch.export`, specifically the ability to export multiple entry points via the `@jit.export` decorator. This is useful in many situations, for instance when working with Normalizing Flows and wanting to use both the forward and inverse methods, with probabilistic models that have multiple relevant methods, or for declaring additional state-modifying functions.
Without this functionality, it's unclear how to convert models that relied on `jit.export` to the new `torch.export` setup.
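A minimal sketch of the pattern in question, plus one possible (assumed) workaround of exporting each entry point as a separate program via wrapper modules; this is not an official migration path:
```python
import torch
import torch.nn as nn

class Flow(nn.Module):
    def forward(self, x):
        return x * 2.0

    @torch.jit.export  # extra entry point under TorchScript
    def inverse(self, y):
        return y / 2.0

# TorchScript: both entry points live in one scripted module.
scripted = torch.jit.script(Flow())

# torch.export: one ExportedProgram per entry point (a workaround, not an equivalent).
class InverseWrapper(nn.Module):
    def __init__(self, flow):
        super().__init__()
        self.flow = flow

    def forward(self, y):
        return self.flow.inverse(y)

flow = Flow()
x = torch.randn(4)
ep_forward = torch.export.export(flow, (x,))
ep_inverse = torch.export.export(InverseWrapper(flow), (x,))
```
Note that the two exported programs do not share parameters or state, which is exactly the gap this issue is about.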
Related Discussions:
- https://github.com/pytorch/executorch/issues/7458
- https://discuss.pytorch.org/t/export-multiple-functions-of-a-pytorch-module/194816
### Alternatives
_No response_
### Additional context
_No response_
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4
|
https://github.com/pytorch/pytorch/issues/167631
|
open
|
[
"oncall: pt2",
"oncall: export"
] | 2025-11-12T10:24:10Z
| 2025-11-17T18:41:46Z
| 3
|
randolf-scholz
|
pytorch/pytorch
| 167,630
|
Memory leak in aoti compile
|
### 🐛 Describe the bug
I want to compile many exported programs into an aoti .so file. However it seems like there is a memory leak
```python
import contextlib
import gc
import logging
import os
import tempfile
from pathlib import Path
import torch
import torch._inductor
import torch.nn as nn
logging.basicConfig(
format="%(asctime)s %(levelname)s: %(message)s",
level=logging.INFO,
)
def log_current_memory() -> None:
total = torch.cuda.get_device_properties(0).total_memory
allocated = torch.cuda.memory_allocated(0)
reserved = torch.cuda.memory_reserved(0)
msg = "Current CUDA memory usage:"
msg += f"\n Total: {total / 1e9:.2f} GB"
msg += f"\n Allocated: {allocated / 1e9:.4} GB"
msg += f"\n Reserved: {reserved / 1e9:.4f} GB"
logging.info(msg)
# ---------- toy model ----------
def make_mlp(in_dim=128, hidden=256, out_dim=64, depth=3):
layers = []
d = in_dim
for _ in range(depth):
layers += [nn.Linear(d, hidden), nn.ReLU()]
d = hidden
layers += [nn.Linear(d, out_dim)]
return nn.Sequential(*layers)
def one_iter(i, device, batch, in_dim, hidden, out_dim, depth, workdir):
model = make_mlp(in_dim, hidden, out_dim, depth).to(device).eval()
x = torch.randn(batch, in_dim, device=device)
with torch.inference_mode():
_ = model(x)
exported = torch.export.export(
model,
(x,),
)
pkg_path = Path(workdir) / f"mlp_{i}.pt2"
path = torch._inductor.aoti_compile_and_package( # returns artifact path
exported_program=exported,
package_path=str(pkg_path),
)
logging.info(f"[iter {i}] AOTI artifact: {path}")
log_current_memory()
del _
del model, x, exported
torch.cuda.synchronize()
torch.cuda.empty_cache()
gc.collect()
with contextlib.suppress(OSError):
os.remove(path)
def main():
assert torch.cuda.is_available(), "CUDA is required for this MRE."
device = "cuda"
logging.info(f"Running on {torch.cuda.get_device_name(0)}")
log_current_memory()
for i in range(10):
with tempfile.TemporaryDirectory() as tmp_workdir:
one_iter(
i=i,
device=device,
batch=32,
in_dim=2048,
hidden=512,
out_dim=10,
depth=6,
workdir=tmp_workdir,
)
logging.info("Done.")
torch.cuda.synchronize()
torch.cuda.empty_cache()
gc.collect()
log_current_memory()
if __name__ == "__main__":
main()
```
will print
```
2025-11-12 09:14:08,074 INFO: Current CUDA memory usage:
Total: 85.10 GB
Allocated: 0.0 GB
Reserved: 0.0000 GB
/persist/envs/Fluyt312/lib/python3.12/site-packages/torch/_inductor/compile_fx.py:282: UserWarning: TensorFloat32 tensor cores for float32 matrix multiplication available but not enabled. Consider setting `torch.set_float32_matmul_precision('high')` for better performance.
warnings.warn(
2025-11-12 09:14:16,492 INFO: [iter 0] AOTI artifact: /tmp/tmpu7l62f87/mlp_0.pt2
2025-11-12 09:14:16,493 INFO: Current CUDA memory usage:
Total: 85.10 GB
Allocated: 0.01825 GB
Reserved: 0.0294 GB
2025-11-12 09:14:20,846 INFO: [iter 1] AOTI artifact: /tmp/tmp_4ueq3r9/mlp_1.pt2
2025-11-12 09:14:20,847 INFO: Current CUDA memory usage:
Total: 85.10 GB
Allocated: 0.02772 GB
Reserved: 0.0336 GB
2025-11-12 09:14:25,241 INFO: [iter 2] AOTI artifact: /tmp/tmpe3zldlu3/mlp_2.pt2
2025-11-12 09:14:25,242 INFO: Current CUDA memory usage:
Total: 85.10 GB
Allocated: 0.03719 GB
Reserved: 0.0608 GB
2025-11-12 09:14:29,657 INFO: [iter 3] AOTI artifact: /tmp/tmptv6ucstj/mlp_3.pt2
2025-11-12 09:14:29,657 INFO: Current CUDA memory usage:
Total: 85.10 GB
Allocated: 0.04667 GB
Reserved: 0.0650 GB
2025-11-12 09:14:36,116 INFO: [iter 4] AOTI artifact: /tmp/tmp_9hsaky4/mlp_4.pt2
2025-11-12 09:14:36,117 INFO: Current CUDA memory usage:
Total: 85.10 GB
Allocated: 0.05614 GB
Reserved: 0.0713 GB
2025-11-12 09:14:40,528 INFO: [iter 5] AOTI artifact: /tmp/tmpi2q_wgap/mlp_5.pt2
2025-11-12 09:14:40,529 INFO: Current CUDA memory usage:
Total: 85.10 GB
Allocated: 0.06561 GB
Reserved: 0.0755 GB
2025-11-12 09:14:44,982 INFO: [iter 6] AOTI artifact: /tmp/tmp_9xuabi5/mlp_6.pt2
2025-11-12 09:14:44,982 INFO: Current CUDA memory usage:
Total: 85.10 GB
Allocated: 0.07508 GB
Reserved: 0.0818 GB
2025-11-12 09:14:49,412 INFO: [iter 7] AOTI artifact: /tmp/tmpeedfcd55/mlp_7.pt2
2025-11-12 09:14:49,412 INFO: Current CUDA memory usage:
Total: 85.10 GB
Allocated: 0.08455 GB
Reserved: 0.1070 GB
2025-11-12 09:14:53,822 INFO: [iter 8] AOTI artifact: /tmp/tmpp2miv7ts/mlp_8.pt2
2025-11-12 09:14:53,823 INFO: Current CUDA memory usage:
Total: 85.10 GB
Allocated: 0.09402 GB
Reserved: 0.1132 GB
2025-11-12 09:14:58,244 INFO: [iter 9] AOTI artifact: /tmp/tmp3mwylv5e/mlp_9.pt2
2025-11-12 09:14:58,244 INFO: Current CUDA memory usage:
|
https://github.com/pytorch/pytorch/issues/167630
|
closed
|
[
"module: memory usage",
"oncall: pt2",
"oncall: export",
"module: aotinductor"
] | 2025-11-12T09:23:03Z
| 2025-11-19T03:42:13Z
| 1
|
ben-da6
|
vllm-project/vllm
| 28,527
|
💡 Bounty Platform for vLLM
|
Hi vLLM team! 👋
I wanted to share **Roxonn** - a decentralized bounty platform for accelerating AI/ML development.
**What is Roxonn?**
✅ Fund GitHub issues with crypto bounties (XDC, USDC, ROXN)
✅ Notify 300+ AI/ML developers
✅ Auto-pay when PRs merge via blockchain
✅ Zero crypto setup needed
**Quick flow:**
1. Register repo (GitHub App)
2. Fund pool with USDC (stable pricing)
3. Assign bounties to features
4. PR merged → automatic payment
**Perfect for AI/ML:**
- Access to research community
- **Only 1% total platform fee**
- Transparent payments
Learn more: **https://roxonn.com**
*No pressure - sharing a resource!*
|
https://github.com/vllm-project/vllm/issues/28527
|
closed
|
[] | 2025-11-12T07:50:33Z
| 2025-11-13T12:36:15Z
| 0
|
dineshroxonn
|
huggingface/transformers
| 42,154
|
💡 Bounty Platform for Hugging Face Transformers
|
Hi Hugging Face Transformers team! 👋
I wanted to share **Roxonn** - a decentralized bounty platform for accelerating AI/ML development.
**What is Roxonn?**
✅ Fund GitHub issues with crypto bounties (XDC, USDC, ROXN)
✅ Notify 300+ AI/ML developers
✅ Auto-pay when PRs merge via blockchain
✅ Zero crypto setup needed
**Quick flow:**
1. Register repo (GitHub App)
2. Fund pool with USDC (stable pricing)
3. Assign bounties to features
4. PR merged → automatic payment
**Perfect for AI/ML:**
- Access to research community
- **Only 1% total platform fee**
- Transparent payments
Learn more: **https://roxonn.com**
*No pressure - sharing a resource!*
|
https://github.com/huggingface/transformers/issues/42154
|
closed
|
[] | 2025-11-12T07:49:59Z
| 2025-11-17T11:40:10Z
| 2
|
dineshroxonn
|
pytorch/pytorch
| 167,624
|
💡 Bounty Platform for PyTorch
|
Hi PyTorch team! 👋
I wanted to share **Roxonn** - a decentralized bounty platform for accelerating AI/ML development.
**What is Roxonn?**
✅ Fund GitHub issues with crypto bounties (XDC, USDC, ROXN)
✅ Notify 300+ AI/ML developers
✅ Auto-pay when PRs merge via blockchain
✅ Zero crypto setup needed
**Quick flow:**
1. Register repo (GitHub App)
2. Fund pool with USDC (stable pricing)
3. Assign bounties to features
4. PR merged → automatic payment
**Perfect for AI/ML:**
- Access to research community
- **Only 1% total platform fee**
- Transparent payments
Learn more: **https://roxonn.com**
*No pressure - sharing a resource!*
|
https://github.com/pytorch/pytorch/issues/167624
|
closed
|
[] | 2025-11-12T07:49:51Z
| 2025-11-13T12:35:34Z
| 0
|
dineshroxonn
|
pytorch/pytorch
| 167,613
|
UNSTABLE inductor-periodic / inductor-smoke-test / test (inductor_torchbench_smoketest_perf)
|
I can't figure out from the logs what is wrong
cc @ezyang @gchanan @kadeng @msaroufim @mcarilli @eellison @penguinwu @BoyuanFeng @chauhang @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @muchulee8 @amjames @aakhundov @coconutruben @seemethere @malfet @pytorch/pytorch-dev-infra
|
https://github.com/pytorch/pytorch/issues/167613
|
closed
|
[
"high priority",
"module: ci",
"triaged",
"module: cuda graphs",
"oncall: pt2",
"module: inductor",
"unstable"
] | 2025-11-12T03:47:42Z
| 2026-01-05T15:15:52Z
| 6
|
zou3519
|
vllm-project/vllm
| 28,508
|
[Usage]: KVCacheManager Parameter question
|
I noticed that the parameter “self.req_to_block_hashes” has been removed from KVCacheManager since version v0.10.0. But this parameter is still preserved in the official documentation. Could you please provide an explanation of this change?
- [Document Description](https://docs.vllm.ai/en/v0.9.2/api/vllm/v1/core/kv_cache_manager.html)
- [Version v0.10.0 code](https://github.com/vllm-project/vllm/blob/v0.10.0/vllm/v1/core/kv_cache_manager.py)
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/28508
|
closed
|
[
"usage"
] | 2025-11-12T03:10:18Z
| 2025-11-16T08:33:45Z
| 1
|
Liziqi-77
|
huggingface/diffusers
| 12,638
|
How to design network with DiT blocks that are friendly to Tensorrt fp16 conversion?
|
We have a network structured as `a convnet pre-encoder -> DiT blocks -> final block for last sampling`. It works well in torch format and ONNX format, but when we try to convert it into TensorRT fp16 format, the inference overflows. We have seen the data difference between ONNX and TRT fp16 (measured with polygraphy) get larger and larger through those DiT blocks. My question is: how can we make the whole model design more friendly to mixed-precision inference, so that the DiT blocks are less sensitive to numerical precision? Should I make the convnet pre-encoder and final blocks more complex, or simpler? Thanks
|
https://github.com/huggingface/diffusers/issues/12638
|
open
|
[] | 2025-11-12T02:23:37Z
| 2025-11-12T02:23:37Z
| null |
JohnHerry
|
huggingface/lerobot
| 2,428
|
how to eval the real world recorded dataset?
|
Can lerobot evaluate a real-world recorded dataset with a metric such as MSE? I checked the eval script and found that it can currently only evaluate sim-env datasets.
|
https://github.com/huggingface/lerobot/issues/2428
|
open
|
[
"question",
"evaluation"
] | 2025-11-12T02:08:44Z
| 2025-11-19T16:55:42Z
| null |
shs822
|
vllm-project/vllm
| 28,505
|
[Feature]: Is there a plan to introduce the new feature nano-pearl, a new engineering effort in speculative reasoning.
|
### 🚀 The feature, motivation and pitch
Nano-pearl can support speculative inference with higher concurrency (larger batch sizes) and is seamlessly compatible with algorithms like Eagle. Is there a plan to introduce it?
github:https://github.com/smart-lty/nano-PEARL
### Alternatives
_No response_
### Additional context
_No response_
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/28505
|
open
|
[
"feature request"
] | 2025-11-12T01:34:22Z
| 2025-11-17T06:14:09Z
| 1
|
Lexlum
|
pytorch/pytorch
| 167,596
|
[dynamo][feature] Guard on constants only if graph is specialized and not bytecode
|
### 🐛 Describe the bug
When Dynamo creates guards, it specializes not just for the FX graph, but also for the residual bytecode. For example, in the following codebase the graph is the same, but the `summary` update leads to a recompilation. This causes unnecessary compile time issues. Is it possible to create guards only for those constants that actually end up changing the graph? And somehow replay the constant-variable compute in the resulting bytecode.
```
import torch
summary = {}
class SubMod(torch.nn.Module):
def __init__(self, name):
super().__init__()
self.name = name
@torch.compile(backend="eager")
def forward(self, x):
out = torch.sin(x)
self.add_summary()
return out
def add_summary(self):
global summary
summary[self.name] = 0
class Mod(torch.nn.Module):
def __init__(self):
super().__init__()
self.mod_a = SubMod("mod_a")
self.mod_b = SubMod("mod_b")
def forward(self, x):
global summary
summary = {}
x = self.mod_a(x)
x = self.mod_b(x)
return x
mod = Mod()
x = torch.randn(4)
mod(x)
print(summary)
x = torch.randn(4)
mod(x)
print(summary)
```
### Error logs
_No response_
### Versions
NA
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames @Lucaskabela
|
https://github.com/pytorch/pytorch/issues/167596
|
open
|
[
"triaged",
"enhancement",
"oncall: pt2",
"module: dynamo"
] | 2025-11-12T00:55:17Z
| 2025-11-20T17:44:45Z
| 2
|
anijain2305
|
vllm-project/vllm
| 28,498
|
[Bug][RL]: Port Conflict
|
### Your current environment
- bug report:
```
Hello vLLM team, I'm running into a suspicious ZMQ socket bug with my 2P 4D configuration for DeepSeek-V3 (see below). I thought it is caused by reusing same nodes for many vLLM launches, but now it happened also at a clean node. Seems like a DP bug of sorts. Please find logs attached. vllm==0.11.0.
```
```bash
[1;36m(APIServer pid=670293)[0;0m File "XXX/.venv/lib/python3.12/site-packages/vllm/v1/engine/async_llm.py", line 134, in __init__
[1;36m(APIServer pid=670293)[0;0m self.engine_core = EngineCoreClient.make_async_mp_client(
[1;36m(APIServer pid=670293)[0;0m ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[1;36m(APIServer pid=670293)[0;0m File "XXX/.venv/lib/python3.12/site-packages/vllm/v1/engine/core_client.py", line 101, in make_async_mp_client
[1;36m(APIServer pid=670293)[0;0m return DPLBAsyncMPClient(*client_args)
[1;36m(APIServer pid=670293)[0;0m ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[1;36m(APIServer pid=670293)[0;0m File "XXX/.venv/lib/python3.12/site-packages/vllm/v1/engine/core_client.py", line 1125, in __init__
[1;36m(APIServer pid=670293)[0;0m super().__init__(vllm_config, executor_class, log_stats,
[1;36m(APIServer pid=670293)[0;0m File "XXX/.venv/lib/python3.12/site-packages/vllm/v1/engine/core_client.py", line 975, in __init__
[1;36m(APIServer pid=670293)[0;0m super().__init__(vllm_config, executor_class, log_stats,
[1;36m(APIServer pid=670293)[0;0m File "XXX/.venv/lib/python3.12/site-packages/vllm/v1/engine/core_client.py", line 769, in __init__
[1;36m(APIServer pid=670293)[0;0m super().__init__(
[1;36m(APIServer pid=670293)[0;0m File "XXX/.venv/lib/python3.12/site-packages/vllm/v1/engine/core_client.py", line 466, in __init__
[1;36m(APIServer pid=670293)[0;0m self.resources.output_socket = make_zmq_socket(
[1;36m(APIServer pid=670293)[0;0m ^^^^^^^^^^^^^^^^
[1;36m(APIServer pid=670293)[0;0m File "XXX/.venv/lib/python3.12/site-packages/vllm/utils/__init__.py", line 2983, in make_zmq_socket
[1;36m(APIServer pid=670293)[0;0m socket.bind(path)
[1;36m(APIServer pid=670293)[0;0m File "XXX/.venv/lib/python3.12/site-packages/zmq/sugar/socket.py", line 320, in bind
[1;36m(APIServer pid=670293)[0;0m super().bind(addr)
[1;36m(APIServer pid=670293)[0;0m File "zmq/backend/cython/_zmq.py", line 1009, in zmq.backend.cython._zmq.Socket.bind
[1;36m(APIServer pid=670293)[0;0m File "zmq/backend/cython/_zmq.py", line 190, in zmq.backend.cython._zmq._check_rc
[1;36m(APIServer pid=670293)[0;0m zmq.error.ZMQError: Address already in use (addr='tcp://slurm-h200-206-017:59251')
```
### 🐛 Describe the bug
From Nick:
```
I think the problem is that each DP worker finds/assigns free ports dynamically/independently, so there is a race condition. I'm not sure of an immediate workaround apart from just re-attempting to start things when this happens. We'll have to look at how to catch and re-find a port if possible (though I have a memory this might be nontrivial).
```
From Reporter:
```
Received init message: EngineHandshakeMetadata(addresses=EngineZmqAddresses(inputs=['tcp://slurm-h200-207-083:60613'], outputs=['tcp://slurm-h200-207-083:36865'], coordinator_input='tcp://slurm-h200-207-083:34575', coordinator_output='tcp://slurm-h200-207-083:48025', frontend_stats_publish_address='ipc:///tmp/88ec875f-3de9-46ec-9947-6d1d6573b910'), parallel_config={'data_parallel_master_ip': 'slurm-h200-207-083', 'data_parallel_master_port': 41917, '_data_parallel_master_port_list': [60545, 36835, 47971, 37001], 'data_parallel_size': 32})
```
I'm looking at the code and I see that all code paths for getting ports eventually go to _get_open_port, and that in _get_open_port there is basically no defence against choosing the same port twice. Can you please confirm my understanding?
_get_open_port in main is here: https://github.com/vllm-project/vllm/blob/main/vllm/utils/network_utils.py#L177
UPD: I imagine the assumption here is that once a code path gets a port, that code path will use it immediately, and thus the port will become busy. It doesn't seem to hold, though.
Even where all sockets that vLLM chose for itself are unique, I get the stack trace below.
I have the following explanation in mind:
- vLLM chooses zmq ports before launching the engines
- launching the engines takes ~5 mins
- by the time the engines are launched, something else can already be listening on this port, for example Ray
- **It looks like the right solution is to hold on to the chosen ports immediately as they are chosen** (a minimal sketch of this idea follows below).
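A minimal sketch of the difference between the racy pattern and the "hold the port" idea; this is illustrative only, not vLLM's actual implementation:
```python
import socket

def get_open_port_number() -> int:
    # Racy pattern: the port is free right now, but anything can grab it
    # between this call and the moment the engine actually binds it.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind(("", 0))
        return s.getsockname()[1]

def reserve_open_port() -> tuple[socket.socket, int]:
    # Safer pattern: keep the bound socket alive so nothing else can take
    # the port; hand the socket (or the port plus its lease) to the consumer.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind(("", 0))
    return s, s.getsockname()[1]

sock, port = reserve_open_port()
print(f"reserved port {port}")
# ... pass `port` to the component that will listen, then close/hand off `sock` ...
sock.close()
```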
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/28498
|
open
|
[
"bug",
"help wanted",
"good first issue"
] | 2025-11-11T22:51:35Z
| 2025-12-04T07:35:31Z
| 13
|
robertgshaw2-redhat
|
vllm-project/vllm
| 28,489
|
[Usage]: Online continuous batching
|
### Current environment
```
==============================
System Info
==============================
OS : macOS 26.1 (arm64)
GCC version : Could not collect
Clang version : 17.0.0 (clang-1700.4.4.1)
CMake version : Could not collect
Libc version : N/A
==============================
PyTorch Info
==============================
PyTorch version : 2.8.0
Is debug build : False
CUDA used to build PyTorch : None
ROCM used to build PyTorch : N/A
==============================
Python Environment
==============================
Python version : 3.12.6 (v3.12.6:a4a2d2b0d85, Sep 6 2024, 16:08:03) [Clang 13.0.0 (clang-1300.0.29.30)] (64-bit runtime)
Python platform : macOS-26.1-arm64-arm-64bit
==============================
CUDA / GPU Info
==============================
Is CUDA available : False
CUDA runtime version : No CUDA
CUDA_MODULE_LOADING set to : N/A
GPU models and configuration : No CUDA
Nvidia driver version : No CUDA
cuDNN version : No CUDA
HIP runtime version : N/A
MIOpen runtime version : N/A
Is XNNPACK available : True
==============================
CPU Info
==============================
Apple M2
==============================
Versions of relevant libraries
==============================
[pip3] numpy==2.2.6
[pip3] nvidia-ml-py==13.580.82
[pip3] pyzmq==27.0.0
[pip3] sentence-transformers==5.1.2
[pip3] spacy-transformers==1.3.9
[pip3] torch==2.8.0
[pip3] torchaudio==2.8.0
[pip3] torchvision==0.23.0
[pip3] transformers==4.57.1
[conda] Could not collect
==============================
vLLM Info
==============================
ROCM Version : Could not collect
vLLM Version : 0.11.0
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled
GPU Topology:
Could not collect
==============================
Environment Variables
==============================
PYTORCH_NVML_BASED_CUDA_CHECK=1
TORCHINDUCTOR_COMPILE_THREADS=1
```
Hello,
I am looking to run an LLM (using vLLM) within a FastAPI application. My goal is to achieve online, continuous batching.
I want the application to continuously receive requests from external clients, and have vLLM automatically batch them up for parallel inference.
In the past, I used the LLM() engine wrapped in RayServe. While this worked, it seemed to create a new internal deployment each time, which I want to avoid.
I am now trying to achieve this without RayServe, using the AsyncLLMEngine directly (I'm not sure if I actually need the async variant; I read about it online).
Here is an example of my current code. I'm running on CPU for test purposes, but I also have another issue on GPU (very long inference times, on the order of minutes, whereas with Ray it was only 2-3 seconds).
```
# Model:
engine_args = AsyncEngineArgs(
model=path,
tensor_parallel_size=1,
gpu_memory_utilization=0.7,
enforce_eager=False,
disable_custom_all_reduce=False,
max_model_len=2048,
trust_remote_code=True,
enable_log_requests=False,
max_num_seqs=10
)
model_ = AsyncLLMEngine.from_engine_args(engine_args)
# Params
sampling_params = SamplingParams(
n=1,
best_of=None,
presence_penalty=0.0,
frequency_penalty=0.0,
temperature=0,
top_p=1.0,
top_k=1,
stop=my_stop_token,
stop_token_ids=[my_eos_token_id],
ignore_eos=False,
max_tokens=2048,
logprobs=None,
skip_special_tokens=True
)
outputs_generator = model_.generate(prompt, sampling_params, request_id)
final_output = None
async for request_output in outputs_generator:
if request_output.finished:
final_output = request_output
break
if final_output and final_output.outputs:
result = final_output.outputs[0].text
```
In my local test, I get the error below when I run, for example, 3 inferences by calling self.model.generate() three times with one input each, rather than calling self.model.generate() once with 3 inputs.
Error: `Assertion failed: !_current_out (src/router.cpp:166)`
Is it possible to achieve what I'm asking by always calling generate() and relying on internal batching, or is the only solution to "collect" the prompts with some management layer and then call a centralized generate()?
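For reference, a minimal sketch of the concurrent pattern I am aiming for, reusing the model_ engine and sampling_params from the snippet above (request ids are made up):
```python
import asyncio
import uuid

async def run_one(prompt: str) -> str:
    request_id = str(uuid.uuid4())
    final_output = None
    async for request_output in model_.generate(prompt, sampling_params, request_id):
        if request_output.finished:
            final_output = request_output
            break
    return final_output.outputs[0].text if final_output else ""

async def run_many(prompts: list[str]) -> list[str]:
    # Each call gets its own request_id; the engine is expected to batch them internally.
    return await asyncio.gather(*(run_one(p) for p in prompts))

# results = asyncio.run(run_many(["What is 2+2?", "Tell me a joke", "Summarize vLLM"]))
```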
Thanks
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/28489
|
open
|
[
"usage"
] | 2025-11-11T20:51:58Z
| 2025-11-11T20:53:47Z
| 0
|
GenVr
|
pytorch/pytorch
| 167,566
|
include string names of types in logs when dynamo guards on input types
|
When debugging recompile reasons in dynamo, it is convenient to look at a tlparse to understand what is causing recompiles.
One guard that dynamo has is a type_id guard, which guards on the id(type(x)) of an input. In the tlparse, when one of these guards fails, it shows up as this:
<img width="576" height="29" alt="Image" src="https://github.com/user-attachments/assets/0fa8b67d-790c-47aa-ace9-c62ab82c4c18" />
This is not very easy to interpret - it would be great if dynamo could stash the string name of the type so it can be included in the tlparse.
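A tiny illustration of why the raw id is hard to read, and what a readable guard string could carry instead (illustrative only):
```python
class A: pass
class B: pass

# What the guard string effectively records today: opaque, per-process integers.
print(id(A), id(B))

# What would make the recompile reason readable in tlparse: the type's name.
print(A.__qualname__, B.__qualname__)
```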
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @kadeng @amjames @Lucaskabela @jataylo @chenyang78
|
https://github.com/pytorch/pytorch/issues/167566
|
closed
|
[
"triaged",
"oncall: pt2",
"module: dynamo",
"module: compile ux"
] | 2025-11-11T19:11:14Z
| 2025-12-10T17:14:27Z
| 0
|
bdhirsh
|
pytorch/pytorch
| 167,560
|
naming of periodic-dynamo-benchmarks-cpu-test / test (cpu_inductor_amp_freezing_torchbench, 1, 2, linux.8xlarge.amx) seems wrong
|
Why is it a dynamo benchmark but also running cpu_inductor_amp_freezing ?
cc @chauhang @penguinwu
|
https://github.com/pytorch/pytorch/issues/167560
|
open
|
[
"triaged",
"module: benchmark",
"oncall: pt2"
] | 2025-11-11T18:20:30Z
| 2025-11-17T16:51:51Z
| 0
|
zou3519
|
pytorch/pytorch
| 167,558
|
per_page=1000 doesn't work in hud.pytorch.org
|
e.g. https://hud.pytorch.org/hud/pytorch/pytorch/main/31?per_page=50&mergeEphemeralLF=true
Whatever I set it to, it seems to just be 50.
My use case is that I am trying to find the first date that a test began to fail. The test has been failing for weeks. I have to hit the next button a lot.
cc @ZainRizvi @huydhn @clee2000
|
https://github.com/pytorch/pytorch/issues/167558
|
open
|
[
"triaged",
"module: devx"
] | 2025-11-11T18:06:21Z
| 2025-11-11T19:30:25Z
| 1
|
zou3519
|
huggingface/trl
| 4,507
|
Can a multimodal model like Gemma be trained in the same way as a text-only model like Qwen, but with the goal of improving only its text capabilities?
|
As stated in the title, I hope to improve only the text capabilities of Gemma 3, but it doesn’t seem to have worked as expected. The model I used is gemma-3-4b-it, and I conducted the following simple tests:
```python
dataset = Dataset.from_list(
[
{"prompt": "What is 2+2?", "task": "math"},
{"prompt": "Write a function that returns the sum of two numbers.", "task": "code"},
{"prompt": "What is 3*4?", "task": "math"},
{"prompt": "Write a function that returns the product of two numbers.", "task": "code"},
]
)
```
These data shouldn’t cause Gemma to generate excessively long responses, but according to the logs, its output length is quite large: ```'completions/mean_length': 4096.0, 'completions/min_length': 4096.0, 'completions/max_length': 4096```
This doesn’t seem normal.
|
https://github.com/huggingface/trl/issues/4507
|
open
|
[
"🐛 bug",
"⏳ needs more info"
] | 2025-11-11T15:59:51Z
| 2025-11-21T05:58:50Z
| 0
|
Tuziking
|
vllm-project/vllm
| 28,472
|
[Usage]: Will the reasoning_content in the chat template still be applied correctly after switching reasoning_content to reasoning
|
### Your current environment
```text
The output of `python collect_env.py`
```
### How would you like to use vllm
Will message.reasoning_content (which exists in the default chat_template for qwen3-next-thinking, qwen3-vl-thinking, other qwen3-thinking series models, glm4.5, kimi-k2-thinking, and others) still be applied correctly in the chat template after changing reasoning_content to reasoning (i.e., will reasoning on an AI message be mapped to reasoning_content in the chat template)?
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/28472
|
closed
|
[
"usage"
] | 2025-11-11T15:04:11Z
| 2025-11-13T06:25:29Z
| 4
|
zhcn000000
|
pytorch/pytorch
| 167,540
|
[Dtensor]:change the test_mm shape from (12,8) * (8,16) to (512, 512) * (512, 512), throw assert error
|
### 🐛 Describe the bug
When I try to use (512, 512) * (512, 512) instead of the original shape in the test case, it throws an assertion error.
```python
@with_comms
def test_mm(self):
device_mesh = self.build_device_mesh()
shard0_spec = Shard(0)
shard1_spec = Shard(1)
replica_spec = Replicate()
t1 = torch.randn(512, 512, requires_grad=True)
t2 = torch.randn(512, 512, requires_grad=True)
local_res = torch.mm(t1, t2)
def test_placement_comb(
placements1: list[Placement], placements2: list[Placement]
) -> None:
dt1 = distribute_tensor(t1, device_mesh, placements1)
dt2 = distribute_tensor(t2, device_mesh, placements2)
dist_res: DTensor = cast(DTensor, torch.mm(dt1, dt2)).redistribute(
device_mesh, [replica_spec]
)
self.assertEqual(dist_res.to_local(), local_res)
# backward
grad_dist_res = torch.ones_like(dist_res)
dist_res.backward(grad_dist_res)
self.assertIsNotNone(dt1.grad)
placement_specs = [shard0_spec, shard1_spec, replica_spec]
shard_specs_comb = list(itertools.product(placement_specs, placement_specs))
for spec in shard_specs_comb:
test_placement_comb([spec[0]], [spec[1]])
```
CUDA:12.8, driver 550.54.15
pytorch:2.9.0
### Versions
Please let me know if there is anything else that needs to be supplemented.
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @pragupta @msaroufim @dcci @tianyu-l @XilunWu @SherlockNoMad
|
https://github.com/pytorch/pytorch/issues/167540
|
open
|
[
"oncall: distributed",
"module: dtensor"
] | 2025-11-11T13:14:49Z
| 2025-11-12T08:21:45Z
| 2
|
zhanghanleo93
|
vllm-project/vllm
| 28,456
|
[Usage]: benchmark_moe Usage
|
### Your current environment
```text
(EngineCore_DP0 pid=7498) INFO 11-10 11:42:48 [shm_broadcast.py:466] No available shared memory broadcast block found in 60 seconds. This typically happens when some processes are hanging or doing some time-consuming work (e.g. compilation).
(APIServer pid=7416) INFO 11-10 11:42:50 [loggers.py:127] Engine 000: Avg prompt throughput: 104162.6 tokens/s, Avg generation throughput: 10.0tokens/s, Running: 100 reqs, Waiting: 0 reqs, GPU KV cache usage: 10.1%, Prefix cache hit rate: 98.6%
(APIServer pid=7416) INFO 11-10 11:43:00 [loggers.py:127] Engine 000: Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 0.0 tokens/s, Running: 100 reqs, Waiting: 0 reqs, GPU KV cache usage: 10.1%, Prefix cache hit rate: 98.6%
(APIServer pid=7416) INFO 11-10 11:43:20 [loggers.py:127] Engine 000: Avg prompt throughput: 5.1 tokens/s, Avg generation throughput: 0.1 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.1%, Prefix cache hit rate: 98.6%
Collecting environment information...
==============================
System Info
==============================
OS : Ubuntu 24.04.3 LTS (x86_64)
GCC version : (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version : Could not collect
CMake version : version 3.28.3
Libc version : glibc-2.39
==============================
PyTorch Info
==============================
PyTorch version : 2.8.0+cu128
Is debug build : False
CUDA used to build PyTorch : 12.8
ROCM used to build PyTorch : N/A
==============================
Python Environment
==============================
Python version : 3.12.3 (main, Aug 14 2025, 17:47:21) [GCC 13.3.0] (64-bit runtime)
Python platform : Linux-6.8.0-85-generic-x86_64-with-glibc2.39
==============================
CUDA / GPU Info
==============================
Is CUDA available : True
CUDA runtime version : 12.8.93
CUDA_MODULE_LOADING set to : LAZY
GPU models and configuration :
GPU 0: Tesla V100-PCIE-16GB
GPU 1: Tesla V100-PCIE-16GB
Nvidia driver version : 570.195.03
cuDNN version : Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.8.0
HIP runtime version : N/A
MIOpen runtime version : N/A
Is XNNPACK available : True
==============================
CPU Info
==============================
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 24
On-line CPU(s) list: 0-23
Vendor ID: AuthenticAMD
Model name: AMD EPYC 7402P 24-Core Processor
CPU family: 23
Model: 49
Thread(s) per core: 1
Core(s) per socket: 24
Socket(s): 1
Stepping: 0
Frequency boost: disabled
CPU(s) scaling MHz: 74%
CPU max MHz: 2800.0000
CPU min MHz: 1500.0000
BogoMIPS: 5599.64
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sev sev_es
Virtualization: AMD-V
L1d cache: 768 KiB (24 instances)
L1i cache: 768 KiB (24 instances)
L2 cache: 12 MiB (24 instances)
L3 cache: 1
|
https://github.com/vllm-project/vllm/issues/28456
|
open
|
[
"usage"
] | 2025-11-11T09:22:33Z
| 2025-11-21T01:43:41Z
| 6
|
ekmekovski
|
huggingface/lerobot
| 2,422
|
Running inference on Libero with pi0
|
Hello, I am trying to run inference with pi0, but the commands referenced in issue #683 are outdated, I believe. What would the commands be to run inference in LeRobot, and also to run inference with pi0 in Libero? Additionally, if there is any documentation for these commands in general for fine-tuning and eval, that would be great!
|
https://github.com/huggingface/lerobot/issues/2422
|
open
|
[
"question",
"policies",
"evaluation"
] | 2025-11-11T09:22:25Z
| 2025-11-19T16:53:27Z
| null |
thomasdeng2027
|
pytorch/pytorch
| 167,526
|
Missing documentation for CUTLASS backend
|
### 📚 The doc issue
The release notes of PyTorch 2.8.0 report
> Inductor CUTLASS backend support
But it is missing information on how to activate/use that.
There are multiple NVIDIA PYPI packages that are related: nvidia-cutlass, nvidia-cutlass-dsl
And there is the CUTLASS repository on GitHub included under the `third_party` submodule folder.
For PyTorch 2.9.0 the submodule points to the CUTLASS 4.1.0 tag, but there is no corresponding release of nvidia-cutlass on PYPI, but there is one for nvidia-cutlass-dsl.
Similar for PyTorch 2.8.0 it points to v3.9.2 but neither nvidia PYPI package has a corresponding release.
There is a message "Please check whether _inductor.config.cuda.cutlass_dir [...] is set correctly", but no information on what that is supposed to be set to. In the source, the env variable `TORCHINDUCTOR_CUTLASS_DIR` can be found, but it is nowhere mentioned in the docs either (a sketch of my current guess follows the links below). See these two searches:
- https://docs.pytorch.org/docs/stable/search.html?q=TORCHINDUCTOR_CUTLASS_DIR
- https://docs.pytorch.org/docs/stable/search.html?q=_inductor.config.cuda.cutlass_dir
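For context, this is the kind of setup I have been guessing at from the config names above; a minimal sketch under my own assumptions (the backend string and the need to point cutlass_dir at a CUTLASS checkout are not documented anywhere I could find):
```python
import os
import torch
import torch._inductor.config as inductor_config

# Guess: point Inductor at a CUTLASS checkout (env var found in the source).
os.environ["TORCHINDUCTOR_CUTLASS_DIR"] = "/path/to/cutlass"
inductor_config.cuda.cutlass_dir = "/path/to/cutlass"

# Guess: ask max-autotune to consider the CUTLASS GEMM backend.
inductor_config.max_autotune_gemm_backends = "ATEN,TRITON,CUTLASS"

model = torch.nn.Linear(1024, 1024).cuda().half()
compiled = torch.compile(model, mode="max-autotune")
out = compiled(torch.randn(8, 1024, device="cuda", dtype=torch.half))
```
Documentation confirming or correcting this workflow would be exactly what is missing.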
### Suggest a potential alternative/fix
Add documentation how to use CUTLASS:
- Requirements
- Setup
- Expected results/example/tutorial
cc @svekars @sekyondaMeta @AlannaBurke @ptrblck @msaroufim @eqy @jerryzh168 @tinglvv
|
https://github.com/pytorch/pytorch/issues/167526
|
open
|
[
"module: docs",
"module: cuda",
"triaged"
] | 2025-11-11T08:33:22Z
| 2025-12-17T15:25:44Z
| 1
|
Flamefire
|
huggingface/lerobot
| 2,421
|
Seeking assistance with tactile data acquisition
|
I want to simultaneously collect tactile and visual data, with tactile data sampled at 150 fps and visual data at 30 fps. Each time an image frame is saved, I also want to store all tactile data collected during that time interval as additional features associated with the image.
What would be the best approach to implement this? Which parts of the source code should I modify?
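For reference, this is the buffering pattern I have in mind, as a framework-agnostic sketch; `read_camera` and `read_tactile` are hypothetical placeholders, not LeRobot APIs:
```python
import time

TACTILE_HZ, CAMERA_HZ = 150, 30
SAMPLES_PER_FRAME = TACTILE_HZ // CAMERA_HZ  # 5 tactile samples per image frame

def record_episode(read_camera, read_tactile, num_frames=100):
    """read_camera() -> image, read_tactile() -> 1D array; both are placeholders."""
    frames = []
    for _ in range(num_frames):
        tactile_buffer = []
        for _ in range(SAMPLES_PER_FRAME):
            tactile_buffer.append(read_tactile())
            time.sleep(1.0 / TACTILE_HZ)
        image = read_camera()
        # Store the tactile samples collected during this frame interval
        # as an extra feature alongside the image observation.
        frames.append({"observation.image": image,
                       "observation.tactile": tactile_buffer})
    return frames
```
I would like to know where in the dataset/recording code the equivalent of this buffering should live.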
|
https://github.com/huggingface/lerobot/issues/2421
|
open
|
[
"question"
] | 2025-11-11T02:49:57Z
| 2025-11-19T16:53:05Z
| null |
zhoushaoxiang
|
vllm-project/vllm
| 28,438
|
[Usage]: How do I install vLLM nightly?
|
### Your current environment
The output of collect_env.py
```text
==============================
System Info
==============================
OS : Ubuntu 20.04.5 LTS (x86_64)
GCC version : (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version : Could not collect
CMake version : version 3.16.3
Libc version : glibc-2.35
==============================
PyTorch Info
==============================
PyTorch version : 2.5.1+cu121
Is debug build : False
CUDA used to build PyTorch : 12.1
ROCM used to build PyTorch : N/A
==============================
Python Environment
==============================
Python version : 3.10.18 (main, Jun 5 2025, 13:14:17) [GCC 11.2.0] (64-bit runtime)
Python platform : Linux-5.4.250-2-velinux1u1-amd64-x86_64-with-glibc2.35
==============================
CUDA / GPU Info
==============================
Is CUDA available : True
CUDA runtime version : 12.4.131
CUDA_MODULE_LOADING set to : LAZY
GPU models and configuration :
GPU 0: NVIDIA A100-SXM4-80GB
GPU 1: NVIDIA A100-SXM4-80GB
GPU 2: NVIDIA A100-SXM4-80GB
GPU 3: NVIDIA A100-SXM4-80GB
GPU 4: NVIDIA A100-SXM4-80GB
GPU 5: NVIDIA A100-SXM4-80GB
GPU 6: NVIDIA A100-SXM4-80GB
GPU 7: NVIDIA A100-SXM4-80GB
Nvidia driver version : 535.129.03
cuDNN version : Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.7.0
HIP runtime version : N/A
MIOpen runtime version : N/A
Is XNNPACK available : True
==============================
CPU Info
==============================
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 57 bits virtual
CPU(s): 112
On-line CPU(s) list: 0-108
Off-line CPU(s) list: 109-111
Thread(s) per core: 1
Core(s) per socket: 28
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 106
Model name: Intel(R) Xeon(R) Platinum 8336C CPU @ 2.30GHz
Stepping: 6
CPU MHz: 2294.608
BogoMIPS: 4589.21
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 1.3 MiB
L1i cache: 896 KiB
L2 cache: 35 MiB
L3 cache: 54 MiB
NUMA node0 CPU(s): 0-55
NUMA node1 CPU(s): 56-111
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves wbnoinvd arat avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq av
|
https://github.com/vllm-project/vllm/issues/28438
|
closed
|
[
"usage"
] | 2025-11-11T02:24:47Z
| 2025-11-12T01:54:42Z
| 2
|
LittleLucifer1
|
pytorch/pytorch
| 167,499
|
check_compiler_is_gcc() fails to detect versioned GCC compilers (g++-13, g++-14, etc.)
|
### 🐛 Describe the bug
The torch.utils.cpp_extension.check_compiler_is_gcc() function only returns True when the compiler basename is exactly 'c++', failing to detect other GCC variants like g++, gcc, g++-13, g++-14, etc.
This affects any PyTorch functionality that relies on GCC detection, causing features to be silently disabled or tests to be incorrectly skipped on systems using versioned GCC compilers.
How to reproduce:
On a system where the default C++ compiler is g++-13 (common on Fedora/RHEL):
```python
from torch.utils.cpp_extension import get_cxx_compiler, check_compiler_is_gcc
compiler = get_cxx_compiler() # Returns /usr/bin/g++-13
result = check_compiler_is_gcc(compiler)
print(f"Compiler: {compiler}")
print(f"Detected as GCC: {result}") # False (incorrect!)
```
Expected result: Detected as GCC: True
Actual result: Detected as GCC: False
Root cause:
In torch/utils/cpp_extension.py, the check_compiler_is_gcc() function only checks if the compiler basename is exactly 'c++':
```python
compiler_path = os.path.realpath(results[0].strip())
if os.path.basename(compiler_path) == 'c++' and 'gcc version' in version_string:
return True
return False
```
Impact:
- Any feature/test that uses check_compiler_is_gcc() will fail to detect GCC on systems with versioned compilers
- GCC-specific optimizations or features may be silently disabled
Environment:
- PyTorch version: main/viable/strict
- OS: Fedora/RHEL with versioned GCC
- Compiler: g++-13, g++-14, or similar
I am working on a PR to fix this issue.
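The direction I have in mind looks roughly like this; a sketch only, not the actual PR, and the helper name is mine:
```python
import os
import re
import subprocess

def looks_like_gcc(compiler: str) -> bool:
    """Hypothetical check: accept c++, g++, gcc and versioned variants like g++-13."""
    try:
        version_string = subprocess.check_output(
            [compiler, "-v"], stderr=subprocess.STDOUT, text=True
        )
    except (OSError, subprocess.CalledProcessError):
        return False
    basename = os.path.basename(os.path.realpath(compiler))
    # Matches c++, g++, gcc, g++-13, gcc-14, x86_64-linux-gnu-g++-13, ...
    gcc_like = re.search(r"(^|-)(c\+\+|g\+\+|gcc)(-\d+(\.\d+)*)?$", basename) is not None
    return gcc_like and "gcc version" in version_string

print(looks_like_gcc("g++-13"))
```
The real fix would keep the existing "gcc version" check and only relax the basename comparison.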
### Versions
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: CentOS Stream 9 (x86_64)
GCC version: (GCC) 11.5.0 20240719 (Red Hat 11.5.0-11)
Clang version: Could not collect
CMake version: version 3.26.5
Libc version: glibc-2.34
Python version: 3.9.23 (main, Aug 19 2025, 00:00:00) [GCC 11.5.0 20240719 (Red Hat 11.5.0-11)] (64-bit runtime)
Python platform: Linux-5.14.0-615.el9.x86_64-x86_64-with-glibc2.34
Is CUDA available: N/A
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration:
GPU 0: NVIDIA H200
GPU 1: NVIDIA H200
GPU 2: NVIDIA H200
GPU 3: NVIDIA H200
GPU 4: NVIDIA H200
GPU 5: NVIDIA H200
GPU 6: NVIDIA H200
GPU 7: NVIDIA H200
Nvidia driver version: 580.82.07
cuDNN version: Probably one of the following:
/usr/lib64/libcudnn.so.9.13.0
/usr/lib64/libcudnn_adv.so.9.13.0
/usr/lib64/libcudnn_cnn.so.9.13.0
/usr/lib64/libcudnn_engines_precompiled.so.9.13.0
/usr/lib64/libcudnn_engines_runtime_compiled.so.9.13.0
/usr/lib64/libcudnn_graph.so.9.13.0
/usr/lib64/libcudnn_heuristic.so.9.13.0
/usr/lib64/libcudnn_ops.so.9.13.0
Is XPU available: N/A
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 160
On-line CPU(s) list: 0-159
Vendor ID: GenuineIntel
Model name: Intel Xeon Processor (SapphireRapids)
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 40
Socket(s): 2
Stepping: 4
BogoMIPS: 4200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx_vnni avx512_bf16 wbnoinvd arat vnmi avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b fsrm md_clear serialize tsxldtrk amx_bf16 avx512_fp16 amx_tile amx_int8 arch_capabilities
Virtualization: VT-x
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 5 MiB (160 instances)
L1i cache: 5 MiB (160 instances)
L2 cache: 320 MiB (80 instances)
L3 cache: 32 MiB (2 instances)
NUMA node(s): 2
|
https://github.com/pytorch/pytorch/issues/167499
|
closed
|
[
"module: cpp-extensions"
] | 2025-11-11T01:11:22Z
| 2025-11-11T05:14:08Z
| 0
|
razaaliraza
|
vllm-project/vllm
| 28,425
|
[Feature][RL]: Fix Fp8 Weight Loading for RL
|
### 🚀 The feature, motivation and pitch
Feedback from RL community that vLLM weight loading in fp8 is bad for RL
- https://vllm-dev.slack.com/archives/C07UUL8E61Z/p1762811441757529
The cause is clear: in [fp8.py](https://github.com/vllm-project/vllm/blob/bf6a3d0ff5a69e0a30567f2ad417530c002eaa4e/vllm/model_executor/layers/quantization/fp8.py#L490), process_weights_after_loading does a lot of parameter wrapping that drops the .weight_loader attribute.
There's a patch from the Moonshot team that fixes this issue and there's a [PR](https://github.com/vllm-project/vllm/pull/24488) with this patch that never got any comments. The [patch](https://github.com/MoonshotAI/checkpoint-engine/blob/main/patches/vllm_fp8.patch) only works on top of v0.10.2rc1. Shortly after that tag, this [PR](https://github.com/vllm-project/vllm/pull/23280) made fp8 weight updates even trickier by transposing weight_inv_scale parameter for CUTLASS.
I don't know how to patch any vLLM version after this PR to be able to call model.load_weights after the engine has started. It is a bummer, because DeepSeek wide EP inference is quite a bit faster in v0.11.0.
We need to fix this ASAP
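For clarity, the general pattern the Moonshot patch applies, as I understand it; this is a hand-wavy sketch with illustrative names, not the actual vLLM code:
```python
import torch

def replace_parameter_keep_loader(module: torch.nn.Module, name: str, new_data: torch.Tensor) -> None:
    """Illustrative helper: when re-wrapping a quantized weight in
    process_weights_after_loading, carry over the .weight_loader attribute
    so that model.load_weights() keeps working for RL weight updates."""
    old_param = getattr(module, name)
    new_param = torch.nn.Parameter(new_data, requires_grad=False)
    # Preserve the custom loader that vLLM attaches at model construction time.
    weight_loader = getattr(old_param, "weight_loader", None)
    if weight_loader is not None:
        new_param.weight_loader = weight_loader
    module.register_parameter(name, new_param)
```
The transposed weight_inv_scale for CUTLASS would additionally need an inverse transform on load, which is the part I do not know how to patch cleanly.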
### Alternatives
_No response_
### Additional context
_No response_
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/28425
|
open
|
[
"feature request"
] | 2025-11-10T21:59:02Z
| 2025-11-10T23:25:37Z
| 1
|
robertgshaw2-redhat
|
pytorch/pytorch
| 167,480
|
OS command injection via torch.utils.cpp_extension precompiled-header build (use_pch path)
|
**Summary**
There is an OS command injection risk in `torch/utils/cpp_extension.py` in the precompiled-header build helper. The helper constructs a compiler command including user-supplied values (e.g., `extra_cflags`, `extra_include_paths`) and executes the command via `subprocess.check_output(..., shell=True)`. If untrusted input reaches those parameters (for example, a service calling `load_inline(..., extra_cflags=..., use_pch=True)` with user-provided flags), an attacker can inject shell metacharacters and execute arbitrary commands as the Python process user.
Original Thread : https://github.com/pytorch/pytorch/security/advisories/GHSA-gfrj-f355-6v3r#advisory-comment-139444
**Affected versions**
- Introduced in Aug 2023 (commit `5ed6047`, PR #106696).
- Present in PyTorch releases starting with 2.1 and later main/nightly as of 2025.
**Severity**
- Meta Security and @malfet asked me to file this as a regular bug. Exact comment from the Security Issue:
```
_ @sumantro93 your example highlights the point I was trying to make: it's not framework's responsibility to sanitize the inputs.
For example, In the sample that you've shared, one is allowed to compile and run any untrusted code, which is a huge security issue on its own, so even if this issue is fixed, one is already allowed to execute arbitrary code on the host by the Flask endpoint developer.
Closing, but please do not hesitate to report it as regular issue or propose a pull request that would sanitize the inputs _
```
**Technical details & PoC**
- The vulnerable pattern constructs a single command string and calls: `subprocess.check_output(cmd_string, shell=True, stderr=subprocess.STDOUT)`.
- Proof-of-concept (local):
```python
# repro_pch_bug.py
import os, subprocess
def build_precompile_header(pch_cmd):
try:
subprocess.check_output(pch_cmd, shell=True, stderr=subprocess.STDOUT)
except subprocess.CalledProcessError as e:
print('Error:', e)
payload = "false; echo vulnerable > /tmp/pch_exploit"
build_precompile_header(payload)
print('Exploit file exists?', os.path.exists('/tmp/pch_exploit'))
```
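For completeness, the usual remediation pattern would be to pass an argument list with the default shell=False; this is a sketch only, not a proposed patch to cpp_extension.py:
```python
import os
import shlex
import subprocess
import tempfile

def build_precompile_header_safe(compiler: str, flags: list[str], header: str) -> None:
    # Build an argument list instead of a single shell string, so user-supplied
    # flags cannot inject extra shell commands (no shell=True anywhere).
    cmd = [compiler, *flags, "-x", "c++-header", header]
    subprocess.check_output(cmd, stderr=subprocess.STDOUT)

# Example: flags that came from extra_cflags are split, not interpolated into a shell string.
with tempfile.TemporaryDirectory() as d:
    header = os.path.join(d, "pch.h")
    with open(header, "w") as f:
        f.write("#include <vector>\n")
    build_precompile_header_safe("g++", shlex.split("-O2 -DNDEBUG"), header)
```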
cc @janeyx99 @malfet
|
https://github.com/pytorch/pytorch/issues/167480
|
closed
|
[
"module: cpp-extensions",
"module: error checking",
"triaged",
"actionable"
] | 2025-11-10T20:36:14Z
| 2025-11-11T07:27:44Z
| 1
|
sumantro93
|
huggingface/transformers.js
| 1,450
|
SmolVLM2 500M Video Instruct - Video inference
|
### Question
Hey, is it possible to set up **video** inference through **transformers.js** (or maybe some other way?) for the model SmolVLM2 500M Video Instruct? I can't make it work, but I saw that it is possible in the Python transformers library.
I want to create something similar to https://huggingface.co/spaces/HuggingFaceTB/SmolVLM2-HighlightGenerator/tree/main but with full local WebGPU inference.
Thanks in advance. cc: @xenova
|
https://github.com/huggingface/transformers.js/issues/1450
|
open
|
[
"question"
] | 2025-11-10T19:51:07Z
| 2025-11-12T07:46:32Z
| null |
youchi1
|
vllm-project/vllm
| 28,409
|
[Usage]: There is any performance benchmark between running vLLM server via docker image and python?
|
### Your current environment
```text
I mean, if I run a service with the vLLM docker image, does it have any performance advantage compared with running it as a Python service (e.g., importing the vllm package, setting up vLLM inference, handling payloads/responses, etc.)?
```
### How would you like to use vllm
_No response_
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/28409
|
open
|
[
"usage"
] | 2025-11-10T17:56:14Z
| 2025-11-10T17:56:14Z
| 0
|
rafaelsandroni
|
pytorch/pytorch
| 167,467
|
Tensor creation documentation: example code not consistent with its description
|
https://docs.pytorch.org/cppdocs/notes/tensor_creation.html#configuring-properties-of-the-tensor says “Here is an example of creating a `TensorOptions` object that represents a **64-bit float**, strided tensor that requires a gradient, and lives on CUDA device 1”, but then calls `.dtype(torch::kFloat32)`.
cc @svekars @sekyondaMeta @AlannaBurke
|
https://github.com/pytorch/pytorch/issues/167467
|
open
|
[
"module: docs",
"triaged",
"actionable"
] | 2025-11-10T15:00:11Z
| 2025-11-10T21:04:17Z
| 0
|
sboukortt
|
vllm-project/vllm
| 28,393
|
[Feature]: Does vllm-jax plan to support GPU acceleration?
|
### 🚀 The feature, motivation and pitch
Does vllm-jax plan to support GPU acceleration?
### Alternatives
_No response_
### Additional context
_No response_
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/28393
|
closed
|
[
"feature request"
] | 2025-11-10T12:28:20Z
| 2025-11-10T21:44:57Z
| 2
|
south-ocean
|
pytorch/pytorch
| 167,459
|
Dynamic number of omp threads of torch.compile cache
|
### 🚀 The feature, motivation and pitch
It looks like torch.compile hardcodes the number of OMP threads in the cache. I can see things like `#pragma omp parallel num_threads(8)` in the cache, and if a different number of threads is used the performance is much worse. Is it possible to make the cache compatible with different numbers of threads? It would be quite useful when running on HPC. Naively it sounds like something very simple; hopefully one just needs to change `#pragma omp parallel num_threads(xxx)` to `#pragma omp parallel`?
### Alternatives
_No response_
### Additional context
_No response_
cc @chauhang @penguinwu
|
https://github.com/pytorch/pytorch/issues/167459
|
open
|
[
"triaged",
"oncall: pt2",
"oncall: cpu inductor"
] | 2025-11-10T10:24:23Z
| 2025-12-22T19:49:32Z
| 1
|
SUSYUSTC
|
vllm-project/vllm
| 28,388
|
[Bug]: The new vLLM has removed the V0 code path, but Qwen-Omni support is limited to V0, so it seems we cannot run Qwen-Omni models with the latest vLLM
|
### Your current environment
Name: vllm
Version: 0.10.2
### 🐛 Describe the bug
The official example code below does not seem to run; using the audio parameter
"mm_processor_kwargs": {
    "use_audio_in_video": True,
},
in it raises an error:
```python
# SPDX-License-Identifier: Apache-2.0
# SPDX-FileCopyrightText: Copyright contributors to the vLLM project
"""
This example shows how to use vLLM for running offline inference
with the correct prompt format on Qwen2.5-Omni (thinker only).
"""
from typing import NamedTuple
import vllm.envs as envs
from vllm import LLM, SamplingParams
from vllm.assets.audio import AudioAsset
from vllm.assets.image import ImageAsset
from vllm.assets.video import VideoAsset
from vllm.multimodal.image import convert_image_mode
from vllm.utils import FlexibleArgumentParser
class QueryResult(NamedTuple):
inputs: dict
limit_mm_per_prompt: dict[str, int]
# NOTE: The default `max_num_seqs` and `max_model_len` may result in OOM on
# lower-end GPUs.
# Unless specified, these settings have been tested to work on a single L4.
default_system = (
"You are Qwen, a virtual human developed by the Qwen Team, Alibaba "
"Group, capable of perceiving auditory and visual inputs, as well as "
"generating text and speech."
)
def get_mixed_modalities_query() -> QueryResult:
question = (
"What is recited in the audio? "
"What is the content of this image? Why is this video funny?"
)
prompt = (
f"<|im_start|>system\n{default_system}<|im_end|>\n"
"<|im_start|>user\n<|audio_bos|><|AUDIO|><|audio_eos|>"
"<|vision_bos|><|IMAGE|><|vision_eos|>"
"<|vision_bos|><|VIDEO|><|vision_eos|>"
f"{question}<|im_end|>\n"
f"<|im_start|>assistant\n"
)
return QueryResult(
inputs={
"prompt": prompt,
"multi_modal_data": {
"audio": AudioAsset("mary_had_lamb").audio_and_sample_rate,
"image": convert_image_mode(
ImageAsset("cherry_blossom").pil_image, "RGB"
),
"video": VideoAsset(name="baby_reading", num_frames=16).np_ndarrays,
},
},
limit_mm_per_prompt={"audio": 1, "image": 1, "video": 1},
)
def get_use_audio_in_video_query() -> QueryResult:
question = (
"Describe the content of the video, then convert what the baby say into text."
)
prompt = (
f"<|im_start|>system\n{default_system}<|im_end|>\n"
"<|im_start|>user\n<|vision_bos|><|VIDEO|><|vision_eos|>"
f"{question}<|im_end|>\n"
f"<|im_start|>assistant\n"
)
asset = VideoAsset(name="baby_reading", num_frames=16)
audio = asset.get_audio(sampling_rate=16000)
assert not envs.VLLM_USE_V1, (
"V1 does not support use_audio_in_video. "
"Please launch this example with "
"`VLLM_USE_V1=0`."
)
return QueryResult(
inputs={
"prompt": prompt,
"multi_modal_data": {
"video": asset.np_ndarrays,
"audio": audio,
},
"mm_processor_kwargs": {
"use_audio_in_video": True,
},
},
limit_mm_per_prompt={"audio": 1, "video": 1},
)
def get_multi_audios_query() -> QueryResult:
question = "Are these two audio clips the same?"
prompt = (
f"<|im_start|>system\n{default_system}<|im_end|>\n"
"<|im_start|>user\n<|audio_bos|><|AUDIO|><|audio_eos|>"
"<|audio_bos|><|AUDIO|><|audio_eos|>"
f"{question}<|im_end|>\n"
f"<|im_start|>assistant\n"
)
return QueryResult(
inputs={
"prompt": prompt,
"multi_modal_data": {
"audio": [
AudioAsset("winning_call").audio_and_sample_rate,
AudioAsset("mary_had_lamb").audio_and_sample_rate,
],
},
},
limit_mm_per_prompt={
"audio": 2,
},
)
query_map = {
"mixed_modalities": get_mixed_modalities_query,
"use_audio_in_video": get_use_audio_in_video_query,
"multi_audios": get_multi_audios_query,
}
def main(args):
model_name = "Qwen/Qwen2.5-Omni-7B"
query_result = query_map[args.query_type]()
llm = LLM(
model=model_name,
max_model_len=5632,
max_num_seqs=5,
limit_mm_per_prompt=query_result.limit_mm_per_prompt,
seed=args.seed,
)
# We set temperature to 0.2 so that outputs can be different
# even when all prompts are identical when running batch inference.
sampling_params = SamplingParams(temperature=0.2, max_tokens=64)
outputs = llm.generate(query_result.inputs, sampling_params=sampling_params)
for o in outputs:
generated_text = o.outputs[0].text
print(generated_text)
def parse_args():
parser = FlexibleArgumentParser(
description="Demo on using vLLM for offline inference with "
"audio language models"
)
|
https://github.com/vllm-project/vllm/issues/28388
|
open
|
[
"bug"
] | 2025-11-10T09:23:33Z
| 2025-11-16T05:51:42Z
| 1
|
Lee-xeo
|
huggingface/accelerate
| 3,836
|
When using gradient accumulation, does the order of optimizer.zero_grad() affect training?
|
If I use accelerate+deepspeed to train a model, and I set

```yaml
deepspeed_config:
  gradient_accumulation_steps: 8
  offload_optimizer_device: cpu
  offload_param_device: cpu
  zero3_init_flag: false
  zero_stage: 2
```

does the order of backward(), step(), zero_grad() affect training?
For example:

```python
for batch in training_dataloader:
    with accelerator.accumulate(model):
        inputs, targets = batch
        outputs = model(inputs)
        loss = loss_function(outputs, targets)
        accelerator.backward(loss)
        optimizer.step()
        scheduler.step()
        optimizer.zero_grad()
```

and

```python
for batch in training_dataloader:
    with accelerator.accumulate(model):
        optimizer.zero_grad()
        inputs, targets = batch
        outputs = model(inputs)
        loss = loss_function(outputs, targets)
        accelerator.backward(loss)
        optimizer.step()
        scheduler.step()
```

I want to know whether the two situations will yield the same result. During gradient-accumulation training, when the model needs to update the parameters and `accelerator.sync_gradients` is True, will using the second method clear the accumulated gradients, making the accumulation incorrect so that effectively only one sample's gradient is left at that point?
|
https://github.com/huggingface/accelerate/issues/3836
|
closed
|
[] | 2025-11-10T03:11:21Z
| 2025-12-20T15:24:00Z
| 3
|
polestarss
|
huggingface/transformers
| 42,113
|
Add AutoMergeAdapters: Official Utility to Combine Multiple LoRA Adapters into One Unified Model
|
### Feature request
Introduce a new built-in class AutoMergeAdapters to the Transformers/PEFT ecosystem that enables users to merge multiple LoRA adapters trained on different domains or datasets into a single model.
This feature simplifies the process of creating multi-domain fine-tuned models for inference and deployment, without manual merging scripts.
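A hypothetical usage sketch of the proposed API (nothing below exists yet; it is part of the proposal, and the adapter repo names are placeholders):
```python
# Proposed (not yet existing) API, shown for illustration only.
from transformers import AutoMergeAdapters  # hypothetical import

merged_model = AutoMergeAdapters.merge(
    base_model="meta-llama/Llama-3.1-8B",   # base checkpoint
    adapters=[
        "org/medical-lora",                 # hypothetical adapter repos
        "org/legal-lora",
    ],
    weights=[0.6, 0.4],                     # optional weighted combination
    validate_compatibility=True,            # config alignment check
)
merged_model.save_pretrained("llama-3.1-8b-medical-legal")
```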
### Motivation
Today, users can fine-tune models with LoRA adapters easily using PEFT, but they face a major bottleneck when trying to combine more than one adapter.
Current limitations:
Only one LoRA adapter can be merged using merge_and_unload()
Manual merges are error-prone and undocumented
Model config alignment must be handled manually
No built-in CLI or user-friendly API for adapter composition
A high-level API for multi-adapter merging would:
Promote adapter reusability across domains
Simplify deployment of multi-domain, multi-skill models
Reduce code duplication across community projects
### Your contribution
I would like to implement this feature and contribute the following:
Develop the AutoMergeAdapters class under src/transformers/adapters/auto_merge_adapters.py to support merging multiple LoRA adapters with optional weighted combination and compatibility validation.
Extend transformers-cli by adding a new merge-adapters command for CLI-based merging and model export.
Add unit and integration tests in tests/adapters/test_auto_merge_adapters.py to ensure correctness for weighted merges, config mismatches, and adapter integrity.
Provide documentation including a usage guide and a sample notebook under examples/adapters/merge_multiple_adapters.ipynb.
Publish a demo merged model to the Hugging Face Hub for reproducibility and reference.
Open a clean, well-tested PR and iterate based on maintainer feedback.
Happy to start implementation once the approach is approved. Looking forward to guidance if any adjustments are required.
|
https://github.com/huggingface/transformers/issues/42113
|
closed
|
[
"Feature request"
] | 2025-11-09T18:43:20Z
| 2025-11-10T16:58:34Z
| 1
|
3015pavan
|
pytorch/torchtitan
| 2,008
|
On the TorchTitan Infrastructure Build-out (VLM)
|
In the past, I’ve always trained models with the Lightning framework; now I’d like to switch to a more efficient one (TorchTitan or Megatron). However, I’ve run into a few questions and would appreciate your advice:
Can I simply import the encoder part straight from Hugging Face Transformers? (In VLM, the encoder usually accounts for only a small fraction of the parameters, so in my view it doesn’t need tensor-parallelism, etc.)
|
https://github.com/pytorch/torchtitan/issues/2008
|
open
|
[
"question"
] | 2025-11-09T15:03:35Z
| 2025-11-10T09:56:00Z
| null |
Joluck
|
huggingface/transformers
| 42,111
|
Add thinking-budget support (max_thinking_tokens) for reasoning-capable chat models
|
### Feature request
A built-in way to cap how many tokens a reasoning model spends inside its ``<think> … </think>`` block. Today, we can only control the total response length via ``max_new_tokens``. No parameter limits the internal reasoning segment when ``enable_thinking=True``.
### Motivation
- Reasoning models (e.g., Qwen3 series) often produce very long thought blocks, which can blow past latency budgets before the final answer starts.
- Users need a simple, model-agnostic control to bound that “thinking” cost without disabling reasoning entirely.
- The Qwen docs (https://qwen.readthedocs.io/en/latest/getting_started/quickstart.html#thinking-budget) already describe a brute-force approach (two-step generation) to implement “thinking budgets”.
### Your contribution
I want to submit a PR that:
- Extends ``GenerationConfig`` with:
``max_thinking_tokens``: integer budget for reasoning tokens.
``begin_thinking_token_id / end_thinking_token_id``: marker IDs so generation knows where the thinking span begins/ends.
- Add a ``MaxThinkingTokensLogitsProcessor`` that watches the active ``<think>`` block. Once the budget is reached, it forces end_thinking_token_id, ensuring the model exits reasoning and continues with the final response (a rough sketch follows this list).
- Document the new parameter in reasoning-model guides (EXAONE, CWM, etc.) and show how to wire the thinking-token IDs until configs do it automatically.
- Provide unit coverage so ``_get_logits_processor`` injects the new processor whenever the config is fully specified.
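A rough sketch of the processor described above; this is illustrative only, with names following the proposal rather than an existing Transformers API:
```python
import torch
from transformers import LogitsProcessor

class MaxThinkingTokensLogitsProcessor(LogitsProcessor):
    """Force `end_thinking_token_id` once the open <think> span exceeds the budget."""

    def __init__(self, max_thinking_tokens: int, begin_thinking_token_id: int, end_thinking_token_id: int):
        self.max_thinking_tokens = max_thinking_tokens
        self.begin_id = begin_thinking_token_id
        self.end_id = end_thinking_token_id

    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor) -> torch.FloatTensor:
        for i, seq in enumerate(input_ids.tolist()):
            if self.begin_id not in seq:
                continue
            start = len(seq) - 1 - seq[::-1].index(self.begin_id)  # last <think> marker
            if self.end_id in seq[start:]:
                continue  # thinking already closed
            if len(seq) - start - 1 >= self.max_thinking_tokens:
                scores[i, :] = float("-inf")
                scores[i, self.end_id] = 0.0  # force </think>
        return scores
```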
|
https://github.com/huggingface/transformers/issues/42111
|
open
|
[
"Feature request"
] | 2025-11-09T10:09:11Z
| 2025-11-09T10:09:11Z
| 0
|
AndresAlgaba
|
vllm-project/vllm
| 28,362
|
[Usage]: Can't get vLLM to run on an Intel 125H with XPU and Arc graphics
|
### Your current environment
```text
Collecting environment information...
==============================
System Info
==============================
OS : Ubuntu 24.04.3 LTS (x86_64)
GCC version : (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version : Could not collect
CMake version : version 4.1.2
Libc version : glibc-2.39
==============================
PyTorch Info
==============================
PyTorch version : 2.8.0+xpu
Is debug build : False
|
https://github.com/vllm-project/vllm/issues/28362
|
open
|
[
"usage",
"intel-gpu"
] | 2025-11-09T09:45:05Z
| 2025-11-12T00:19:39Z
| 2
|
phlibi
|
vllm-project/vllm
| 28,350
|
[Doc]: Running VLLM via Docker Swarm With Support for Tensor Parallelism
|
### 📚 Running VLLM via Docker Swarm With Support for Tensor Parallelism
There's no documentation that I have found outlining how to run VLLM in a docker swarm when utilizing tensor parallelism. The issue is that ```ipc=host``` is not an available option within docker swarm. Consulting the AI feature on the VLLM website suggests using the ```shm``` option, which is available in swarm, but this produces continued failures on startup.
Please advise how to run VLLM via docker swarm utilizing tensor parallelism. Thanks.
|
https://github.com/vllm-project/vllm/issues/28350
|
closed
|
[
"documentation"
] | 2025-11-08T21:11:15Z
| 2025-11-19T16:37:31Z
| 2
|
ep5000
|
vllm-project/vllm
| 28,348
|
[Usage]: Does vllm support max_pixels in prompt on Qwen3-VL reasoning?
|
### Your current environment
```text
The output of `python collect_env.py`
```
### How would you like to use vllm
I want to run inference with Qwen3-VL-A3B-Instruct. I tried to set max_pixels, but it doesn't work.
```python
import json
import base64
import requests
img_path = r".\images\MMMU\735_1.jpg"
base64_str = base64.b64encode(open(img_path, 'rb').read()).decode()
url = "http://71.10.29.136:8000/v1/chat/completions"
payload = json.dumps(
{
"model": "qwen3-vl-30b",
"messages": [
{
"role": "system",
"content": ""
},
{
"role": "user",
"content": [
{
"type": "text",
"text": "Question: "
},
{
"type": "image_url",
"image_url": {
"url": f"data:image/jpg;base64,{base64_str}"
},
"max_pixels": 192 * 96 ## this is not work.... ##
},
{
"type": "text",
"text": " How does the green and photosynthesising mistletoe impact the tree it is hosting? Options:\\nA. It will grow down into the roots and kill the tree.\\nB. Mistletoe is beneficial and increases the growth of the plant.\\nC. It just uses the tree for support and does not damage it.\\nD. I don't know and don't want to guess.\\nE. It has a very damaging impact on the health of the plant but localised to the place of infection.\\n Please select the correct answer from the options above. \\n Only answer with the option letter, e.g. A, B, C, D, E, F, G, H, I. *DO NOT output any other information*. \\n"
}
]
}
],
"n": 1,
"top_p": 0.001,
"top_k": 1,
"temperature": 0.01,
"max_tokens": 8192
}
)
headers = {
'Content-Type': 'application/json',
'Authorization': 'Bearer EMPTY'
}
response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
```
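In case it helps frame the question: my understanding (possibly wrong) is that pixel limits are a processor-level setting passed at engine startup rather than per request. A sketch of the offline-API equivalent of what I am trying, where the max_pixels kwarg name and the model id are my assumptions:
```python
from vllm import LLM, SamplingParams

# Assumed: pixel limits go through mm_processor_kwargs at engine construction,
# not inside individual chat-completion requests.
llm = LLM(
    model="Qwen/Qwen3-VL-30B-A3B-Instruct",
    mm_processor_kwargs={"max_pixels": 192 * 96},
)
out = llm.chat(
    [{"role": "user", "content": [
        {"type": "image_url", "image_url": {"url": "https://example.com/example.jpg"}},
        {"type": "text", "text": "Describe the image."},
    ]}],
    SamplingParams(max_tokens=64),
)
print(out[0].outputs[0].text)
```
Is there a supported way to get the same effect per request through the OpenAI-compatible server?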
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/28348
|
open
|
[
"usage"
] | 2025-11-08T16:06:07Z
| 2025-11-08T16:56:17Z
| 1
|
leijie-ww
|
pytorch/pytorch
| 167,412
|
How can I train in C++ using a PyTorch TorchScript model
|
### 🐛 Describe the bug
dd
### Versions
I trained a model in PyTorch and then saved it to TorchScript format using torch.jit.save.
Now I want to retrain from this model, and I have a question about whether the TorchScript model can be used for training.
I have a few different questions about how to train the TorchScript model in C++.
I want to use a trained model for fine-tuning. I generated the TorchScript model in PyTorch. In the C++ API, I load the model using the torch::jit::load function, and then I want to retrain the model.
In my code:
torch::jit::script::Module m_model = torch::jit::load(m_modulePath);
torch::optim::SGD optimizer(m_model.parameters(), SGDoptions);
When I set up the optimizer, I was told that the first parameter was incorrect.
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
|
https://github.com/pytorch/pytorch/issues/167412
|
open
|
[
"oncall: jit"
] | 2025-11-08T13:11:14Z
| 2025-11-10T19:11:24Z
| 1
|
mullerhai
|
vllm-project/vllm
| 28,344
|
[Usage]: Function calling Request's sampling_params.structured_outputs is None?
|
Hi, I used the OpenAI server API to build an LLM backend while deploying an MCP server. I discovered that the vLLM engine prompt combines the system prompt, the tool list and the user prompt, but I saw that sampling_params.structured_outputs is None. Although the result seemed correct, I think it's important to use structured output when generating function calls. Why isn't structured output used when generating the JSON? Please explain, thanks a lot.
Below start a vllm backend.
```
python -m vllm.entrypoints.openai.api_server \
--model /workspace/models/qwen-2.5B/models--Qwen--Qwen2.5-1.5B-Instruct/snapshots/989aa7980e4cf806f80c7fef2b1adb7bc71aa306/ \
--served-model-name "qwen-2.5b" \
--port 8000 \
--trust-remote-code \
--enable-auto-tool-choice \
--tool-call-parser hermes
```
Below is input of vllm engine.
```
(APIServer pid=703600) > /workspace/vllm/vllm/entrypoints/openai/serving_chat.py(326)create_chat_completion()
(APIServer pid=703600) -> generator = self.engine_client.generate(
(APIServer pid=703600) ['<|im_start|>system\nYou are Qwen, created by Alibaba Cloud. You are a helpful assistant.\n\n# Tools\n\nYou may call one or more functions to assist with the user query.\n\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{"type": "function", "function": {"name": "weather", "description": "城市天气查询", "parameters": {"type": "object", "properties": {"city": {"type": "string"}}, "required": ["city"]}}}\n{"type": "function", "function": {"name": "stock", "description": "股票价格查询", "parameters": {"type": "object", "properties": {"code": {"type": "string"}}, "required": ["code"]}}}\n</tools>\n\nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{"name": <function-name>, "arguments": <args-json-object>}\n</tool_call><|im_end|>\n<|im_start|>user\n查询北京天气和贵州茅台股价<|im_end|>\n<|im_start|>assistant\n']
(Pdb) sampling_params.structured_outputs
(Pdb) sampling_params
(APIServer pid=703600) SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.1, temperature=0.7, top_p=0.8, top_k=20, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=32549, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=False, spaces_between_special_tokens=True, truncate_prompt_tokens=None, **structured_outputs=None,** extra_args=None)
```
Below is output of vllm engine.
```
(APIServer pid=703600) > /workspace/vllm/vllm/entrypoints/openai/serving_chat.py(1290)chat_completion_full_generator()
(APIServer pid=703600) -> async for res in result_generator:
(Pdb) final_res
(APIServer pid=703600) RequestOutput(request_id=chatcmpl-573ea011c8894432bf8aa9d1468cae60, prompt='<|im_start|>system\nYou are Qwen, created by Alibaba Cloud. You are a helpful assistant.\n\n# Tools\n\nYou may call one or more functions to assist with the user query.\n\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{"type": "function", "function": {"name": "weather", "description": "城市天气查询", "parameters": {"type": "object", "properties": {"city": {"type": "string"}}, "required": ["city"]}}}\n{"type": "function", "function": {"name": "stock", "description": "股票价格查询", "parameters": {"type": "object", "properties": {"code": {"type": "string"}}, "required": ["code"]}}}\n</tools>\n\nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{"name": <function-name>, "arguments": <args-json-object>}\n</tool_call><|im_end|>\n<|im_start|>user\n查询北京天气和贵州茅台股价<|im_end|>\n<|im_start|>assistant\n', prompt_token_ids=[151644, 8948, 198, 2610, 525, 1207, 16948, 11, 3465, 553, 54364, 14817, 13, 1446, 525, 264, 10950, 17847, 382, 2, 13852, 271, 2610, 1231, 1618, 825, 476, 803, 5746, 311, 7789, 448, 279, 1196, 3239, 382, 2610, 525, 3897, 448, 729, 32628, 2878, 366, 15918, 1472, 15918, 29, 11874, 9492, 510, 27, 15918, 397, 4913, 1313, 788, 330, 1688, 497, 330, 1688, 788, 5212, 606, 788, 330, 15206, 497, 330, 4684, 788, 330, 99490, 104307, 51154, 497, 330, 13786, 788, 5212, 1313, 788, 330, 1700, 497, 330, 13193, 788, 5212, 8926, 788, 5212, 1313, 788, 330, 917, 9207, 2137, 330, 6279, 788, 4383, 8926, 1341, 3417, 532, 4913, 1313, 788, 330, 1688, 497, 330, 1688, 788, 5212, 606, 788, 330, 13479, 497, 330, 4684, 788, 330, 104023, 97480, 51154, 497, 330, 13786, 788, 5212, 1313, 788, 330, 1700, 497, 330, 13193, 788, 5212, 1851, 788, 5212, 1313, 788, 330, 917, 9207, 2137, 330, 6279, 788, 4383, 1851, 1341, 3417, 532, 522, 15918, 1339, 2461, 1817, 729, 1618, 11, 470, 264, 2951, 1633, 448, 729, 829, 323, 5977, 2878, 220, 151657, 151658, 11874, 9492, 510, 151657, 198, 4913, 606, 788, 366, 1688, 11494, 8066, 330, 16370, 788, 366, 2116, 56080, 40432, 31296, 151658, 151645, 198, 151644, 872, 198, 51154, 68990, 104307, 33108, 102345, 109625, 105281, 151645, 198, 151644, 77091
|
https://github.com/vllm-project/vllm/issues/28344
|
closed
|
[
"usage"
] | 2025-11-08T08:57:17Z
| 2025-11-10T07:51:51Z
| 5
|
wtr0504
|
vllm-project/vllm
| 28,340
|
[Installation]: Need offline wheel for vLLM 0.11.0rc2 (pip download fails) to deploy qwen3_vl_235b_a22b_instruct_i18n
|
### Your current environment
I need to install vLLM 0.11.0rc2 in an offline environment.
Is there an official wheel (.whl) available for vLLM==0.11.0rc2 that I can download directly?
Running:
```
pip download vllm==0.11.0rc2 --pre --extra-index-url https://wheels.vllm.ai/nightly -d wheels
```
fails with an error:
Looking in indexes: https://bytedpypi.byted.org/simple/, https://wheels.vllm.ai/nightly
ERROR: Ignored the following yanked versions: 0.2.1
ERROR: Could not find a version that satisfies the requirement vllm==0.11.0rc2 (from versions: 0.0.1, 0.1.0, 0.1.1, 0.1.2, 0.1.3, 0.1.4, 0.1.5, 0.1.6, 0.1.7, 0.2.0, 0.2.1.post1, 0.2.2, 0.2.3, 0.2.4, 0.2.5, 0.2.6, 0.2.7, 0.3.0, 0.3.1, 0.3.2, 0.3.3, 0.4.0, 0.4.0.post1, 0.4.1, 0.4.2, 0.4.3, 0.5.0, 0.5.0.post1, 0.5.1, 0.5.2, 0.5.3, 0.5.3.post1, 0.5.4, 0.5.5, 0.6.0, 0.6.1, 0.6.1.post1, 0.6.1.post2, 0.6.2, 0.6.3, 0.6.3.post1, 0.6.4, 0.6.4.post1, 0.6.5, 0.6.6, 0.6.6.post1, 0.7.0, 0.7.1, 0.7.2, 0.7.3, 0.8.0, 0.8.1, 0.8.2, 0.8.3, 0.8.4, 0.8.5, 0.8.5.post1, 0.9.0, 0.9.0.1, 0.9.1, 0.9.2, 0.10.0, 0.10.1, 0.10.1.1, 0.10.2, 0.11.0, 0.11.1rc6.dev210+g70af44fd1.cu129)
ERROR: No matching distribution found for vllm==0.11.0rc2.
### How you are installing vllm
```sh
pip download vllm==0.11.0rc2 --pre --extra-index-url https://wheels.vllm.ai/nightly -d wheels
```
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/28340
|
closed
|
[
"installation"
] | 2025-11-08T06:05:31Z
| 2025-11-08T06:08:37Z
| 0
|
FateForever0222
|
pytorch/ao
| 3,314
|
Loading 8bit optimizer state from checkpoint causes dtype mismatch
|
We are using torch2.8. Optimizer states are quantized to [8bit](https://github.com/pytorch/ao/blob/main/torchao/optim/subclass_8bit.py). Normal training jobs are fine, but jobs that resume from checkpoint fail at `optimizer.step()`. We use AdamW optimizer copied from some older version of torch/torchao, where computation is done at fp32 precision:
```
exp_avg_f32 = exp_avg.float().lerp(grad_f32, 1 - beta1)
```
This fails with an error that indicates `exp_avg.float()` is somehow bf16.
```
torch._dynamo.exc.TorchRuntimeError: Dynamo failed to run FX node with fake tensors: call_method lerp(*(DTensor(local_tensor=OptimState8bit(signed=True, block_size=256, shape=(1408, 2048), device=cuda:0, requires_grad=False), device_mesh=DeviceMesh('cuda', [0], mesh_dim_names=('fsdp_cp',)), placements=(Shard(dim=0),)), DTensor(local_tensor=FakeTensor(..., device='cuda:0', size=(1408, 2048)), device_mesh=DeviceMesh('cuda', [0], mesh_dim_names=('fsdp_cp',)), placements=(Shard(dim=0),)), 0.09999999999999998), **{}): got RuntimeError('expected dtype torch.bfloat16 for `end`, but got dtype torch.float32')
from user code:
File "/traindata/yunfan/lotus/lotus/components/optim/adamw.py", line 165, in single_param_adam
exp_avg_f32 = exp_avg_f32.lerp(grad_f32, 1 - beta1)
```
The casting in load_state_dict() is suspicious: it converts state values like exp_avg to bf16 to match the model weights' precision. So I tried to make both the `DTensor` wrapper and the `OptimState8bit` local tensor convert to fp32 if they appear to be bf16 after checkpoint loading, and added an assert statement before `lerp()` to make sure `exp_avg.float()`'s dtype is fp32. But these efforts don't help. It seems that somewhere in the DTensor operation bf16 is enforced without triggering the assert statement. Can I get help on understanding the behavior and making a correct fix? Thanks in advance!
Below is more detailed stacktrace:
```
Traceback (most recent call last):
File "/traindata/yunfan/lotus/lotus/grpo.py", line 1051, in <module>
recipe_main()
File "/traindata/yunfan/lotus/lotus/utils/config.py", line 184, in wrapper
recipe_main(conf)
File "/traindata/yunfan/lotus/lotus/grpo.py", line 1046, in recipe_main
recipe.train()
File "/traindata/yunfan/lotus/lotus/grpo.py", line 813, in train
step_output = self.train_step(
^^^^^^^^^^^^^^^^
File "/traindata/yunfan/lotus/lotus/grpo.py", line 694, in train_step
self._optimizer.step()
File "/traindata/yunfan/lotus/.venv/lib/python3.12/site-packages/torch/optim/lr_scheduler.py", line 133, in wrapper
return func.__get__(opt, opt.__class__)(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/traindata/yunfan/lotus/.venv/lib/python3.12/site-packages/torch/optim/optimizer.py", line 516, in wrapper
out = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/traindata/yunfan/lotus/.venv/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 120, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/traindata/yunfan/lotus/lotus/components/optim/adamw.py", line 166, in step
adamw8bit_step_helper(self, self.param_groups, self._new_buffer, self.bf16_stochastic_round, self.is_adamw)
File "/traindata/yunfan/lotus/lotus/components/optim/adamw.py", line 280, in adamw8bit_step_helper
single_param_adam(
File "/traindata/yunfan/lotus/lotus/components/optim/adamw.py", line 208, in single_param_adam
exp_avg_f32 = exp_avg_float.lerp(grad_f32, 1 - beta1)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/traindata/yunfan/lotus/.venv/lib/python3.12/site-packages/torch/_compile.py", line 53, in inner
return disable_fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/traindata/yunfan/lotus/.venv/lib/python3.12/site-packages/torch/_dynamo/eval_frame.py", line 929, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/traindata/yunfan/lotus/.venv/lib/python3.12/site-packages/torch/distributed/tensor/_api.py", line 350, in __torch_dispatch__
return DTensor._op_dispatcher.dispatch(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/traindata/yunfan/lotus/.venv/lib/python3.12/site-packages/torch/distributed/tensor/_dispatch.py", line 154, in dispatch
self.sharding_propagator.propagate(op_info)
File "/traindata/yunfan/lotus/.venv/lib/python3.12/site-packages/torch/distributed/tensor/_sharding_prop.py", line 266, in propagate
OutputSharding, self.propagate_op_sharding(op_info.schema)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/traindata/yunfan/lotus/.venv/lib/python3.12/site-packages/torch/distributed/tensor/_sharding_prop.py", line 45, in __call__
return self.cache(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/traindata/yunfan/lotus/.venv/lib/python3.12/site-packages/torch/distributed/tensor/_sharding_prop.py", line 279, in propagate_op_sharding_non_cached
out_tensor
|
https://github.com/pytorch/ao/issues/3314
|
open
|
[
"optimizer",
"triaged"
] | 2025-11-08T00:27:00Z
| 2025-12-05T01:12:07Z
| 6
|
yz-ppl
|
pytorch/pytorch
| 167,369
|
Dynamo fails to trace repr
|
### 🐛 Describe the bug
```python
import torch
import torch.nn as nn
class Config:
def __repr__(self):
return "Config()"
def forward(x, config):
# Calling repr() on non-constant user object
# This triggers the bug without the fix
return x * len(repr(config))
config = Config()
x = torch.randn(2, 2)
compiled = torch.compile(forward, fullgraph=True)
compiled(x, config)  # calling the compiled function triggers the error below
```
Errors with:
```
Unsupported: Failed to trace builtin operator
Explanation: Dynamo does not know how to trace builtin operator `repr` with argument types ['Config'] (has_kwargs False)
Hint: Avoid calling builtin `repr` with argument types ['Config']. Consider using an equivalent alternative function/method to `repr`.
Hint: If you are attempting to call a logging function (e.g. `print`), you can try adding it to `torch._dynamo.config.reorderable_logging_functions`.
Hint: Please report an issue to PyTorch.
Developer debug context: builtin repr [<class 'torch._dynamo.variables.user_defined.UserDefinedObjectVariable'>] False
```
### Versions
main
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames @Lucaskabela
|
https://github.com/pytorch/pytorch/issues/167369
|
closed
|
[
"oncall: pt2",
"module: dynamo"
] | 2025-11-07T22:02:51Z
| 2025-11-10T21:06:41Z
| 0
|
tugsbayasgalan
|
pytorch/pytorch
| 167,344
|
UnboundLocalError: cannot access local variable 'tracer_output' where it is not associated with a value
|
(Worker_TP1 pid=243560) ERROR 11-07 10:44:16 [multiproc_executor.py:699] if tracer_output:
(Worker_TP1 pid=243560) ERROR 11-07 10:44:16 [multiproc_executor.py:699] ^^^^^^^^^^^^^
(Worker_TP1 pid=243560) ERROR 11-07 10:44:16 [multiproc_executor.py:699] UnboundLocalError: cannot access local variable 'tracer_output' where it is not associated with a value
This happens only in 2.9.0. Can we fix it for 2.9.1?
https://github.com/pytorch/pytorch/blame/release/2.9/torch/_dynamo/convert_frame.py#L1473
cc @chauhang @penguinwu
|
https://github.com/pytorch/pytorch/issues/167344
|
closed
|
[
"oncall: pt2"
] | 2025-11-07T18:48:42Z
| 2025-11-07T22:31:29Z
| null |
zou3519
|
vllm-project/vllm
| 28,310
|
[Doc]: Update GPU requirements to include AMD gfx1150/gfx1151
|
### 📚 The doc issue
Summary: The documentation for GPU requirements does not list AMD gfx1150 and gfx1151 architectures, which are now supported.
Background: Support for AMD gfx1150 and gfx1151 GPUs was added in https://github.com/vllm-project/vllm/pull/25908. The GPU requirements page should be updated to reflect this.
Affected page: https://docs.vllm.ai/en/latest/getting_started/installation/gpu/index.html#requirements
Expected behavior: The GPU requirements page lists AMD gfx1150 and gfx1151 as supported architectures.
### Suggest a potential alternative/fix
Proposed fix: https://github.com/vllm-project/vllm/pull/28308
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/28310
|
closed
|
[
"documentation",
"rocm"
] | 2025-11-07T17:26:47Z
| 2025-11-08T03:01:08Z
| 1
|
hammmmy
|
pytorch/pytorch
| 167,331
|
[TEST FAILURE UT] TestForeachCUDA.test_foreach_copy_with_multi_dtypes_large_input_cuda fails
|
**TL;DR** The foreach test fails when run with:
`TEST_CONFIG=default python3 test/run_test.py --verbose --keep-going -i test_foreach`
Adding the @serialTest() decorator to the test function `test_foreach_copy_with_multi_dtypes_large_input` fixes this issue (see the sketch below).
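Concretely, the change I am proposing looks like this; a simplified sketch, assuming serialTest is imported from torch.testing._internal.common_utils as elsewhere in the suite, and eliding the real test's device/dtype parametrization:
```python
from torch.testing._internal.common_utils import TestCase, serialTest

class TestForeachLargeInput(TestCase):
    @serialTest()  # run alone so the 8 GiB allocation does not race other shards for GPU memory
    def test_foreach_copy_with_multi_dtypes_large_input(self):
        ...
```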
```
_____ TestForeachCUDA.test_foreach_copy_with_multi_dtypes_large_input_cuda _____
Traceback (most recent call last):
File "/pytorch/torch/testing/_comparison.py", line 1289, in not_close_error_metas
pair.compare()
File "/pytorch/torch/testing/_comparison.py", line 740, in compare
self._compare_values(actual, expected)
File "/pytorch/torch/testing/_comparison.py", line 898, in _compare_values
compare_fn(
File "/pytorch/torch/testing/_comparison.py", line 1077, in _compare_regular_values_close
matches = torch.isclose(
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 8.00 GiB. GPU 0 has a total capacity of 139.80 GiB of which 94.40 GiB is free. Process 32188 has 518.00 MiB memory in use. Process 32189 has 518.00 MiB memory in use. Process 32190 has 518.00 MiB memory in use. Including non-PyTorch memory, this process has 39.76 GiB memory in use. Process 33858 has 518.00 MiB memory in use. Process 33860 has 518.00 MiB memory in use. Process 33859 has 518.00 MiB memory in use. Process 34062 has 520.00 MiB memory in use. Process 35455 has 518.00 MiB memory in use. Process 35453 has 518.00 MiB memory in use. Process 35454 has 518.00 MiB memory in use. Process 35670 has 520.00 MiB memory in use. 46.13 GiB allowed; Of the allocated memory 39.00 GiB is allocated by PyTorch, and 12.00 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/pytorch/test/test_foreach.py", line 1376, in test_foreach_copy_with_multi_dtypes_large_input
self.assertEqual(self_tensor, ref_out)
File "/pytorch/torch/testing/_internal/common_utils.py", line 4139, in assertEqual
error_metas = not_close_error_metas(
File "/pytorch/torch/testing/_comparison.py", line 1295, in not_close_error_metas
raise RuntimeError(
RuntimeError: Comparing
TensorOrArrayPair(
id=(),
actual=tensor([1., 1., 1., ..., 1., 1., 1.], device='cuda:0'),
expected=tensor([1., 1., 1., ..., 1., 1., 1.], device='cuda:0'),
rtol=1.3e-06,
atol=1e-05,
equal_nan=True,
check_device=False,
check_dtype=True,
check_layout=False,
check_stride=False,
)
resulted in the unexpected exception above. If you are a user and see this message during normal operation please file an issue at https://github.com/pytorch/pytorch/issues. If you are a developer and working on the comparison functions, please except the previous error and raise an expressive `ErrorMeta` instead.
To execute this test, run the following from the base repo dir:
python test/test_foreach.py TestForeachCUDA.test_foreach_copy_with_multi_dtypes_large_input_cuda
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
I can send a pr if that is okay.
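For reference, a minimal self-contained sketch of the kind of change (the real fix would just add the decorator to the existing test in `test/test_foreach.py`; the test body below is a stand-in, not the actual large-input test):
```python
# Sketch: mark a memory-hungry test to run serially, assuming the serialTest
# decorator from torch.testing._internal.common_utils (used by other large tests).
import torch
from torch.testing._internal.common_utils import TestCase, run_tests, serialTest


class TestLargeInputSketch(TestCase):
    @serialTest()  # run without other test processes competing for GPU memory
    def test_large_copy_sketch(self):
        if not torch.cuda.is_available():
            self.skipTest("CUDA required")
        src = torch.ones(2**20, device="cuda")
        dst = torch.empty_like(src)
        torch._foreach_copy_([dst], [src])
        self.assertEqual(dst, src)


if __name__ == "__main__":
    run_tests()
```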
cc @crcrpar @mcarilli @janeyx99
|
https://github.com/pytorch/pytorch/issues/167331
|
open
|
[
"triaged",
"actionable",
"module: mta"
] | 2025-11-07T17:09:56Z
| 2025-11-07T17:16:33Z
| 2
|
arkadip-maitra
|
huggingface/transformers
| 42,093
|
MBart decoder ignores index 0 of labels (index 1 of decoder_input_ids)
|
### System Info
I am creating an OCR model with the VisionEncoderDecoderModel class by connecting a PLM vision tower to a Donut-base decoder (an MBart model).
I train with the default teacher-forcing setup, and I found that the model ignores index 0 of the target (index 1 of the decoder_input_ids).
The MBart documentation says the lang_code should be the BOS of the target labels, but unlike the usual setup where MBart is used for translation, I am using it for an image-to-text task.
When I train with Seq2SeqTrainer, the model skips index 0 no matter what token is there.
I made my trainer print the labels, decoder inputs ("dec in", my own shift-right, just for display) and predictions; this is how it looks:
```python
label: [985, 735, 8, 690, 28264, 1448, 15320, 8, 4467, 18823, 258, 30606, 5965, 2164, 451, 8, 4467, 18823, 35, 2, -100, -100, -100, -100, -100, -100, -100, -100, -100]
decin: [2, 985, 735, 8, 690, 28264, 1448, 15320, 8, 4467, 18823, 258, 30606, 5965, 2164, 451, 8, 4467, 18823, 35, 2, 1, 1, 1, 1, 1, 1, 1, 1]
preds: [735, 8, 690, 28264, 1448, 15320, 8, 4467, 18823, 258, 30606, 5965, 2164, 451, 8, 4467, 18823, 35, 2, 4467, 2, 2, 2, 185, 2, 2, 2, 2]
label: [15418, 417, 893, 7271, 12, 8, 6583, 13, 46, 6549, 5538, 3632, 388, 8, 3633, 11, 34, 5221, 8, 188, 28, 2234, 8, 22, 11, 8, 26, 8340, 2]
decin: [2, 15418, 417, 893, 7271, 12, 8, 6583, 13, 46, 6549, 5538, 3632, 388, 8, 3633, 11, 34, 5221, 8, 188, 28, 2234, 8, 22, 11, 8, 26, 8340]
preds: [417, 893, 7271, 12, 8, 6583, 13, 46, 6549, 5538, 3632, 388, 8, 3633, 11, 34, 5221, 8, 188, 28, 2234, 8, 22, 11, 8, 26, 8340, 2]
label: [877, 8, 13, 397, 8, 3038, 10180, 7049, 88, 8, 13, 5348, 9, 36, 208, 123, 11, 12311, 148, 2696, 2, -100, -100, -100, -100, -100, -100, -100, -100]
decin: [2, 877, 8, 13, 397, 8, 3038, 10180, 7049, 88, 8, 13, 5348, 9, 36, 208, 123, 11, 12311, 148, 2696, 2, 1, 1, 1, 1, 1, 1, 1]
preds: [8, 13, 397, 8, 3038, 10180, 7049, 88, 8, 13, 5348, 9, 36, 208, 123, 11, 12311, 148, 2696, 2, 2, 2, 2, 2, 2, 2696, 2, 2]
```
Let's assume the language code is 0 and it sits at the beginning: it would be ignored too. How do I make the model not ignore index 0 of the labels?
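For reference, a minimal, self-contained sketch of how teacher forcing lines up decoder inputs and labels; the shift-right below mirrors the usual encoder-decoder behaviour when only `labels` are passed, and the token ids are made up:
```python
# Minimal sketch of the label/position alignment under teacher forcing.
# Logits at position i are scored against labels[i], so labels[0] is only
# ignored if it is set to -100.
import torch
import torch.nn.functional as F

pad_id, start_id, ignore = 1, 2, -100
labels = torch.tensor([[985, 735, 8, 690, 2, ignore, ignore]])

# shift right: decoder input at position i is the label from position i-1
dec_in = torch.full_like(labels, pad_id)
dec_in[:, 0] = start_id
dec_in[:, 1:] = labels[:, :-1]
dec_in[dec_in == ignore] = pad_id
print(dec_in)  # tensor([[  2, 985, 735,   8, 690,   2,   1]])

# fake logits standing in for the decoder output: one vector per input position
vocab = 30000
logits = torch.randn(1, labels.shape[1], vocab)

# cross entropy pairs logits[:, i] with labels[:, i]; -100 positions are skipped,
# but position 0 (label 985) IS part of the loss.
loss = F.cross_entropy(logits.view(-1, vocab), labels.view(-1), ignore_index=ignore)
print(loss)
```
If the printed `preds` are taken from logits positions 1..N rather than 0..N-1, the first label will look skipped even though it is part of the loss.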
### Who can help?
@ArthurZucker
@Cyrilvallez
@yonigozlan
@molbap
@zucchini-nlp
@itazap
### Information
- [x] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
https://colab.research.google.com/drive/1nLCDlFyKhqCGu7dhlxJ0JiCRYjG24vbO?usp=sharing
### Expected behavior
I would like the decoder model to not ignore index 0 of the labels, so that it looks like this:
<img width="182" height="136" alt="Image" src="https://github.com/user-attachments/assets/18a8e465-c235-4ac5-a9e2-f13d41bec964" />
|
https://github.com/huggingface/transformers/issues/42093
|
closed
|
[
"bug"
] | 2025-11-07T15:46:08Z
| 2025-11-07T16:27:10Z
| 1
|
jaaabir
|
vllm-project/vllm
| 28,292
|
[Usage]: Failure to Deploy Llama-3.2-11B-Vision-Instruct Locally via vllm Due to OOM
|
### Your current environment
The output of <code>python collect_env.py</code>
```text
==============================
System Info
==============================
OS : Ubuntu 20.04.5 LTS (x86_64)
GCC version : (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version : Could not collect
CMake version : version 3.16.3
Libc version : glibc-2.35
==============================
PyTorch Info
==============================
PyTorch version : 2.5.1+cu121
Is debug build : False
CUDA used to build PyTorch : 12.1
ROCM used to build PyTorch : N/A
==============================
Python Environment
==============================
Python version : 3.10.18 (main, Jun 5 2025, 13:14:17) [GCC 11.2.0] (64-bit runtime)
Python platform : Linux-5.4.250-2-velinux1u1-amd64-x86_64-with-glibc2.35
==============================
CUDA / GPU Info
==============================
Is CUDA available : True
CUDA runtime version : 12.4.131
CUDA_MODULE_LOADING set to : LAZY
GPU models and configuration :
GPU 0: NVIDIA A100-SXM4-80GB
GPU 1: NVIDIA A100-SXM4-80GB
GPU 2: NVIDIA A100-SXM4-80GB
GPU 3: NVIDIA A100-SXM4-80GB
GPU 4: NVIDIA A100-SXM4-80GB
GPU 5: NVIDIA A100-SXM4-80GB
GPU 6: NVIDIA A100-SXM4-80GB
GPU 7: NVIDIA A100-SXM4-80GB
Nvidia driver version : 535.129.03
cuDNN version : Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.7.0
HIP runtime version : N/A
MIOpen runtime version : N/A
Is XNNPACK available : True
==============================
CPU Info
==============================
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 57 bits virtual
CPU(s): 112
On-line CPU(s) list: 0-108
Off-line CPU(s) list: 109-111
Thread(s) per core: 1
Core(s) per socket: 28
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 106
Model name: Intel(R) Xeon(R) Platinum 8336C CPU @ 2.30GHz
Stepping: 6
CPU MHz: 2294.608
BogoMIPS: 4589.21
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 1.3 MiB
L1i cache: 896 KiB
L2 cache: 35 MiB
L3 cache: 54 MiB
NUMA node0 CPU(s): 0-55
NUMA node1 CPU(s): 56-111
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves wbnoinvd arat avx512vbmi umip pku ospke avx512_vbmi2 gf
|
https://github.com/vllm-project/vllm/issues/28292
|
closed
|
[
"usage"
] | 2025-11-07T12:01:04Z
| 2026-01-06T00:06:43Z
| 5
|
LittleLucifer1
|
huggingface/transformers
| 42,086
|
Does Trainer use a grad scaler for training?
|
I cannot find any grad scaler usage in the Trainer code. If it is not used, I would like to understand how mixed precision training with fp16 works without a grad scaler.
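As far as I can tell, Trainer delegates mixed precision to accelerate, which creates the `GradScaler` when fp16 is enabled, so there is no explicit scaler in `trainer.py`. Below is only a minimal sketch of the standard fp16 pattern that ends up being applied under the hood (toy model and data, CUDA assumed; not the Trainer code itself):
```python
# Minimal sketch of the classic fp16 recipe: autocast for compute, GradScaler for
# loss scaling. Accelerate applies an equivalent flow on Trainer's behalf.
import torch

model = torch.nn.Linear(16, 4).cuda()
opt = torch.optim.AdamW(model.parameters(), lr=1e-3)
scaler = torch.cuda.amp.GradScaler()

for _ in range(3):
    x = torch.randn(8, 16, device="cuda")
    y = torch.randint(0, 4, (8,), device="cuda")
    with torch.autocast(device_type="cuda", dtype=torch.float16):
        loss = torch.nn.functional.cross_entropy(model(x), y)
    scaler.scale(loss).backward()   # scale the loss so fp16 grads don't underflow
    scaler.step(opt)                # unscales grads; skips the step on inf/nan
    scaler.update()
    opt.zero_grad(set_to_none=True)
```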
|
https://github.com/huggingface/transformers/issues/42086
|
closed
|
[] | 2025-11-07T10:10:16Z
| 2025-11-13T07:58:33Z
| 2
|
quic-meetkuma
|
vllm-project/vllm
| 28,283
|
[Bug]: nccl stuck issue
|
### Your current environment
<details>
<summary>The output of <code>python collect_env.py</code></summary>
```text
Your output of `python collect_env.py` here
```
</details>
### 🐛 Describe the bug
I am using a docker container for vLLM. I noticed that when I use `nvidia/cuda:13.0.X-cudnn-devel-ubuntu24.04` with `tp > 1`, it gets stuck here: `INFO 11-07 09:24:25 [pynccl.py:111] vLLM is using nccl==2.27.5`. But it works fine with `nvidia/cuda:12.9.X-cudnn-devel-ubuntu24.04`, which I assume is the current default.
My question is: why does the CUDA image version matter so much with vLLM? I am asking because I do not see this with SGLang, where `tp > 1` still works whether I use the `12.8`, `12.9`, or even `13.0` `nvidia/cuda` image.
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/28283
|
open
|
[
"bug"
] | 2025-11-07T09:36:01Z
| 2025-11-07T09:40:17Z
| 1
|
seindum
|
pytorch/pytorch
| 167,304
|
RPC cannot run on Jetson Orin because of its GPU UUID format
|
### 🐛 Describe the bug
When running an RPC demo on Jetson Orin, the following UUID error is raised:
tensorpipe/channel/cuda_ipc/context_impl.cc:65 "uuidStr.substr(0, 4) != "GPU-"Couldn’t obtain valid UUID for GPU #0 from CUDA driver.
The UUID on Jetson does not begin with the "GPU-" prefix used by the RTX series, so the failure appears immediately.
I think tensorpipe does not support Jetson because of this hard-coded "GPU-" prefix check, and I do not know how to run RPC on Jetson. What should I do to work around this? Thanks.
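For what it's worth, a quick way to check what UUID string the driver actually reports (standard `nvidia-smi` query flags; whether `nvidia-smi` is available on the Orin image is an open question):
```python
# Print each GPU's UUID as reported by the driver and whether it carries the
# "GPU-" prefix that tensorpipe's cuda_ipc channel expects.
import subprocess

out = subprocess.run(
    ["nvidia-smi", "--query-gpu=uuid", "--format=csv,noheader"],
    capture_output=True, text=True, check=True,
)
for line in out.stdout.strip().splitlines():
    print(line, "-> starts with GPU-:", line.startswith("GPU-"))
```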
### Versions
@scw @svenstaro @JackDanger @infil00p
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @pragupta @msaroufim @dcci @ptrblck @eqy @jerryzh168 @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @gqchen @aazzolini @jjlilley @osalpekar @jiayisuse @mrzzd
|
https://github.com/pytorch/pytorch/issues/167304
|
open
|
[
"oncall: distributed",
"module: cuda",
"module: rpc"
] | 2025-11-07T09:20:00Z
| 2025-11-07T15:33:35Z
| 0
|
mamba824824
|
pytorch/torchrec
| 3,525
|
Could Torchrec support PyTorch's PrivateUse1 Dispatch Key?
|
Hello,
I've noticed that the TorchRec codebase contains many conditional checks like `if device.type == "cuda"`. Without modifying TorchRec's source code, such fixed conditional logic is not flexible enough to conveniently support third-party devices. As I understand it, PyTorch introduced the PrivateUse1 DispatchKey to address third-party device extension. Could TorchRec add support for PyTorch's PrivateUse1 DispatchKey? This would let third-party devices adopt TorchRec's functionality through PrivateUse1 without requiring code modifications.
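As a sketch of the kind of check this would enable (not TorchRec code; the backend name is only an example):
```python
# Sketch: a device check that also accepts a registered PrivateUse1 backend,
# instead of hard-coding "cuda".
import torch


def is_accelerator(device: torch.device) -> bool:
    # returns "privateuseone" unless a backend has renamed it
    private_backend = torch._C._get_privateuse1_backend_name()
    return device.type in ("cuda", private_backend)


# A third-party integration would typically have called
# torch.utils.rename_privateuse1_backend("npu") before this point ("npu" is made up).
print(is_accelerator(torch.device("cuda:0")))
print(is_accelerator(torch.device("cpu")))
```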
|
https://github.com/meta-pytorch/torchrec/issues/3525
|
open
|
[] | 2025-11-07T07:17:42Z
| 2026-01-05T22:39:04Z
| 1
|
kwgqjj
|
pytorch/pytorch
| 167,291
|
[FSDP] Support param step with fp32
|
### 🚀 The feature, motivation and pitch
In Megatron, we can keep an fp32 copy of the params: during optimizer.step, the gradients update the fp32 copy, which is then cast back to fp16. Can we do this in FSDP?
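This is largely what FSDP's mixed-precision policy already provides: the sharded parameters stay in fp32 and the optimizer steps on them, while forward/backward use a low-precision cast. A minimal sketch, assuming `torch.distributed` is already initialized and CUDA is available:
```python
# Sketch: fp32 master weights via FSDP MixedPrecision -- compute in fp16,
# gradient reduction and optimizer step in fp32.
import torch
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP, MixedPrecision

mp_policy = MixedPrecision(
    param_dtype=torch.float16,    # dtype used for forward/backward compute
    reduce_dtype=torch.float32,   # dtype used for gradient reduce-scatter
    buffer_dtype=torch.float16,
)

model = FSDP(torch.nn.Linear(1024, 1024).cuda(), mixed_precision=mp_policy)
opt = torch.optim.AdamW(model.parameters(), lr=1e-4)  # steps on the fp32 shards

x = torch.randn(8, 1024, device="cuda")
loss = model(x).float().pow(2).mean()
loss.backward()
opt.step()
```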
### Alternatives
_No response_
### Additional context
_No response_
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @pragupta @msaroufim @dcci
|
https://github.com/pytorch/pytorch/issues/167291
|
open
|
[
"oncall: distributed"
] | 2025-11-07T04:37:48Z
| 2025-11-07T15:34:42Z
| 0
|
yikaizhu-baseten
|
vllm-project/vllm
| 28,262
|
[Bug]: [gpt-oss] Responses API incorrect input/output handling
|
### Your current environment
Any env
### 🐛 Describe the bug
There is currently an implementation issue with gpt-oss on the Responses API in vLLM. This can be seen clearly in the [test which continues a conversation between API requests here](https://github.com/vllm-project/vllm/blob/4bf56c79cc252d285d0cb4f5edf323f02af735ca/tests/entrypoints/openai/test_response_api_with_harmony.py#L715).
From the first request, the model outputs the following tokens (whitespace added for clarity):
```
<|channel|>analysis<|message|>
User asks for weather in Paris today. We have no direct API call yet, but we can use get_weather function. Coordinates for Paris: latitude 48.8566, longitude 2.3522. We'll call get_weather.
<|end|>
<|start|>assistant<|channel|>commentary to=functions.get_weather <|constrain|>json<|message|>
{"latitude":48.8566,"longitude":2.3522}
<|call|>
```
When the output items from the first request are passed in as input to the second request, the tokens look like this (whitespace added for clarity):
```
<|start|>user<|message|>
What's the weather like in Paris today?
<|end|>
<|start|>assistant<|message|>
User asks for weather in Paris today. We have no direct API call yet, but we can use get_weather function. Coordinates for Paris: latitude 48.8566, longitude 2.3522. We'll call get_weather.
<|end|>
<|start|>assistant to=functions.get_weather<|channel|>commentary json<|message|>
{"latitude":48.8566,"longitude":2.3522}
<|call|>
<|start|>functions.get_weather<|message|>
20
<|end|>
```
We lose `<|channel|>analysis` on the reasoning message, and we do not set `<|channel|>commentary` on the tool call output ([documentation reference](https://cookbook.openai.com/articles/openai-harmony#handling-tool-calls)).
There are a lot of edge cases and challenges to properly represent Harmony Message metadata when the Responses API input/output types do not include that metadata, but we can improve on the current implementation.
The changes we can make are:
- A reasoning message should use the channel of the message that follows it. For example:
- The reasoning message prior to a function tool call should be on the commentary channel
- If the commentary channel is not enabled (no function tools enabled), all reasoning messages are on the analysis channel
- All other reasoning messages are on the analysis channel
- Set the content_type for function tools to be `<|constrain|>json` always
- Input items which are FunctionCallOutput should be set to be on the commentary channel
- Other types of tool related input types should be on the analysis channel
These changes would be made to [serving_responses.py](https://github.com/vllm-project/vllm/blob/main/vllm/entrypoints/openai/serving_responses.py) and [harmony_utils.py](https://github.com/vllm-project/vllm/blob/main/vllm/entrypoints/harmony_utils.py). Similar changes could be made for the chat completions path as well, but that should be out of scope for this issue.
With the changes described above, gpt-oss should have a significantly reduced error rate when outputting header tokens on longer conversations involving tools.
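For illustration only (not the proposed patch), a sketch of how the prior turn could be rebuilt with those channels, assuming `openai_harmony`'s builder methods (`from_role_and_content`, `with_channel`, `with_recipient`, `with_content_type`) behave as in the harmony examples:
```python
# Illustrative sketch: channel assignments the issue describes, expressed with the
# openai_harmony message builders. Method names are assumptions based on the
# harmony cookbook, not vLLM code.
from openai_harmony import Message, Role

reasoning = (
    Message.from_role_and_content(Role.ASSISTANT, "Coordinates for Paris: 48.8566, 2.3522.")
    .with_channel("commentary")  # commentary because a function call follows
)
tool_call = (
    Message.from_role_and_content(Role.ASSISTANT, '{"latitude":48.8566,"longitude":2.3522}')
    .with_channel("commentary")
    .with_recipient("functions.get_weather")
    .with_content_type("<|constrain|>json")  # always constrain function args to json
)
tool_output = (
    Message.from_role_and_content(Role.TOOL, "20")
    .with_channel("commentary")  # FunctionCallOutput goes on the commentary channel
)
```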
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
https://github.com/vllm-project/vllm/issues/28262
|
open
|
[
"bug"
] | 2025-11-07T02:51:56Z
| 2025-11-08T19:39:06Z
| 1
|
alecsolder
|
huggingface/lerobot
| 2,399
|
Are there plans to support LoRa fine-tuning?
|
https://github.com/huggingface/lerobot/issues/2399
|
open
|
[
"question",
"performance",
"training"
] | 2025-11-07T02:37:45Z
| 2025-11-10T10:23:33Z
| null |
Hukongtao
|
|
huggingface/candle
| 3,167
|
Qwen3-1.7B output looks wrong and doesn't stop properly
|
Candle version: main
Platform: Mac Studio M1 Max
Model: Qwen3-1.7B (downloaded with huggingface-cli)
Command:
git clone https://github.com/huggingface/candle.git
cd candle-examples
cargo run --release --example qwen -- \
--prompt "What is the speed of light?" \
--model 3-1.7b \
--tokenizer-file ../../models/qwen3-1.7b/tokenizer.json \
--weight-files "../../models/qwen3-1.7b/model-00001-of-00002.safetensors,../../models/qwen3-1.7b/model-00002-of-00002.safetensors" \
--temperature 0.3 \
--top-p 0.5 \
--repeat-penalty 1.5 \
--repeat-last-n 16
Got:
```
Qwen 3-1.7B
Running `target/release/examples/qwen --prompt 'What is the speed of light?' --model 3-1.7b --tokenizer-file ../../models/qwen3-1.7b/tokenizer.json --weight-files ../../models/qwen3-1.7b/model-00001-of-00002.safetensors,../../models/qwen3-1.7b/model-00002-of-00002.safetensors --temperature 0.3 --top-p 0.5 --repeat-penalty 1.5 --repeat-last-n 16`
avx: false, neon: true, simd128: false, f16c: false
temp: 0.30 repeat-penalty: 1.50 repeat-last-n: 16
retrieved the files in 300.917µs
Running on CPU, to run on GPU(metal), build this example with `--features metal`
loaded the model in 7.719477208s
What is the speed of light? What are its properties?
The Speed Of Light
What is the speed of light? What are its properties?
The Speed Of Light
What is the speed of light? What are its properties?
The Speed Of Light
What is the speed of light? What are its properties?
The Speed Of Light
What is the speed of light? What are its properties?
The Speed Of Light
What is the speed of light? What are its properties?
The Speed...
^C
```
|
https://github.com/huggingface/candle/issues/3167
|
open
|
[] | 2025-11-07T02:23:05Z
| 2025-11-08T07:52:18Z
| 6
|
xiuno
|
pytorch/pytorch
| 167,276
|
Dynamo Fails to Trace Python Built-in Function print in Compile Mode
|
### 🐛 Describe the bug
Description:
When running a PyTorch model in Compile mode with torch.compile(), the Dynamo tracing mechanism fails to trace the Python built-in print() function, resulting in the following error.
code:
```
import torch
import torch.nn as nn
class SimpleModel(nn.Module):
def forward(self, x):
print(f'Input stats - min: {min(x.flatten())}, max: {max(x.flatten())}, mean: {sum(x.flatten()) / len(x.flatten())}')
return x
def run_eager_and_compile():
device = "cuda" if torch.cuda.is_available() else "cpu"
model = SimpleModel().to(device)
x = torch.randn(2, 3, device=device)
try:
print("Running in Eager mode")
out_eager = model(x)
print("Eager output:", out_eager)
except Exception as e:
print("Eager error:", e)
try:
print("Running in Compile mode")
compiled_model = torch.compile(model, fullgraph=True)
out_compile = compiled_model(x)
print("Compile output:", out_compile)
except Exception as e:
print("Compile error:", e)
if __name__ == "__main__":
run_eager_and_compile()
```
output:
```
Running in Eager mode
Input stats - min: -0.002417617244645953, max: 1.318856120109558, mean: 0.6973526477813721
Eager output: tensor([[ 1.2410, 0.0111, 1.3189],
[ 1.3116, 0.3040, -0.0024]])
Running in Compile mode
Compile error: Failed to trace builtin operator
Explanation: Dynamo does not know how to trace builtin operator `print` with argument types ['<unknown type>'] (has_kwargs False)
Hint: Avoid calling builtin `print` with argument types ['<unknown type>']. Consider using an equivalent alternative function/method to `print`.
Hint: If you are attempting to call a logging function (e.g. `print`), you can try adding it to `torch._dynamo.config.reorderable_logging_functions`.
Hint: Please report an issue to PyTorch.
Developer debug context: builtin print [<class 'torch._dynamo.variables.misc.StringFormatVariable'>] False
from user code:
line 6, in forward
print(f'Input stats - min: {min(x.flatten())}, max: {max(x.flatten())}, mean: {sum(x.flatten()) / len(x.flatten())}')
Set TORCHDYNAMO_VERBOSE=1 for the internal stack trace (please do this especially if you're reporting a bug to PyTorch). For even more developer context, set TORCH_LOGS="+dynamo"
```
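For what it's worth, the second hint in the error message points at a workaround; a minimal sketch is below (whether the original f-string with `min`/`max`/`sum` over the tensor then traces cleanly still depends on the PyTorch version):
```python
# Sketch of the workaround from the error hint: register print as a reorderable
# logging function so Dynamo defers the call instead of breaking on it.
import torch
import torch._dynamo  # ensure the dynamo config module is loaded

torch._dynamo.config.reorderable_logging_functions.add(print)


@torch.compile
def f(x):
    print("running f")          # deferred to after the compiled region
    return x + 1


print(f(torch.randn(2, 3)))
```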
### Versions
PyTorch version: 2.7.1+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.1 LTS (x86_64)
GCC version: (Ubuntu 9.5.0-6ubuntu2) 9.5.0
Clang version: Could not collect
CMake version: version 4.0.3
Libc version: glibc-2.39
Python version: 3.9.7 (default, Jul 16 2025, 16:34:47) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-6.14.0-29-generic-x86_64-with-glibc2.39
Is CUDA available: False
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4060 Laptop GPU
Nvidia driver version: 580.65.06
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Vendor ID: GenuineIntel
Model name: Intel(R) Core(TM) i9-14900HX
CPU family: 6
Model: 183
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 1
Stepping: 1
CPU(s) scaling MHz: 31%
CPU max MHz: 5800.0000
CPU min MHz: 800.0000
BogoMIPS: 4838.40
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 896 KiB (24 instances)
L1i cache: 1.3 MiB (24 instances)
L2 cache: 32 MiB (12 instances)
L3 cache: 36 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-31
Vulnerability Gather data sampling: Not affected
Vulnerability Ghostwrite: Not affected
Vulnerability Indirect target selection: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file dat
|
https://github.com/pytorch/pytorch/issues/167276
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamo"
] | 2025-11-07T01:34:42Z
| 2025-11-18T19:05:11Z
| 2
|
Blooming-Tree
|
pytorch/pytorch
| 167,266
|
TorchDynamo Tracing Error: Unable to Trace Builtin bool() Operator on Tensor
|
### 🐛 Describe the bug
Description
When compiling a model with torch.compile, TorchDynamo fails to trace the builtin bool() operator when applied to PyTorch tensors, resulting in a compilation error.
Error Details:
Error Type: Tracing failure for builtin operator
Failed Operation: bool operator applied to Tensor
Specific Code: make_causal = bool((mask == 0).all())
Error Message: "Dynamo does not know how to trace builtin operator bool with argument types ['Tensor']"
code:
```
import torch
import torch.nn as nn
import torch._dynamo
torch._dynamo.config.suppress_errors = False
torch._dynamo.config.verbose = True
class BoolTensorModel(nn.Module):
def forward(self, x, mask):
make_causal = bool((mask == 0).all())
print(f"[Forward] make_causal={make_causal}")
return x + 1
def main():
x = torch.randn(2, 3)
mask = torch.zeros(2, 3)
model = BoolTensorModel()
eager_out = model(x, mask)
print("Eager mode output shape::\n", eager_out)
try:
compiled_model = torch.compile(model, fullgraph=True)
compile_out = compiled_model(x, mask)
print("Compiled mode output shape:\n", compile_out)
except Exception as e:
print("Compile error:\n", e)
if __name__ == "__main__":
main()
```
output:
```
[Forward] make_causal=True
Eager mode output shape:
tensor([[-0.0879, 1.7579, 1.2001],
[ 2.2467, 2.0874, 0.1205]])
Compile error:
Failed to trace builtin operator
Explanation: Dynamo does not know how to trace builtin operator `bool` with argument types ['Tensor'] (has_kwargs False)
Hint: Avoid calling builtin `bool` with argument types ['Tensor']. Consider using an equivalent alternative function/method to `bool`.
Hint: If you are attempting to call a logging function (e.g. `print`), you can try adding it to `torch._dynamo.config.reorderable_logging_functions`.
Hint: Please report an issue to PyTorch.
Developer debug context: builtin bool [<class 'torch._dynamo.variables.tensor.TensorVariable'>] False
from user code:
line 10, in forward
make_causal = bool((mask == 0).all())
```
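Two possible ways to avoid the data-dependent `bool()` call, as a sketch (not a claim about what the model should do):
```python
# Sketch: (1) compute the flag outside the compiled region and pass it in as a
# plain Python bool (Dynamo specializes on it), or (2) keep it as a tensor inside
# the graph and branch with torch.where.
import torch


@torch.compile(fullgraph=True)
def forward(x, make_causal: bool):
    return x + 1 if make_causal else x - 1


x = torch.randn(2, 3)
mask = torch.zeros(2, 3)
flag = bool((mask == 0).all())        # evaluated eagerly, outside the graph
print(forward(x, flag))


@torch.compile(fullgraph=True)
def forward_tensor(x, mask):
    make_causal = (mask == 0).all()   # stays a tensor; no Python bool needed
    return torch.where(make_causal, x + 1, x - 1)


print(forward_tensor(x, mask))
```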
### Versions
PyTorch version: 2.7.1+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.1 LTS (x86_64)
GCC version: (Ubuntu 9.5.0-6ubuntu2) 9.5.0
Clang version: Could not collect
CMake version: version 4.0.3
Libc version: glibc-2.39
Python version: 3.9.7 (default, Jul 16 2025, 16:34:47) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-6.14.0-29-generic-x86_64-with-glibc2.39
Is CUDA available: False
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4060 Laptop GPU
Nvidia driver version: 580.65.06
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Vendor ID: GenuineIntel
Model name: Intel(R) Core(TM) i9-14900HX
CPU family: 6
Model: 183
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 1
Stepping: 1
CPU(s) scaling MHz: 31%
CPU max MHz: 5800.0000
CPU min MHz: 800.0000
BogoMIPS: 4838.40
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 896 KiB (24 instances)
L1i cache: 1.3 MiB (24 instances)
L2 cache: 32 MiB (12 instances)
L3 cache: 36 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-31
Vulnerability Gather data sampling: Not affected
Vulnerability Ghostwrite: Not affected
Vulnerability Indirect target selection: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerabilit
|
https://github.com/pytorch/pytorch/issues/167266
|
closed
|
[
"triaged",
"oncall: pt2",
"module: dynamo",
"dynamo-triage-dec2025"
] | 2025-11-07T00:26:30Z
| 2025-12-24T03:49:22Z
| 1
|
Blooming-Tree
|
huggingface/lerobot
| 2,398
|
How to accelerate iteration over the dataset
|
Hi, I want to get the frames of a specific episode index.
When `episode_index_target` is large, like 100, the loop below takes a long time to run.
Is there any way to improve the iteration speed?
Thanks.
`lerobot.__version__ == '0.1.0'`
```python
dataset = LeRobotDataset('yananchen/robomimic_lift')
frames = []
for sample in dataset:
if sample["episode_index"] == episode_index_target:
frames.append(sample)
```
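One way to avoid the full scan, assuming your version of `LeRobotDataset` exposes `episode_data_index` (a dict of per-episode start/end frame indices), is to index the frames of one episode directly:
```python
# Sketch: jump straight to the frames of one episode via episode_data_index
# instead of iterating over every sample.
# Import path below is for lerobot 0.1.x; newer releases moved the module.
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset

dataset = LeRobotDataset("yananchen/robomimic_lift")
episode_index_target = 100

start = dataset.episode_data_index["from"][episode_index_target].item()
end = dataset.episode_data_index["to"][episode_index_target].item()

frames = [dataset[i] for i in range(start, end)]
```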
|
https://github.com/huggingface/lerobot/issues/2398
|
closed
|
[
"question"
] | 2025-11-06T21:37:33Z
| 2025-11-10T20:52:57Z
| null |
yanan1116
|