| repo | number | title | body | url | state | labels | created_at | updated_at | comments | user |
|---|---|---|---|---|---|---|---|---|---|---|
huggingface/accelerate
| 3,748
|
How to pass two layer classes using --fsdp_transformer_layer_cls_to_wrap?
|
https://github.com/huggingface/accelerate/issues/3748
|
closed
|
[] | 2025-08-26T08:56:32Z
| 2025-08-26T09:14:18Z
| null |
sunjian2015
|
|
huggingface/diffusers
| 12,239
|
Support for InfiniteTalk
|
### Model/Pipeline/Scheduler description
https://huggingface.co/MeiGen-AI/InfiniteTalk is a wonderful audio-driven video generation model based on Wan2.1 that also supports an unlimited number of frames. The demo and user workflows are also awesome. Some examples: https://www.runninghub.cn/ai-detail/1958438624956203010
### Open source status
- [x] The model implementation is available.
- [x] The model weights are available (Only relevant if addition is not a scheduler).
### Provide useful links for the implementation
https://huggingface.co/MeiGen-AI/InfiniteTalk
https://github.com/MeiGen-AI/InfiniteTalk
|
https://github.com/huggingface/diffusers/issues/12239
|
open
|
[
"help wanted",
"New pipeline/model",
"contributions-welcome"
] | 2025-08-26T06:57:43Z
| 2025-09-05T00:18:46Z
| 1
|
supermeng
|
huggingface/transformers
| 40,406
|
Cache tokenizer
|
### Feature request
I am using Grounding DINO, which makes use of the `bert-base-uncased` tokenizer. Unfortunately, this tokenizer is never downloaded to the cache, forcing a remote call to the API. Please allow the tokenizer to be cached locally.
### Motivation
I want to use my software offline.
### Your contribution
I'm trying to find a way to download it manually as a workaround.
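A minimal sketch of such a workaround, assuming the tokenizer id is `bert-base-uncased` and a local directory of your choosing (both are assumptions, not taken from the issue):
```python
# run once while online: download the tokenizer and save it locally
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
tok.save_pretrained("./bert-base-uncased-tokenizer")

# later, offline: load from the local copy instead of the Hub
tok = AutoTokenizer.from_pretrained("./bert-base-uncased-tokenizer")
```
Setting `HF_HUB_OFFLINE=1` in the environment should then keep the library from attempting any remote calls.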
|
https://github.com/huggingface/transformers/issues/40406
|
open
|
[
"Feature request"
] | 2025-08-24T08:36:14Z
| 2025-09-10T11:49:06Z
| 5
|
axymeus
|
huggingface/tokenizers
| 1,851
|
SentencePieceBPE + Unicode NFD preprocessing leads to noise ?
|
Hi,
I have had the issue multiple times, so I assume I am doing something wrong.
**Versions:**
- tokenizers==0.21.4
- transformers==4.55.4
**Training script**
```py
from transformers import PreTrainedTokenizerFast
from pathlib import Path
from read import get_texts_iter_for_tokenizer
from tokenizers import SentencePieceBPETokenizer, normalizers, pre_tokenizers
def main():
    output_dir = Path("hf_tokenizer")
    output_dir.mkdir(parents=True, exist_ok=True)
    # Dump texts to a file
    texts = get_texts_iter_for_tokenizer()
    # Train SentencePiece model
    tokenizer = SentencePieceBPETokenizer()
    # Adding normalization and pre_tokenizer
    tokenizer.normalizer = normalizers.Sequence([normalizers.NFD()])
    tokenizer.pre_tokenizer = pre_tokenizers.ByteLevel()
    # Adding special tokens and creating trainer instance
    special_tokens = ["<unk>", "<pad>", "<cls>", "<sep>", "<mask>"]
    # Training from iterator REMEMBER it's training on test set...
    tokenizer.train_from_iterator(texts, special_tokens=special_tokens, show_progress=True)
    fast_tokenizer = PreTrainedTokenizerFast(
        tokenizer_object=tokenizer,
        unk_token="<unk>",
        pad_token="<pad>",
        cls_token="<cls>",
        sep_token="<sep>",
        mask_token="<mask>",
    )
    fast_tokenizer.save_pretrained(str(output_dir))
```
Script to reproduce bug:
```py
from transformers import PreTrainedTokenizerFast
hf_tokenizer = PreTrainedTokenizerFast.from_pretrained("hf_tokenizer")
# Test
print(hf_tokenizer.tokenize("⁊ĩ rẽ dñi u̾sum"))
# ['âģĬ', 'i', 'Ìĥ', 'Ġre', 'Ìĥ', 'Ġdn', 'Ìĥ', 'i', 'Ġu', '̾', 'sum']
print(hf_tokenizer.decode(hf_tokenizer.encode("⁊ĩ rẽ dñi u̾sum")))
# âģĬiÌĥĠreÌĥĠdnÌĥiĠu̾sum
```
I assume I am doing something wrong around preprocessing / postprocessing?
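One thing worth checking, sketched below under the assumption that the ByteLevel pre-tokenizer is intended: pairing it with a ByteLevel decoder, so the byte-level symbols (`Ġ`, `Ì`, …) are mapped back to UTF-8 on decode. This is a sketch, not a confirmed fix for this issue.
```py
from tokenizers import decoders
from transformers import PreTrainedTokenizerFast

hf_tokenizer = PreTrainedTokenizerFast.from_pretrained("hf_tokenizer")
# attach a ByteLevel decoder to the underlying tokenizers.Tokenizer object
hf_tokenizer.backend_tokenizer.decoder = decoders.ByteLevel()
print(hf_tokenizer.decode(hf_tokenizer.encode("⁊ĩ rẽ dñi u̾sum")))
```
The same decoder could be set in the training script (`tokenizer.decoder = decoders.ByteLevel()`) before saving, so the saved tokenizer.json already carries it.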
|
https://github.com/huggingface/tokenizers/issues/1851
|
open
|
[] | 2025-08-24T08:28:08Z
| 2025-09-17T09:33:11Z
| 3
|
PonteIneptique
|
huggingface/coreml-examples
| 17
|
How to get absolute depth in meters?
|
How can I get absolute depth in meters?
|
https://github.com/huggingface/coreml-examples/issues/17
|
open
|
[] | 2025-08-24T03:20:58Z
| 2025-08-24T03:20:58Z
| null |
jay25208
|
huggingface/transformers
| 40,398
|
NVIDIA RADIO-L
|
### Model description
While exploring, I came across [nvidia/RADIO-L](https://huggingface.co/nvidia/RADIO-L) and was wondering about its current support.
1. May I ask if RADIO-L is already supported in Transformers?
2. If not, would it be considered suitable to add?
3. If a model requires trust_remote_code=True, what does that signify regarding its suitability for addition to Transformers?
Please share the general criteria for models to be added to Transformers.
Thank you very much for your guidance
cc: @zucchini-nlp @Rocketknight1
### Open source status
- [x] The model implementation is available
- [x] The model weights are available
### Provide useful links for the implementation
_No response_
|
https://github.com/huggingface/transformers/issues/40398
|
open
|
[
"New model"
] | 2025-08-23T11:14:42Z
| 2025-08-26T14:44:11Z
| 4
|
Uvi-12
|
pytorch/ao
| 2,862
|
Duplicated tests in test_mx_tensor.py and test_nvfp4_tensor.py?
|
seems like there are some duplicated tests, e.g. https://github.com/pytorch/ao/blob/27f4d7581f8fc6bab4ef37d54b09b6fa76c1ffe6/test/prototype/mx_formats/test_mx_tensor.py#L610 and https://github.com/pytorch/ao/blob/27f4d7581f8fc6bab4ef37d54b09b6fa76c1ffe6/test/prototype/mx_formats/test_nvfp4_tensor.py#L47
|
https://github.com/pytorch/ao/issues/2862
|
open
|
[] | 2025-08-23T03:26:13Z
| 2025-08-23T03:26:25Z
| 0
|
jerryzh168
|
huggingface/diffusers
| 12,222
|
[Contribution welcome] adding a fast test for Qwen-Image Controlnet Pipeline
|
We are looking for help from the community to add a fast test for this PR:
https://github.com/huggingface/diffusers/pull/12215
You can add a file under this folder:
https://github.com/huggingface/diffusers/tree/main/tests/pipelines/qwenimage
You can reference other tests we added for Qwen pipelines [example](https://github.com/huggingface/diffusers/blob/main/tests/pipelines/qwenimage/test_qwenimage.py), as well as controlnet fast tests [example](https://github.com/huggingface/diffusers/tree/main/tests/pipelines/controlnet_flux)
|
https://github.com/huggingface/diffusers/issues/12222
|
closed
|
[
"good first issue",
"help wanted",
"contributions-welcome"
] | 2025-08-22T21:04:50Z
| 2025-08-25T01:58:59Z
| 6
|
yiyixuxu
|
pytorch/executorch
| 13,607
|
"How to Support a Custom Model in HTP Backend" example code is out of date
|
### 📚 The doc issue
In the "How to Support a Custom Model in HTP Backend" section of the QNN backend docs, there are a few imports that do not work. It looks like they might have been moved in the code, but the docs were not updated. Specifically, the imports under `executorch.backends.qualcomm.compiler` and for `to_edge_transform_and_lower_to_qnn` need to be updated in the example code.
### Suggest a potential alternative/fix
_No response_
cc @mergennachin @byjlw @cccclai @cbilgin
|
https://github.com/pytorch/executorch/issues/13607
|
closed
|
[
"module: doc",
"module: qnn"
] | 2025-08-22T20:53:38Z
| 2025-09-30T22:34:54Z
| null |
GregoryComer
|
huggingface/diffusers
| 12,221
|
[Looking for community contribution] support DiffSynth Controlnet in diffusers
|
### Model/Pipeline/Scheduler description
Hi!
We want to add first party support for DiffSynth controlnet in diffusers, and we are looking for some help from the community!
Let me know if you're interested!
### Open source status
- [x] The model implementation is available.
- [x] The model weights are available (Only relevant if addition is not a scheduler).
### Provide useful links for the implementation
https://huggingface.co/SahilCarterr/Qwen-Image-Blockwise-ControlNet-Canny
https://huggingface.co/SahilCarterr/Qwen-Image-Blockwise-ControlNet-Depth
|
https://github.com/huggingface/diffusers/issues/12221
|
open
|
[
"help wanted",
"Good second issue",
"contributions-welcome"
] | 2025-08-22T20:49:18Z
| 2025-09-11T10:01:08Z
| 5
|
yiyixuxu
|
pytorch/xla
| 9,578
|
API for disabling SPMD?
|
The side effects of use_spmd() do not seem reversible through any obvious APIs.
https://github.com/pytorch/xla/blob/6b6ef5c7d757f955565b2083c48d936bfd758dcd/torch_xla/runtime.py#L191-L231
Is there some mechanism to do this?
|
https://github.com/pytorch/xla/issues/9578
|
open
|
[
"enhancement",
"distributed"
] | 2025-08-22T19:28:18Z
| 2025-08-23T13:49:31Z
| 1
|
jameszianxuTT
|
huggingface/safetensors
| 649
|
How to determine if a file is a safetensor file
|
Is there a good and fast way to determine whether a file is a safetensors file? We would like to avoid reading the whole header.
Background: we are currently trying to add safetensors as a datatype to the Galaxy project: https://github.com/galaxyproject/galaxy/pull/20754
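A heuristic sketch of such a check, based on the documented safetensors layout (an 8-byte little-endian header length followed by a JSON header); the function name and probe size are illustrative:
```python
import os
import struct

def looks_like_safetensors(path, probe_bytes=8):
    # check the 8-byte little-endian header length and that the header
    # starts like a JSON object, without reading the full header
    size = os.path.getsize(path)
    if size < 8:
        return False
    with open(path, "rb") as f:
        (header_len,) = struct.unpack("<Q", f.read(8))
        if header_len == 0 or 8 + header_len > size:
            return False
        prefix = f.read(min(probe_bytes, header_len))
    return prefix.lstrip(b" ").startswith(b"{")
```
This only reads a handful of bytes, at the cost of being a plausibility check rather than a full validation.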
|
https://github.com/huggingface/safetensors/issues/649
|
open
|
[] | 2025-08-22T09:17:49Z
| 2025-09-03T11:08:30Z
| null |
bernt-matthias
|
huggingface/lerobot
| 1,775
|
What's the finetuning method? Is it all full-finetuning?
|
I couldn't find anything about LoRA finetuning; is the default method full finetuning for now?
|
https://github.com/huggingface/lerobot/issues/1775
|
closed
|
[
"question",
"policies"
] | 2025-08-22T06:48:25Z
| 2025-10-07T20:55:10Z
| null |
lin-whale
|
huggingface/lerobot
| 1,774
|
Finetune smolvla with vision encoder
|
### System Info
```Shell
- `lerobot` version: 0.1.0
- Platform: Linux-6.8.0-65-generic-x86_64-with-glibc2.35
- Python version: 3.10.18
- Huggingface_hub version: 0.33.4
- Dataset version: 3.6.0
- Numpy version: 2.2.6
- PyTorch version (GPU?): 2.7.1+cu126 (True)
- Cuda version: 12060
- Using GPU in script?: <fill in>
```
### Information
- [ ] One of the scripts in the examples/ folder of LeRobot
- [x] My own task or dataset (give details below)
### Reproduction
nothing
### Expected behavior
I found that when attempting to fine-tune the model to grasp objects of different colors but identical shapes, it consistently grasped the wrong object. I also found that the output feature differences from the VLM for the same image, such as “grasp the green duck into the box” versus “grasp the yellow duck into the box,” were nearly zero. Is it possible that the VLM has weak color differentiation capabilities? Could official support be added for fine-tuning the vision encoder together?
|
https://github.com/huggingface/lerobot/issues/1774
|
open
|
[
"question",
"policies",
"good first issue"
] | 2025-08-22T05:20:58Z
| 2025-10-08T11:31:02Z
| null |
THU-yancow
|
huggingface/transformers
| 40,366
|
[Feature] Support fromjson in jinja2 chat template rendering
|
### Feature request
GLM-4.5 requires `fromjson` in jinja2 to deserialize the str-typed `tool_calls.function.arguments` into a dict within the chat template, so it can iterate over the key-value pairs of `arguments` inside the jinja2 chat template.
```
{% for tc in m.tool_calls %}
{%- if tc.function %}
{%- set tc = tc.function %}
{%- endif %}
{{ '\n<tool_call>' + tc.name }}
{% set _args = tc.arguments | fromjson %}
{% for k, v in _args.items() %}
<arg_key>{{ k }}</arg_key>
<arg_value>{{ v | tojson(ensure_ascii=False) if v is not string else v }}</arg_value>
{% endfor %}
</tool_call>{% endfor %}
{% endif %}
```
https://huggingface.co/zai-org/GLM-4.5/blob/main/chat_template.jinja#L75
### Motivation
GLM-4.5 requires `fromjson` in jinja2 to deserialize the str-typed `tool_calls.function.arguments` into a dict within the chat template, so it can iterate over the key-value pairs of `arguments` inside the jinja2 chat template.
```
{% for tc in m.tool_calls %}
{%- if tc.function %}
{%- set tc = tc.function %}
{%- endif %}
{{ '\n<tool_call>' + tc.name }}
{% set _args = tc.arguments | fromjson %}
{% for k, v in _args.items() %}
<arg_key>{{ k }}</arg_key>
<arg_value>{{ v | tojson(ensure_ascii=False) if v is not string else v }}</arg_value>
{% endfor %}
</tool_call>{% endfor %}
{% endif %}
```
https://huggingface.co/zai-org/GLM-4.5/blob/main/chat_template.jinja#L75
### Your contribution
I will submit a PR
|
https://github.com/huggingface/transformers/issues/40366
|
open
|
[
"Feature request"
] | 2025-08-22T05:11:06Z
| 2025-08-22T05:18:45Z
| 1
|
byjiang1996
|
huggingface/peft
| 2,749
|
Set multiple adapters actively when training
|
Hi! In incremental scenarios, I want to train a new adapter while keeping some old adapters active. Note that PeftModel can set the active adapter via `model.set_adapter()`, but it can only set one adapter at a time, since the `adapter_name` argument is typed as `str` rather than `List[str]`. I also noticed that the `PeftMixedModel` class can set multiple adapters active, but it only supports inference, and it uses `model.base_model.set_adapter()` to achieve this. So I am not sure whether I can also set multiple adapters active during training. My code is as follows:
```python
model = AutoModelForCausalLM.from_pretrained()
peft_config = LoraConfig()
model = get_peft_model(model, peft_config, adapter_name="new")
model.load_adapter(adapter_path, adapter_name="old")
model.base_model.set_adapter(["new", "old"])
for name, param in model.named_parameters():
    if "lora_A.old" in name or "lora_B.old" in name:
        param.requires_grad = False
training_args = TrainingArguments()
trainer = Trainer()
trainer.train()
```
|
https://github.com/huggingface/peft/issues/2749
|
closed
|
[] | 2025-08-21T09:59:25Z
| 2025-09-29T15:04:15Z
| 4
|
Yongyi-Liao
|
pytorch/torchtitan
| 1,612
|
PP doesn't work with FlexAttention
|
Today PP doesn't work with FlexAttention block causal masking, because PP can't receive `eos_id` as a non-Tensor input (nor can it receive a mask function).
https://github.com/pytorch/torchtitan/blob/main/torchtitan/train.py#L433
This regression is coming from a recent refactor https://github.com/pytorch/torchtitan/pull/1424 to move `eos_id` out of `ModelArgs`, to remove dependency from model to tokenizer.
This is blocking optimizations from https://github.com/pytorch/torchtitan/pull/1610.
|
https://github.com/pytorch/torchtitan/issues/1612
|
closed
|
[
"module: pipelining",
"high priority",
"module: flex attention",
"triage review"
] | 2025-08-21T07:25:15Z
| 2025-08-22T15:35:06Z
| 0
|
tianyu-l
|
huggingface/lerobot
| 1,765
|
Questions about using LIBERO dataset (loss starts extremely high)
|
Hello,
I am training on the "**IPEC-COMMUNITY/libero_spatial_no_noops_1.0.0_lerobot**" dataset, but I encountered an issue (here is the dataset: https://huggingface.co/datasets/IPEC-COMMUNITY/libero_spatial_no_noops_1.0.0_lerobot):
At the very beginning of training, the loss is extremely high (around 500).
I would like to clarify a few points:
Is the policy output expected to be relative actions or absolute actions?
Do I need to perform any preprocessing on the dataset? For example:
Normalizing the gripper action to the range [-1, 1]?
Any other scaling or transformation?
What is the exact relationship between the action and state in the dataset?
I noticed that trajectories sometimes look different than expected (shown in the figure below).
Do we need to process either the action or state to align them?
Any guidance on the correct usage of the dataset would be greatly appreciated. Thanks!
<img width="1229" height="592" alt="Image" src="https://github.com/user-attachments/assets/b1102728-4916-405f-9a87-ab190b07f58b" />
|
https://github.com/huggingface/lerobot/issues/1765
|
open
|
[
"question",
"dataset",
"simulation"
] | 2025-08-21T05:06:51Z
| 2025-09-23T09:46:41Z
| null |
hamondyan
|
huggingface/transformers
| 40,330
|
open-qwen2vl-base
|
### Model description
Is there any plan to add the open-qwen2vl-base model?
### Open source status
- [x] The model implementation is available
- [x] The model weights are available
### Provide useful links for the implementation
_No response_
|
https://github.com/huggingface/transformers/issues/40330
|
open
|
[
"New model"
] | 2025-08-21T02:24:01Z
| 2025-08-23T10:18:28Z
| 5
|
olccihyeon
|
huggingface/tokenizers
| 1,850
|
Safe encoding of strings that might contain special token text
|
When feeding untrusted string inputs into an LLM, it's often important not to convert any of the input into special tokens, which might indicate message boundaries or other syntax. Among other reasons, this is important for guarding against prompt injection attacks.
tiktoken provides a way to control how the encoding deals with special tokens, using the `allowed_special` and `disallowed_special` arguments. For example:
```python
enc = tiktoken.get_encoding("o200k_base")
enc.encode("<|endoftext|>", disallowed_special=[]) # => [27, 91, 419, 1440, 919, 91, 29]
enc.encode("<|endoftext|>") # => ValueError
enc.encode("<|endoftext|>", allowed_special=set(["<|endoftext|>"]))  # => [199999]
```
However, I can't figure out how to avoid tokenizing strings like `<|im_start|>` into special tokens when using the tokenizers library. Note that I want to be able to *decode* the special token to its string representation for visualization. At the same time, I want to make sure that when I call `encode`, I don't get a special token -- the string representation should be tokenized as if there were no `<|im_start|>` special token.
Maybe the easiest way to do this is to create two separate tokenizers, by creating new json files, but this is pretty inconvenient.
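One approach worth noting, with the caveat that it goes through the transformers wrapper rather than the bare `tokenizers.Tokenizer` (an assumption about the setup, not something stated in the issue): recent transformers releases expose a `split_special_tokens` flag that forces special-token text to be encoded as ordinary text, while decoding still maps special ids back to their strings. Worth verifying against your installed version.
```python
# sketch: encode untrusted text without emitting special tokens
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct")  # placeholder model id
safe = tok("<|im_start|>", add_special_tokens=False, split_special_tokens=True)["input_ids"]
print(tok.decode(safe))  # round-trips the literal text, no special id involved
```
If staying on the raw `tokenizers` API is a hard requirement, maintaining two tokenizer instances (one with the special tokens added, one without) remains the straightforward fallback the issue already mentions.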
|
https://github.com/huggingface/tokenizers/issues/1850
|
closed
|
[] | 2025-08-21T00:53:17Z
| 2025-09-01T18:03:59Z
| 5
|
joschu
|
pytorch/ao
| 2,828
|
[fp8 blockwise training] add benchmarking scripts comparing triton quantization kernels vs torch.compile
|
## Summary
- We currently have benchmarking scripts comparing bf16 GEMMs vs Triton fp8 groupwise/blockwise GEMMs vs torch.compile generated fp8 groupwise/blockwise GEMMs [here](https://github.com/pytorch/ao/tree/main/benchmarks/prototype/blockwise_fp8_training)
- However, we have no benchmarks mentioning the quantization kernels and doing memory bandwidth calculations on them.
- We need isolated perf benchmarking for these, in order to (1) evaluate options, such as torch.compile vs handwritten kernels, and (2) measure perf improvements/regressions from changes (a generic timing sketch is included after the kernel list below)
## Example
- An example of a benchmarking script for quantization kernel (with mem bw calcs) can be found [here](https://github.com/pytorch/ao/blob/main/benchmarks/prototype/moe_training/benchmark_rowwise_3d_quant_kernels.py). This can be used as a starting point. For consistency with other benchmarking tooling, please use the same generic infra (`ExperimentConfig`, `ExperimentResult` etc).
## Kernels to benchmark
- [fp8_blockwise_act_quant_lhs](https://github.com/pytorch/ao/blob/8812365a78c392e866e9007960875cb6d0678fda/torchao/prototype/blockwise_fp8_training/kernels.py#L307C5-L307C32)
- [fp8_blockwise_act_quant_rhs](https://github.com/pytorch/ao/blob/8812365a78c392e866e9007960875cb6d0678fda/torchao/prototype/blockwise_fp8_training/kernels.py#L387C5-L387C32)
- [fp8_blockwise_act_quant_transposed_lhs](https://github.com/pytorch/ao/blob/8812365a78c392e866e9007960875cb6d0678fda/torchao/prototype/blockwise_fp8_training/kernels.py#L486)
- [fp8_blockwise_weight_quant_rhs](https://github.com/pytorch/ao/blob/8812365a78c392e866e9007960875cb6d0678fda/torchao/prototype/blockwise_fp8_training/kernels.py#L571)
- [fp8_blockwise_weight_quant_transposed_rhs](https://github.com/pytorch/ao/blob/8812365a78c392e866e9007960875cb6d0678fda/torchao/prototype/blockwise_fp8_training/kernels.py#L672)
- [torch_blockwise_scale_act_quant_lhs](https://github.com/pytorch/ao/blob/8812365a78c392e866e9007960875cb6d0678fda/torchao/prototype/blockwise_fp8_training/kernels.py#L713C5-L713C40) (pytorch reference implementation, bench with torch.compile)
- [torch_blockwise_scale_act_quant_rhs](https://github.com/pytorch/ao/blob/8812365a78c392e866e9007960875cb6d0678fda/torchao/prototype/blockwise_fp8_training/kernels.py#L744C5-L744C40) (pytorch reference implementation, bench with torch.compile)
- [torch_blockwise_scale_weight_quant](https://github.com/pytorch/ao/blob/8812365a78c392e866e9007960875cb6d0678fda/torchao/prototype/blockwise_fp8_training/kernels.py#L803) (pytorch reference implementation, bench with torch.compile)
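As a starting point, here is a generic timing sketch (not the `ExperimentConfig`/`ExperimentResult` infra referenced above) that times a quantization function with CUDA events and derives an approximate memory bandwidth; the stand-in cast and the byte accounting are assumptions for illustration:
```python
import torch

def bench_gbps(fn, x, bytes_rw, warmup=10, iters=100):
    # time fn(x) with CUDA events and return achieved bandwidth in GB/s
    for _ in range(warmup):
        fn(x)
    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    torch.cuda.synchronize()
    start.record()
    for _ in range(iters):
        fn(x)
    end.record()
    torch.cuda.synchronize()
    ms = start.elapsed_time(end) / iters
    return bytes_rw / (ms * 1e-3) / 1e9

x = torch.randn(8192, 8192, device="cuda", dtype=torch.bfloat16)
quant = torch.compile(lambda t: t.to(torch.float8_e4m3fn))  # stand-in for a real quant kernel
# rough traffic estimate: read bf16 (2 B/elem) + write fp8 (1 B/elem); scales ignored
print(f"{bench_gbps(quant, x, bytes_rw=x.numel() * 3):.1f} GB/s")
```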
|
https://github.com/pytorch/ao/issues/2828
|
open
|
[] | 2025-08-21T00:35:55Z
| 2025-08-21T00:37:13Z
| 0
|
danielvegamyhre
|
pytorch/torchtitan
| 1,605
|
How could I run the DeepSpeed-Megatron gpt_model in TorchTitan?
|
Here is the model I would like to run with TorchTitan
https://github.com/deepspeedai/Megatron-DeepSpeed/blob/main/megatron/model/gpt_model.py#L188 .
Any recommendations would be appreciated.
|
https://github.com/pytorch/torchtitan/issues/1605
|
closed
|
[
"question"
] | 2025-08-20T18:55:50Z
| 2025-08-21T02:34:22Z
| null |
githubsgi
|
huggingface/peft
| 2,746
|
Gemma 2/3 Attention: Expected a single attention mask, got 2 instead
|
Hi! I'm getting this error `ValueError: Expected a single attention mask, got 2 instead` at inference (after prompt tuning)--I've only had this happen with the Gemma 2 and 3 models, so it might have something to do with their specific attention mechanism. Is there a workaround (or am I maybe missing something)?
I'm running the following:
```
model_name = "google/gemma-2-2b"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16, device_map="auto")
soft_model = get_peft_model(model, prompt_config)
inputs = tokenizer(model_instruction, return_tensors="pt")
outputs = soft_model.generate(
input_ids=inputs["input_ids"],
attention_mask=inputs["attention_mask"],
max_new_tokens=num_gen_tokens,
eos_token_id=tokenizer.eos_token_id,
)
```
|
https://github.com/huggingface/peft/issues/2746
|
closed
|
[] | 2025-08-20T18:08:02Z
| 2025-08-27T02:43:22Z
| 8
|
michelleezhang
|
huggingface/transformers
| 40,323
|
Is there a plan to add DINOv3 into AutoBackbone?
|
### Feature request
Is there a plan to add DINOv3 to AutoBackbone? At present, DINOv2 is already supported, and I think DINOv3 should be able to inherit from it directly. Appreciate it a lot.
### Motivation
For the convenience of use
### Your contribution
DINOv3 should be able to inherit from DINOv2 directly.
|
https://github.com/huggingface/transformers/issues/40323
|
closed
|
[
"Feature request",
"Vision"
] | 2025-08-20T16:02:45Z
| 2025-11-11T16:22:08Z
| 4
|
Farenweh
|
pytorch/pytorch
| 161,060
|
[Question] How to robustly prevent operator fusion in Inductor to work around a compilation bug?
|
### 🐛 Describe the bug
I've encountered a Triton compilation failure when using torch.compile with the AOT Inductor backend. The issue appears in a model that uses a computation pattern similar to Rotary Position Embeddings (RoPE).
I'm opening this issue in advance while I work on creating a minimal, self-contained reproducer for a compiler bug, as that process may take some time. My immediate goal is to seek advice on how to effectively workaround the issue.
The source code looks like this:
```Python
_to_copy_default_17 = torch.ops.aten._to_copy.default(detach_default_8, dtype = torch.int64, layout = torch.strided, device = device(type='cuda', index=0))
unsqueeze_default_86 = torch.ops.aten.unsqueeze.default(_to_copy_default_17, 1); _to_copy_default_17 = None
unsqueeze_default_87 = torch.ops.aten.unsqueeze.default(unsqueeze_default_86, -1); unsqueeze_default_86 = None
_tensor_constant12 = self._tensor_constant12
mul_tensor_5 = torch.ops.aten.mul.Tensor(unsqueeze_default_87, _tensor_constant12); unsqueeze_default_87 = _tensor_constant12 = None
cos_default_1 = torch.ops.aten.cos.default(mul_tensor_5)
sin_default_1 = torch.ops.aten.sin.default(mul_tensor_5); mul_tensor_5 = None
split_tensor_1 = torch.ops.aten.split.Tensor(transpose_int_1, 64, -1); transpose_int_1 = None
getitem_6 = split_tensor_1[0]
getitem_7 = split_tensor_1[1]; split_tensor_1 = None
mul_tensor_6 = torch.ops.aten.mul.Tensor(getitem_6, cos_default_1)
mul_tensor_7 = torch.ops.aten.mul.Tensor(getitem_7, sin_default_1)
return mul_tensor_7
```
And my demo code is:
```
with torch.inference_mode():
    with torch.amp.autocast(
        device_type="cuda", enabled=True, dtype=torch.float16
    ):
        exported_model = torch.export.export(
            mod=model,
            args=(),
            kwargs=inputs_dict,
            dynamic_shapes={k: {0: torch.export.Dim.STATIC} for k in inputs_dict.keys()},
        )
        inductor_configs = {
            "max_autotune": False,
        }
        aoti_package_path = torch._inductor.aoti_compile_and_package(
            exported_model,
            package_path=os.path.join(os.path.dirname(__file__), "wenqi_ele_0820.pt2"),
            inductor_configs=inductor_configs,
        )
```
The compiler attempts to create a large fused kernel, but the generated Triton code is invalid, leading to a NameError: 'zuf0' is not defined during compilation. I am working on creating a minimal, self-contained reproducer and will provide it as soon as it's ready.
```
E0820 22:49:40.184000 162221 /home/admin/zy429782/miniforge3/envs/pytorch271/lib/python3.11/site-packages/torch/_inductor/runtime/triton_heuristics.py:539] module = src.make_ir(options, codegen_fns, module_map, context)
E0820 22:49:40.184000 162221 /home/admin/zy429782/miniforge3/envs/pytorch271/lib/python3.11/site-packages/torch/_inductor/runtime/triton_heuristics.py:539] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
E0820 22:49:40.184000 162221 /home/admin/zy429782/miniforge3/envs/pytorch271/lib/python3.11/site-packages/torch/_inductor/runtime/triton_heuristics.py:539] File "/home/zy429782/miniforge3/envs/pytorch271/lib/python3.11/site-packages/triton/compiler/compiler.py", line 81, in make_ir
E0820 22:49:40.184000 162221 /home/admin/zy429782/miniforge3/envs/pytorch271/lib/python3.11/site-packages/torch/_inductor/runtime/triton_heuristics.py:539] return ast_to_ttir(self.fn, self, context=context, options=options, codegen_fns=codegen_fns,
E0820 22:49:40.184000 162221 /home/admin/zy429782/miniforge3/envs/pytorch271/lib/python3.11/site-packages/torch/_inductor/runtime/triton_heuristics.py:539] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
E0820 22:49:40.184000 162221 /home/admin/zy429782/miniforge3/envs/pytorch271/lib/python3.11/site-packages/torch/_inductor/runtime/triton_heuristics.py:539] triton.compiler.errors.CompilationError: at 21:12:
E0820 22:49:40.184000 162221 /home/admin/zy429782/miniforge3/envs/pytorch271/lib/python3.11/site-packages/torch/_inductor/runtime/triton_heuristics.py:539] tmp3 = tl.load(in_ptr1 + (0))
E0820 22:49:40.184000 162221 /home/admin/zy429782/miniforge3/envs/pytorch271/lib/python3.11/site-packages/torch/_inductor/runtime/triton_heuristics.py:539] tmp4 = tl.broadcast_to(tmp3, [XBLOCK])
E0820 22:49:40.184000 162221 /home/admin/zy429782/miniforge3/envs/pytorch271/lib/python3.11/site-packages/torch/_inductor/runtime/triton_heuristics.py:539] tmp16 = tl.load(in_ptr2 + (x1), xmask, eviction_policy='evict_last')
E0820 22:49:40.184000 162221 /home/admin/zy429782/miniforge3/envs/pytorch271/lib/python3.11/site-packages/torch/_inductor/runtime/triton_heuristics.py:539] tmp22 = tl.load(in_ptr3 + (x0), xmask, eviction_policy='evict_last'
|
https://github.com/pytorch/pytorch/issues/161060
|
closed
|
[
"oncall: pt2"
] | 2025-08-20T15:44:48Z
| 2025-08-21T10:10:23Z
| null |
sujuyu
|
pytorch/torchrec
| 3,298
|
apply 2d parallel but how to save and restore weights
|
How can weights be saved and restored when applying 2D parallel?
|
https://github.com/meta-pytorch/torchrec/issues/3298
|
closed
|
[] | 2025-08-20T10:42:19Z
| 2025-08-21T01:21:42Z
| 0
|
zxr888
|
pytorch/ao
| 2,811
|
NVFP4Tensor to_copy is wrong?
|
```
>>> from torchao.prototype.mx_formats.nvfp4_tensor import NVFP4Tensor
>>> import torch
>>> torch.ops.aten._to_copy(NVFP4Tensor.to_nvfp4(torch.randn((32, 128))), dtype=torch.bfloat16)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/andrewor/local/pytorch/torch/_ops.py", line 1254, in __call__
return self._op(*args, **kwargs)
File "/home/andrewor/local/ao/torchao/prototype/mx_formats/nvfp4_tensor.py", line 137, in __torch_dispatch__
return NVFP4_OPS_TABLE[func](func, types, args, kwargs)
File "/home/andrewor/local/ao/torchao/prototype/mx_formats/nvfp4_tensor.py", line 316, in nvfp4_to_copy
tensor._data,
AttributeError: 'NVFP4Tensor' object has no attribute '_data'
```
Seems like this should be `tensor.qdata`, and also it should be the [first argument](https://github.com/pytorch/ao/blob/083361bc3f7addc505a0f994a923f4ae9f54388e/torchao/prototype/mx_formats/nvfp4_tensor.py#L93)?
https://github.com/pytorch/ao/blob/083361bc3f7addc505a0f994a923f4ae9f54388e/torchao/prototype/mx_formats/nvfp4_tensor.py#L311-L322
|
https://github.com/pytorch/ao/issues/2811
|
closed
|
[] | 2025-08-19T23:39:31Z
| 2025-08-22T21:59:15Z
| 0
|
andrewor14
|
pytorch/xla
| 9,569
|
Remove excessive warn message in maybe_get_jax as it creates too many log lines during training
|
## 🐛 Bug
The maybe_get_jax() function in torch_xla/_internal/jax_workarounds.py merged in #9521 currently emits a warning message when JAX is not installed. While informative, this warning results in an excessive number of log lines during training workloads, cluttering the logs and making it difficult to spot genuinely important debug messages.
## To Reproduce
Steps to reproduce the behavior:
1. Create Python Virtual Environment (python3 -m venv ptxla_28) on Ubuntu 22.04
2. pip install torch==2.8.0 torchvision; pip install torch_xla==2.8.0
3. Create a small Python script (let's call it trigger_warning.py)
```
import sys
sys.path.insert(0, 'ptxla_28/lib/python3.10/site-packages')
from torch_xla._internal.jax_workarounds import maybe_get_jax
maybe_get_jax()
```
4. Execute the script: `bash -c "source ptxla_28/bin/activate && python trigger_warning.py"`
5. You should see a warning message like the one below:
```
WARNING:root:Defaulting to PJRT_DEVICE=CPU
WARNING:root:You are trying to use a feature that requires jax/pallas.You can install Jax/Pallas via pip install torch_xla[pallas]
```
## Expected behavior
Remove or suppress this warning message, or limit it to display only once per process/session instead of for every invocation.
## Environment
- Reproducible on XLA backend [CPU/TPU/CUDA]: CPU
- torch_xla version: 2.8.0
- Relevant Code:
https://github.com/pytorch/xla/blob/0f56dec9a33a993d4c14cb755bdd25490cabba21/torch_xla/_internal/jax_workarounds.py#L61
## Additional context
The current behavior results in thousands of lines of repeated warnings when running workloads that do not require JAX, negatively impacting developer experience. Reducing or removing this warning will significantly clean up logs for users running long or large-scale training jobs, improving usability without sacrificing relevant error reporting.
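For reference, a generic warn-once sketch of the behavior requested above (this is illustrative Python, not the actual torch_xla code):
```python
import logging

_warned_missing_jax = False

def warn_missing_jax_once() -> None:
    # emit the jax/pallas warning at most once per process
    global _warned_missing_jax
    if not _warned_missing_jax:
        logging.warning(
            "You are trying to use a feature that requires jax/pallas. "
            "You can install Jax/Pallas via pip install torch_xla[pallas]"
        )
        _warned_missing_jax = True
```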
|
https://github.com/pytorch/xla/issues/9569
|
open
|
[
"performance",
"usability",
"2.8 release"
] | 2025-08-19T20:27:24Z
| 2025-10-11T02:52:17Z
| 10
|
rajkthakur
|
pytorch/TensorRT
| 3,786
|
How to convert a AMP trained model to get best performance and speed?
|
According to the doc https://docs.pytorch.org/TensorRT/user_guide/mixed_precision.html, we can convert a model with this project when the parameter precisions are explicitly stated in the code. But when I train a model with torch AMP GradScaler, where no precision is tagged in the model code, can we use this method to get a converted checkpoint with the best performance and inference speedup?
In fact, we have tried the torch pt -> onnx -> tensorrt fp16 pipeline to convert a PyTorch AMP-trained checkpoint into the TRT model format, but the inference results are noisy, while the pt -> onnx -> tensorrt fp32 pipeline yields a TRT fp32 model whose inference is slower than what we need.
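A hedged sketch of the Torch-TensorRT route, where the model class and input shape are placeholders rather than details from the issue; enabling both fp32 and fp16 lets TensorRT keep sensitive layers in fp32, which is often the relevant knob when a pure-fp16 export is noisy:
```python
import torch
import torch_tensorrt

model = MyModel().eval().cuda()  # hypothetical AMP-trained model, weights stored in fp32
trt_model = torch_tensorrt.compile(
    model,
    inputs=[torch_tensorrt.Input((1, 3, 224, 224), dtype=torch.float32)],  # placeholder shape
    enabled_precisions={torch.float32, torch.float16},  # let TRT choose fp16 where it is safe
)
```
Whether this matches the accuracy of the AMP training run still needs to be validated on your own data.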
|
https://github.com/pytorch/TensorRT/issues/3786
|
open
|
[] | 2025-08-19T07:30:31Z
| 2025-10-23T00:20:02Z
| null |
JohnHerry
|
huggingface/transformers
| 40,263
|
[VLMs] How to process a batch that contains samples with and without images?
|
Is there a **standard** way to process a batch that contains samples with and without images?
For example:
```python
from transformers import AutoProcessor
from PIL import Image
import numpy as np
model_id = ... # tested are "google/gemma-3-4b-it", "HuggingFaceM4/idefics2-8b", "HuggingFaceM4/Idefics3-8B-Llama3", "HuggingFaceTB/SmolVLM2-2.2B-Instruct", "llava-hf/llava-1.5-7b-hf", "llava-hf/llava-v1.6-mistral-7b-hf", "OpenGVLab/InternVL3-8B-hf", "Qwen/Qwen2-VL-2B-Instruct","Qwen/Qwen2.5-VL-3B-Instruct"]
processor = AutoProcessor.from_pretrained(model_id)
messages = [
[{"role": "user", "content": [{"type": "text", "text": "What's the capital of France?"}]}],
[{"role": "user", "content": [{"type": "image"}, {"type": "text", "text": "What is it?"}]}],
]
texts = processor.apply_chat_template(messages)
image = Image.fromarray(
np.random.uniform(low=0.0, high=255.0, size=(32, 48, 3)).astype(np.uint8)
)
images = [[], [image]]
processor(images=images, text=texts)
```
This fails for all models I tested.
```python
images=[image] # The only syntax I found that works for some models: llava-hf/llava-1.5-7b-hf, llava-hf/llava-v1.6-mistral-7b-hf, OpenGVLab/InternVL3-8B-hf, Qwen/Qwen2-VL-2B-Instruct, Qwen/Qwen2.5-VL-3B-Instruct
images = [None, [image]] # always fails
images = [None, image] # always fails
images = [[], [image]] # always fails
```
### Expected behavior
There should be a standard / documented way to batch process mixed inputs (some samples with images, some without).
|
https://github.com/huggingface/transformers/issues/40263
|
closed
|
[] | 2025-08-19T05:09:36Z
| 2025-09-18T08:08:51Z
| null |
qgallouedec
|
huggingface/diffusers
| 12,185
|
What's the difference between DreamBooth LoRA and traditional LoRA?
|
I see a lot of examples using DreamBooth LoRA training code. What's the difference between this and traditional LoRA training? Can this DreamBooth LoRA training code be adapted to standard SFT LoRA code? Does disabling `with_prior_preservation` revert to normal LoRA training?
|
https://github.com/huggingface/diffusers/issues/12185
|
open
|
[] | 2025-08-19T03:32:30Z
| 2025-08-19T15:04:22Z
| 3
|
MetaInsight7
|
huggingface/trl
| 3,918
|
How to use trl-SFTTrainer to train Qwen-30B-A3B?
|
Has anyone tried using TRL to train Qwen-30B-A3B-Instruct-2507?
|
https://github.com/huggingface/trl/issues/3918
|
open
|
[
"❓ question"
] | 2025-08-19T03:04:36Z
| 2025-08-19T03:11:30Z
| null |
JeffWb
|
huggingface/datasets
| 7,739
|
Replacement of "Sequence" feature with "List" breaks backward compatibility
|
PR #7634 replaced the Sequence feature with List in 4.0.0, so datasets saved with version 4.0.0 with that feature cannot be loaded by earlier versions. There is no clear option in 4.0.0 to use the legacy feature type to preserve backward compatibility.
Why is this a problem? I have a complex preprocessing and training pipeline dependent on 3.6.0; we manage a very large number of separate datasets that get concatenated during training. If just one of those datasets is saved with 4.0.0, they become unusable, and we have no way of "fixing" them. I can load them in 4.0.0 but I can't re-save with the legacy feature type, and I can't load it in 3.6.0 for obvious reasons.
Perhaps I'm missing something here, since the PR says that backward compatibility is preserved; if so, it's not obvious to me how.
|
https://github.com/huggingface/datasets/issues/7739
|
open
|
[] | 2025-08-18T17:28:38Z
| 2025-09-10T14:17:50Z
| 1
|
evmaki
|
huggingface/gsplat.js
| 119
|
How to 4DGS (.splatv)
|
How can I generate the .splatv file and get it running on my local server?
|
https://github.com/huggingface/gsplat.js/issues/119
|
open
|
[] | 2025-08-18T07:35:04Z
| 2025-08-18T07:35:04Z
| null |
CetosEdit
|
huggingface/diffusers
| 12,165
|
Failed to finetune the pre-trained model of 'stable-diffusion-v1-4' on image inpainting task
|
I finetuned the pre-trained 'stable-diffusion-inpainting' model on the image inpainting task, and everything works well since that model was trained for inpainting. But when I finetuned the pre-trained 'stable-diffusion-v1-4' model, which was trained for text-to-image, the loss is NaN and the result is pure black.
As the two models have different input channels for the UNet, I changed the UNet input channels of 'stable-diffusion-v1-4' to fit the image inpainting task. So far, the code can run, but the loss is NaN. I do not know where the problem is. How can I finetune the pre-trained 'stable-diffusion-v1-4' model on the image inpainting task? Should I change some hyperparameters? Any help will be appreciated, thanks!
|
https://github.com/huggingface/diffusers/issues/12165
|
closed
|
[] | 2025-08-17T07:15:36Z
| 2025-09-07T09:35:38Z
| 7
|
micklexqg
|
pytorch/pytorch
| 160,833
|
How to address the bug 'unwaited collective calls' when using DTensor?
|
### 🐛 Describe the bug
I have called .wait() like this:
```
def custom_wait(_dtensor):
    _local_t = _dtensor.to_local()
    if isinstance(_local_t, AsyncCollectiveTensor):
        _local_t.wait()
```
But it still has a BUG:
```
[W817 11:39:12.975673267 ProcessGroup.cpp:266] Warning: At the time of process termination, there are still 348 unwaited collective calls. Please review your program to ensure that:
1. c10d_functional.wait_tensor() is invoked on all tensors returned from c10d_functional collective,
2. c10d_functional.wait_tensor() is invoked on all output tensors of async_op=True torch.distributed collective called under `with allow_inflight_collective_as_graph_input_ctx():`,
before the output tensors of the collective are used. (function ~WorkRegistry)
```
Since DTensor does not have a method like `DTensor.wait()`, I have no idea how to handle it or how to safely use or delete it.
### Versions
Collecting environment information...
PyTorch version: 2.8.0+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.1 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.39
Python version: 3.12.3 (main, Sep 11 2024, 14:17:37) [GCC 13.2.0] (64-bit runtime)
Python platform: Linux-6.8.0-48-generic-x86_64-with-glibc2.39
Is CUDA available: True
CUDA runtime version: 12.0.140
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 2080 Ti
Nvidia driver version: 550.127.05
cuDNN version: Could not collect
Is XPU available: False
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 8
On-line CPU(s) list: 0-7
Vendor ID: GenuineIntel
Model name: Intel(R) Core(TM) i7-9700K CPU @ 3.60GHz
CPU family: 6
Model: 158
Thread(s) per core: 1
Core(s) per socket: 8
Socket(s): 1
Stepping: 13
CPU(s) scaling MHz: 93%
CPU max MHz: 4900.0000
CPU min MHz: 800.0000
BogoMIPS: 7200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp vnmi md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 256 KiB (8 instances)
L1i cache: 256 KiB (8 instances)
L2 cache: 2 MiB (8 instances)
L3 cache: 12 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-7
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT disabled
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Mitigation; Microcode
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] numpy==2.1.3
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.10.2.21
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[p
|
https://github.com/pytorch/pytorch/issues/160833
|
open
|
[
"high priority",
"triage review",
"needs reproduction",
"oncall: distributed"
] | 2025-08-17T03:54:50Z
| 2026-01-03T06:31:42Z
| null |
arminzhu
|
pytorch/data
| 1,506
|
v0.12.0 (or 0.11.1?) release timeline
|
Hi!
Is there a timeline for the next stable release?
|
https://github.com/meta-pytorch/data/issues/1506
|
open
|
[] | 2025-08-16T21:39:05Z
| 2026-01-02T22:27:59Z
| 3
|
mirceamironenco
|
huggingface/gym-hil
| 27
|
How to close the gripper in the gym-hil sim?
|
Hello all.
I'm using macOS to practice with the gym-hil sim tutorial.
I figured out how to move the robot along x, y, z, but it's impossible to close the gripper...
Could you all please share the correct key?
ChatGPT answered the Ctrl key, but it's not working!
Thanks in advance.
|
https://github.com/huggingface/gym-hil/issues/27
|
open
|
[] | 2025-08-15T13:46:12Z
| 2025-08-15T13:57:26Z
| null |
cory0619
|
huggingface/peft
| 2,742
|
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
|
Hello, I am fine-tuning the LLaMA-2 7B model on an A100 40 GB GPU. Initially, I was getting a CUDA out-of-memory error. I tried various methods, such as reducing batch size, but none worked. Then I enabled:
model.gradient_checkpointing_enable()
After doing this, the OOM issue was resolved, but now I get the following error during backpropagation:
torch.autograd.backward(
File ".../torch/autograd/__init__.py", line 354, in backward
_engine_run_backward(
File ".../torch/autograd/graph.py", line 829, in _engine_run_backward
return Variable._execution_engine.run_backward(
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
I also tried:
model.enable_input_require_grads()
but the error still persists. I suspect the issue is related to enabling gradient checkpointing.
# In model_init()
reft_model.gradient_checkpointing_enable()
reft_model.enable_input_require_grads()
Is there something I am missing when using gradient checkpointing in this setup?
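For context, a sketch of one setting that is sometimes relevant in this combination, assuming a transformers version that accepts `gradient_checkpointing_kwargs` (an assumption, not a confirmed fix for this issue):
```python
# non-reentrant checkpointing often interacts better with frozen/PEFT parameters
reft_model.gradient_checkpointing_enable(
    gradient_checkpointing_kwargs={"use_reentrant": False}
)
reft_model.enable_input_require_grads()
```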
|
https://github.com/huggingface/peft/issues/2742
|
closed
|
[] | 2025-08-15T06:21:50Z
| 2025-09-23T15:04:07Z
| 4
|
Mishajain1110
|
pytorch/torchtitan
| 1,576
|
API for custom metric reporting?
|
It would be nice if it were easier to report custom metrics for particular models, but currently this seems to require changing `train.py` and/or modifying `MetricsProcessor` in some invasive way.
Could we introduce an easier mechanism for reporting additional metrics for specific models? A specific use case is to log EP token routing metrics, like [shown here](https://github.com/pytorch/torchtitan/issues/1467#issuecomment-3130249678) and whose custom implementation is [here](https://github.com/rakkit/torchtitan/blob/95732cac15e3c48983328961210b9e0b61e02b1d/torchtitan/train.py?plain=1#L581-L585). I also sometimes want to track activation magnitudes at various layers.
One idea I had is to leverage the `extra_metrics` arg of the `MetricsProcessor.log` method, which is currently only used to log the lr and number of toks seen:
https://github.com/pytorch/torchtitan/blob/6fc499f6f5b32151a799188be2208cfb09faed30/torchtitan/train.py?plain=1#L517-L527
We could do something like give `ModelProtocol` a `get_extra_metrics` method:
```py
class ModelProtocol(Protocol):
    [...]
    def get_extra_metrics(self) -> None | dict:
        return None
```
and modify the reporting code to something like:
```py
extra_metrics = {
    "n_tokens_seen": global_ntokens_seen,
    "lr": lr,
}
custom_metrics = [mp.get_extra_metrics() for mp in self.model_parts]
for cm in custom_metrics:
    if cm is not None:
        extra_metrics.update(cm)
self.metrics_processor.log(
    self.step,
    global_avg_loss,
    global_max_loss,
    grad_norm.item(),
    extra_metrics=extra_metrics,
)
```
This can get a bit confusing in complex PP cases, but it's a start.
Thoughts? CC @tianyu-l @rakkit
|
https://github.com/pytorch/torchtitan/issues/1576
|
open
|
[] | 2025-08-15T01:30:37Z
| 2025-08-16T00:32:18Z
| 4
|
garrett361
|
pytorch/torchtitan
| 1,574
|
Will Dinov3 be included as a model in torchtitan?
|
Meta just released new DINO models; will they be included in torchtitan?
https://github.com/facebookresearch/dinov3
|
https://github.com/pytorch/torchtitan/issues/1574
|
open
|
[] | 2025-08-14T21:08:05Z
| 2025-08-21T03:23:59Z
| 1
|
kmccaffr2023
|
pytorch/TensorRT
| 3,779
|
Announcement: PyTorch org (and TensorRt) will be offered to PyTorch Foundation
|
Hey folks, heads up that as part of PyTorch [moving to the PyTorch Foundation](https://pytorch.org/blog/PyTorchfoundation/). Meta will be handing ownership of the PyTorch github organization over to the PyTorch Foundation, along with all the repos in it.
**What's the impact?**
Technical ownership of the repos given (roadmap, dev work, etc) will continue to be driven by the same people doing it today, and business ownership (marketing efforts, trademark protection, etc) will be given to the foundation.
Meta will be moving out any repos that Meta or LF doesn’t think are a good fit for the foundation custodianship (based largely around the [foundation requirements](https://github.com/pytorch-fdn/foundation-hosted/blob/main/governance/foundation-hosted-project-process.md#eligibility-criteria)) and placing them in the [meta-pytorch](https://github.com/meta-pytorch) github org (previously called `pytorch-labs`)
**What will happen to this repo?**
As a community project, we’ll be letting `pytorch/TensorRt` go to the PyTorch Foundation (so it’ll stay at github.com/pytorch/TensorRt) as [foundation project](https://pytorch.org/blog/pt-foundation-expands/). If the PyTorch Foundation decides not to accept this repo (for not meeting the [foundation requirements](https://github.com/pytorch-fdn/foundation-hosted/blob/main/governance/foundation-hosted-project-process.md#eligibility-criteria)) then we'll default to moving this repo to [meta-pytorch](https://github.com/meta-pytorch).
Please let me know if you have any concerns
|
https://github.com/pytorch/TensorRT/issues/3779
|
open
|
[
"question"
] | 2025-08-14T17:15:12Z
| 2025-08-14T19:53:22Z
| null |
ZainRizvi
|
pytorch/pytorch
| 160,648
|
How to Use Pipeline Parallelism in Multi-input Models
|
### 🚀 The feature, motivation and pitch
I am developing a multimodal model and would like to use the pipeline feature of torch. However, I found that the samples in the introductory docs are rather simple, and they all cover only single-input, single-output scenarios. I would like to know how to use the pipeline function for multi-input, single-output models. How should the model be split? Could you provide a complete sample or related documentation?
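A minimal multi-input sketch, assuming the `torch.distributed.pipelining` tracer frontend (PyTorch >= 2.4); the toy module, shapes, and split point below are made up for illustration:
```python
import torch
from torch.distributed.pipelining import SplitPoint, pipeline

class TwoInputModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.enc_a = torch.nn.Linear(16, 32)
        self.enc_b = torch.nn.Linear(8, 32)
        self.head = torch.nn.Linear(32, 4)

    def forward(self, a, b):
        return self.head(self.enc_a(a) + self.enc_b(b))

model = TwoInputModel()
# one micro-batch per positional input; the graph is cut right before `head`
pipe = pipeline(
    model,
    mb_args=(torch.randn(2, 16), torch.randn(2, 8)),
    split_spec={"head": SplitPoint.BEGINNING},
)
```
Each stage then only receives the tensors it actually consumes; keyword inputs can be passed through `mb_kwargs` in the same way.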
### Alternatives
_No response_
### Additional context
_No response_
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @pragupta
|
https://github.com/pytorch/pytorch/issues/160648
|
open
|
[
"oncall: distributed",
"module: pipelining"
] | 2025-08-14T15:41:44Z
| 2025-08-20T03:05:32Z
| null |
Bin1024
|
huggingface/trl
| 3,896
|
How to gather completions before computing rewards in GRPOTrainer
|
Hi,
I found that the `reward_funcs` passed to GRPOTrainer are called per device.
That is, if I set `num_generations=16` and `per_device_train_batch_size=4`, my custom reward function receives only 4 completions.
However, my custom reward function calculates rewards based on a global view over all 16 completions for each question.
How can I implement this?
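A hedged sketch of one way to do this inside the reward function itself, assuming string completions and equal per-device batches; the global statistic (mean completion length) is a placeholder for whatever the real reward needs:
```python
from accelerate.utils import gather_object

def global_reward(prompts, completions, **kwargs):
    # gather completions from all processes (concatenated in rank order)
    all_completions = gather_object(completions)
    global_mean_len = sum(len(c) for c in all_completions) / len(all_completions)
    # score only the local completions, using the globally computed statistic
    return [1.0 if len(c) < global_mean_len else 0.0 for c in completions]
```
The key point is that every process calls `gather_object`, so the collective does not deadlock; the gathered list covers the completions from all processes while the function still returns one reward per local completion.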
|
https://github.com/huggingface/trl/issues/3896
|
closed
|
[
"❓ question",
"🏋 Reward",
"🏋 GRPO"
] | 2025-08-14T14:41:42Z
| 2025-09-03T14:09:16Z
| null |
rubickkcibur
|
huggingface/peft
| 2,738
|
Which base model weights are getting frozen after applying LoRA?
|
I have finetuned LLaVA-v1.5-7B with PEFT LoRA, and I found that after adding the LoRA adapters, all the weights are frozen except for the newly added LoRA layers and the (non-LoRA) mm_projector weights. I would be glad to know the freezing logic implemented by PEFT, since not all of the base model weights are frozen after applying LoRA.
Also, I have not added the mm_projector weights to `modules_to_save`.
|
https://github.com/huggingface/peft/issues/2738
|
closed
|
[] | 2025-08-13T17:35:10Z
| 2025-08-14T04:20:42Z
| 1
|
srbh-dl
|
pytorch/tutorials
| 3,518
|
[BUG] - <Why is C++ LibTorch inference slower than PyTorch? (full code shown)>
|
### Add Link
none ...
### Describe the bug
I use the same .pt model and test on the same computer, but LibTorch is 30~40% slower than PyTorch.
In Python, 30 inference runs average only 18 ms, but in C++ LibTorch needs 24 ms on average.
I am using CUDA 12.8, cuDNN 9.5.1, and LibTorch 2.8.
My C++ code is below:
```cpp
#include <chrono>
#include <torch/torch.h>
#include <torch/script.h>
#include <iostream>
#include <vector>
int main() {
// 1. choose device..
torch::Device device = torch::kCPU;
if (torch::cuda::is_available()) {
device = torch::kCUDA;
std::cout << "CUDA is available! Using GPU." << std::endl;
if (torch::cuda::cudnn_is_available()) {
std::cout << "✅ cuDNN is available and will be used." << std::endl;
} else {
std::cout << "❌ cuDNN is NOT available. Performance may be suboptimal." << std::endl;
}
}
// 2. load model
torch::jit::Module module;
try {
module = torch::jit::load("/home/bingyu/profile_model/rf202508011_74.pt", device);
module.eval();
std::cout << "Model loaded successfully." << std::endl;
} catch (const c10::Error& e) {
std::cerr << "Error loading model: " << e.what() << std::endl;
return -1;
}
// 3. define shapes
const int64_t BATCH_SIZE = 1;
const int64_t JOINT_NUM = 14;
const int64_t STATES_HORIZON = 12;
const int64_t SEQ_LEN = 50;
const int64_t NUM_CAMERAS = 4;
const int64_t IMG_C = 3;
const int64_t IMG_H = 480;
const int64_t IMG_W = 640;
// 4. create input tensor
auto qpos = torch::randn({BATCH_SIZE, STATES_HORIZON, JOINT_NUM}, device);
auto image = torch::randn({BATCH_SIZE, NUM_CAMERAS, IMG_C, IMG_H, IMG_W}, device);
auto noise = torch::randn({BATCH_SIZE, SEQ_LEN, JOINT_NUM}, device);
std::vector<torch::jit::IValue> inputs = {qpos, image, noise};
// 5. warm up ...
std::cout << "\nWarming up model..." << std::endl;
for (int i = 0; i < 5; ++i) {
torch::NoGradGuard no_grad;
module.forward(inputs);
}
std::cout << "Warm-up completed." << std::endl;
// 6. testing..
const int total_times = 10;
double total_elapsed = 0.0;
std::cout << "\nRunning inference..." << std::endl;
for (int i = 0; i < total_times; ++i) {
torch::NoGradGuard no_grad; // no gradients are computed within this scope
auto start = std::chrono::high_resolution_clock::now();
auto output = module.forward(inputs).toTensor();
auto end = std::chrono::high_resolution_clock::now();
auto duration = std::chrono::duration_cast<std::chrono::microseconds>(end - start);
total_elapsed += duration.count();
std::cout << "Inference " << i << " time: " << duration.count() << " μs" << std::endl;
}
double avg_time = total_elapsed / total_times;
std::cout << "\nAverage inference time: " << avg_time << " μs ("
<< avg_time / 1000.0 << " ms)" << std::endl;
return 0;
}
```
The Python code is below:
```python
import torch
import os
import time
MODEL_PATH = "/home/bingyu/profile_model/rf202508011_74.pt"
INPUT_SHAPES = [
(1, 12, 14), # qpos
(1, 4, 3, 480, 640), # image
(1, 50, 14) # noise
]
WARMUP_ITER = 10
INFERENCE_RUNS = 10
def main():
    if not os.path.exists(MODEL_PATH):
        return
    device_str = "cuda" if torch.cuda.is_available() else "cpu"
    device = torch.device(device_str)
    print(f"🚀 using device: {device_str.upper()}")
    print(f"📂 loading model: {MODEL_PATH}")
    try:
        model = torch.jit.load(MODEL_PATH)
        model.to(device)
        model.eval()
        print("✅ model loaded successfully!")
    except Exception as e:
        print(f"❌ model load failed.\n {e}")
        return
    try:
        inputs = [torch.randn(shape, device=device) for shape in INPUT_SHAPES]
    except Exception as e:
        print(f"❌ invalid INPUT_SHAPES.\n {e}")
        return
    with torch.no_grad():
        for _ in range(WARMUP_ITER):
            model(*inputs)
    # ==================================================================
    if device.type == 'cuda':
        torch.cuda.synchronize()
    timings_ms = []
    with torch.no_grad():
        for i in range(INFERENCE_RUNS):
            if device.type == 'cuda':
                start_event = torch.cuda.Event(enable_timing=True)
                end_event = torch.cuda.Event(enable_timing=True)
                start_event.record()
                model(*inputs)
                end_event.record()
                torch.cuda.synchronize()
                elapsed_time = start_event.elapsed_time(end_event)
                timings_ms.append(elapsed_time)
            else:
                start_time = time.perf_counter()
                model(*inputs)
                end_time = time.perf_counter()
                elapsed_time = (end_time - start_time) * 1000
                timings_ms.append(elapsed_time)
    # =========================================================
```
|
https://github.com/pytorch/tutorials/issues/3518
|
closed
|
[
"bug",
"question"
] | 2025-08-13T13:25:44Z
| 2025-09-03T21:32:30Z
| null |
Sukidesyo
|
pytorch/xla
| 9,558
|
Performance of Torchax
|
## ❓ Questions and Help
Hello Community,
Will using torchax be slower than native PyTorch? Is there a tensor transformation layer that makes it slower?
|
https://github.com/pytorch/xla/issues/9558
|
open
|
[
"question",
"torchxla2"
] | 2025-08-13T10:05:08Z
| 2025-08-15T21:29:25Z
| null |
yuanfz98
|
huggingface/diffusers
| 12,136
|
How to use Diffusers to Convert Safetensors SDXL 1.0 to Onnx?
|
Hello,
I'm trying to convert a safetensors checkpoint for SDXL 1.0 to the ONNX format.
I've tried Optimum already, but it fails every time.
Please help.
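A hedged sketch of one route worth trying: load the single-file checkpoint into the diffusers multi-folder layout first, then point the ONNX exporter at that directory (the file and output paths below are placeholders):
```python
from diffusers import StableDiffusionXLPipeline

# load the single .safetensors checkpoint and re-save it in diffusers format
pipe = StableDiffusionXLPipeline.from_single_file("sdxl_checkpoint.safetensors")
pipe.save_pretrained("./sdxl-diffusers")
# then run the Optimum ONNX export against ./sdxl-diffusers, e.g.
#   optimum-cli export onnx --model ./sdxl-diffusers ./sdxl-onnx
# (exact task/flags depend on your optimum version; check its docs)
```
If Optimum still fails after this, sharing the exact error it prints would make the conversion issue easier to pin down.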
|
https://github.com/huggingface/diffusers/issues/12136
|
closed
|
[] | 2025-08-13T06:33:22Z
| 2025-10-31T03:13:28Z
| null |
CypherpunkSamurai
|
huggingface/lerobot
| 1,712
|
Why hasn't the pi0 model learned the ability to place something in the specified positions? Is it because the number of datasets is insufficient?
|
I am creating a tic-tac-toe board and using yellow and green sandbags as pieces. I have collected a dataset of "the entire process of a robotic arm picking up yellow sandbags and placing them in nine different positions on the board". This dataset is used to train the pi0 model to achieve autonomous playing. The collection scope includes: changes in the board scene, motor action status, visual images, and text task instructions. However, when testing the trained pi0 model by giving tasks of placing sandbags in different positions on the board, it turns out that the so101 robotic arm has a poor understanding of position information. It can grab the sandbags just like in the recorded dataset, but most of the time it cannot place them in the specified positions.
|
https://github.com/huggingface/lerobot/issues/1712
|
open
|
[
"question",
"policies"
] | 2025-08-12T10:15:26Z
| 2025-12-22T08:10:47Z
| null |
Alex-Wlog
|
pytorch/pytorch
| 160,405
|
[Expandable block] how to get the best-fit free block
|
To get a free expandable block, the algorithm selects a locally optimal solution instead of the globally best-fit block, since the expandable sizes are not sorted. The best-fit block is the block that meets the requirements and has the smallest expandable size. The original code is:
```
auto expandable_size = [](Block* b) {
  return b->size + (b->next && !b->next->mapped ? b->next->size : 0);
};
auto next = it;
next++;
while ((*it)->expandable_segment_ && next != pool.blocks.end() &&
       (*next)->stream == p.stream() &&
       expandable_size(*next) < expandable_size(*it)) {
  it = next++;
}
```
I have a proposal for this:
```
auto expandable_size = [](Block* b) {
  return b->size + (b->next && !b->next->mapped ? b->next->size : 0);
};
auto min_expandable_block = it;
auto min_expandable_size = expandable_size(*it);
while ((*it)->expandable_segment_ && it != pool.blocks.end() &&
       (*it)->stream == p.stream() &&
       expandable_size(*it) != (*it)->size) {
  if ((*it)->size < min_expandable_size) {
    min_expandable_block = it;
    min_expandable_size = expandable_size(*it);
  }
  it++;
}
// it: the first non-expandable block or the last block of given stream
// min_expandable_block: the expandable block with the smallest
// expandable size or the first block found
if ((*it)->size > min_expandable_size) {
  it = min_expandable_block;
}
```
Compare the size of the first non-expandable block (if it exists) with the smallest expandable size before the first non-expandable block to determine the best-fit block.
|
https://github.com/pytorch/pytorch/issues/160405
|
open
|
[
"triaged",
"module: CUDACachingAllocator"
] | 2025-08-12T08:38:38Z
| 2025-08-14T05:24:01Z
| null |
HU-qingqing
|
pytorch/torchtitan
| 1,554
|
The ordering of fsdp, ac, tp, pp, compile, etc.
|
Based on the code, the ordering of parallelization and optimization appears to be: PP → TP → AC → Compile → FSDP/DDP.
Is it possible to modify this ordering? If not, could you explain the rationale for this specific sequence?
|
https://github.com/pytorch/torchtitan/issues/1554
|
open
|
[
"documentation",
"question"
] | 2025-08-12T04:35:02Z
| 2025-12-12T10:56:00Z
| null |
aoyulong
|
pytorch/torchtitan
| 1,553
|
Inquiry about torchtitan v0.1.0 compatibility with CUDA 12.3
|
Hello,
I would like to inquire about the compatibility of torchtitan with CUDA 12.3.
I am trying to use torchtitan v0.1.0, but I am facing some challenges due to my environment constraints. My computing resources are equipped with CUDA 12.3, and I am unable to upgrade the CUDA version at this moment.
When I attempted to install torchtitan v0.1.0 following the official instructions, I noticed that the required dependencies are built for CUDA 12.6:
```
torch version: torch-2.8.0.dev20250617+cu126
torchao version: torchao-0.12.0.dev20250617+cu126
```
This leads to an incompatibility with my current setup.
Furthermore, I tried using torch v2.7+cu118 to see if it would resolve the issue, but this resulted in import errors.
Could you please provide guidance on how I can successfully install and use torchtitan v0.1.0 in an environment with CUDA 12.3?
Thank you for your time and assistance.
|
https://github.com/pytorch/torchtitan/issues/1553
|
closed
|
[
"question"
] | 2025-08-12T04:17:26Z
| 2025-08-15T14:34:55Z
| null |
Sun2018421
|
pytorch/torchtitan
| 1,552
|
Any example for vpp scheduler for Deepseek/llama
|
I have been learning VPP 1F1B recently and want to compare the implementations in torchtitan and Megatron, but I don't know how to build a VPP-1F1B schedule, so I cannot figure out how it works in torchtitan. Is there any example to help me build a VPP-1F1B example?
|
https://github.com/pytorch/torchtitan/issues/1552
|
closed
|
[
"question"
] | 2025-08-12T01:40:12Z
| 2025-08-28T22:33:58Z
| null |
YingLaiLin
|
huggingface/transformers
| 40,089
|
Could not import module 'AutoTokenizer'. Are this object's requirements defined correctly?
|
### System Info
- torch @ https://download.pytorch.org/whl/cu124/torch-2.6.0%2Bcu124-cp310-cp310-linux_x86_64.whl
- torchaudio @ https://download.pytorch.org/whl/cu124/torchaudio-2.6.0%2Bcu124-cp310-cp310-linux_x86_64.whl
- torchvision @ https://download.pytorch.org/whl/cu124/torchvision-0.21.0%2Bcu124-cp310-cp310-linux_x86_64.whl
- unsloth==2025.6.12
- unsloth_zoo==2025.6.8
- accelerate==1.8.1
- bitsandbytes==0.46.0
- pydantic==2.11.7
- pydantic_core==2.33.2
- tokenizers==0.21.2
- transformers==4.52.4
- treelite==4.4.1
- treescope==0.1.9
- triton==3.2.0
- trl==0.19.0
- xformers==0.0.29.post3
- sympy==1.13.1
- cut-cross-entropy==25.1.1
- Python 3.10.16
- NVIDIA A10G (CUDA Version: 12.5)
- Ubuntu 24.04.2 LTS
### Who can help?
@ArthurZucker @itazap
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
from transformers import AutoTokenizer
---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
File [~/.local/lib/python3.10/site-packages/transformers/utils/import_utils.py:2045](https://25rhl5xt9dz0f5sq.ml-c7564e33-277.cdpv2-pr.uf1v-9d9i.cloudera.site/lab/tree/project/.local/lib/python3.10/site-packages/transformers/utils/import_utils.py#line=2044), in _LazyModule.__getattr__(self, name)
2044 try:
-> 2045 module = self._get_module(self._class_to_module[name])
2046 value = getattr(module, name)
File [~/.local/lib/python3.10/site-packages/transformers/utils/import_utils.py:2075](https://25rhl5xt9dz0f5sq.ml-c7564e33-277.cdpv2-pr.uf1v-9d9i.cloudera.site/lab/tree/project/.local/lib/python3.10/site-packages/transformers/utils/import_utils.py#line=2074), in _LazyModule._get_module(self, module_name)
2074 except Exception as e:
-> 2075 raise e
File [~/.local/lib/python3.10/site-packages/transformers/utils/import_utils.py:2073](https://25rhl5xt9dz0f5sq.ml-c7564e33-277.cdpv2-pr.uf1v-9d9i.cloudera.site/lab/tree/project/.local/lib/python3.10/site-packages/transformers/utils/import_utils.py#line=2072), in _LazyModule._get_module(self, module_name)
2072 try:
-> 2073 return importlib.import_module("." + module_name, self.__name__)
2074 except Exception as e:
File /usr/local/lib/python3.10/importlib/__init__.py:126, in import_module(name, package)
125 level += 1
--> 126 return _bootstrap._gcd_import(name[level:], package, level)
File <frozen importlib._bootstrap>:1050, in _gcd_import(name, package, level)
File <frozen importlib._bootstrap>:1027, in _find_and_load(name, import_)
File <frozen importlib._bootstrap>:992, in _find_and_load_unlocked(name, import_)
File <frozen importlib._bootstrap>:241, in _call_with_frames_removed(f, *args, **kwds)
File <frozen importlib._bootstrap>:1050, in _gcd_import(name, package, level)
File <frozen importlib._bootstrap>:1027, in _find_and_load(name, import_)
File <frozen importlib._bootstrap>:1004, in _find_and_load_unlocked(name, import_)
ModuleNotFoundError: No module named 'transformers.models.ipynb_checkpoints'
The above exception was the direct cause of the following exception:
ModuleNotFoundError Traceback (most recent call last)
File [~/.local/lib/python3.10/site-packages/transformers/utils/import_utils.py:2045](https://25rhl5xt9dz0f5sq.ml-c7564e33-277.cdpv2-pr.uf1v-9d9i.cloudera.site/lab/tree/project/.local/lib/python3.10/site-packages/transformers/utils/import_utils.py#line=2044), in _LazyModule.__getattr__(self, name)
2044 try:
-> 2045 module = self._get_module(self._class_to_module[name])
2046 value = getattr(module, name)
File [~/.local/lib/python3.10/site-packages/transformers/utils/import_utils.py:2075](https://25rhl5xt9dz0f5sq.ml-c7564e33-277.cdpv2-pr.uf1v-9d9i.cloudera.site/lab/tree/project/.local/lib/python3.10/site-packages/transformers/utils/import_utils.py#line=2074), in _LazyModule._get_module(self, module_name)
2074 except Exception as e:
-> 2075 raise e
File [~/.local/lib/python3.10/site-packages/transformers/utils/import_utils.py:2073](https://25rhl5xt9dz0f5sq.ml-c7564e33-277.cdpv2-pr.uf1v-9d9i.cloudera.site/lab/tree/project/.local/lib/python3.10/site-packages/transformers/utils/import_utils.py#line=2072), in _LazyModule._get_module(self, module_name)
2072 try:
-> 2073 return importlib.import_module("." + module_name, self.__name__)
2074 except Exception as e:
File /usr/local/lib/python3.10/importlib/__init__.py:126, in import_module(name, package)
125 level += 1
--> 126 return _bootstrap._gcd_import(name[level:], package, level)
File <frozen importlib._bootstrap>:1050, in _gcd_import(name, package, level)
File <frozen importlib._bootstrap>:1027, in _find_and_load(name, import_)
File <frozen importlib._bootstrap>:1006, in _find_and_load_unlocked(name, i
|
https://github.com/huggingface/transformers/issues/40089
|
closed
|
[
"bug"
] | 2025-08-11T21:44:05Z
| 2025-09-08T03:09:11Z
| 3
|
octavianBordeanu
|
huggingface/candle
| 3,052
|
Candle vs. PyTorch performance
|
I'm running https://github.com/huggingface/candle/tree/main/candle-examples/examples/llava vs. https://github.com/fpgaminer/joycaption/blob/main/scripts/batch-caption.py on a Mac M1.
I'm seeing a significant performance difference; Candle seems much slower.
I enabled the accelerate and metal features.
Would love some pointers on how to improve it.
|
https://github.com/huggingface/candle/issues/3052
|
open
|
[] | 2025-08-11T16:14:17Z
| 2025-11-14T20:05:16Z
| 8
|
ohaddahan
|
huggingface/diffusers
| 12,124
|
For the qwen-image training script, maybe the dataloader's "shuffle" should be False when custom_instance_prompts is not None and cache_latents is False?
|
### Describe the bug
I think "shuffle" of dataloader should be "False" when custom_instance_prompts is not None and cache_latents is False. Otherwise, it will lead to errors in the correspondence between prompt embedding and image during training, and prompt will not be followed when performing the task of T2I.
### Reproduction
None
### Logs
```shell
```
### System Info
None
### Who can help?
_No response_
|
https://github.com/huggingface/diffusers/issues/12124
|
open
|
[
"bug"
] | 2025-08-11T13:15:21Z
| 2025-08-30T01:57:02Z
| 2
|
yinguoweiOvO
|
huggingface/diffusers
| 12,120
|
How to train a LoRA with a distilled Flux model, such as flux-schnell?
|
**Is your feature request related to a problem? Please describe.**
I can use Flux as the base model to train a LoRA, but it needs 20 inference steps, which costs a lot of time. I want to train a LoRA on top of a distilled model so that fewer steps produce a good image; for example, a LoRA based on flux-schnell that only needs 4 steps to generate a good image. I also want to train many LoRAs like this, all generating in only 4 steps.
**Describe the solution you'd like.**
I need a script; maybe it could live at examples/dreambooth/train_dreambooth_lora_flux_schnell.py.
I want to know how to train a LoRA based on a distilled model and get a good result.
**Describe alternatives you've considered.**
I want to train many LoRAs for the base model (Flux or flux-schnell), not only one, and I want to generate with fewer steps. So I want to train LoRAs with a distilled model... how can this be implemented? I tested the script [train_dreambooth_lora_flux.py](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/train_dreambooth_lora_flux.py) by changing the base model from Flux to flux-schnell, but the result was bad...
**Additional context.**
Any other implementation method is OK.
|
https://github.com/huggingface/diffusers/issues/12120
|
open
|
[] | 2025-08-11T03:07:42Z
| 2025-08-11T06:01:45Z
| null |
Johnson-yue
|
huggingface/diffusers
| 12,108
|
Qwen Image and Chroma pipelines break when using schedulers that enable flow matching by parameter.
|
### Describe the bug
Several schedulers support flow matching by using prediction_type="flow_prediction", e.g.
```
pipe.scheduler = UniPCMultistepScheduler(prediction_type="flow_prediction", flow_shift=3.16, timestep_spacing='trailing', use_flow_sigmas=True)
```
However, Chroma and Qwen Image will not work with these schedulers, failing with the error
```
ValueError: The current scheduler class <class 'diffusers.schedulers.scheduling_unipc_multistep.UniPCMultistepScheduler'>'s `set_timesteps` does not support custom sigmas schedules. Please check whether you are using the correct scheduler.
```
Can we have this fixed, either by changing the schedulers to have (and use) the missing attributes, or by rethinking the way these pipelines handle the timesteps?
### Reproduction
```py
import torch
from diffusers import QwenImagePipeline, UniPCMultistepScheduler
pipe = QwenImagePipeline.from_pretrained("Qwen/Qwen-Image",
torch_dtype=torch.bfloat16)
#pipe.scheduler = FlowMatchEulerDiscreteScheduler(shift=3.16, use_beta_sigmas=True)
pipe.scheduler = UniPCMultistepScheduler(prediction_type="flow_prediction", flow_shift=3.16, timestep_spacing='trailing', use_flow_sigmas=True)
pipe.to("mps")
pipe("a nice picture of an rainbow")
```
### Logs
```shell
File "/Volumes/SSD2TB/AI/Diffusers/qwenimagelowmem.py", line 84, in <module>
image = pipe(prompt_embeds=prompt_embeds, prompt_embeds_mask=prompt_embeds_mask,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Volumes/SSD2TB/AI/Diffusers/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/Volumes/SSD2TB/AI/Diffusers/lib/python3.11/site-packages/diffusers/pipelines/qwenimage/pipeline_qwenimage.py", line 619, in __call__
timesteps, num_inference_steps = retrieve_timesteps(
^^^^^^^^^^^^^^^^^^^
File "/Volumes/SSD2TB/AI/Diffusers/lib/python3.11/site-packages/diffusers/pipelines/qwenimage/pipeline_qwenimage.py", line 119, in retrieve_timesteps
raise ValueError(
ValueError: The current scheduler class <class 'diffusers.schedulers.scheduling_unipc_multistep.UniPCMultistepScheduler'>'s `set_timesteps` does not support custom sigmas schedules. Please check whether you are using the correct scheduler.
```
### System Info
- 🤗 Diffusers version: 0.35.0.dev0
- Platform: macOS-15.5-arm64-arm-64bit
- Running on Google Colab?: No
- Python version: 3.11.13
- PyTorch version (GPU?): 2.6.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Huggingface_hub version: 0.34.3
- Transformers version: 4.52.4
- Accelerate version: 1.7.0
- PEFT version: 0.17.0
- Bitsandbytes version: not installed
- Safetensors version: 0.5.3
- xFormers version: not installed
- Accelerator: Apple M3
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
_No response_
|
https://github.com/huggingface/diffusers/issues/12108
|
open
|
[
"bug"
] | 2025-08-09T21:34:28Z
| 2025-08-09T21:39:30Z
| 0
|
Vargol
|
huggingface/transformers
| 40,056
|
Question: How to write a custom tokenizer from scratch
|
In this guide you introduced how to write a custom model and custom model configuration: [here](https://huggingface.co/docs/transformers/main/en/custom_models). In addition, I want to create a custom tokenizer from scratch. Why?
I have a multilevel transcription problem: the model takes an input utterance and outputs 12 multilingual transcripts simultaneously. So I want to design a tokenizer that takes all 12 languages as a dict:
```python
{
"lang1": "text text",
"lang2": "text text",
"lang3": "text text",
}
```
and after tokenization
```python
{
"input_ids":
{
"lang1": "ids of lang 1",
"lang2": "ids of lang 2",
"lang3": "ids of lang 2",
}
}
```
How can I do this? I cannot find docs on building such a custom tokenizer from scratch.
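There is no dedicated guide for this, but one workable pattern is a thin wrapper that holds one tokenizer per language and returns a dict of per-language `input_ids`. A minimal sketch, assuming each language can reuse an existing pretrained tokenizer (the checkpoint name below is just a placeholder):
```python
from transformers import AutoTokenizer

class MultiLangTokenizer:
    """Tokenizes a dict of {language: text} into {language: input_ids}."""

    def __init__(self, languages, base_checkpoint="bert-base-multilingual-cased"):
        # One underlying tokenizer per language; they could also share a single instance.
        self.tokenizers = {lang: AutoTokenizer.from_pretrained(base_checkpoint) for lang in languages}

    def __call__(self, texts_by_lang, **kwargs):
        input_ids = {
            lang: self.tokenizers[lang](text, **kwargs)["input_ids"]
            for lang, text in texts_by_lang.items()
        }
        return {"input_ids": input_ids}

tok = MultiLangTokenizer(["lang1", "lang2", "lang3"])
print(tok({"lang1": "text text", "lang2": "text text", "lang3": "text text"}))
```
For tokenizers trained from scratch, the `tokenizers` library (e.g. one BPE model trained per language) can replace `AutoTokenizer.from_pretrained` inside the same wrapper.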
|
https://github.com/huggingface/transformers/issues/40056
|
closed
|
[] | 2025-08-09T16:39:19Z
| 2025-09-24T08:03:02Z
| null |
obadx
|
huggingface/diffusers
| 12,107
|
accelerator.init_trackers error when trying with a custom object such as a list
|
### Describe the bug
I set multiple prompts with nargs for the "--validation_prompt" argument in "train_dreambooth.py":
` parser.add_argument(
"--validation_prompt",
type=str,
default=["A photo of sks dog in a bucket", "A sks cat wearing a coat"],
nargs="*",
help="A prompt that is used during validation to verify that the model is learning.",
)`
but an error occurred at ` if accelerator.is_main_process:
tracker_name = "dreambooth-lora"
accelerator.init_trackers(tracker_name, config=vars(args))` :
"ValueError: value should be one of int, float, str, bool, or torch.Tensor"
Is it because TensorBoard only supports basic Python types and PyTorch tensors, but not a custom object such as a list?
If so, how can I visualize runs when the config contains a custom object such as a list, or an argument defined with nargs?
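One workaround (a sketch, not necessarily the recommended fix) is to flatten non-loggable values before handing the config to `init_trackers`, since the TensorBoard hparams writer only accepts int, float, str, bool, or tensor values:
```python
import torch

def sanitize_tracker_config(config: dict) -> dict:
    """Cast values TensorBoard cannot log (lists, None, Paths, ...) to strings."""
    allowed = (int, float, str, bool, torch.Tensor)
    return {key: value if isinstance(value, allowed) else str(value) for key, value in config.items()}

# usage inside the training script (hypothetical placement):
# accelerator.init_trackers("dreambooth-lora", config=sanitize_tracker_config(vars(args)))
```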
### Reproduction
set the following argument in "train_dreambooth.py" or in other similar demos such as "train_amused.py":
` parser.add_argument(
"--validation_prompt",
type=str,
default=["A photo of sks dog in a bucket", "A sks cat wearing a coat"],
nargs="*",
help="A prompt that is used during validation to verify that the model is learning.",
)`
error occurred at ` if accelerator.is_main_process:
tracker_name = "dreambooth-lora"
accelerator.init_trackers(tracker_name, config=vars(args))` with
"ValueError: value should be one of int, float, str, bool, or torch.Tensor"
### Logs
```shell
```
### System Info
- 🤗 Diffusers version: 0.33.0.dev0
- Platform: Linux-6.8.0-55-generic-x86_64-with-glibc2.39
- Running on Google Colab?: No
- Python version: 3.10.11
- PyTorch version (GPU?): 2.7.1+cu126 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Huggingface_hub version: 0.30.1
- Transformers version: 4.52.4
- Accelerate version: 1.8.1
- PEFT version: 0.15.2
- Bitsandbytes version: 0.45.4
- Safetensors version: 0.5.3
- xFormers version: 0.0.27.post2
- Accelerator: NVIDIA GeForce RTX 3090, 24576 MiB
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
_No response_
|
https://github.com/huggingface/diffusers/issues/12107
|
open
|
[
"bug"
] | 2025-08-09T10:04:06Z
| 2025-08-09T10:04:06Z
| 0
|
micklexqg
|
huggingface/diffusers
| 12,104
|
IndexError: index 0 is out of bounds for dimension 0 with size 0
|
### Describe the bug
When I test the mit-han-lab/nunchaku-flux.1-kontext-dev model, it runs normally in a non-concurrent scenario, but throws an error when I try to run it with concurrent requests.
My GPU is a single RTX 4090D.
How can I enable multi-concurrency support on a single GPU?
Thank you in advance for your help.
Here is my error message:
[2025-08-08 17:14:50.242] [info] Initializing QuantizedFluxModel on device 0
[2025-08-08 17:14:50.382] [info] Loading partial weights from pytorch
[2025-08-08 17:14:51.445] [info] Done.
Injecting quantized module
Loading checkpoint shards: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 99.47it/s]
Loading pipeline components...: 57%|████████████████████████████████████████████████████████████████████████████████████████▌ | 4/7 [00:00<00:00, 28.54it/s]You set `add_prefix_space`. The tokenizer needs to be converted from the slow tokenizers
Loading pipeline components...: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 7/7 [00:00<00:00, 19.02it/s]
Generation `height` and `width` have been adjusted to 752 and 1360 to fit the model requirements.
Generation `height` and `width` have been adjusted to 880 and 1168 to fit the model requirements.
43%|███████████████████████████████████████████████████████████████████████████████▎ | 12/28 [00:17<00:23, 1.45s/it]
57%|█████████████████████████████████████████████████████████████████████████████████████████████████████████▋ | 16/28 [00:18<00:13, 1.17s/it]
Error while processing image: index 29 is out of bounds for dimension 0 with size 29
Error while processing image: index 29 is out of bounds for dimension 0 with size 29
### Reproduction
```
import torch
from diffusers import FluxKontextPipeline
from diffusers.utils import load_image
from concurrent.futures import ThreadPoolExecutor
from nunchaku import NunchakuFluxTransformer2dModel
from nunchaku.utils import get_precision
import time

def get_result(image_path, pipeline):
    time_begin = time.time()
    image = load_image(image_path).convert("RGB")
    size = image.size
    large_now = 1440
    small_now = round(1440 * (min(size) / max(size)) / 32) * 32
    width, height = (large_now, small_now) if size[0] > size[1] else (small_now, large_now)
    prompt = "Remove the watermark from the picture"
    image = pipeline(
        image=image,
        prompt=prompt,
        guidance_scale=2.5,
        num_inference_steps=28,
        height=height,
        width=width,
    ).images[0]
    image.save(image_path[:-4] + "_result.png")

def nunchaku_test(concurrency, pipeline):
    test_images = ["房型图水印.jpg", "卧室水印.png"] * concurrency
    test_images = test_images[:concurrency]
    overall_start = time.time()
    with ThreadPoolExecutor(max_workers=concurrency) as executor:
        futures = [executor.submit(get_result, img_path, pipeline) for img_path in test_images]
        results = []
        for future in futures:
            try:
                results.append(future.result())
            except Exception as e:
                print(f"Error while processing image: {e}")
    overall_time = time.time() - overall_start

if __name__ == '__main__':
    transformer = NunchakuFluxTransformer2dModel.from_pretrained(
        f"/root/autodl-tmp/nunchaku-flux.1-kontext-dev/svdq-{get_precision()}_r32-flux.1-kontext-dev.safetensors"
    )
    pipeline = FluxKontextPipeline.from_pretrained(
        "/root/autodl-tmp/FLUX.1-Kontext-dev", transformer=transformer, torch_dtype=torch.bfloat16
    ).to("cuda")
    # Arguments reordered to match the signature nunchaku_test(concurrency, pipeline)
    nunchaku_test(2, pipeline)
    nunchaku_test(4, pipeline)
```
### Logs
```shell
```
### System Info
~/FLUX.1-Kontext-Dev-nunchaku# diffusers-cli env
Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points.
- 🤗 Diffusers version: 0.35.0.dev0
- Platform: Linux-5.15.0-94-generic-x86_64-with-glibc2.35
- Running on Google Colab?: No
- Python version: 3.12.3
- PyTorch version (GPU?): 2.6.0+cu124 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Huggingface_hub version: 0.33.1
- Transformers version: 4.53.0
- Accelerate version: 1.8.1
- PEFT version: not installed
- Bitsandbytes version: not installed
- Safetensors version: 0.5.3
- xFormers version: not installed
- Accelerator: NVIDIA GeForce RTX 4090 D, 24564 MiB
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
_No response_
|
https://github.com/huggingface/diffusers/issues/12104
|
closed
|
[
"bug"
] | 2025-08-08T09:20:52Z
| 2025-08-17T22:22:37Z
| 1
|
liushiton
|
pytorch/TensorRT
| 3,766
|
❓ [Question] C++ Windows runtime error
|
## ❓ Question
How can I fix this error?
```
Unknown type name '__torch__.torch.classes.tensorrt.Engine':
File "code/__torch__/torch_tensorrt/dynamo/runtime/_TorchTensorRTModule.py", line 6
training : bool
_is_full_backward_hook : Optional[bool]
engine : __torch__.torch.classes.tensorrt.Engine
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
def forward(self: __torch__.torch_tensorrt.dynamo.runtime._TorchTensorRTModule.TorchTensorRTModule,
x: Tensor) -> Tensor:
```
Run script
```
torch::jit::Module trt_ts_mod;
try {
// Deserialize the ScriptModule from a file using torch::jit::load().
std::cout << "Loading TRT engine from: " << trt_ts_module_path << std::endl;
trt_ts_mod = torch::jit::load(trt_ts_module_path);
std::cout << "TRT engine loaded successfully." << std::endl;
}
catch (const c10::Error& e) {
std::cerr << "c10::Error loading the model from : " << trt_ts_module_path << std::endl;
return -1;
}
catch (const std::exception& e) {
std::cerr << "std::exception occurred while loading the model: " << e.what() << std::endl;
return -1;
}
```
## Environment
CMakeLists.txt
```
cmake_minimum_required(VERSION 3.17)
project(torchtrt_runtime_example LANGUAGES CXX)
find_package(Torch REQUIRED)
find_package(torchtrt REQUIRED)
set(SRCS
main.cpp
)
include_directories("${PRJ_ROOT}/TensorRT/out/install/x64-Release/include")
add_executable(${CMAKE_PROJECT_NAME} ${SRCS})
target_link_libraries(${CMAKE_PROJECT_NAME} PRIVATE torch "-Wl,--no-as-needed" torchtrt_runtime "-Wl,--as-needed")
target_compile_features(${CMAKE_PROJECT_NAME} PRIVATE cxx_std_17)
```
I built both TensorRT and Torch-TensorRT myself.
- PyTorch Version (e.g., 1.0): libtorch-win-shared-with-deps-2.8.0+cu126
- CPU Architecture: ryzen 2700
- OS (e.g., Linux): Windows 11
- Python version: 3.12
- CUDA version: 12.6
- GPU models and configuration: RTX 3070
|
https://github.com/pytorch/TensorRT/issues/3766
|
open
|
[
"question"
] | 2025-08-08T07:56:17Z
| 2025-08-15T14:32:30Z
| null |
zsef123
|
pytorch/ao
| 2,713
|
[fp8 blockwise training] try using torch._scaled_mm instead of Triton kernels for fp8 gemms
|
We have an initial prototype of DeepSeekV3 style fp8 blockwise training done [here](https://github.com/pytorch/ao/blob/main/torchao/prototype/blockwise_fp8_training/linear.py). Numerics are accurate but performance has not been optimized yet.
Initial tests with a local torchtitan integration on my H100 devgpu show the blockwise GEMM kernels are slower than expected. NCU analysis shows uncoalesced global accesses causing major slowdowns, but rather than optimize these kernels, it's probably a better idea to use `torch._scaled_mm` instead, which recently added support for DSV3 style fp8 GEMMs using a CUTLASS kernel which is likely much more performant than the Triton kernels. This will also be more consistent with our other float8 tensorwise and rowwise training recipes, which use torch._scaled_mm.
We should do the following:
1. Add benchmarking script(s) that compare the runtime of the following (a minimal harness sketch follows below):
- Performance of [blockwise_fp8_gemm_1x128_128x128](https://github.com/pytorch/ao/blob/143c3a60451727f9fba56289b6fa74cfdb04b440/torchao/prototype/blockwise_fp8_training/kernels.py#L106) vs torch._scaled_mm
- Performance of [blockwise_fp8_gemm_1x128_128x1](https://github.com/pytorch/ao/blob/143c3a60451727f9fba56289b6fa74cfdb04b440/torchao/prototype/blockwise_fp8_training/kernels.py#L214C5-L214C35) vs torch._scaled_mm
- (see [here](https://github.com/pytorch/ao/blob/143c3a60451727f9fba56289b6fa74cfdb04b440/torchao/prototype/blockwise_fp8_training/linear.py#L26) for context on how/where these gemms are used for context)
- Here is an [example](https://github.com/pytorch/ao/blob/main/benchmarks/float8/bench_grouped_mm.py) benchmark script for something similar that can be used as a starting point.
2. If microbenchmarks show torch._scaled_mm is faster, update the blockwise fp8 linear to use this gemm.
Note torch._scaled_mm has some slightly different stride/mem layout requirements for the inputs. You will see this in the error message that it throws if you try to directly swap it out with the triton gemms.
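A rough starting point for item 1, referenced above: the harness below only shows the timing scaffolding with `torch.utils.benchmark` on a bf16 matmul stand-in. Wiring in `blockwise_fp8_gemm_1x128_128x128` / `blockwise_fp8_gemm_1x128_128x1` and `torch._scaled_mm` with real fp8 inputs and their block scales is left to the actual script, since the exact scale layouts are precisely what this task needs to verify.
```python
import torch
from torch.utils import benchmark

def bench_gemm(label, fn, *args, **kwargs):
    """Time an arbitrary GEMM callable with PyTorch's benchmark utilities."""
    timer = benchmark.Timer(
        stmt="fn(*args, **kwargs)",
        globals={"fn": fn, "args": args, "kwargs": kwargs},
        label=label,
    )
    return timer.blocked_autorange(min_run_time=2.0)

# Stand-in shapes for a (M, K) x (K, N) matmul; the real script would pass the fp8 Triton
# kernel and the torch._scaled_mm call here instead of torch.mm.
if torch.cuda.is_available():
    M, K, N = 4096, 4096, 4096
    a = torch.randn(M, K, device="cuda", dtype=torch.bfloat16)
    b = torch.randn(K, N, device="cuda", dtype=torch.bfloat16)
    print(bench_gemm("baseline bf16 mm", torch.mm, a, b))
```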
|
https://github.com/pytorch/ao/issues/2713
|
open
|
[
"good first issue",
"float8"
] | 2025-08-07T20:15:10Z
| 2025-08-07T20:26:11Z
| 0
|
danielvegamyhre
|
huggingface/datasets
| 7,729
|
OSError: libcudart.so.11.0: cannot open shared object file: No such file or directory
|
> Hi, is there any solution for that error? I tried to install this one:
pip install torch==1.12.1+cpu torchaudio==0.12.1+cpu -f https://download.pytorch.org/whl/torch_stable.html
This works fine, but please tell me how to install a PyTorch version that is built for GPU.
|
https://github.com/huggingface/datasets/issues/7729
|
open
|
[] | 2025-08-07T14:07:23Z
| 2025-09-24T02:17:15Z
| 1
|
SaleemMalikAI
|
huggingface/transformers
| 39,992
|
[gpt-oss] Transform checkpoint from safetensors to state dict
|
Yesterday I was working on gpt-oss. However, loading the weights gave me trouble.
For models like Qwen, I did things like this:
1. Create model on meta device
2. FSDP2 shard it, so it can fit in memory
3. On each GPU, it reads weights from the safetensors shards in a generator style, to save memory (see the sketch below).
4. Chunk the weights and copy to the FSDP’s DTensor.
GPT-oss does not fit this routine. Within `from_pretrained`, the mxfp4 quantizer somehow dequantizes the weights, yet I cannot find a very clean way to utilize this capability. I had to modify the process and initialize a CPU copy of the model in CPU memory.
How can we transform the safetensors to state dict directly?
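For the narrow question of turning the checkpoint shards into a state dict without going through `from_pretrained`, a minimal sketch with `safetensors.safe_open` (note this only streams the raw stored tensors; it does not reproduce the mxfp4 dequantization step):
```python
import glob
from safetensors import safe_open

def iter_safetensors_state_dict(checkpoint_dir: str):
    """Lazily yield (name, tensor) pairs from every *.safetensors shard in a checkpoint dir."""
    for shard in sorted(glob.glob(f"{checkpoint_dir}/*.safetensors")):
        with safe_open(shard, framework="pt", device="cpu") as f:
            for name in f.keys():
                yield name, f.get_tensor(name)

# Materialize only what is needed, e.g. copy chunk-by-chunk into FSDP-sharded params
# instead of building the full dict:
# state_dict = dict(iter_safetensors_state_dict("path/to/gpt-oss-20b"))
```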
|
https://github.com/huggingface/transformers/issues/39992
|
closed
|
[] | 2025-08-07T13:24:06Z
| 2025-09-15T08:02:55Z
| 1
|
fingertap
|
huggingface/diffusers
| 12,094
|
[Wan2.2] pipeline_wan misses the 'shift' parameter used by Wan2.2-A14B-diffusers.
|
**Firstly, I found that the quality of output using diffusers is poor**
Later, I found that pipeline_wan in diffusers 0.34.0 did not support two-stage processing. I noticed that the community had already updated it, so I installed diffusers 0.35.0.dev from source and it worked.
Then I found that the scheduler in diffusers does not accept the "shift" parameter, even though "sample_shift" is an important parameter used by Wan2.2; this may also lead to differences from the official Wan2.2 inference code. Therefore, the video quality may still be inferior to that of the original inference code.
https://github.com/Wan-Video/Wan2.2/issues/69
**What I need**
Can the community provide UniPCMultistepScheduler and DPMSolverMultistepScheduler variants that support the 'shift' parameter? Or can pipeline_wan be adapted so that the shift parameter can be used?
Or is there something wrong with my understanding? How can I correctly use the shift parameter when using diffusers?
Thanks!!
cc @yiyixuxu @a-r-r-o-w
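For what it's worth, `UniPCMultistepScheduler` accepts a `flow_shift` argument when `use_flow_sigmas=True` is set, which appears to be the analogue of Wan's `sample_shift`; whether the semantics match the official code exactly is an assumption. A sketch of rebuilding the scheduler from the pipeline's existing config:
```python
import torch
from diffusers import WanImageToVideoPipeline, UniPCMultistepScheduler

pipe = WanImageToVideoPipeline.from_pretrained(
    "Wan-AI/Wan2.2-I2V-A14B-Diffusers", torch_dtype=torch.bfloat16
)
# Rebuild the scheduler from the existing config, overriding the flow-matching options.
pipe.scheduler = UniPCMultistepScheduler.from_config(
    pipe.scheduler.config,
    prediction_type="flow_prediction",
    use_flow_sigmas=True,
    flow_shift=5.0,  # placeholder: set this to the sample_shift value from the Wan 2.2 reference code
)
```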
|
https://github.com/huggingface/diffusers/issues/12094
|
closed
|
[] | 2025-08-07T11:37:36Z
| 2025-08-10T08:43:27Z
| 7
|
yvmilir
|
pytorch/torchtitan
| 1,543
|
Minimum number of GPUs needed to pretrain llama4_17bx16e - 8 ?
|
Going by the config files it would be 8 H100-class GPUs. Is 8 a reasonable number?
|
https://github.com/pytorch/torchtitan/issues/1543
|
closed
|
[] | 2025-08-06T23:54:05Z
| 2025-08-07T20:32:35Z
| 3
|
githubsgi
|
pytorch/tutorials
| 3,512
|
Redirect for prototype/ -> unstable/
|
### 🚀 Describe the improvement or the new tutorial
When I search ["flight recorder pytorch" on Google](https://www.google.com/search?q=pytorch+flight+recorder&sca_esv=56a8724cb68766c6&ei=_7yTaKLqN4ra5NoP38nhqAg&oq=pytorch+flight+recorder&gs_lp=Egxnd3Mtd2l6LXNlcnAiF3B5dG9yY2ggZmxpZ2h0IHJlY29yZGVyKgIIADIIEAAYgAQYsAMyCRAAGLADGAgYHjILEAAYsAMYCBgKGB4yDhAAGIAEGLADGIYDGIoFMg4QABiABBiwAxiGAxiKBTIOEAAYgAQYsAMYhgMYigUyCBAAGLADGO8FMggQABiwAxjvBTILEAAYgAQYsAMYogRIngtQAFgAcAF4AJABAJgBAKABAKoBALgBAcgBAJgCAaACA5gDAIgGAZAGCZIHATGgBwCyBwC4BwDCBwMyLTHIBwM&sclient=gws-wiz-serp)
The top link is https://docs.pytorch.org/tutorials/prototype/flight_recorder_tutorial.html
whereas now these tutorials are under https://docs.pytorch.org/tutorials/unstable/flight_recorder_tutorial.html
Can we add a redirect? I knew where the tutorial should be since I actively work on PyTorch, but for new users this will be confusing; also, when `prototype/` is in the URL path, the site doesn't have any CSS and is scary-looking.
### Existing tutorials on this topic
_No response_
### Additional context
_No response_
|
https://github.com/pytorch/tutorials/issues/3512
|
closed
|
[] | 2025-08-06T20:40:07Z
| 2025-08-07T18:07:35Z
| 2
|
H-Huang
|
huggingface/lerobot
| 1,687
|
When using AMP to train a model, why are the saved model weights still in fp32?
|
<img width="1668" height="95" alt="Image" src="https://github.com/user-attachments/assets/406a1879-f2f2-43c6-8341-8733873ee911" />
|
https://github.com/huggingface/lerobot/issues/1687
|
open
|
[
"question",
"policies"
] | 2025-08-06T12:42:40Z
| 2025-08-12T08:52:00Z
| null |
Hukongtao
|
huggingface/diffusers
| 12,084
|
Will `cosmos-transfer1` be supported in diffusers in the future?
|
Hi @a-r-r-o-w and @yiyixuxu :)
First of all, thank you for recently enabling cosmos-predict1 models (text2world and video2world) in the diffusers library — it's super exciting to see them integrated!
I was wondering if there are any plans to also support [cosmos-transfer1](https://github.com/nvidia-cosmos/cosmos-transfer1) in diffusers in the future?
Thanks again for your great work! 🙌
|
https://github.com/huggingface/diffusers/issues/12084
|
open
|
[] | 2025-08-06T11:22:28Z
| 2025-08-19T12:11:33Z
| 3
|
rebel-shshin
|
huggingface/lerobot
| 1,683
|
SmolVLMWithExpertModel
|
Excuse me, I would like to understand each module in this class, and how its inputs should be defined.
|
https://github.com/huggingface/lerobot/issues/1683
|
open
|
[
"question",
"policies"
] | 2025-08-06T10:30:21Z
| 2025-08-12T08:52:21Z
| null |
xjushengjie
|
huggingface/lerobot
| 1,674
|
How to train smolvla for multi-task
|
I have trained smolvla for aloha_sim_transfer_cube and aloha_sim_insertion, and smolvla performs well on each single task. Now I'd like to train smolvla for multi-task use: one model that can complete both tasks above. What should I do now?
|
https://github.com/huggingface/lerobot/issues/1674
|
closed
|
[] | 2025-08-06T02:40:01Z
| 2025-10-15T02:52:29Z
| null |
w673
|
huggingface/diffusers
| 12,079
|
API Suggestion: Expose Methods to Convert to Sample Prediction in Schedulers
|
**What API design would you like to have changed or added to the library? Why?**
My proposal is for schedulers to expose `convert_to_sample_prediction` and `convert_to_prediction_type` methods, which would do the following:
1. `convert_to_sample_prediction`: Converts from a given `prediction_type` to `sample_prediction` (e.g. $x_0$-prediction). This function would accept a `prediction_type` argument which defaults to `self.config.prediction_type`.
2. `convert_to_prediction_type`: Converts back from `sample_prediction` to the scheduler's `prediction_type`. This is intended to be the inverse function of `convert_to_sample_prediction`.
The motivating use case I have in mind is to support guidance strategies such as [Adaptive Projected Guidance (APG)](https://arxiv.org/abs/2410.02416) and [Frequency-Decoupled Guidance (FDG)](https://arxiv.org/abs/2506.19713) which prefer to operate with sample / $x_0$-predictions. A code example will be given below.
The reason I think schedulers should expose these methods explicitly is that performing these operations depend on the scheduler state and definition. For example, the prediction type conversion code in `EulerDiscreteScheduler` depends on the `self.sigmas` schedule:
https://github.com/huggingface/diffusers/blob/ba2ba9019f76fd96c532240ed07d3f98343e4041/src/diffusers/schedulers/scheduling_euler_discrete.py#L650-L663
As a possible alternative, code that uses a scheduler could instead try to infer the prediction type conversion logic from the presence of `alphas_cumprod` (for a DDPM-style conversion) or `sigmas` (for an EDM-style conversion) attributes. However, I think this is unreliable because a scheduler could use `alphas_cumprod` or `sigmas` in a non-standard way. Since schedulers essentially already implement the `convert_to_sample_prediction` logic in their `step` methods, I think it could be relatively easy to implement these methods, and calling code would not have to guess how to do the conversion.
A potential difficulty is ensuring that these methods work well with the `step` method, for example if they are called outside of a denoising loop (so internal state like `self.step_index` may not be properly initialized) or if the conversion can be non-deterministic (for example, when `gamma > 0` in `EulerDiscreteScheduler`).
**What use case would this enable or better enable? Can you give us a code example?**
The motivating use case is to support guidance strategies which prefer to operate with $x_0$-predictions. For this use case, we want to convert the denoising model prediction to `sample_prediction`, run the guider's `__call__` logic, and then convert back to the scheduler's `prediction_type` (as schedulers currently expect `model_outputs` in that `prediction_type`).
There may be other potential use cases as well that I haven't thought of.
As a concrete example, we can imagine modifying `EulerDiscreteScheduler` as follows:
```python
class EulerDiscreteScheduler(SchedulerMixin, ConfigMixin):
...
def convert_to_sample_prediction(
self,
model_output: torch.Tensor,
timestep: Union[float, torch.Tensor],
sample: torch.Tensor,
prediction_type: Optional[str] = None,
s_churn: float = 0.0,
s_tmin: float = 0.0,
s_tmax: float = float("inf"),
s_noise: float = 1.0,
generator: Optional[torch.Generator] = None,
) -> torch.Tensor:
if prediction_type is None:
prediction_type = self.config.prediction_type
# NOTE: there's a potential catch here if self.step_index isn't properly initialized
sigma = self.sigmas[self.step_index]
gamma = min(s_churn / (len(self.sigmas) - 1), 2**0.5 - 1) if s_tmin <= sigma <= s_tmax else 0.0
sigma_hat = sigma * (gamma + 1)
# NOTE: another potential problem is ensuring consistent computation with `step` if the conversion
# can be non-deterministic (as below)
if gamma > 0:
noise = randn_tensor(
model_output.shape, dtype=model_output.dtype, device=model_output.device, generator=generator
)
eps = noise * s_noise
sample = sample + eps * (sigma_hat**2 - sigma**2) ** 0.5
# Compute predicted original sample (x_0) from sigma-scaled predicted noise
# NOTE: "original_sample" should not be an expected prediction_type but is left in for
# backwards compatibility
if self.config.prediction_type == "original_sample" or self.config.prediction_type == "sample":
pred_original_sample = model_output
elif self.config.prediction_type == "epsilon":
pred_original_sample = sample - sigma_hat * model_output
elif self.config.prediction_type == "v_prediction":
# denoised = model_output * c_out + input * c_skip
pred_original_sample = model_output * (-sigma / (sigma**2 + 1) ** 0.5) + (sample / (sigma**2 + 1))
else:
raise Valu
|
https://github.com/huggingface/diffusers/issues/12079
|
open
|
[] | 2025-08-06T02:24:46Z
| 2025-08-06T02:24:46Z
| 0
|
dg845
|
huggingface/candle
| 3,047
|
Can the safetensor files from OpenAI's new gpt-oss-20b work with any existing setup?
|
Is the new gpt-oss-20b a totally different architecture or can I use an existing candle setup, swap out the files and start playing around with gpt-oss-20b?
|
https://github.com/huggingface/candle/issues/3047
|
open
|
[] | 2025-08-06T01:59:59Z
| 2025-08-06T02:01:52Z
| 1
|
zcourts
|
huggingface/diffusers
| 12,078
|
Problem with provided example validation input in the Flux Control finetuning example
|
### Describe the bug
The help page for the Flux control finetuning example, https://github.com/huggingface/diffusers/blob/main/examples/flux-control/README.md, provides a sample validation input, a pose condition image
[<img src="https://huggingface.co/api/resolve-cache/models/Adapter/t2iadapter/3c291e0547a1b17bed93428858cdc9b0265c26c7/openpose.png?%2FAdapter%2Ft2iadapter%2Fresolve%2Fmain%2Fopenpose.png=&etag=%2287cc79e12fe5a5bba31ac3098ee7837400b41ffa%22" width=256>]().
The pose-conditioned model trained by the script does not process this image properly because it is in BGR channel order, which is apparent when comparing it to the OpenPose spec:
[<img src="https://github.com/ArtificialShane/OpenPose/raw/master/doc/media/keypoints_pose.png" width=256>]().
It doesn't appear that the BGR channel order of the validation image is handled when it is loaded, in the line below:
https://github.com/huggingface/diffusers/blob/ba2ba9019f76fd96c532240ed07d3f98343e4041/examples/flux-control/train_control_lora_flux.py#L127.
In my personal experiments, the validation output does not make sense. Below is an example of what my run uploaded to wandb:
<img width="1310" height="698" alt="Image" src="https://github.com/user-attachments/assets/0edc3c88-cfa5-4fae-a6b1-295839136dba" />
### Reproduction
I ran the below in the command line:
```
accelerate launch --config_file=/mnt/localssd/huggingface/accelerate/deepspeed.yaml train_control_lora_flux.py \
--pretrained_model_name_or_path="black-forest-labs/FLUX.1-dev" \
--dataset_name="raulc0399/open_pose_controlnet" \
--output_dir="/mnt/localssd/pose-control-lora" \
--mixed_precision="bf16" \
--train_batch_size=1 \
--rank=64 \
--gradient_accumulation_steps=4 \
--gradient_checkpointing \
--use_8bit_adam \
--learning_rate=1e-4 \
--report_to="wandb" \
--lr_scheduler="constant" \
--lr_warmup_steps=0 \
--max_train_steps=5000 \
--validation_image="openpose.png" \
--validation_prompt="A couple, 4k photo, highly detailed" \
--seed="0" \
--cache_dir="/mnt/localssd/huggingface"
```
### Logs
```shell
```
### System Info
```
- 🤗 Diffusers version: 0.34.0
- Platform: Linux-5.10.223-212.873.amzn2.x86_64-x86_64-with-glibc2.35
- Running on Google Colab?: No
- Python version: 3.10.8
- PyTorch version (GPU?): 2.7.1+cu126 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Huggingface_hub version: 0.34.3
- Transformers version: 4.54.1
- Accelerate version: 1.9.0
- PEFT version: 0.17.0
- Bitsandbytes version: 0.46.1
- Safetensors version: 0.5.3
- xFormers version: not installed
- Accelerator: NVIDIA A100-SXM4-80GB, 81920 MiB
NVIDIA A100-SXM4-80GB, 81920 MiB
NVIDIA A100-SXM4-80GB, 81920 MiB
NVIDIA A100-SXM4-80GB, 81920 MiB
NVIDIA A100-SXM4-80GB, 81920 MiB
NVIDIA A100-SXM4-80GB, 81920 MiB
NVIDIA A100-SXM4-80GB, 81920 MiB
NVIDIA A100-SXM4-80GB, 81920 MiB
- Using GPU in script?: Yes.
- Using distributed or parallel set-up in script?: Yes.
```
### Who can help?
_No response_
|
https://github.com/huggingface/diffusers/issues/12078
|
open
|
[
"bug"
] | 2025-08-05T22:29:35Z
| 2025-08-07T08:47:45Z
| 1
|
kzhang2
|
huggingface/lerobot
| 1,672
|
How to resume training?
|
My old training settings:
```
# batch_size: 64
steps: 20000
# output_dir: outputs/train
```
In outputs/train/ there are an 020000 folder and a last folder; each has pretrained_model and training_state.
When I want to resume training, I read configs/train.py,
so I set
```
resume: true
output_dir: outputs/train/
# or output_dir: outputs/train/checkpoints/last/pretrained_model/
# or output_dir: outputs/train/checkpoints/last/pretrained_model/train_config.json
```
All of these produced this:
Traceback (most recent call last):
File "/miniconda3/envs/lerobot/lib/python3.10/runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "miniconda3/envs/lerobot/lib/python3.10/runpy.py", line 86, in _run_code
exec(code, run_globals)
File "//code/lerobot_diy/src/lerobot/scripts/train.py", line 394, in <module>
train()
File "/code/lerobot_diy/src/lerobot/configs/parser.py", line 225, in wrapper_inner
response = fn(cfg, *args, **kwargs)
File "/code/lerobot_diy/src/lerobot/scripts/train.py", line 215, in train
optimizer, lr_scheduler = make_optimizer_and_scheduler(cfg, policy)
File "//code/lerobot_diy/src/lerobot/optim/factory.py", line 38, in make_optimizer_and_scheduler
optimizer = cfg.optimizer.build(params)
AttributeError: 'NoneType' object has no attribute 'build'
How should the output dir be specified in the command?
Thanks!
|
https://github.com/huggingface/lerobot/issues/1672
|
closed
|
[] | 2025-08-05T14:57:32Z
| 2025-08-06T03:04:28Z
| null |
milong26
|
huggingface/transformers
| 39,921
|
[Gemma3N] Not able to add new special tokens to model/tokenizer due to projection error
|
### System Info
```
- transformers==4.54.1
- Platform: Linux-5.15.0-1084-aws-x86_64-with-glibc2.31
- Python version: 3.13
- TRL version: 0.19.1
- Huggingface_hub version: 0.33.4
- Safetensors version: 0.5.3
- Accelerate version: 1.9.0
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (accelerator?): 2.7.1+cu126 (CUDA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA H100 80GB HBM3
```
Hi,
The transformers model class for `gemma-3n` has issues as below (pasting stack trace):
```
trainer.train()
~~~~~~~~~~~~~^^
File "/teamspace/studios/this_studio/.venv/lib/python3.13/site-packages/transformers/trainer.py", line 2237, in train
return inner_training_loop(
args=args,
...<2 lines>...
ignore_keys_for_eval=ignore_keys_for_eval,
)
File "/teamspace/studios/this_studio/.venv/lib/python3.13/site-packages/transformers/trainer.py", line 2578, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs, num_items_in_batch)
File "/teamspace/studios/this_studio/.venv/lib/python3.13/site-packages/trl/trainer/sft_trainer.py", line 914, in training_step
return super().training_step(*args, **kwargs)
~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
File "/teamspace/studios/this_studio/.venv/lib/python3.13/site-packages/transformers/trainer.py", line 3792, in training_step
loss = self.compute_loss(model, inputs, num_items_in_batch=num_items_in_batch)
File "/teamspace/studios/this_studio/.venv/lib/python3.13/site-packages/trl/trainer/sft_trainer.py", line 868, in compute_loss
(loss, outputs) = super().compute_loss(
~~~~~~~~~~~~~~~~~~~~^
model, inputs, return_outputs=True, num_items_in_batch=num_items_in_batch
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/teamspace/studios/this_studio/.venv/lib/python3.13/site-packages/transformers/trainer.py", line 3879, in compute_loss
outputs = model(**inputs)
File "/teamspace/studios/this_studio/.venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
File "/teamspace/studios/this_studio/.venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
File "/teamspace/studios/this_studio/.venv/lib/python3.13/site-packages/accelerate/utils/operations.py", line 818, in forward
return model_forward(*args, **kwargs)
File "/teamspace/studios/this_studio/.venv/lib/python3.13/site-packages/accelerate/utils/operations.py", line 806, in __call__
return convert_to_fp32(self.model_forward(*args, **kwargs))
~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
File "/teamspace/studios/this_studio/.venv/lib/python3.13/site-packages/torch/amp/autocast_mode.py", line 44, in decorate_autocast
return func(*args, **kwargs)
File "/teamspace/studios/this_studio/.venv/lib/python3.13/site-packages/peft/peft_model.py", line 1850, in forward
return self.base_model(
~~~~~~~~~~~~~~~^
input_ids=input_ids,
^^^^^^^^^^^^^^^^^^^^
...<6 lines>...
**kwargs,
^^^^^^^^^
)
^
File "/teamspace/studios/this_studio/.venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
File "/teamspace/studios/this_studio/.venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
File "/teamspace/studios/this_studio/.venv/lib/python3.13/site-packages/peft/tuners/tuners_utils.py", line 222, in forward
return self.model.forward(*args, **kwargs)
~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
File "/teamspace/studios/this_studio/.venv/lib/python3.13/site-packages/transformers/utils/generic.py", line 961, in wrapper
output = func(self, *args, **kwargs)
File "/teamspace/studios/this_studio/.venv/lib/python3.13/site-packages/transformers/models/gemma3n/modeling_gemma3n.py", line 2276, in forward
outputs = self.model(
input_ids=input_ids,
...<14 lines>...
**lm_kwargs,
)
File "/teamspace/studios/this_studio/.venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
File "/teamspace/studios/this_studio/.venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
File "/teamspace/studios/this_studio/.venv/lib/python3.13
|
https://github.com/huggingface/transformers/issues/39921
|
open
|
[
"Usage",
"Good Second Issue",
"bug"
] | 2025-08-05T14:43:37Z
| 2025-08-19T19:37:39Z
| 14
|
debasisdwivedy
|
huggingface/transformers
| 39,910
|
Question: Llama4 weight reshaping
|
Hi all
I am trying to extract the original Llama4 MoE weights, specifically:
- `experts.w1` (aka `experts.moe_w_in_eD_F`)
- `experts.w3` (aka `experts.moe_w_swiglu_eD_F`)
I need both of these in the shape `[E, D, N]`, where:
- E is the number of experts (16 for Scout)
- D is the embedding dimension (5120)
- N is the intermediate dimension (8192)
I tried just splitting `experts.gate_up_proj` in half along the last dimension to get w1 and w3, but although the dimensions match, the model is outputting nonsense, so I assume the actual order of the weights is wrong.
Could someone help me make sense of this snippet (from `convert_llama4_weights_to_hf`)?
Why is this hard-coded indexing/reshaping being done, and do you have any suggestions for how to get the original weights back?
```python
elif re.search(r"(gate|up)_proj", new_key):
path = new_key.split(".")
gate_key = re.sub(r"(gate|up)_proj", lambda m: "gate_proj", new_key)
up_key = re.sub(r"(gate|up)_proj", lambda m: "up_proj", new_key)
if gate_key == new_key:
state_dict[new_key] = torch.cat(current_parameter, dim=concat_dim)
elif new_key == up_key:
if "experts" not in new_key:
state_dict[new_key] = torch.cat(current_parameter, dim=concat_dim)
else:
# gate_proj = moe_w_in_eD_F = w1
gate_proj = state_dict.pop(gate_key)
gate_proj = [
gate_proj.reshape(num_experts, -1, 8, 1024)[:, :, k, :].reshape(num_experts, -1, 1024)
for k in range(8)
]
gate_proj = torch.cat(gate_proj, dim=-1)
# up_proj = moe_w_swiglu_eD_F = w3
up_proj = [
k.reshape(num_experts, -1, 8, 1024).reshape(num_experts, -1, 1024)
for k in current_parameter
]
up_proj = torch.cat(up_proj, dim=-1)
gate_up_proj = torch.cat((gate_proj, up_proj), dim=-1)
new_key = new_key.replace("up_proj", "gate_up_proj")
state_dict[new_key] = gate_up_proj.contiguous()
tqdm.write(f"Processing: {key.ljust(50)} ->\t {new_key}, {state_dict[new_key].shape}")
```
Thank you!
|
https://github.com/huggingface/transformers/issues/39910
|
closed
|
[] | 2025-08-05T10:19:25Z
| 2025-08-13T09:35:52Z
| 0
|
gskorokhod
|
huggingface/datasets
| 7,724
|
Cannot step into load_dataset.py?
|
I set a breakpoint in "load_dataset.py" and tried to debug my data loading code, but it does not stop at any breakpoint. Can "load_dataset.py" not be stepped into?
|
https://github.com/huggingface/datasets/issues/7724
|
open
|
[] | 2025-08-05T09:28:51Z
| 2025-08-05T09:28:51Z
| 0
|
micklexqg
|
huggingface/lerobot
| 1,670
|
How does LeRobot address the issue of training on heterogeneous datasets?
|
Specifically, suppose I have a dataset A and dataset B. In dataset A, both the state and action are represented as (x, y, z, gripper), where x, y, and z denote the distances moved along the x, y, and z axes, respectively, and gripper represents the on/off state of the gripper. In dataset B, both the state and action are the angles of the corresponding joints of the robotic arm. How can I use these two datasets together for training?
|
https://github.com/huggingface/lerobot/issues/1670
|
open
|
[
"question",
"processor"
] | 2025-08-05T08:20:08Z
| 2025-08-12T09:01:57Z
| null |
mahao18cm
|
huggingface/lerobot
| 1,667
|
How many episodes to get a good result with SmolVLA
|
### System Info
```Shell
Hello, I'm trying to do a simple task, like a dual-hand pick of a banana into a basket, using SmolVLA. May I know how many episodes I should train on to get a good result?
Many thanks
Julien
```
### Reproduction
I've used 100 episodes for training; it looks like the arm cannot pick the banana accurately. Sometimes the arms just hover above the top of the banana.
### Expected behavior
The left hand picks the banana and hands it to the right hand, then the right hand puts the banana into the basket.
|
https://github.com/huggingface/lerobot/issues/1667
|
closed
|
[
"question",
"policies"
] | 2025-08-05T05:12:12Z
| 2025-10-17T11:27:14Z
| null |
chejulien
|
pytorch/torchtitan
| 1,527
|
Any model fp8 training
|
### Bug description
Do you have plans to extend training beyond Llama and DeepSeek to any model from the Hugging Face transformers library? I've seen an issue where a user asked about Qwen, but in recent days other companies have announced excellent MoE models with weights and configs on Hugging Face, and it would be great to train them using torchtitan.
### Versions
Latest versions
|
https://github.com/pytorch/torchtitan/issues/1527
|
closed
|
[
"question"
] | 2025-08-05T00:21:01Z
| 2025-08-05T22:46:15Z
| null |
pizzaball
|
pytorch/torchtitan
| 1,525
|
Transformer is running with float32 instead of bfloat16 !
|
### Bug description
Modified the Llama3 model.py to print dtypes as follows and ran just 1 rank.
```
def forward(
self,
tokens: torch.Tensor,
eos_id: int | None = None,
input_batch: torch.Tensor | None = None,
):
"""
Perform a forward pass through the Transformer model.
Args:
tokens (torch.Tensor): Input token indices if pipeline parallelism is not enabled.
If pipeline parallelism is enabled, this will be the input token indices
for the ranks on the first pipeline stage. This will be the activation of the
previous pipeline stage if the current rank is not on the first stage.
input_batch (torch.Tensor): The input batch read from the dataloader.
This will always be the input batch regardless of the pipeline stage.
This field is required for non-first PP stages to perform document
masking attention (to analyze the boundary of the document).
Returns:
torch.Tensor: Output logits after applying the Transformer model.
"""
if self.model_args.use_flex_attn:
init_attention_mask(
input_batch if input_batch is not None else tokens, eos_id=eos_id
)
print (f"tokens.dtype {tokens.dtype}")
# passthrough for nonexistent layers, allows easy configuration of pipeline parallel stages
h = self.tok_embeddings(tokens) if self.tok_embeddings else tokens
print (f"h.dtype {h.dtype}")
for layer in self.layers.values():
h = layer(h, self.freqs_cis)
print (f"h.dtype {h.dtype}")
h = self.norm(h) if self.norm else h
print (f"h.dtype {h.dtype}")
output = self.output(h) if self.output else h
print (f"output.dtype {h.dtype}")
return output
```
Seeing only float32 datatypes as follows.
```
tokens.dtype torch.int64
h.dtype torch.float32
h.dtype torch.float32
h.dtype torch.float32
h.dtype torch.float32
h.dtype torch.float32
h.dtype torch.float32
h.dtype torch.float32
h.dtype torch.float32
output.dtype torch.float32
```
The config is:
`model.toml', 'dump_folder': './outputs', 'description': 'Llama 3 debug training', 'use_for_integration_test': True, 'print_args': True}, 'profiling': {'enable_profiling': False, 'save_traces_folder': 'profile_trace', 'profile_freq': 10, 'enable_memory_snapshot': False, 'save_memory_snapshot_folder': 'memory_snapshot'}, 'metrics': {'log_freq': 1, 'enable_tensorboard': False, 'disable_color_printing': False, 'save_tb_folder': 'tb', 'save_for_all_ranks': False, 'enable_wandb': False}, 'model': {'name': 'llama3', 'flavor': 'debugmodel', 'tokenizer_path': './tests/assets/tokenizer', 'converters': [], 'print_after_conversion': False}, 'optimizer': {'name': 'AdamW', 'lr': 0.0008, 'beta1': 0.9, 'beta2': 0.95, 'eps': 1e-08, 'weight_decay': 0.1, 'implementation': 'fused', 'early_step_in_backward': False}, 'lr_scheduler': {'warmup_steps': 2, 'decay_ratio': 0.8, 'decay_type': 'linear', 'min_lr_factor': 0.0}, 'training': {'dataset': 'c4_test', 'dataset_path': None, 'local_batch_size': 8, 'global_batch_size': -1, 'seq_len': 2048, 'max_norm': 1.0, 'steps': 10, 'enable_cpu_offload': False, 'mixed_precision_param': 'bfloat16', 'mixed_precision_reduce': 'float32', 'compile': False, 'gc_freq': 50, 'gc_debug': False, 'seed': None, 'deterministic': False}, 'parallelism': {'data_parallel_replicate_degree': 1, 'enable_compiled_autograd': False, 'data_parallel_shard_degree': -1, 'fsdp_reshard_after_forward': 'default', 'tensor_parallel_degree': 1, 'disable_loss_parallel': False, 'enable_async_tensor_parallel': False, 'pipeline_parallel_degree': 1, 'pipeline_parallel_split_points': [], 'module_fqns_per_model_part': None, 'pipeline_parallel_first_stage_less_layers': 1, 'pipeline_parallel_last_stage_less_layers': 1, 'pipeline_parallel_layers_per_stage': None, 'pipeline_parallel_schedule': '1F1B', 'pipeline_parallel_schedule_csv': '', 'pipeline_parallel_microbatch_size': 1, 'context_parallel_degree': 1, 'context_parallel_rotate_method': 'allgather', 'expert_parallel_degree': 1}, 'checkpoint': {'enable_checkpoint': False, 'folder': 'checkpoint', 'interval': 10, 'initial_load_path': None, 'initial_load_model_only': True, 'initial_load_in_hf': False, 'last_save_model_only': False, 'last_save_in_hf': False, 'export_dtype': 'float32', 'async_mode': 'disabled', 'keep_latest_k': 10, 'load_step': -1, 'exclude_from_loading': [], 'enable_first_step_checkpoint': False, 'create_seed_checkpoint': False}, 'activation_checkpoint': {'mode': 'selective', 'selective_ac_option': '2', 'per_op_sac_force_recompute_mm_shapes_by_fqns': ['moe.router.gate']}, 'float8': {'enable_fsdp_float8_all_gather': False, 'precompute_float8_dynamic_scale_for_fsdp': False, 'recipe_name': None, 'filter_fqns': ['output'], 'emulate': False, 'moe_fqns_prototype': []}, 'mx': {'mxfp8_dim1_cast_kernel_choice': '
|
https://github.com/pytorch/torchtitan/issues/1525
|
open
|
[
"question"
] | 2025-08-04T22:37:20Z
| 2025-08-14T21:25:04Z
| null |
githubsgi
|
huggingface/lerobot
| 1,666
|
Please add multi gpu training support
|
MultiGPU training currently does not work with lerobot as mentioned here https://github.com/huggingface/lerobot/issues/1377
Please add this support.
|
https://github.com/huggingface/lerobot/issues/1666
|
closed
|
[
"enhancement",
"question",
"policies"
] | 2025-08-04T18:06:40Z
| 2025-10-17T09:53:59Z
| null |
nahidalam
|
huggingface/lerobot
| 1,663
|
No way to train on subset of features
|
Currently, when loading a policy from a config.json, the input_features seem to be ignored and re-generated from the dataset provided. However, it may not always be desirable to train on all features, perhaps if I have multiple camera views but I only want to train on one.
I would prefer that config.json features are not overwritten, but this would be a breaking change. Do you have suggestions on how we could implement this behavior?
|
https://github.com/huggingface/lerobot/issues/1663
|
open
|
[
"question",
"policies",
"processor"
] | 2025-08-04T15:19:35Z
| 2025-08-12T09:03:47Z
| null |
atyshka
|
pytorch/tutorials
| 3,507
|
Feedback about Optimizing Model Parameters Page
|
There is the following issue on this page: https://docs.pytorch.org/tutorials/beginner/basics/optimization_tutorial.html
Within the section [Full implementation](https://docs.pytorch.org/tutorials/beginner/basics/optimization_tutorial.html#full-implementation), the loop does not contain the `zero_grad` function on top of the backward propagation block as is recommended in the paragraph preceding this section.
Actual code:
```python
# Backpropagation
loss.backward()
optimizer.step()
optimizer.zero_grad()
```
Recommended code:
```python
optimizer.zero_grad()
loss.backward()
optimizer.step()
```
If you could tell me how to make this change in the documentation, I would be glad to do it.
|
https://github.com/pytorch/tutorials/issues/3507
|
open
|
[] | 2025-08-04T14:50:13Z
| 2025-08-04T14:50:13Z
| 0
|
madhaven
|
huggingface/diffusers
| 12,060
|
Is there any DiT block defined in the huggingface/diffusers OR huggingface/transformers project?
|
**Is your feature request related to a problem? Please describe.**
I want to run some experiments with a DiT-based flow-matching model and need an implementation of the common DiT block, but I did not find one in either huggingface/diffusers or huggingface/transformers. Is there an implementation of it under some other file name?
**Describe the solution you'd like.**
A clear DiT implementation
**Describe alternatives you've considered.**
**Additional context.**
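For reference, here is a minimal sketch of what I mean by "the common DiT block" — a transformer block with adaLN-Zero conditioning as described in the DiT paper. This is my own illustrative implementation, not code taken from either library:
```python
import torch
import torch.nn as nn


class DiTBlock(nn.Module):
    """A standard DiT block with adaLN-Zero conditioning (illustrative sketch)."""

    def __init__(self, hidden_size: int, num_heads: int, mlp_ratio: float = 4.0):
        super().__init__()
        self.norm1 = nn.LayerNorm(hidden_size, elementwise_affine=False, eps=1e-6)
        self.attn = nn.MultiheadAttention(hidden_size, num_heads, batch_first=True)
        self.norm2 = nn.LayerNorm(hidden_size, elementwise_affine=False, eps=1e-6)
        mlp_hidden = int(hidden_size * mlp_ratio)
        self.mlp = nn.Sequential(
            nn.Linear(hidden_size, mlp_hidden),
            nn.GELU(approximate="tanh"),
            nn.Linear(mlp_hidden, hidden_size),
        )
        # adaLN-Zero: regress shift/scale/gate for attention and MLP from the conditioning vector;
        # zero-init so each block starts out as the identity function.
        self.adaLN_modulation = nn.Sequential(nn.SiLU(), nn.Linear(hidden_size, 6 * hidden_size))
        nn.init.zeros_(self.adaLN_modulation[-1].weight)
        nn.init.zeros_(self.adaLN_modulation[-1].bias)

    def forward(self, x: torch.Tensor, c: torch.Tensor) -> torch.Tensor:
        # x: (batch, tokens, hidden); c: (batch, hidden) timestep/class conditioning
        shift_msa, scale_msa, gate_msa, shift_mlp, scale_mlp, gate_mlp = self.adaLN_modulation(c).chunk(6, dim=-1)
        h = self.norm1(x) * (1 + scale_msa.unsqueeze(1)) + shift_msa.unsqueeze(1)
        attn_out, _ = self.attn(h, h, h, need_weights=False)
        x = x + gate_msa.unsqueeze(1) * attn_out
        h = self.norm2(x) * (1 + scale_mlp.unsqueeze(1)) + shift_mlp.unsqueeze(1)
        return x + gate_mlp.unsqueeze(1) * self.mlp(h)


block = DiTBlock(hidden_size=384, num_heads=6)
x = torch.randn(2, 256, 384)   # patch tokens
c = torch.randn(2, 384)        # timestep embedding
print(block(x, c).shape)       # torch.Size([2, 256, 384])
```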
|
https://github.com/huggingface/diffusers/issues/12060
|
open
|
[] | 2025-08-04T09:40:43Z
| 2025-08-04T10:19:00Z
| 2
|
JohnHerry
|
pytorch/xla
| 9,537
|
What are some large model use cases for torch-xla?
|
## ❓ Questions and Help
I’ve observed that torch-xla has been actively developed for GPU support recently. Are there any benchmark comparisons between torch-xla and standard PyTorch, particularly for large-scale model training? Additionally, regarding frameworks such as Megatron-LM, is there any plan for official support within torch-xla moving forward?
|
https://github.com/pytorch/xla/issues/9537
|
closed
|
[
"question",
"xla:gpu"
] | 2025-08-04T09:04:32Z
| 2025-08-06T08:24:30Z
| null |
south-ocean
|
huggingface/diffusers
| 12,052
|
Wan 2.2 with LightX2V offloading tries to multiply tensors from different devices and fails
|
### Describe the bug
After @sayakpaul's great work in https://github.com/huggingface/diffusers/pull/12040, LightX2V now works. However, what doesn't work is adding both a LoRA and offloading to `transformer_2`. I can get away with either (i.e. offload both transformers but add a LoRA only to `transformer` and NOT to `transformer_2`, OR offload just `transformer` and add a LoRA to both `transformer_2` and `transformer`).
However, offloading `transformer_2` is quite important: keeping it resident uses 2x the VRAM, so even a Q4_K_S model with LightX2V uses >24 GB VRAM (as opposed to <9 GB VRAM in ComfyUI).
### Reproduction
The script is the same as the one posted by Paul in the #12040 PR with the addition of offloading
```python
import torch
from diffusers import WanImageToVideoPipeline
from huggingface_hub import hf_hub_download
import requests
from PIL import Image
from diffusers.loaders.lora_conversion_utils import _convert_non_diffusers_wan_lora_to_diffusers
from io import BytesIO
import safetensors.torch
# Load a basic transformer model
pipe = WanImageToVideoPipeline.from_pretrained(
"Wan-AI/Wan2.2-I2V-A14B-Diffusers",
torch_dtype=torch.bfloat16
)
lora_path = hf_hub_download(
repo_id="Kijai/WanVideo_comfy",
filename="Lightx2v/lightx2v_I2V_14B_480p_cfg_step_distill_rank128_bf16.safetensors"
)
# This is what is different: group offloading (devices defined here so the snippet runs standalone)
onload_device = torch.device("cuda")
offload_device = torch.device("cpu")
pipe.vae.enable_group_offload(onload_device=onload_device, offload_device=offload_device, offload_type="leaf_level")
pipe.transformer.enable_group_offload(onload_device=onload_device, offload_device=offload_device, offload_type="leaf_level")
# Without the next line it works but uses 2x the VRAM
pipe.transformer_2.enable_group_offload(onload_device=onload_device, offload_device=offload_device, offload_type="leaf_level")
pipe.text_encoder.enable_group_offload(onload_device=onload_device, offload_device=offload_device, offload_type="leaf_level")
pipe.to("cuda")
pipe.load_lora_weights(lora_path)
# print(pipe.transformer.__class__.__name__)
# print(pipe.transformer.peft_config)
org_state_dict = safetensors.torch.load_file(lora_path)
converted_state_dict = _convert_non_diffusers_wan_lora_to_diffusers(org_state_dict)
pipe.transformer_2.load_lora_adapter(converted_state_dict)
image_url = "https://cloud.inference.sh/u/4mg21r6ta37mpaz6ktzwtt8krr/01k1g7k73eebnrmzmc6h0bghq6.png"
response = requests.get(image_url)
input_image = Image.open(BytesIO(response.content)).convert("RGB")
frames = pipe(input_image, "animate", num_inference_steps=4, guidance_scale=1.0)
```
### Logs
```shell
[t+1m44s256ms] [ERROR] Traceback (most recent call last):
[t+1m44s256ms] File "/server/tasks.py", line 50, in run_task
[t+1m44s256ms] output = await result
[t+1m44s256ms] ^^^^^^^^^^^^
[t+1m44s256ms] File "/inferencesh/apps/gpu/65b8e0w0x60df8we0x6njqx9kc/src/inference.py", line 424, in run
[t+1m44s256ms] output = self.pipe(
[t+1m44s256ms] ^^^^^^^^^^
[t+1m44s256ms] File "/inferencesh/apps/gpu/65b8e0w0x60df8we0x6njqx9kc/venv/3.12/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
[t+1m44s256ms] return func(*args, **kwargs)
[t+1m44s256ms] ^^^^^^^^^^^^^^^^^^^^^
[t+1m44s256ms] File "/inferencesh/apps/gpu/65b8e0w0x60df8we0x6njqx9kc/venv/3.12/lib/python3.12/site-packages/diffusers/pipelines/wan/pipeline_wan_i2v.py", line 754, in __call__
[t+1m44s256ms] noise_pred = current_model(
[t+1m44s256ms] ^^^^^^^^^^^^^^
[t+1m44s256ms] File "/inferencesh/apps/gpu/65b8e0w0x60df8we0x6njqx9kc/venv/3.12/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
[t+1m44s256ms] return self._call_impl(*args, **kwargs)
[t+1m44s256ms] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[t+1m44s256ms] File "/inferencesh/apps/gpu/65b8e0w0x60df8we0x6njqx9kc/venv/3.12/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
[t+1m44s256ms] return forward_call(*args, **kwargs)
[t+1m44s256ms] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[t+1m44s256ms] File "/inferencesh/apps/gpu/65b8e0w0x60df8we0x6njqx9kc/venv/3.12/lib/python3.12/site-packages/diffusers/hooks/hooks.py", line 189, in new_forward
[t+1m44s256ms] output = function_reference.forward(*args, **kwargs)
[t+1m44s256ms] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[t+1m44s256ms] File "/inferencesh/apps/gpu/65b8e0w0x60df8we0x6njqx9kc/venv/3.12/lib/python3.12/site-packages/diffusers/models/transformers/transformer_wan.py", line 639, in forward
[t+1m44s256ms] temb, timestep_proj, encoder_hidden_states, encoder_hidden_states_image = self.condition_embedder(
[t+1m44s256ms] ^^^^^^^^^^^^^^^^^^^^^^^^
[t+1m44s256ms] File "/inferencesh/apps/gpu/65b8e0w0x60df8we0x6njqx9kc/venv/3.12/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
[t+1m44s256ms] return self._
|
https://github.com/huggingface/diffusers/issues/12052
|
closed
|
[
"bug"
] | 2025-08-03T12:43:13Z
| 2025-08-11T15:53:41Z
| 4
|
luke14free
|
pytorch/tutorials
| 3,506
|
Feedback about Running Tutorials in Google Colab (在 Google Colab 中运行教程)
|
There is the following issue on this page: https://docs.pytorch.org/tutorials/beginner/colab.html
The content on this page clearly shows how to upload or download your dataset to your Google Drive or your desktop.
|
https://github.com/pytorch/tutorials/issues/3506
|
open
|
[] | 2025-08-03T03:51:11Z
| 2025-12-09T19:11:27Z
| 1
|
KevinAllen66
|
huggingface/peft
| 2,699
|
UserWarning: Found missing adapter keys while loading the checkpoint
|
I have been fine-tuning different LLMs (mainly the Llama family) since last year and use peft with a LoRA config all the time with no issues.
Just recently I was fine-tuning Llama 70B on multiple GPUs using accelerate, then saving the adapter once training is done. (This has always been my setup since last year.)
However, now I want to load the adapter into the base model as follows:
```
base_model = AutoModelForCausalLM.from_pretrained(model_id, dtype= torch.float16, device_map = 'auto', attn_implementation = 'flash_attention_2')
model = PeftModel.from_pretrained(base_model, adapter_path)
```
Now I am getting this warning:
```
UserWarning: Found missing adapter keys while loading the checkpoint:
```
Then it lists some LoRA weights. I tried changing the LoraConfig parameters, but the problem still persists.
Can anyone please tell me what the issue is here and how to fix it?
I am using the latest versions of peft, transformers, accelerate, and trl.
Note: I am also using the same model format during training and inference.
I have already looked at the following issue, which seems to be the same problem, but I load my model using AutoModelForCausalLM in both cases:
https://github.com/huggingface/peft/issues/2566
Note: This is the warning: `base_model.model.model.layers.0.self_attn.q_proj.lora_A.default.weight, base_model.model.model.layers.0.self_attn.q_proj.lora_B.default.weight, base_model.model.model.layers.0.self_attn.k_proj.lora_A.default.weight, base_model.model.model.layers.0.self_attn.k_proj.lora_B.default.weight`, ...
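For debugging, here is a rough diagnostic sketch that compares the saved adapter keys with the LoRA parameter names the wrapped model expects (the adapter path and base model id below are placeholders):
```python
import torch
from safetensors.torch import load_file
from transformers import AutoModelForCausalLM
from peft import PeftModel

adapter_path = "path/to/adapter"            # placeholder: directory containing adapter_model.safetensors
base_id = "meta-llama/Meta-Llama-3-70B"     # placeholder base model id

# Keys actually stored in the adapter checkpoint.
ckpt_keys = set(load_file(f"{adapter_path}/adapter_model.safetensors").keys())

base_model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_path)

# LoRA parameter names the wrapped model expects; strip the ".default" adapter name,
# since checkpoint keys are saved without it.
expected = {n.replace(".default", "") for n, _ in model.named_parameters() if "lora_" in n}

print("expected but missing from checkpoint:", sorted(expected - ckpt_keys)[:5])
print("in checkpoint but not expected:", sorted(ckpt_keys - expected)[:5])
```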
|
https://github.com/huggingface/peft/issues/2699
|
closed
|
[] | 2025-08-02T20:49:31Z
| 2025-11-09T15:03:46Z
| 41
|
manitadayon
|
pytorch/tutorials
| 3,505
|
Why am I 2:4 sparse slower than dense in the decode stage of LLaMA2‑7B?
|
## Description
Hi
<img width="1000" height="800" alt="Image" src="https://github.com/user-attachments/assets/0e08ab66-423a-4ef0-a876-8e6e735affad" />
As shown in the figure, during the decoding phase the 2:4 sparsity model is about 12% slower than the dense model. My questions are as follows:
- Is the decode phase dominated by GEMV / small‑N GEMM operations, which therefore cannot trigger the 2:4 sparse Tensor Core path?
- Even so, why is the 2:4 sparsity model slower than the dense model?
- If we increase N>1 (e.g., batch multiple requests or generate multiple tokens at once so it becomes a GEMM), can we observe measurable 2:4 sparsity speed-up? (A rough benchmark sketch for this is shown after the list.)
- Are there any sparse kernels or recommended practices for GEMV (matrix‑vector) that can take advantage of 2:4 sparsity?
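To make the N>1 question concrete, here is a rough micro-benchmark sketch using PyTorch's semi-structured sparsity API (shapes, iteration counts, and the N values are illustrative; I use N ≥ 8 on the assumption that the sparse kernels impose alignment constraints on the dense operand, which is part of why true GEMV is problematic):
```python
import torch
from torch.sparse import to_sparse_semi_structured

device = "cuda"
K = 4096

# Weight with a 2:4 pattern along the reduction (K) dimension: keep 2 of every 4 entries per row.
W = torch.randn(K, K, dtype=torch.float16, device=device)
mask = torch.tensor([1, 1, 0, 0], dtype=torch.bool, device=device).repeat(K, K // 4)
W_pruned = W * mask
W_sparse = to_sparse_semi_structured(W_pruned)

def bench(fn, iters=50):
    for _ in range(5):          # warmup
        fn()
    torch.cuda.synchronize()
    start, end = torch.cuda.Event(enable_timing=True), torch.cuda.Event(enable_timing=True)
    start.record()
    for _ in range(iters):
        fn()
    end.record()
    torch.cuda.synchronize()
    return start.elapsed_time(end) / iters  # ms per call

# n = 8 approximates the decode regime (tiny GEMM); larger n approximates prefill / batched decode.
for n in (8, 64, 256):
    x = torch.randn(K, n, dtype=torch.float16, device=device)
    t_dense = bench(lambda: torch.mm(W_pruned, x))
    t_sparse = bench(lambda: torch.mm(W_sparse, x))
    print(f"n={n:4d}  dense {t_dense:.3f} ms   2:4 sparse {t_sparse:.3f} ms")
```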
## Environment
NVIDIA GeForce RTX 4090, 8.9, P2
=== Python / OS ===
3.11.13 Linux-6.5.0-18-generic-x86_64-with-glibc2.35
=== PyTorch / CUDA / cuDNN ===
torch: 2.2.2+cu121
cuda: 12.1
cudnn: 8902
device: NVIDIA GeForce RTX 4090
sm capability: (8, 9)
=== cuBLASLt ===
cuBLASLt version: 0
=== TensorRT ===
TensorRT not installed
[2to4_sparsity.zip](https://github.com/user-attachments/files/21557839/2to4_sparsity.zip)
Thanks!
|
https://github.com/pytorch/tutorials/issues/3505
|
closed
|
[
"question"
] | 2025-08-02T03:44:06Z
| 2025-08-09T03:14:49Z
| null |
wang-qitong
|
huggingface/diffusers
| 12,044
|
AttributeError: 'bool' object has no attribute '__module__'. Did you mean: '__mod__'?
|
I am training the Flux.1-dev model and get this error. I found a suggested solution of downgrading diffusers to version 0.21.0, but then it would conflict with some other libraries. Is there any solution for this?
```
Traceback (most recent call last):
File "/home/quyetnv/t2i/ai-toolkit/run.py", line 120, in <module>
main()
File "/home/quyetnv/t2i/ai-toolkit/run.py", line 108, in main
raise e
File "/home/quyetnv/t2i/ai-toolkit/run.py", line 96, in main
job.run()
File "/home/quyetnv/t2i/ai-toolkit/jobs/ExtensionJob.py", line 22, in run
process.run()
File "/home/quyetnv/t2i/ai-toolkit/jobs/process/BaseSDTrainProcess.py", line 1518, in run
self.sd.load_model()
File "/home/quyetnv/t2i/ai-toolkit/toolkit/stable_diffusion_model.py", line 788, in load_model
pipe: Pipe = Pipe(
File "/home/quyetnv/.venv/lib/python3.10/site-packages/diffusers/pipelines/flux/pipeline_flux.py", line 197, in __init__
self.register_modules(
File "/home/quyetnv/.venv/lib/python3.10/site-packages/diffusers/pipelines/pipeline_utils.py", line 212, in register_modules
library, class_name = _fetch_class_library_tuple(module)
File "/home/quyetnv/.venv/lib/python3.10/site-packages/diffusers/pipelines/pipeline_loading_utils.py", line 877, in _fetch_class_library_tuple
library = not_compiled_module.__module__.split(".")[0]
AttributeError: 'bool' object has no attribute '__module__'. Did you mean: '__mod__'?
```
My diffusers version, installed from the ai-toolkit requirements, is 0.35.0.dev3.
|
https://github.com/huggingface/diffusers/issues/12044
|
closed
|
[] | 2025-08-02T01:37:30Z
| 2025-08-21T01:27:19Z
| 3
|
qngv
|
pytorch/torchtitan
| 1,515
|
MiCS (Mixture of Communicators for Scaling)
|
Wondering if MiCS (Mixture of Communicators for Scaling) has been considered as a feature in TorchTitan. Would appreciate thoughts on the topic.
|
https://github.com/pytorch/torchtitan/issues/1515
|
closed
|
[
"question"
] | 2025-08-01T22:15:13Z
| 2025-08-05T19:55:36Z
| null |
githubsgi
|
huggingface/optimum
| 2,333
|
Support for exporting t5gemma-2b-2b-prefixlm-it to onnx
|
### Feature request
I’ve tried to export t5gemma-2b-2b-prefixlm-it to ONNX using optimum, but it outputs: ValueError: Trying to export a t5gemma model, that is a custom or unsupported architecture, but no custom onnx configuration was passed as `custom_onnx_configs`. Please refer to https://huggingface.co/docs/optimum/main/en/exporters/onnx/usage_guides/export_a_model#custom-export-of-transformers-models for an example on how to export custom models. Please open an issue at https://github.com/huggingface/optimum/issues if you would like the model type t5gemma to be supported natively in the ONNX export.
Task: "text2text-generation"
### Motivation
I’ve tried, but nothing works...
### Your contribution
config.json
{
"architectures": [
"T5GemmaForConditionalGeneration"
],
"classifier_dropout_rate": 0.0,
"decoder": {
"attention_bias": false,
"attention_dropout": 0.0,
"attn_logit_softcapping": 50.0,
"classifier_dropout_rate": 0.0,
"cross_attention_hidden_size": 2304,
"dropout_rate": 0.0,
"final_logit_softcapping": 30.0,
"head_dim": 256,
"hidden_activation": "gelu_pytorch_tanh",
"hidden_size": 2304,
"initializer_range": 0.02,
"intermediate_size": 9216,
"is_decoder": true,
"layer_types": [
"sliding_attention",
"full_attention",
"sliding_attention",
"full_attention",
"sliding_attention",
"full_attention",
"sliding_attention",
"full_attention",
"sliding_attention",
"full_attention",
"sliding_attention",
"full_attention",
"sliding_attention",
"full_attention",
"sliding_attention",
"full_attention",
"sliding_attention",
"full_attention",
"sliding_attention",
"full_attention",
"sliding_attention",
"full_attention",
"sliding_attention",
"full_attention",
"sliding_attention",
"full_attention"
],
"max_position_embeddings": 8192,
"model_type": "t5_gemma_module",
"num_attention_heads": 8,
"num_hidden_layers": 26,
"num_key_value_heads": 4,
"query_pre_attn_scalar": 256,
"rms_norm_eps": 1e-06,
"rope_theta": 10000.0,
"sliding_window": 4096,
"torch_dtype": "bfloat16",
"use_cache": true,
"vocab_size": 256000
},
"dropout_rate": 0.0,
"encoder": {
"attention_bias": false,
"attention_dropout": 0.0,
"attn_logit_softcapping": 50.0,
"classifier_dropout_rate": 0.0,
"dropout_rate": 0.0,
"final_logit_softcapping": 30.0,
"head_dim": 256,
"hidden_activation": "gelu_pytorch_tanh",
"hidden_size": 2304,
"initializer_range": 0.02,
"intermediate_size": 9216,
"layer_types": [
"sliding_attention",
"full_attention",
"sliding_attention",
"full_attention",
"sliding_attention",
"full_attention",
"sliding_attention",
"full_attention",
"sliding_attention",
"full_attention",
"sliding_attention",
"full_attention",
"sliding_attention",
"full_attention",
"sliding_attention",
"full_attention",
"sliding_attention",
"full_attention",
"sliding_attention",
"full_attention",
"sliding_attention",
"full_attention",
"sliding_attention",
"full_attention",
"sliding_attention",
"full_attention"
],
"max_position_embeddings": 8192,
"model_type": "t5_gemma_module",
"num_attention_heads": 8,
"num_hidden_layers": 26,
"num_key_value_heads": 4,
"query_pre_attn_scalar": 256,
"rms_norm_eps": 1e-06,
"rope_theta": 10000.0,
"sliding_window": 4096,
"torch_dtype": "bfloat16",
"use_cache": true,
"vocab_size": 256000
},
"eos_token_id": [
1,
107
],
"initializer_range": 0.02,
"is_encoder_decoder": true,
"model_type": "t5gemma",
"pad_token_id": 0,
"torch_dtype": "bfloat16",
"transformers_version": "4.53.0.dev0",
"use_cache": true
}
|
https://github.com/huggingface/optimum/issues/2333
|
closed
|
[
"Stale"
] | 2025-08-01T16:39:52Z
| 2026-01-03T02:51:13Z
| 2
|
botan-r
|
huggingface/transformers
| 39,842
|
Expected behavior of `compute_result` is hard to expect and inconsistent
|
In `Trainer` there is a `compute_result` argument that is passed to `compute_metrics` when `batch_eval_metrics` is set to True.
https://github.com/huggingface/transformers/blob/1e0665a191f73f6b002209c3dfcda478baac6bac/src/transformers/trainer.py#L370-L375
I think there are several problems with `compute_result`:
1. Users can't easily tell (1) what happens if `batch_eval_metrics` is set, (2) what is passed as `compute_result` and when it changes between True and False, or (3) what HF's intention is in implementing `compute_metrics` with `compute_result`, since there are only about 3 lines of instructions for this.
2. `compute_metrics` is sometimes called with `compute_result` and sometimes not, EVEN WHEN `batch_eval_metrics` is present. See the lines below.
https://github.com/huggingface/transformers/blob/1e0665a191f73f6b002209c3dfcda478baac6bac/src/transformers/trainer.py#L4534-L4547
Creating this issue because I spent a long time figuring this out.
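For anyone else hitting this, here is my understanding of the intended usage as a minimal sketch: a stateful `compute_metrics` that accumulates per batch and only returns the final metrics when `compute_result=True`. This reflects my reading of the Trainer code; the exact types passed per batch (GPU tensors vs. numpy arrays, single tensor vs. tuple of predictions) may vary by setup.
```python
import torch
from transformers import EvalPrediction

class BatchedAccuracy:
    """Stateful metric for batch_eval_metrics=True: accumulate per batch, emit on the final call."""

    def __init__(self):
        self.correct = 0
        self.total = 0

    def __call__(self, eval_pred: EvalPrediction, compute_result: bool):
        # Assumes predictions is a single logits tensor/array for a classification task.
        logits = torch.as_tensor(eval_pred.predictions).detach().cpu()
        labels = torch.as_tensor(eval_pred.label_ids).detach().cpu()
        preds = logits.argmax(dim=-1)
        self.correct += (preds == labels).sum().item()
        self.total += labels.numel()
        if compute_result:
            # Last eval batch: return the aggregated metrics and reset state for the next evaluation.
            accuracy = self.correct / max(self.total, 1)
            self.correct, self.total = 0, 0
            return {"accuracy": accuracy}
        return {}

# trainer = Trainer(..., compute_metrics=BatchedAccuracy(),
#                   args=TrainingArguments(..., batch_eval_metrics=True))
```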
|
https://github.com/huggingface/transformers/issues/39842
|
closed
|
[] | 2025-08-01T11:43:28Z
| 2025-10-04T08:02:41Z
| 3
|
MilkClouds
|
huggingface/transformers
| 39,841
|
MistralCommonTokenizer does not match PreTrainedTokenizer
|
### System Info
on docker
os: ubuntu 24.04
transformers: 4.55.0.dev0
mistral_common: 1.8.3
### Who can help?
_No response_
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
Command to lauch container:
```bash
docker run --gpus all -p 8000:8000 --ipc=host vllm/vllm-openai:latest --model mistralai/Voxtral-Mini-3B-2507
```
### Expected behavior
The output ends with:
```bash
vllm-1 | File "/usr/local/lib/python3.12/dist-packages/vllm/transformers_utils/tokenizer_group.py", line 24, in __init__
vllm-1 | self.tokenizer = get_tokenizer(self.tokenizer_id, **tokenizer_config)
vllm-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
vllm-1 | File "/usr/local/lib/python3.12/dist-packages/vllm/transformers_utils/tokenizer.py", line 309, in get_tokenizer
vllm-1 | tokenizer = get_cached_tokenizer(tokenizer)
vllm-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
vllm-1 | File "/usr/local/lib/python3.12/dist-packages/vllm/transformers_utils/tokenizer.py", line 104, in get_cached_tokenizer
vllm-1 | tokenizer_all_special_tokens = tokenizer.all_special_tokens
vllm-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
vllm-1 | AttributeError: 'MistralCommonTokenizer' object has no attribute 'all_special_tokens'. Did you mean: '_all_special_ids'?
```
vLLM docker server uses the pretrained tokenizer format:
https://github.com/vllm-project/vllm/blob/49314869887e169be080201ab8bcda14e745c080/vllm/transformers_utils/tokenizer.py#L97-L101
This requires the `all_special_ids`, `all_special_tokens`, and `all_special_tokens_extended` default properties. However, `MistralCommonTokenizer` has not implemented them. Is there a plan to standardize both tokenizers?
|
https://github.com/huggingface/transformers/issues/39841
|
closed
|
[
"bug"
] | 2025-08-01T09:16:24Z
| 2025-11-23T08:03:33Z
| 3
|
Fhrozen
|
huggingface/transformers
| 39,839
|
pack_image_features RuntimeError when vision_feature_select_strategy="full"
|
### System Info
transformers 4.54.0
### Who can help?
@zucchini-nlp
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
from transformers.models.llava_next import LlavaNextForConditionalGeneration, LlavaNextProcessor
from PIL import Image
import requests
import torch
model = LlavaNextForConditionalGeneration.from_pretrained(
"llava-hf/llava-v1.6-vicuna-7b-hf",
vision_feature_select_strategy="full",
torch_dtype=torch.float16,
device_map="auto",
)
processor = LlavaNextProcessor.from_pretrained("llava-hf/llava-v1.6-vicuna-7b-hf")
image = Image.open("/data/coco/train2017/000000000009.jpg")
prompt = "USER: <image>\nWhat is shown in this image? ASSISTANT:"
inputs = processor(images=image, text=prompt, truncation=True, return_tensors="pt", vision_feature_select_strategy = "full").to("cuda")
input_embeds = model(inputs.input_ids, pixel_values=inputs.pixel_values, image_sizes=inputs.image_sizes, vision_feature_select_strategy="full")
```
### Expected behavior
I encountered a bug when running the line
`input_embeds = model(inputs.input_ids, pixel_values=inputs.pixel_values, image_sizes=inputs.image_sizes, vision_feature_select_strategy="full")`
I got:
```
in pack_image_features
image_feature = image_feature.view(num_patch_height, num_patch_width, height, width, -1)
RuntimeError: shape '[2, 2, 24, 24, -1]' is invalid for input of size 9453568
```
The shape of image_feature is currently [4, 577, 4096]. I want to know how to fix this.
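For what it's worth, the numbers in the error are consistent with the `full` strategy keeping the CLS token: 4 patches × 577 tokens × 4096 dims = 9,453,568, while the 2×2×24×24 reshape only works with 576 tokens per patch. A small sketch of the arithmetic (my diagnosis, not a confirmed fix):
```python
import torch

# "full" keeps the CLS token: each 24x24 patch grid yields 24*24 + 1 = 577 features.
feat_full = torch.randn(4, 577, 4096)      # shape reported above; 4*577*4096 = 9,453,568
feat_default = feat_full[:, 1:, :]         # "default" strategy drops the CLS token -> 576 per patch

print(feat_default.reshape(2, 2, 24, 24, -1).shape)   # works: 2*2*24*24 = 2304 = 4*576
try:
    feat_full.view(2, 2, 24, 24, -1)
except RuntimeError as e:
    print("full strategy fails:", e)       # the extra CLS token per patch breaks the 24x24 grid
```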
|
https://github.com/huggingface/transformers/issues/39839
|
closed
|
[
"bug"
] | 2025-08-01T07:55:40Z
| 2025-09-08T08:02:56Z
| 2
|
llnnnnnn
|
huggingface/gsplat.js
| 117
|
How to generate a mesh?
|
I need a scene where Gaussian splatting and meshes are mixed, and I don't know whether gsplat.js can generate a mesh or not.
|
https://github.com/huggingface/gsplat.js/issues/117
|
open
|
[] | 2025-08-01T03:29:22Z
| 2025-08-01T03:29:22Z
| null |
ZXStudio
|
pytorch/ao
| 2,649
|
Deprecation for Float8DynamicActivationFloat8WeightConfig (version 1) and Float8WeightOnlyConfig (version 1) and the models
|
This issue is tracking the deprecation of the (1) configs (2) model checkpoints quantized with these configs.
What is deprecated:
1. We added the version 2 config in https://github.com/pytorch/ao/pull/2463 and switched the default version to 2 in https://github.com/pytorch/ao/pull/2650. The version 1 config is now deprecated; please use the version 2 config to quantize the model (a short quantization sketch follows below).
2. Checkpoints previously quantized with the version 1 config are deprecated as well, and we plan to remove support for loading these checkpoints after the PyTorch 2.11 release (around 9 months from now).
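For reference, a minimal sketch of quantizing with the (now default) version 2 float8 config — this assumes a recent torchao, a CUDA GPU with float8 support, and uses a toy model in place of a real checkpoint:
```python
import torch
import torch.nn as nn
from torchao.quantization import quantize_, Float8DynamicActivationFloat8WeightConfig

# Toy model standing in for a real checkpoint; float8 matmul needs a recent GPU (e.g. sm89+).
model = nn.Sequential(nn.Linear(1024, 1024), nn.ReLU(), nn.Linear(1024, 1024)).to(torch.bfloat16).cuda()

# Quantize in place. The default config version is now 2 (per the PRs above),
# so no extra arguments are needed to stay off the deprecated path.
quantize_(model, Float8DynamicActivationFloat8WeightConfig())

x = torch.randn(8, 1024, dtype=torch.bfloat16, device="cuda")
with torch.no_grad():
    print(model(x).shape)
```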
Timeline:
- 0.13.0: announce deprecation of the version 1 config
- after we have migrated all tensor subclasses: remove support for the version 1 config
- after the PyTorch 2.11 release: remove support for version 1 checkpoints
|
https://github.com/pytorch/ao/issues/2649
|
open
|
[
"tracker"
] | 2025-07-31T22:45:07Z
| 2025-10-02T20:48:54Z
| 0
|
jerryzh168
|