repo | number | title | body | url | state | labels | created_at | updated_at | comments | user
|---|---|---|---|---|---|---|---|---|---|---|
pytorch/torchtitan
| 1,506
|
Correct MoE auxiliary-loss-free load balancing?
|
A very small question: why is the second `expert_bias_delta` assignment used here?
https://github.com/pytorch/torchtitan/blob/cf30b2902718790cbe91900414c3201b6d7680b0/torchtitan/experiments/llama4/optimizer.py#L39-L43
This looks different than Algorithm 1 of https://arxiv.org/pdf/2408.15664, which would instead just be (IIUC):
```py
expert_bias_delta = moe.load_balance_coeff * torch.sign(
    moe.tokens_per_expert.mean() - moe.tokens_per_expert
)
moe.expert_bias.add_(expert_bias_delta)
```
CC @tianyu-l , who implemented this, I think.
|
https://github.com/pytorch/torchtitan/issues/1506
|
closed
|
[] | 2025-07-31T20:24:47Z
| 2025-08-01T15:34:42Z
| 2
|
garrett361
|
huggingface/diffusers
| 12,038
|
Dataset structure for train_text_to_image_lora.py
|
Hello. I am trying to use the **train_text_to_image_lora.py** script following the instructions at https://github.com/huggingface/diffusers/tree/main/examples/text_to_image
I get errors about the data structure and don't know what the issue is on my side.
I have a folder **data** that contains an **images** folder and a **csv** file:
```
C:/Users/XXX//data/
├── images/
│   ├── image1.jpg
│   ├── image2.jpg
│   └── ...
└── captions.csv
```
The **images** folder contains the images and the **csv** file contains two columns (image names and captions):
```
image, caption
image1.jpg, A dragon flying through fire
image2.jpg, A knight in shining armor
```
Please can you let me know how I should organize my dataset to be able to run the training.
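For reference, one common way to make such data loadable is the `imagefolder` format from 🤗 datasets; a minimal sketch, assuming the captions file is renamed to `metadata.csv` with a `file_name` column and placed next to the images (whether this matches exactly what the script expects via `--train_data_dir` is an assumption):
```python
# A minimal sketch of the "imagefolder" layout that datasets.load_dataset can read directly.
# Assumed layout:
# data/train/
# ├── image1.jpg
# ├── image2.jpg
# └── metadata.csv        # columns: file_name,caption
#                         # image1.jpg,A dragon flying through fire
from datasets import load_dataset

dataset = load_dataset("imagefolder", data_dir="data/train")
print(dataset["train"][0])  # {'image': <PIL.Image ...>, 'caption': 'A dragon flying through fire'}
```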
|
https://github.com/huggingface/diffusers/issues/12038
|
open
|
[] | 2025-07-31T16:10:38Z
| 2025-08-01T16:44:48Z
| 1
|
HripsimeS
|
huggingface/lerobot
| 1,632
|
Are there plans to support distributed training?
|
[train.py](https://github.com/huggingface/lerobot/blob/main/src/lerobot/scripts/train.py) currently only supports single-GPU training. Is there a plan to support distributed training in the future?
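Not a lerobot API, but for context, a minimal single-node DDP sketch of the pattern one would wrap around a training step (launched with `torchrun --nproc_per_node=<num_gpus> train_ddp.py`); the `nn.Linear` stands in for a policy, and a real setup would also need a `DistributedSampler` on the dataloader:
```python
# Minimal DDP sketch; the model is a stand-in, not a lerobot policy.
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group("nccl")
    local_rank = dist.get_rank() % torch.cuda.device_count()
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(6, 6).to(local_rank)      # stand-in for a policy module
    model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for _ in range(10):                               # stand-in for a dataloader loop
        batch = torch.randn(8, 6, device=local_rank)
        loss = model(batch).pow(2).mean()
        loss.backward()                               # DDP all-reduces gradients here
        optimizer.step()
        optimizer.zero_grad()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```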
|
https://github.com/huggingface/lerobot/issues/1632
|
closed
|
[
"question",
"policies"
] | 2025-07-31T03:31:46Z
| 2025-10-17T12:10:40Z
| null |
Hukongtao
|
huggingface/candle
| 3,039
|
Request support for Qwen2.5-vl or Fast-VLM
|
I'm trying to call some image-to-text vision models using candle. If anyone knows how to use Qwen2.5-VL or Fast-VLM, can you share it? Appreciated.
|
https://github.com/huggingface/candle/issues/3039
|
open
|
[] | 2025-07-31T02:41:33Z
| 2025-08-04T12:21:35Z
| 1
|
826327700
|
huggingface/transformers
| 39,801
|
ValueError: This model does not support cache_implementation='static'. Please check the following issue: https://github.com/huggingface/transformers/issues/28981
|
### System Info
_prepare_cache_for_generation
raise ValueError(
ValueError: This model does not support cache_implementation='static'. Please check the following issue: https://github.com/huggingface/transformers/issues/28981
I got this error and I have no clue how to solve it. I tried different implementations from different people and I always have the same problem.
I used this code: https://mer.vin/2024/11/finetune-llama-3-2-vision-radiology-images/
```python
import os
from unsloth import FastVisionModel
import torch
from datasets import load_dataset
from transformers import TextStreamer
from unsloth import is_bf16_supported
from unsloth.trainer import UnslothVisionDataCollator
from trl import SFTTrainer, SFTConfig

# 1. Load the model
model, tokenizer = FastVisionModel.from_pretrained(
    "unsloth/Llama-3.2-11B-Vision-Instruct",
    load_in_4bit = True,
    use_gradient_checkpointing = "unsloth",
)
model = FastVisionModel.get_peft_model(
    model,
    finetune_vision_layers = True,
    finetune_language_layers = True,
    finetune_attention_modules = True,
    finetune_mlp_modules = True,
    r = 16,
    lora_alpha = 16,
    lora_dropout = 0,
    bias = "none",
    random_state = 3407,
    use_rslora = False,
    loftq_config = None,
)

# 2. Load the dataset
dataset = load_dataset("unsloth/Radiology_mini", split = "train")
instruction = "You are an expert radiographer. Describe accurately what you see in this image."

def convert_to_conversation(sample):
    conversation = [
        { "role": "user",
          "content" : [
            {"type" : "text",  "text"  : instruction},
            {"type" : "image", "image" : sample["image"]} ]
        },
        { "role" : "assistant",
          "content" : [
            {"type" : "text",  "text"  : sample["caption"]} ]
        },
    ]
    return { "messages" : conversation }

converted_dataset = [convert_to_conversation(sample) for sample in dataset]

# 3. Before training
FastVisionModel.for_inference(model)
image = dataset[0]["image"]
instruction = "You are an expert radiographer. Describe accurately what you see in this image."
messages = [
    {"role": "user", "content": [
        {"type": "image"},
        {"type": "text", "text": instruction}
    ]}
]
input_text = tokenizer.apply_chat_template(messages, add_generation_prompt = True)
inputs = tokenizer(
    image,
    input_text,
    add_special_tokens = False,
    return_tensors = "pt",
).to("cuda")
print("\nBefore training:\n")
text_streamer = TextStreamer(tokenizer, skip_prompt = True)
_ = model.generate(**inputs, streamer = text_streamer, max_new_tokens = 128,
                   use_cache = True, temperature = 1.5, min_p = 0.1)
```
### Who can help?
_No response_
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
```bash
pip install unsloth
export HF_TOKEN=xxxxxxxxxxxxx
```
### Expected behavior
Start fine-tuning
|
https://github.com/huggingface/transformers/issues/39801
|
closed
|
[
"bug"
] | 2025-07-30T20:59:45Z
| 2025-09-07T08:02:42Z
| 2
|
jpitalopez
|
huggingface/lerobot
| 1,631
|
🥚 Filtering Eggs on Moving Table: Dirt/Breakage Detection Feasibility
|
Hi 👋
Thanks a lot for your work on lerobot!
I am exploring the use of lerobot to filter eggs based on dirt or breakage while they move past the robot on a conveyor table. The goal is to detect anomalies in real time and eventually eject faulty eggs.
Some specific questions I have:
* Do you have any advice or feedback on using lerobot in this kind of setup?
* Are there known pros/cons with fast-moving objects and image-based anomaly detection?
* Would it make sense to multiply robots along the line (e.g., several cameras/models at different angles or points)?
* Is there support or a best practice for triggering actions (e.g. pneumatic ejection) once a faulty egg is detected?
I am happy to fine-tune a model or adapt an existing one if that’s viable.
Any insights would be super helpful 🙏
Thanks again!
|
https://github.com/huggingface/lerobot/issues/1631
|
open
|
[
"question",
"policies"
] | 2025-07-30T18:35:12Z
| 2025-08-12T09:07:41Z
| null |
KannarFr
|
pytorch/ao
| 2,631
|
What is the intention of "NF4WeightOnlyConfig" ?
|
Hi guys, I'm confused about how this class is structured in the project.
1. Why doesn't `NF4WeightOnlyConfig` work the same way as the other configs? Such as:
```python
from torchao.dtypes._nf4tensor_api import NF4WeightOnlyConfig
from torchao import quantize_
config = NF4WeightOnlyConfig()
quantize_(model,config)
```
I actually can do that, but why is it placed in a [private module](https://github.com/pytorch/ao/blob/4b119edb6d1e04b7d2cf98856a5366e28f75d6f7/torchao/dtypes/_nf4tensor_api.py#L15)?
2. Why isn't it a `dataclass`? Does that mean the defaults `block_size: int = 64` and `scaler_block_size: int = 256` are meant to be fixed?
3. I want to train QLoRA with a native PyTorch model, and it would be great if I could use NF4. But the structure confuses me, so what is the correct way / best practice to use this?
Thank you.
|
https://github.com/pytorch/ao/issues/2631
|
closed
|
[] | 2025-07-30T13:57:44Z
| 2025-07-31T16:05:50Z
| null |
hieubnt235
|
huggingface/optimum
| 2,330
|
Patch Release to support `transformers~=4.53`
|
### System Info
```shell
optimum[onnxruntime-gpu]==1.26.1
torch==2.7.1
vllm==0.10.0
docker run --rm -it --platform linux/amd64 ghcr.io/astral-sh/uv:debian bash
```
### Who can help?
@JingyaHuang @echarlaix
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction (minimal, reproducible, runnable)
The latest release is more than 1 month old. It supports `transformers>=4.36,<4.53.0` with the `onnxruntime-gpu` extra. This is incompatible with `vllm==0.10.0`, which requires `transformers>=4.53.2`. `vllm==0.10.0` is required for use with `torch==2.7.1`. My system must use `torch==2.7.1` due to a medium-severity CVE in previous versions.
https://nvd.nist.gov/vuln/detail/CVE-2025-2953
In the current main branch, the requirement has been changed to `transformers>=4.36,<4.54.0`, which would mitigate the issue.
Is it possible to create a patch release based on the current main branch?
```bash
> uv pip compile <(echo "optimum[onnxruntime-gpu]>=1.23"; echo "vllm>=0.10")
x No solution found when resolving dependencies:
`-> Because only the following versions of optimum[onnxruntime-gpu] are available:
optimum[onnxruntime-gpu]<=1.23.0
optimum[onnxruntime-gpu]==1.23.1
optimum[onnxruntime-gpu]==1.23.2
optimum[onnxruntime-gpu]==1.23.3
optimum[onnxruntime-gpu]==1.24.0
optimum[onnxruntime-gpu]==1.25.0
optimum[onnxruntime-gpu]==1.25.1
optimum[onnxruntime-gpu]==1.25.2
optimum[onnxruntime-gpu]==1.25.3
optimum[onnxruntime-gpu]==1.26.0
optimum[onnxruntime-gpu]==1.26.1
and optimum[onnxruntime-gpu]>=1.23.0,<=1.23.2 depends on transformers<4.46.0, we can conclude that optimum[onnxruntime-gpu]>=1.23.0,<1.23.1
depends on transformers<4.46.0.
And because optimum[onnxruntime-gpu]>=1.23.1,<=1.23.2 depends on transformers<4.46.0 and transformers<4.46.0, we can conclude that
optimum[onnxruntime-gpu]>=1.23.0,<1.23.3 depends on transformers<4.46.0.
And because optimum[onnxruntime-gpu]==1.23.3 depends on transformers<4.47.0 and transformers>=4.36,<4.49.0, we can conclude that
optimum[onnxruntime-gpu]>=1.23.0,<1.25.0 depends on transformers<4.49.0.
And because optimum[onnxruntime-gpu]>=1.25.0,<=1.25.3 depends on transformers>=4.36,<4.52.0 and transformers>=4.36,<4.52.0, we can conclude that
optimum[onnxruntime-gpu]>=1.23.0,<1.25.2 depends on transformers<4.52.0.
And because optimum[onnxruntime-gpu]>=1.25.2,<=1.25.3 depends on transformers>=4.36,<4.52.0 and transformers>=4.36,<4.52.0, we can conclude that
optimum[onnxruntime-gpu]>=1.23.0,<1.26.0 depends on transformers<4.52.0.
And because optimum[onnxruntime-gpu]>=1.26.0 depends on transformers>=4.36,<4.53.0 and transformers>=4.36,<4.53.0, we can conclude that
optimum[onnxruntime-gpu]>=1.23.0 depends on transformers<4.53.0.
And because vllm==0.10.0 depends on transformers>=4.53.2 and only vllm<=0.10.0 is available, we can conclude that vllm>=0.10.0 and
optimum[onnxruntime-gpu]>=1.23.0 are incompatible.
And because you require optimum[onnxruntime-gpu]>=1.23 and vllm>=0.10.0, we can conclude that your requirements are unsatisfiable.
```
### Expected behavior
Able to install `optimum[onnxruntime-gpu]>=1.26` and `vllm>=0.10.0`.
```bash
> uv pip compile <(echo "optimum[onnxruntime-gpu] @ git+https://github.com/huggingface/optimum@689c0b5d38aabe265ab1eb334a6ca5bc3ca3574d"; echo "vllm>=0.10")
Resolved 152 packages in 359ms
# This file was autogenerated by uv via the following command:
# uv pip compile /dev/fd/63
aiohappyeyeballs==2.6.1
# via aiohttp
aiohttp==3.12.15
# via
# fsspec
# vllm
aiosignal==1.4.0
# via aiohttp
annotated-types==0.7.0
# via pydantic
anyio==4.9.0
# via
# httpx
# openai
# starlette
# watchfiles
astor==0.8.1
# via depyf
attrs==25.3.0
# via
# aiohttp
# jsonschema
# referencing
blake3==1.0.5
# via vllm
cachetools==6.1.0
# via vllm
cbor2==5.6.5
# via vllm
certifi==2025.7.14
# via
# httpcore
# httpx
# requests
# sentry-sdk
cffi==1.17.1
# via soundfile
charset-normalizer==3.4.2
# via requests
click==8.2.1
# via
# ray
# rich-toolkit
# typer
# uvicorn
cloudpickle==3.1.1
# via vllm
coloredlogs==15.0.1
# via onnxruntime-gpu
compressed-tensors==0.10.2
# via vllm
cupy-cuda12x==13.5.1
# via ray
datasets==4.0.0
# via optimum
depyf==0.19.0
# via vllm
dill==0.3.8
# via
# datasets
# depyf
# multiprocess
diskcache==5.6.3
# via vllm
distro==1.9.0
# via openai
dnspython==2.7.0
# via email-validator
einops==0.8.1
# via vllm
email-validator==2.2.0
# via
# fastapi
# pydantic
|
https://github.com/huggingface/optimum/issues/2330
|
closed
|
[
"bug"
] | 2025-07-30T02:40:41Z
| 2025-07-31T02:54:31Z
| 1
|
yxtay
|
pytorch/xla
| 9,519
|
Behaviour of xm.all_gather() in SPMD mode
|
## ❓ Questions and Help
I would like to confirm whether my MLIR compiler's handling of `xm.all_gather()` when running Torch-XLA in SPMD mode is correct.
Say I have the following:
- A tensor `t` with shape [8192, 784]
- A 2D named mesh `(batch, model)` of 8 devices in a [2, 4] configuration:
```
Device Mesh:
0 1 2 3
4 5 6 7
```
Now I do the following steps:
1. Move the tensor to the XLA device: `t = t.to(torch_xla.device())`
2. Shard dim 0 of t across the batch dimension and replicate dim 1: `xs.mark_sharding(t, mesh, ("batch", None))`
3. Perform an all-gather operation across dim 0:
```python
# Pair devices across batch rows
groups = [[0, 4], [1, 5], [2, 6], [3, 7]]
y = xm.all_gather(t, 0, groups=groups, pin_layout=False)
y = y.to("cpu")
```
The shape of the final `y` tensor is [16384, 784] where `y[:8192] == y[8192:] == t`. Is this the correct behaviour?
|
https://github.com/pytorch/xla/issues/9519
|
open
|
[
"question",
"distributed"
] | 2025-07-29T19:05:08Z
| 2025-07-30T17:40:04Z
| null |
hshahTT
|
huggingface/lerobot
| 1,622
|
Why is LeRobot’s policy ignoring additional camera streams despite custom `input_features`?
|
I'm training a SO101 arm policy with 3 video streams (`front`, `above`, `gripper`) and a state vector. The dataset can be found at this [link](https://huggingface.co/datasets/aaron-ser/SO101-Dataset/tree/main).
I created a custom JSON config (the `train_config.json` below) that explicitly lists the three visual streams under `policy.input_features`, and despite disabling the preset config loading with `"use_policy_training_preset": false`, the policy never takes into account any feed other than the front observations. Disabling the preset is not mandatory, however, since previous hackathon runs with multiple streams, such as the [following](https://huggingface.co/LeRobot-worldwide-hackathon/91-AM-PM-smolvla-pouring-liquid/blob/main/train_config.json), used the preset config.
I pass into `lerobot.scripts.train` the `train_config.json` file shared below with the `--config_path` parameter. Although the initial printout of the config is correct with all three streams, after training finishes, the saved `train_config.json` file inside `aaron-ser/SO101-Model` only contains:
**aaron-ser/SO101-Model train_config.json snippet**
```
"input_features": {
"observation.state": { ... },
"observation.images.front": { ... },
"output_features": { ... }
```
The `above` and `gripper` streams are dropped, even though the HF dataset includes all three streams and I explicitly passed them in the JSON file.
What internal step or configuration is overriding my custom `input_features` and keeping only the front camera? How can I ensure LeRobot trains on all provided video streams?
**train_config.json**
```
{
"dataset": {
"repo_id": "aaron-ser/SO101-Dataset",
"root": null,
"episodes": null,
"image_transforms": {
"enable": false,
"max_num_transforms": 3,
"random_order": false,
"tfs": {
"brightness": {
"weight": 1.0,
"type": "ColorJitter",
"kwargs": {
"brightness": [
0.8,
1.2
]
}
},
"contrast": {
"weight": 1.0,
"type": "ColorJitter",
"kwargs": {
"contrast": [
0.8,
1.2
]
}
},
"saturation": {
"weight": 1.0,
"type": "ColorJitter",
"kwargs": {
"saturation": [
0.5,
1.5
]
}
},
"hue": {
"weight": 1.0,
"type": "ColorJitter",
"kwargs": {
"hue": [
-0.05,
0.05
]
}
},
"sharpness": {
"weight": 1.0,
"type": "SharpnessJitter",
"kwargs": {
"sharpness": [
0.5,
1.5
]
}
}
}
},
"revision": null,
"use_imagenet_stats": true,
"video_backend": "torchcodec"
},
"env": null,
"policy": {
"type": "act",
"n_obs_steps": 1,
"normalization_mapping": {
"VISUAL": "MEAN_STD",
"STATE": "MEAN_STD",
"ACTION": "MEAN_STD"
},
"input_features": {
"observation.state": {
"type": "STATE",
"shape": [
6
]
},
"observation.images.front": {
"type": "VISUAL",
"shape": [
3,
720,
1280
]
},
"observation.images.above": {
"type": "VISUAL",
"shape": [
3,
720,
1280
]
},
"observation.images.gripper": {
"type": "VISUAL",
"shape": [
3,
720,
1280
]
}
},
"output_features": {
"action": {
"type": "ACTION",
"shape": [
6
]
}
},
"device": "cuda",
"use_amp": false,
"push_to_hub": true,
"repo_id": "aaron-ser/SO101-Model",
"private": null,
"tags": null,
"license": null,
|
https://github.com/huggingface/lerobot/issues/1622
|
open
|
[
"question",
"policies"
] | 2025-07-29T14:07:14Z
| 2025-09-23T14:01:54Z
| null |
Aaron-Serpilin
|
huggingface/trl
| 3,797
|
How to view the training parameters after training is completed
|
How can I view the training parameters after training is completed? I am using GRPOTrainer for training, but after training multiple times I have forgotten the parameters I set. How can I view the saved training parameters?
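For reference, a minimal sketch, assuming the run went through the standard Trainer saving path: Trainer (and GRPOTrainer, which subclasses it) writes the arguments to `<output_dir>/training_args.bin` when the model is saved; the path below is hypothetical.
```python
# Load the saved training arguments back and inspect them.
import torch

args = torch.load("my_grpo_output/training_args.bin", weights_only=False)
print(args)                                      # full GRPOConfig / TrainingArguments dump
print(args.learning_rate, args.num_train_epochs)
```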
|
https://github.com/huggingface/trl/issues/3797
|
open
|
[
"❓ question",
"🏋 GRPO"
] | 2025-07-29T09:42:52Z
| 2025-07-29T13:07:50Z
| null |
Tuziking
|
huggingface/optimum
| 2,329
|
Support for exporting paligemma to onnx
|
### Feature request
I’ve tried to export google/paligemma-3b-mix-224 to onnx using optimum. But it outputs: "ValueError: Trying to export a paligemma model, that is a custom or unsupported architecture, but no custom onnx configuration was passed as custom_onnx_configs. Please refer to https://huggingface.co/docs/optimum/main/en/exporters/onnx/usage_guides/export_a_model#custom-export-of-transformers-models for an example on how to export custom models. Please open an
issue at https://github.com/huggingface/optimum/issues if you would like the model type paligemma to be supported natively in the ONNX export."
### Motivation
I’ve tried everything but nothing works =(
(Using custom configs, using torch.onnx.export, etc)
### Your contribution
Actually, it seems to me that I can’t help… =(
|
https://github.com/huggingface/optimum/issues/2329
|
closed
|
[
"Stale"
] | 2025-07-29T08:58:41Z
| 2025-09-06T02:04:25Z
| 2
|
DashaMed555
|
pytorch/torchtitan
| 1,482
|
Is there documentation on what exactly 'dp_shard_mod_ep' and 'dp_shard_in_ep' are?
|
Wondering where I can find details on 'dp_shard_mod_ep' and 'dp_shard_in_ep'?
https://github.com/pytorch/torchtitan/blob/5bab356c29dfababd8f16ab7d8e3d50cba6326e5/torchtitan/distributed/parallel_dims.py#L70
|
https://github.com/pytorch/torchtitan/issues/1482
|
open
|
[
"documentation"
] | 2025-07-29T06:48:43Z
| 2025-08-21T03:24:48Z
| null |
githubsgi
|
pytorch/helion
| 392
|
ImportError: cannot import name 'triton_key' from 'torch._inductor.runtime.triton_compat'
|
Does Helion require nightly PyTorch? (I'm using 2.7.1)
|
https://github.com/pytorch/helion/issues/392
|
closed
|
[
"question"
] | 2025-07-29T04:34:20Z
| 2025-08-25T21:20:54Z
| null |
HanGuo97
|
huggingface/transformers
| 39,744
|
_supports_static_cache disappear
|
### System Info
transformers main branch
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I see the attr `_supports_static_cache` disappeared in the model. I used to check if `model._supports_static_cache` before setting `cache_implementation=True`. For now, can I assume all models support static cache?
### Expected behavior
All models support static cache now that `_supports_static_cache` is deprecated. Or is there another method to check whether a model supports static cache?
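For what it's worth, a minimal defensive sketch (an assumption about intent, not official guidance) that treats a missing attribute as "supported":
```python
def supports_static_cache(model) -> bool:
    # On older transformers the flag exists and may be False; on newer versions it was
    # removed rather than set to False (assumption), so a missing attribute counts as True.
    return getattr(model, "_supports_static_cache", True)
```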
|
https://github.com/huggingface/transformers/issues/39744
|
closed
|
[
"bug"
] | 2025-07-29T02:36:04Z
| 2025-07-29T08:17:00Z
| 4
|
jiqing-feng
|
pytorch/torchtitan
| 1,478
|
Is FSDP+TP+EP supported for Llama4 ?
|
Wondering if FSDP+TP+EP is supported for pre-training Llama4?
|
https://github.com/pytorch/torchtitan/issues/1478
|
closed
|
[
"question"
] | 2025-07-28T22:55:43Z
| 2025-08-21T02:36:59Z
| null |
githubsgi
|
pytorch/pytorch
| 159,295
|
Invalid onnx model is exported for model where data is assigned using a mask and index
|
### 🐛 Describe the bug
Exporting a model to ONNX that assigns data using a mask and an index produces a model that does not work.
Exporting the model:
```python
import torch
import torch.nn as nn
class TestModel(nn.Module):
    def __init__(self):
        super().__init__()

    def forward(self, R):
        B = R.shape[0]
        r = torch.zeros((B, 2), dtype=R.dtype, device=R.device)
        mask = R > 0
        r[mask, 0] = R[mask]
        return r

device = torch.device("cpu")
model = TestModel()
dummy_input = torch.ones((2,)).to(device)
torch.onnx.export(
    model,
    dummy_input,
    "test_model.onnx",
    export_params=True,
    opset_version=11,
    do_constant_folding=True,
    input_names=['input'],
    output_names=['output'],
)
```
Using the model:
```python
import onnxruntime as ort
import numpy as np
with open("test_model.onnx", "rb") as f:
    session = ort.InferenceSession(f.read(), providers=["CPUExecutionProvider"])
    _ = session.run(None, {"input": np.array([0, 1], dtype=np.float32)})
```
You will get an error
```
2025-07-28 15:31:24.7808412 [E:onnxruntime:, sequential_executor.cc:572 onnxruntime::ExecuteKernel] Non-zero status code returned while running Reshape node. Name:'/Reshape' Status Message: D:\a\_work\1\s\onnxruntime\core\providers\cpu\tensor\reshape_helper.h:47 onnxruntime::ReshapeHelper::ReshapeHelper input_shape_size == size was false. The input tensor cannot be reshaped to the requested shape. Input shape:{1}, requested shape:{2,1}
```
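As a possible workaround sketch (an assumption about the intended semantics, not a confirmed fix): the same forward can be expressed with `torch.where`, which keeps shapes static, instead of boolean-mask indexing, which creates data-dependent shapes that export poorly.
```python
# Mask-assignment-free equivalent of the forward above:
# column 0 gets R where R > 0 (else 0), column 1 stays zero.
import torch
import torch.nn as nn

class TestModelNoMaskAssign(nn.Module):
    def forward(self, R):
        col0 = torch.where(R > 0, R, torch.zeros_like(R))  # r[:, 0]
        col1 = torch.zeros_like(R)                          # r[:, 1]
        return torch.stack([col0, col1], dim=1)
```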
### Versions
Collecting environment information...
PyTorch version: 2.7.1+cpu
Is debug build: False
CUDA used to build PyTorch: Could not collect
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 11 Enterprise (10.0.22631 64-bit)
GCC version: Could not collect
Clang version: Could not collect
CMake version: version 3.30.2
Libc version: N/A
Python version: 3.12.5 (tags/v3.12.5:ff3bc82, Aug 6 2024, 20:45:27) [MSC v.1940 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-11-10.0.22631-SP0
Is CUDA available: False
CUDA runtime version: 12.9.86
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3070
Nvidia driver version: 576.88
cuDNN version: C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.9\bin\cudnn_ops64_9.dll
Is XPU available: False
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Name: Intel(R) Xeon(R) W-2255 CPU @ 3.70GHz
Manufacturer: GenuineIntel
Family: 179
Architecture: 9
ProcessorType: 3
DeviceID: CPU0
CurrentClockSpeed: 3696
MaxClockSpeed: 3696
L2CacheSize: 10240
L2CacheSpeed: None
Revision: 21767
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] onnx==1.16.2
[pip3] onnxruntime==1.22.1
[pip3] torch==2.7.1
[pip3] torchaudio==2.6.0+cu126
[pip3] torchvision==0.21.0+cu126
cc @justinchuby
|
https://github.com/pytorch/pytorch/issues/159295
|
closed
|
[
"module: onnx",
"triaged"
] | 2025-07-28T20:54:13Z
| 2025-09-03T20:13:32Z
| null |
cgaudreau-ubisoft
|
pytorch/TensorRT
| 3,722
|
❓ [Question] Exporting models using FlashAttention package
|
I'd love to export a PyTorch model to TensorRT. In this model I use the flash-attn package to speed up attention. Is this supported?
|
https://github.com/pytorch/TensorRT/issues/3722
|
open
|
[
"question"
] | 2025-07-28T11:54:07Z
| 2025-07-28T17:42:23Z
| null |
s1ddok
|
pytorch/pytorch
| 159,249
|
[ONNX] How to export RMS Norm
|
### 🚀 The feature, motivation and pitch
I'm converting a Pytorch model to ONNX format, but I got this error:
```
torch.onnx.errors.UnsupportedOperatorError: Exporting the operator 'aten::rms_norm' to ONNX opset version 20 is not supported
```
### Alternatives
I have read the ONNX documentation; it says this operator is only supported from opset version 23 onwards.
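A hedged sketch of what targeting the newer opset might look like; whether this works depends on your PyTorch/onnxscript versions, since RMSNormalization only exists in the ONNX standard from opset 23 and older exporters cannot emit it (composing the op from primitives is the fallback):
```python
# Assumption: the installed exporter accepts opset_version=23.
import torch
import torch.nn as nn

model = nn.RMSNorm(64)                 # uses aten::rms_norm internally
example = torch.randn(2, 8, 64)
torch.onnx.export(
    model,
    (example,),
    "rms_norm.onnx",
    opset_version=23,                  # RMSNormalization lives in opset >= 23
    dynamo=True,                       # the dynamo-based exporter covers newer opsets
)
```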
### Additional context
_No response_
cc @justinchuby
|
https://github.com/pytorch/pytorch/issues/159249
|
closed
|
[
"module: onnx",
"triaged"
] | 2025-07-28T09:34:51Z
| 2025-07-30T14:22:46Z
| null |
HuynhNguyenPhuc
|
huggingface/lerobot
| 1,607
|
how to control a so-101 with trained ACT model?
|
https://huggingface.co/initie/test_pick_result
This is my pre-trained ACT model for grabbing the switch on the desk.
How do I run this policy model from my Anaconda environment?
I have already tried the following example:
```bash
python -m lerobot.record --robot.type=so101_follower \
  --robot.port=COM3 \
  --robot.id=ammd_follower_arm \
  --robot.cameras="{ front: {type: opencv, index_or_path: 0, width: 640, height: 480, fps: 30}, side: {type: opencv, index_or_path: 1, width: 640, height: 480, fps: 30} }" \
  --display_data=True \
  --dataset.repo_id="initie/eval_test_pick" \
  --dataset.single_task="Grab the switch" \
  --policy.path=initie/test_pick_result \
  --teleop.type=so101_leader --teleop.port=COM5 \
  --teleop.id=ammd_leader_arm --dataset.reset_time_s=5
```
This is the example command from the LeRobot tutorial, but when I run it I have to record 10 episodes again.
I just want to run the pre-trained model, not record episodes again. I'm looking for a simple script that only "runs" that model, without any recording.
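For context, a heavily hedged sketch of querying the policy directly (not an official lerobot recipe); the import path, observation keys and shapes are assumptions that depend on your lerobot version and on how the dataset was recorded:
```python
# Load a pretrained ACT policy and ask it for an action, without any recording.
import torch
from lerobot.common.policies.act.modeling_act import ACTPolicy  # may be lerobot.policies.act on newer versions

policy = ACTPolicy.from_pretrained("initie/test_pick_result")
policy.eval()

observation = {
    "observation.state": torch.zeros(1, 6),                   # replace with the real follower state
    "observation.images.front": torch.zeros(1, 3, 480, 640),  # replace with real camera frames
    "observation.images.side": torch.zeros(1, 3, 480, 640),
}
with torch.no_grad():
    action = policy.select_action(observation)                # one step of the action chunk
print(action)
```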
|
https://github.com/huggingface/lerobot/issues/1607
|
open
|
[
"question",
"policies"
] | 2025-07-28T05:23:24Z
| 2025-10-15T03:28:50Z
| null |
initia1013
|
huggingface/lerobot
| 1,602
|
How to perform multi-GPU training for SmolVLA?
|
I noticed that the paper used 4 GPUs for pretraining, but the current training code doesn’t seem to support it. Could you provide the corresponding code?
|
https://github.com/huggingface/lerobot/issues/1602
|
closed
|
[] | 2025-07-27T09:46:04Z
| 2025-07-28T08:40:01Z
| null |
QZepHyr
|
huggingface/hmtl
| 72
|
How to create a website
|
https://github.com/huggingface/hmtl/issues/72
|
open
|
[] | 2025-07-27T09:30:22Z
| 2025-07-27T09:30:22Z
| null |
Chi23-ike
|
|
huggingface/text-generation-inference
| 3,304
|
using trtllm-build instead of optimum-nvidia for engine building or optimum-nvidia wrong version ?
|
Hello,
I'm experiencing significant issues when trying to use Text Generation Inference (TGI) with TensorRT-LLM as the backend.
**Problem 1: Version Compatibility**
I cannot use the latest version of TGI due to a known bug (see: https://github.com/huggingface/text-generation-inference/issues/3296).
I'm therefore using version: `ghcr.io/huggingface/text-generation-inference:3.3.4-trtllm`
However, this version uses TensorRT-LLM v0.17.0.post1, while the latest optimum-nvidia version ([v0.1.0b9](https://github.com/huggingface/optimum-nvidia/releases/tag/v0.1.0b9)) uses TensorRT-LLM 0.16.0.
When I try to launch TGI with my engine built using optimum-nvidia, I get the following error:
```
root@5ddf177112d7:/usr/local/tgi/bin# /usr/local/tgi/bin/text-generation-launcher --model-id "/engines/llama-3.2-3b-instruct-optimum/GPU/engines" --tokenizer-name "/models/llama-3.2-3b-instruct" --executor-worker "/usr/local/tgi/bin/executorWorker"
2025-07-27T06:16:40.717109Z INFO text_generation_backends_trtllm: backends/trtllm/src/main.rs:293: Successfully retrieved tokenizer /models/llama-3.2-3b-instruct
[2025-07-27 06:16:40.717] [info] [ffi.hpp:164] Initializing TGI - TensoRT-LLM Backend (v0.17.0.post1)
[2025-07-27 06:16:40.747] [info] [ffi.hpp:173] [FFI] Detected 1 Nvidia GPU(s)
[2025-07-27 06:16:40.758] [info] [backend.cpp:22] Detected single engine deployment, using leader mode
[TensorRT-LLM][INFO] Engine version 0.16.0 found in the config file, assuming engine(s) built by new builder API.
[TensorRT-LLM][INFO] Initializing MPI with thread mode 3
[TensorRT-LLM][INFO] Initialized MPI
[TensorRT-LLM][INFO] Refreshed the MPI local session
[TensorRT-LLM][INFO] MPI size: 1, MPI local size: 1, rank: 0
[TensorRT-LLM][INFO] Rank 0 is using GPU 0
[TensorRT-LLM][INFO] TRTGptModel maxNumSequences: 64
[TensorRT-LLM][INFO] TRTGptModel maxBatchSize: 64
[TensorRT-LLM][INFO] TRTGptModel maxBeamWidth: 1
[TensorRT-LLM][INFO] TRTGptModel maxSequenceLen: 4096
[TensorRT-LLM][INFO] TRTGptModel maxDraftLen: 0
[TensorRT-LLM][INFO] TRTGptModel mMaxAttentionWindowSize: (4096) * 28
[TensorRT-LLM][INFO] TRTGptModel enableTrtOverlap: 0
[TensorRT-LLM][INFO] TRTGptModel normalizeLogProbs: 1
[TensorRT-LLM][INFO] TRTGptModel maxNumTokens: 262144
[TensorRT-LLM][INFO] TRTGptModel maxInputLen: 4095 = maxSequenceLen - 1 since chunked context is enabled
[TensorRT-LLM][INFO] TRTGptModel If model type is encoder, maxInputLen would be reset in trtEncoderModel to maxInputLen: 4096 = maxSequenceLen.
[TensorRT-LLM][INFO] Capacity Scheduler Policy: MAX_UTILIZATION
[TensorRT-LLM][INFO] Context Chunking Scheduler Policy: None
[TensorRT-LLM][INFO] Loaded engine size: 6981 MiB
[TensorRT-LLM][ERROR] IRuntime::deserializeCudaEngine: Error Code 6: API Usage Error (The engine plan file is not compatible with this version of TensorRT, expecting library version 10.8.0.43 got
..)
Error: Runtime("[TensorRT-LLM][ERROR] Assertion failed: Failed to deserialize cuda engine. (/usr/src/text-generation-inference/target/release/build/text-generation-backends-trtllm-479f10d4b58ebb37/out/build/_deps/trtllm-src/cpp/tensorrt_llm/runtime/tllmRuntime.cpp:239)")
```
**Problem 2: Building Engine with trtllm-build**
I attempted to build my engine directly using `trtllm-build`, but when launching TGI, I encounter this error:
```
2025-07-27T06:15:55.033318Z INFO text_generation_backends_trtllm: backends/trtllm/src/main.rs:293: Successfully retrieved tokenizer /models/llama-3.2-3b-instruct
[2025-07-27 06:15:55.034] [info] [ffi.hpp:164] Initializing TGI - TensoRT-LLM Backend (v0.17.0.post1)
[2025-07-27 06:15:55.101] [info] [ffi.hpp:173] [FFI] Detected 1 Nvidia GPU(s)
terminate called after throwing an instance of 'nlohmann::json_abi_v3_11_3::detail::parse_error'
what(): [json.exception.parse_error.101] parse error at line 1, column 1: attempting to parse an empty input; check that your input string or stream contains the expected JSON
```
The error suggests it cannot find a JSON file, but the `config.json` file is present in the engine directory:
```bash
root@5ddf177112d7:/usr/local/tgi/bin# ls -l /engines/llama-3.2-3b-instruct/
total 3033324
-rw-r--r-- 1 root root 7848 Jul 26 17:21 config.json
-rw-r--r-- 1 root root 3106108276 Jul 26 17:21 rank0.engine
```
**Environment:**
- Model: llama-3.2-3b-instruct
- TGI Version: 3.3.4-trtllm
- TensorRT-LLM Version: v0.17.0.post1
Could you please help resolve these compatibility issues or provide guidance on the correct workflow for using TensorRT-LLM with TGI?
### Information
- [x] Docker
- [ ] The CLI directly
### Tasks
- [x] An officially supported command
- [ ] My own modifications
### Reproduction
**1/ Build your engine:**
```bash
docker run --rm -it --gpus=1 --shm-size=1g -v "/home/jyce/unmute.mcp/volumes/llm-tgi/engines:/engines" -v "/home/jyce/unmute.mcp/volumes/llm-tgi/models:/models" huggingface/optimum-nvidia:v0.1.0b8-py310 bash
```
```
optimum-cli export trtllm \
    --tp=1 \
    --pp=1 \
    --max-batch-size
```
|
https://github.com/huggingface/text-generation-inference/issues/3304
|
open
|
[] | 2025-07-27T06:24:29Z
| 2025-10-06T09:56:29Z
| 4
|
psykokwak-com
|
huggingface/transformers
| 39,705
|
[i18n-<bn>] Translating docs to <Bengali>
|
<!--
Note: Please search to see if an issue already exists for the language you are trying to translate.
-->
Hi!
Let's bring the documentation to all the Bengali-speaking community 🌐 (currently 0 out of 267 complete)
Who would want to translate? Please follow the 🤗 [TRANSLATING guide](https://github.com/huggingface/transformers/blob/main/docs/TRANSLATING.md). Here is a list of the files ready for translation. Let us know in this issue if you'd like to translate any, and we'll add your name to the list.
Some notes:
* Please translate using an informal tone (imagine you are talking with a friend about transformers 🤗).
* Please translate in a gender-neutral way.
* Add your translations to the folder called `<languageCode>` inside the [source folder](https://github.com/huggingface/transformers/tree/main/docs/source).
* Register your translation in `<languageCode>/_toctree.yml`; please follow the order of the [English version](https://github.com/huggingface/transformers/blob/main/docs/source/en/_toctree.yml).
* Once you're finished, open a pull request and tag this issue by including #issue-number in the description, where issue-number is the number of this issue. Please ping @stevhliu for review.
* 🙋 If you'd like others to help you with the translation, you can also post in the 🤗 [forums](https://discuss.huggingface.co/).
## Get Started section
- [x] [index.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/index.md) https://github.com/huggingface/transformers/pull/20180
- [ ] [quicktour.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/quicktour.md) (waiting for initial PR to go through)
- [ ] [installation.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/installation.md).
## Tutorial section
- [ ] [pipeline_tutorial.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/pipeline_tutorial.md)
- [ ] [autoclass_tutorial.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/autoclass_tutorial.md)
- [ ] [preprocessing.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/preprocessing.md)
- [ ] [training.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/training.md)
- [ ] [accelerate.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/accelerate.md)
- [ ] [model_sharing.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_sharing.md)
- [ ] [multilingual.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/multilingual.md)
<!--
Keep on adding more as you go 🔥
-->
|
https://github.com/huggingface/transformers/issues/39705
|
open
|
[
"WIP"
] | 2025-07-27T06:18:20Z
| 2025-07-27T11:58:32Z
| 1
|
ankitdutta428
|
huggingface/transformers
| 39,699
|
No flag to support Conditional Parameter Loading for gemma-3n-E2B models in transformer
|
### System Info
Hi,
While a lot has been said about conditional parameter loading and reduced memory usage for gemma-3n-E2B and gemma-3n-E4B, there is currently no configuration visible in transformers to support it.
Is it possible to get the related configuration/code/documentation to make this work and actually get a lower-memory model?
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
import torch
from transformers import AutoProcessor, AutoModelForImageTextToText

GEMMA_MODEL_ID = "google/gemma-3n-E2B-it"
print("Loading processor")
processor = AutoProcessor.from_pretrained(GEMMA_MODEL_ID)
print("Loading model")
model = AutoModelForImageTextToText.from_pretrained(
    GEMMA_MODEL_ID, torch_dtype="auto", device_map=None).to("cpu")
```
There is no flag for enabling conditional parameter loading or PLE.
### Expected behavior
Some flag using which Conditional Parameter Loading can be enabled and save on the memory
|
https://github.com/huggingface/transformers/issues/39699
|
closed
|
[
"bug"
] | 2025-07-26T18:08:00Z
| 2025-09-03T08:02:58Z
| 2
|
aakashgaur01
|
huggingface/tokenizers
| 1,835
|
Can you provide binary releases?
|
It seems that binaries are not available in recent versions.
The tokenizers module is essential for the latest models, and it would be preferable if it could be installed easily.
Setting up a Rust compilation environment can be cumbersome, and it's almost impossible to do so offline.
Could we possibly distribute something in binary form via PyPI or here?
|
https://github.com/huggingface/tokenizers/issues/1835
|
closed
|
[] | 2025-07-26T16:07:12Z
| 2025-09-08T13:49:52Z
| 4
|
goldenmomonga
|
huggingface/lerobot
| 1,599
|
Evaluation results of VLA models on MetaWorld Benchmark
|
Thank you for this excellent work! I noticed that the paper mentions evaluation results of VLA models on MetaWorld. However, in the original papers for Octo and π₀, results are only reported on the LIBERO benchmark, and I haven’t found their MetaWorld evaluations in other related studies. I’d like to know how Octo and π₀ were specifically evaluated on MetaWorld in this work, including implementation details (e.g., for π₀, was it full finetune or only fine-tuning the action expert?). Additionally, the MetaWorld MT50 dataset on LeRobot appears to lack data for one task—is this the real data used for fine-tuning VLAs?
|
https://github.com/huggingface/lerobot/issues/1599
|
open
|
[
"enhancement",
"question",
"policies",
"simulation"
] | 2025-07-26T11:18:54Z
| 2025-08-12T09:17:44Z
| null |
Zooy138
|
huggingface/transformers
| 39,686
|
CRITICAL ISSUE REPORT! GEMMA 3 1B CANNOT RUN!
|
How to reproduce:
Run this:
```
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
# Load the base model in FP16
base_model = AutoModelForCausalLM.from_pretrained(
    "unsloth/gemma-3-1b-pt",
    low_cpu_mem_usage=True,
    return_dict=True,
    torch_dtype=torch.float16,
    device_map="mps",
)
# Load and configure the tokenizer
tokenizer = AutoTokenizer.from_pretrained("unsloth/gemma-3-1b-pt", trust_remote_code=True)
# Generate the text
prompt = "<bos>Once upon a time"
inputs = tokenizer(prompt, return_tensors="pt").to(base_model.device)
outputs = base_model.generate(inputs.input_ids, max_length=50)
# Decode the generated text
generated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(generated_text)
```
Error:
```
(yuna) yuki@yuki AI % python gener.py
k_out_updated = k_out_shifted.index_copy(2, update_position, key_states)
Traceback (most recent call last):
File "/Users/yuki/Documents/AI/gener.py", line 19, in <module>
outputs = base_model.generate(inputs.input_ids, max_length=50)
File "/opt/anaconda3/envs/yuna/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
File "/opt/anaconda3/envs/yuna/lib/python3.10/site-packages/transformers/generation/utils.py", line 2623, in generate
result = self._sample(
File "/opt/anaconda3/envs/yuna/lib/python3.10/site-packages/transformers/generation/utils.py", line 3649, in _sample
next_tokens = torch.multinomial(probs, num_samples=1).squeeze(1)
RuntimeError: probability tensor contains either `inf`, `nan` or element < 0
```
System: macOS Tahoe, MacBook Pro M1 with 16 GB of RAM
|
https://github.com/huggingface/transformers/issues/39686
|
closed
|
[] | 2025-07-26T00:22:27Z
| 2025-07-28T12:07:50Z
| 5
|
yukiarimo
|
huggingface/lerobot
| 1,592
|
Time spent on imitation learning training (ACT)
|
I use Colab to train a policy with the ACT model.
The note said, "Training with the ACT policy for 100,000 steps typically takes about 1.5 hours on an NVIDIA A100 GPU," and I used an A100 in Colab too.
However, the estimated time is 13 hours, which seems much longer than the stated 1.5 hours.
Is it expected to take this long in a Colab environment?
I used the dataset from
https://huggingface.co/datasets/initie/test_pick
and the training code itself runs without problems.
|
https://github.com/huggingface/lerobot/issues/1592
|
closed
|
[
"question",
"policies"
] | 2025-07-25T06:36:35Z
| 2025-10-08T08:32:32Z
| null |
initia1013
|
huggingface/datasets
| 7,699
|
Broken link in documentation for "Create a video dataset"
|
The link to "the [WebDataset documentation](https://webdataset.github.io/webdataset)." is broken.
https://huggingface.co/docs/datasets/main/en/video_dataset#webdataset
<img width="2048" height="264" alt="Image" src="https://github.com/user-attachments/assets/975dd10c-aad8-42fc-9fbc-de0e2747a326" />
|
https://github.com/huggingface/datasets/issues/7699
|
open
|
[] | 2025-07-24T19:46:28Z
| 2025-07-25T15:27:47Z
| 1
|
cleong110
|
huggingface/transformers
| 39,637
|
[BUG] Run 111B+ Teacher distributed inference and 8B Student distributed training on multi-node H200 GPUs using the Transformers Trainer without encountering OOM errors?
|
Hello, first off, apologies if this information is already available elsewhere. I've searched through the documentation and existing issues but haven't found a clear answer to my question.
I have access to 2 to 4 nodes (16 to 32 GPUs in total), each equipped with 8x140GB H200 GPUs. My objective is to perform large-scale distributed inference using a massive 111B-parameter Teacher model (CohereLabs/c4ai-command-a-03-2025) and simultaneously conduct online knowledge distillation (soft-logit based) from this 111B Teacher model to a smaller 8B Student model (CohereLabs/c4ai-command-r7b-12-2024).
Is there a way to simultaneously run distributed inference for Teacher models larger than 111B and distributed training for Student models in a multi-node setup, utilizing Hugging Face Transformers' Trainer?
The Transformers version I'm using is v4.51.3. I've observed the use of model = deepspeed.tp_model_init within the def deepspeed_init function in src/transformers/integrations/deepspeed.py. I attempted to apply this code, but it resulted in a torch.distributed.DistBackendError.
I would be very grateful if someone could explain what would be most suitable for my use case. A minimal working example would be the icing on the cake. Surely, if the Open LLM Leaderboard shows that online knowledge distillation (soft-logit) is possible with large models exceeding 111B, there must be a straightforward way to achieve what I want, but I'm unsure how everyone else does it.
For reference, below is the script I'm currently working with:
```bash
deepspeed --num_nodes 2 --num_gpus 8 \
    --hostfile $HOSTFILE \
    --master_addr $MASTER_ADDR \
    --master_port=62535 \
    train.py \
    --teacher CohereLabs/c4ai-command-a-03-2025 \
    --student CohereLabs/c4ai-command-r7b-12-2024 \
    --epochs 1 --batch_size 1 --seq_len 4096 --temperature 1.0 --max_samples 150 --lr 1e-6 2>&1 | tee -a "./train.log"
```
```python
import deepspeed
import torch.distributed as dist
import os, math, argparse, warnings, torch, random, multiprocessing as mp
from datasets import load_dataset, concatenate_datasets
from transformers import (AutoTokenizer, AutoModelForCausalLM,
                          PreTrainedTokenizerBase)
from torch.nn.utils.rnn import pad_sequence
import torch.nn.functional as F
from datetime import timedelta
from deepspeed.runtime.utils import see_memory_usage

os.environ["TOKENIZERS_PARALLELISM"] = "false"
os.environ.setdefault("NCCL_ASYNC_ERROR_HANDLING", "1")
warnings.filterwarnings("ignore", category=UserWarning)
mp.set_start_method("spawn", force=True)

def get_args():
    p = argparse.ArgumentParser()
    p.add_argument("--teacher", default="")
    p.add_argument("--student", default="")
    p.add_argument("--dataset", default="")
    p.add_argument("--split", default="train")
    p.add_argument("--epochs", type=int, default=1)
    p.add_argument("--batch_size", type=int, default=1,
                   help="per-GPU micro-batch")
    p.add_argument("--seq_len", type=int, default=4096)
    p.add_argument("--temperature", type=float, default=1.0)
    p.add_argument("--lr", type=float, default=1e-6)
    p.add_argument("--max_samples", type=int, default=0,
                   help="0=1000 ")
    p.add_argument("--local_rank", type=int, default=-1,
                   help="deepspeed/torch launcher GPU index")
    p.add_argument("--cache_path", default="")
    p.add_argument("--hf_token", default="")
    p = deepspeed.add_config_arguments(p)
    return p.parse_args()

def main():
    timeout_seconds = 3600
    timeout_duration = timedelta(seconds=timeout_seconds)
    dist.init_process_group(
        backend="nccl",
        timeout=timeout_duration
    )
    args = get_args()
    deepspeed.init_distributed()
    rank, world = deepspeed.comm.get_rank(), deepspeed.comm.get_world_size()
    device = torch.device("cuda", deepspeed.comm.get_local_rank())

    # Tokenizer
    tokenizer = AutoTokenizer.from_pretrained(args.student,
                                              use_fast=True, trust_remote_code=True)
    if tokenizer.pad_token is None:
        tokenizer.pad_token = tokenizer.eos_token
    # tokenizer token_id
    tokenizer.eos_token_id = tokenizer.convert_tokens_to_ids(tokenizer.eos_token)
    tokenizer.pad_token_id = tokenizer.convert_tokens_to_ids(tokenizer.pad_token)

    # Teacher (inference only)
    teacher_model = AutoModelForCausalLM.from_pretrained(
        args.teacher, torch_dtype=torch.bfloat16,
        low_cpu_mem_usage=True,
        trust_remote_code=True, device_map=None,
        cache_dir=args.cache_path, token=args.hf_token)
    see_memory_usage("After load model", force=True)
    teacher_model.config.eos_token_id = tokenizer.eos_token_id
    teacher_model.config.pad_token_id = tokenizer.pad_token_id
    teacher_engine = deepspeed.init_inference(
        teacher_model,
        mp_size=world,
        dtype=torch.bfloat16,
        replace_with_kernel_inject=True,
        replace_method="auto")
```
|
https://github.com/huggingface/transformers/issues/39637
|
closed
|
[] | 2025-07-24T15:05:38Z
| 2025-09-01T08:03:18Z
| 3
|
seona21
|
huggingface/lerobot
| 1,586
|
Real-world deploy on ALOHA Robot
|
How could I deploy the policies on the ALOHA robot? And how could I deploy in the real world?
|
https://github.com/huggingface/lerobot/issues/1586
|
open
|
[
"question",
"robots"
] | 2025-07-24T12:52:06Z
| 2025-08-21T16:18:26Z
| null |
LogSSim
|
huggingface/diffusers
| 11,984
|
A compatibility issue when using custom Stable Diffusion with pre-trained ControlNets
|
I have successfully fine-tuned a Stable Diffusion v1.5 model using the Dreambooth script, and the results are excellent. However, I've encountered a compatibility issue when using this custom model with pre-trained ControlNets. Since the Dreambooth process modifies the U-Net weights, the original ControlNet is no longer aligned with the fine-tuned model, leading to a significant degradation in control and image quality.
My goal is to find a way to make them compatible again. It's important to clarify that I am trying to avoid a full, separate fine-tuning of the ControlNet on my custom model. That process is data- and resource-intensive, which defeats the purpose of a lightweight personalization method like Dreambooth. I have tried modifying the train_dreambooth.py script to incorporate ControlNet, but results have been consistently poor.
Is there a dedicated script or a recommended workflow in diffusers to fine-tune a Stable Diffusion with ControlNet via Dreambooth? Any guidance or pointers would be greatly appreciated. Thanks a lot!
|
https://github.com/huggingface/diffusers/issues/11984
|
closed
|
[] | 2025-07-24T09:16:55Z
| 2025-07-24T15:15:20Z
| 6
|
ScienceLi1125
|
huggingface/lighteval
| 868
|
How to calculate perplexity from an OpenAI compatible API
|
Hello,
I'm new to LightEval. I want to use LightEval to evaluate an LLM model that is served via an API. The API is OpenAI compatible. It also returns logprobs for each token. Is there a built-in function to evaluate the perplexity score? I'm asking because I see that it’s not implemented.
https://github.com/huggingface/lighteval/blob/d805f9fa0a84da9ca4c0c6a638bbed149a7012a3/src/lighteval/models/litellm_model.py#L322
Any help or guidance is greatly appreciated. Thanks.
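For reference, perplexity can be computed directly from the per-token logprobs the endpoint returns; a minimal sketch (the exact response fields of an OpenAI-compatible API are assumptions, only the formula is shown):
```python
# PPL = exp(-1/N * sum_i log p(token_i | context)).
import math

def perplexity(token_logprobs):
    lps = [lp for lp in token_logprobs if lp is not None]  # the first token often has no logprob
    return math.exp(-sum(lps) / len(lps))

print(perplexity([-1.2, -0.3, -2.5, -0.8]))  # ~3.32
```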
|
https://github.com/huggingface/lighteval/issues/868
|
open
|
[] | 2025-07-24T07:27:05Z
| 2025-07-24T07:27:05Z
| null |
mrtpk
|
huggingface/lerobot
| 1,580
|
Environment_State in act and SmolVLA policy
|
Hi, Thanks for the awesome work!
I have been noticing a variable called observation.environment_state in the ACT policy. What exactly is the environment_state feature? Thanks!
|
https://github.com/huggingface/lerobot/issues/1580
|
closed
|
[
"question",
"policies"
] | 2025-07-24T03:32:31Z
| 2025-10-08T13:09:33Z
| null |
kasiv008
|
pytorch/tutorials
| 3,488
|
💡 [REQUEST] - tutorial on torchrl LLM API
|
### 🚀 Describe the improvement or the new tutorial
I'd like to write a tutorial about the TorchRL LLM post-training API, including data formatting for RL, multi-turn conversation handling, tool usage, etc.
@svekars what’s the policy on open-source models usage? Can I load and use a small model (0.5B) freely?
### Existing tutorials on this topic
I don’t think there are any
### Additional context
_No response_
|
https://github.com/pytorch/tutorials/issues/3488
|
open
|
[
"tutorial-proposal"
] | 2025-07-23T21:23:25Z
| 2025-07-23T22:26:43Z
| 3
|
vmoens
|
huggingface/transformers.js
| 1,379
|
Why Do I Get Different Outputs in Python and JavaScript for the Same ONNX Model?
|
Hi ,
I'm running inference on the same ONNX model (t5-small-new) using both Python and JavaScript (via ONNX Runtime). However, I'm noticing that the outputs are different between the two environments, even though the inputs and model are the same. The output of the Python code is correct while JS is not accurate.
Python Code:
```
from optimum.onnxruntime import ORTModelForSeq2SeqLM
from transformers import AutoTokenizer
model = ORTModelForSeq2SeqLM.from_pretrained(
    "t5-small-new",
    use_cache=True
)
tokenizer = AutoTokenizer.from_pretrained("t5-small-new")
inputs = tokenizer("My Input", return_tensors="pt")
outputs = model.generate(**inputs)
print("Prediction:", tokenizer.decode(outputs[0], skip_special_tokens=True))
```
JS code:
```
const inputText = "My Input";
const tokenizer = await window.AutoTokenizer.from_pretrained("t5-small-new");
const model = await window.AutoModelForSeq2SeqLM.from_pretrained("t5-small-new", {
  dtype: "fp32",
  device: "wasm",
});
const encoded = await tokenizer(inputText, {
  return_tensors: "pt",
});
const output = await model.generate({
  input_ids: encoded.input_ids,
  attention_mask: encoded.attention_mask,
  use_cache: true,
});
const decoded = await tokenizer.decode(output[0], {
  skip_special_tokens: true,
});
console.log("JS Prediction:", decoded);
```
My model uses `decoder_model_merged.onnx`, `encoder_model.onnx`, and `decoder_model.onnx`.
Could you guide me on what is happening and why I get different results?
|
https://github.com/huggingface/transformers.js/issues/1379
|
closed
|
[
"question"
] | 2025-07-23T20:13:57Z
| 2025-08-29T23:43:21Z
| null |
mahdin75
|
pytorch/executorch
| 12,756
|
How to get ExecuTorch version in C++?
|
I am using ExecuTorch in my C++ application and I want to get the ExecuTorch version at compile time or runtime.
But I haven't found a `#define` or `const std::string` such as `EXECUTORCH_VERSION`, or a function such as `get_version()`.
For example, PyTorch has [`TORCH_VERSION`](https://github.com/pytorch/pytorch/blob/fe8f556006b3397b7bdf844ba9a6cf329c0c1846/torch/csrc/api/include/torch/version.h.in#L16) and TFLite has [`TFLITE_VERSION_STRING`](https://github.com/tensorflow/tensorflow/blob/56a01a65e8055a234cd2198eefaef1ef4f7b087f/tensorflow/lite/version.h#L27). Is something like this available in ExecuTorch?
cc @larryliu0820 @JacobSzwejbka @lucylq @mergennachin @byjlw
|
https://github.com/pytorch/executorch/issues/12756
|
open
|
[
"module: runtime",
"module: user experience"
] | 2025-07-23T19:17:40Z
| 2025-09-16T21:45:11Z
| null |
eltimen
|
huggingface/transformers
| 39,618
|
SageAttention for attention implementation?
|
### Feature request
I've noticed it's been a while now, but transformers still only has flash attention as the fastest attention backend for calls like these:
<img width="1307" height="780" alt="Image" src="https://github.com/user-attachments/assets/3f3d62f6-a166-4ca6-97a0-49263fd93299" />
Are there any plans to add sageattention as well?
### Motivation
It's become increasingly involved to have to monkey patch sage attention support for every new model that comes out, and for older models that used older versions of transformers, I've had to do unholy things like this:
<img width="1296" height="705" alt="Image" src="https://github.com/user-attachments/assets/c5f4ff6a-094a-48f4-9339-17de1ece43d0" />
### Your contribution
I have an example of a patch I had to do so I will upload that here
[llama_nar.py.txt](https://github.com/user-attachments/files/21393926/llama_nar.py.txt)
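In the meantime, a sketch of one possible route, under the assumption that the installed transformers version ships the custom attention registry (`AttentionInterface`); the wrapper signature and the `sageattn` call should be double-checked against your installed versions:
```python
# Hedged sketch: the causal flag is hard-coded and the attention_mask is ignored for brevity.
import torch
from sageattention import sageattn
from transformers import AttentionInterface, AutoModelForCausalLM

def sage_attention_forward(module, query, key, value, attention_mask=None,
                           scaling=None, dropout=0.0, **kwargs):
    # query/key/value arrive as (batch, heads, seq, head_dim), i.e. the "HND" layout.
    out = sageattn(query, key, value, tensor_layout="HND", is_causal=True)
    return out.transpose(1, 2).contiguous(), None  # (attn_output, attn_weights)

AttentionInterface.register("sage", sage_attention_forward)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.2-1B",          # any decoder-only model, as an example
    torch_dtype=torch.bfloat16,
    attn_implementation="sage",
)
```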
|
https://github.com/huggingface/transformers/issues/39618
|
open
|
[
"Feature request"
] | 2025-07-23T19:10:47Z
| 2025-07-25T12:30:37Z
| 4
|
Many0therFunctions
|
huggingface/diffusers
| 11,977
|
how to load a finetuned model especially during validation phase
|
<img width="1034" height="743" alt="Image" src="https://github.com/user-attachments/assets/c4e9318f-10aa-4b91-9d60-e28a3be38f8a" />
As shown above, I have fine-tuned the model and want to validate it, but the given demo, train_dreambooth_sd3.py, still uses
```python
pipeline = StableDiffusion3Pipeline.from_pretrained(
    args.pretrained_model_name_or_path,
    transformer=transformer,
    text_encoder=text_encoder_one,
    text_encoder_2=text_encoder_two,
    text_encoder_3=text_encoder_three,
)
```
I wonder why it still loads from args.pretrained_model_name_or_path when the fine-tuned model has been saved to the save path, which is "os.path.join(args.output_dir, f"checkpoint-{global_step}")".
So, how do I load the fine-tuned model during the validation phase?
Another point of confusion: what is the difference between `StableDiffusion3Pipeline.from_pretrained()` and `SD3Transformer2DModel.from_pretrained()`, as in the following:
<img width="1034" height="743" alt="Image" src="https://github.com/user-attachments/assets/7d9e5915-8aa2-4678-b39f-6ecb4480a02b" />
|
https://github.com/huggingface/diffusers/issues/11977
|
open
|
[] | 2025-07-23T11:54:16Z
| 2025-07-24T09:19:11Z
| null |
micklexqg
|
pytorch/executorch
| 12,749
|
How to run a executorch model directly from memory instead of saving it as a disk file
|
### 📚 The doc issue
Hi,
I wanted to know if there is any ExecuTorch runtime API that can accept a *.pte model already available in memory (in some sort of buffer format) and use it to load the model and run inference.
So far, I could only find a few which require the model to be passed as a disk file.
### Suggest a potential alternative/fix
_No response_
|
https://github.com/pytorch/executorch/issues/12749
|
open
|
[
"module: extension"
] | 2025-07-23T11:09:04Z
| 2025-09-02T06:22:00Z
| null |
vikasbalaga
|
huggingface/lerobot
| 1,579
|
Is there a video backend supporting nondestructive encoding?
|
I saved images during recording by not deleting the `images` folder. When I compare the first frame .png in the `images` folder with the first image from `dataset = make_dataset(config)`, I find that the saved png file is lossless, but the image I get through lerobot is not.
How I found this:
in `def save_episode`
```
# img_dir = self.root / "images"
# if img_dir.is_dir():
# shutil.rmtree(self.root / "images")
```
This has been moved in the latest version; now:
```
def encode_episode_videos(self, episode_index: int) -> None:
    ...
    encode_video_frames(img_dir, video_path, self.fps, overwrite=True)
    shutil.rmtree(img_dir)
```
I saved some images during recording with one channel filled with zeros. Reading the saved png with cv2 shows that it indeed has a 0-filled channel.
Then I tried to check whether I can get the same image through lerobot, so I did this in train.py:
```
raw_dataloader = torch.utils.data.DataLoader(
    dataset,
    num_workers=cfg.num_workers,
    batch_size=cfg.batch_size,
    shuffle=False,
    sampler=sampler,
    pin_memory=device.type == "cuda",
    drop_last=False,
)
image_tensor = peek_batch["observation.images.side_depth"][0]
image_np = (image_tensor * 255).permute(1, 2, 0).cpu().numpy().astype(np.uint8)
```
Sadly, `image_np` is quite different from the real png: it doesn't have a 0-filled channel, and its average value is larger.
|
https://github.com/huggingface/lerobot/issues/1579
|
open
|
[
"question",
"dataset"
] | 2025-07-23T08:38:39Z
| 2025-08-12T09:22:26Z
| null |
milong26
|
huggingface/candle
| 3,032
|
`matmul` (and others) Precision issues between Candle & PyTorch
|
We noticed there's some precision discrepancy in matrix multiplication and the linear layer between between Candle and PyTorch. This matters a lot when reproducing LLMs originated from PyTorch into Candle. We used the `hf_hub::api::Api` to get the safetensors from the hub and for testing the precision issues for each modules independently. This also occurs for the `BF16` dtype in `Cuda`.
Here's a shortened list of tests (for brevity) between `candle_core::tensor::Tensor::matmul` and `torch.matmul`
```
❌ test_0: MSE=0.0000000004096404, MAE=0.00001550 (dims: 2048x256, dtype: F32, device: Cpu)
❌ test_1: MSE=0.0000000003628351, MAE=0.00001453 (dims: 2048x256, dtype: F32, device: Cpu)
...
❌ test_48: MSE=0.0000000000824194, MAE=0.00000633 (dims: 512x1024, dtype: F32, device: Cpu)
❌ test_49: MSE=0.0000000003840639, MAE=0.00001534 (dims: 2048x256, dtype: F32, device: Cpu)
```
We did notice `candle_nn::Embedding` performed at 0-tolerance (tested indirectly), which probably means the loaded weights themselves are exact.
Have you tried validating your implementation against PyTorch at 0-tolerance (within the same CPU/GPU architecture)? Is there any proper way to mitigate this? We need it for our implementation. Thank you.
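For context, exact (0-tolerance) agreement between two independent float32 matmul implementations is generally impossible, because the floating-point reduction order differs (blocking, SIMD width, threading). A PyTorch-only sketch, not Candle code, showing the typical size of this error against a float64 reference, for comparison with the MAE values above:
```python
import torch

torch.manual_seed(0)
a = torch.randn(2048, 256)
b = torch.randn(256, 2048)

out_f32 = a @ b                              # float32 accumulation
ref = (a.double() @ b.double()).float()      # float64 reference, rounded back to float32

err = (out_f32 - ref).abs()
print(f"MAE vs f64 reference: {err.mean().item():.8f}")

# Cross-implementation checks are therefore usually done with a relative/absolute tolerance:
print(torch.allclose(out_f32, ref, rtol=1e-4, atol=1e-5))
```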
|
https://github.com/huggingface/candle/issues/3032
|
closed
|
[] | 2025-07-23T04:07:08Z
| 2025-09-27T21:25:51Z
| 4
|
andrew-shc
|
huggingface/lerobot
| 1,578
|
Lerobot metaworld dataset only provides 49 tasks
|
https://huggingface.co/datasets/lerobot/metaworld_mt50
There are only 49 tasks, and the "Push the puck to a goal" task is repeated twice.
|
https://github.com/huggingface/lerobot/issues/1578
|
open
|
[
"question",
"simulation"
] | 2025-07-23T04:03:17Z
| 2025-08-12T09:23:12Z
| null |
chenkang455
|
huggingface/lerobot
| 1,577
|
test failed after training SVLA
|
I collected 76 sets of data and used the same calibration file as during collection. However, after training for 24k steps, the model obtained was unable to complete the grasping task during inference. Can anyone help me deal with the problem?
[dataset](https://huggingface.co/datasets/Xiaoyan97/orange_block_pickplace)
|
https://github.com/huggingface/lerobot/issues/1577
|
open
|
[
"question",
"policies"
] | 2025-07-23T03:59:26Z
| 2025-08-12T09:23:26Z
| null |
Liu-Xiaoyan97
|
huggingface/lerobot
| 1,576
|
Multiple Dataset training
|
How can I train on multiple LeRobot datasets? Is there an existing function I can use for this?
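Some lerobot versions ship a multi-dataset wrapper (e.g. `MultiLeRobotDataset`), but I am not certain of its current status; a minimal fallback sketch using plain PyTorch concatenation, with hypothetical repo ids and an import path that may differ between versions:
```python
import torch
# NOTE: older releases use lerobot.common.datasets.*, newer ones lerobot.datasets.* -- adjust as needed.
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset

ds_a = LeRobotDataset("user/dataset_a")  # hypothetical repo ids
ds_b = LeRobotDataset("user/dataset_b")

# Works as long as both datasets expose the same feature keys and shapes.
combined = torch.utils.data.ConcatDataset([ds_a, ds_b])
loader = torch.utils.data.DataLoader(combined, batch_size=8, shuffle=True)
```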
|
https://github.com/huggingface/lerobot/issues/1576
|
open
|
[
"question",
"dataset"
] | 2025-07-23T03:46:03Z
| 2025-10-10T09:30:06Z
| null |
JustinKai0527
|
huggingface/transformers
| 39,596
|
Does transformers support python3.13 -- disable-gil or python3.14 free threading?
|
Does transformers support Python 3.13 with --disable-gil, or Python 3.14 free-threading?
I got an error when trying to install transformers on these two Python versions.
|
https://github.com/huggingface/transformers/issues/39596
|
closed
|
[] | 2025-07-23T02:34:03Z
| 2025-08-30T08:02:54Z
| 2
|
SoulH-qqq
|
huggingface/transformers.js
| 1,374
|
nanoVLM support
|
### Question
I would like to know if there is any plan to support models built with nanoVLM [https://github.com/huggingface/nanoVLM], thanks.
|
https://github.com/huggingface/transformers.js/issues/1374
|
open
|
[
"question"
] | 2025-07-22T11:43:57Z
| 2025-07-23T09:02:15Z
| null |
sbrzz
|
huggingface/diffusers
| 11,971
|
What is the minimum memory requirement for model training?
|
Hello, I would like to try training an SDXL model on my own dataset. What is the minimum GPU memory required to train the model?
|
https://github.com/huggingface/diffusers/issues/11971
|
closed
|
[] | 2025-07-22T07:52:28Z
| 2025-07-22T08:26:27Z
| null |
WWWPPPGGG
|
pytorch/torchtitan
| 1,439
|
Duplicate definition of vocab_size?
|
Hi @wwwjn @H-Huang @tianyu-l thanks for the amazing work on deepseek v3
Have a minor question: why is there a definition of vocab size here
https://github.com/pytorch/torchtitan/blob/4e73af3e2c5f99ad3cb5a21612e615a64b0b75e7/torchtitan/models/deepseek_v3/__init__.py#L50-L51C9
which then gets overridden by the tokenizer's vocab size here?
https://github.com/pytorch/torchtitan/blob/4e73af3e2c5f99ad3cb5a21612e615a64b0b75e7/torchtitan/models/deepseek_v3/model/args.py#L96-L100
|
https://github.com/pytorch/torchtitan/issues/1439
|
closed
|
[] | 2025-07-21T22:59:11Z
| 2025-07-23T04:09:56Z
| 1
|
vwxyzjn
|
huggingface/transformers
| 39,565
|
Model forward execution in full eager mode?
|
I know there is a flag `attn_implementation` which can trigger specialized attention kernel implementations. Besides this, does everything run in native PyTorch eager mode? Does `transformers` have any other custom op or kernel?
```python
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.1-8B", device_map="auto", torch_dtype=torch.bfloat16, attn_implementation=None)
model(input_ids=input_tokens)  # input_tokens: a LongTensor of token ids, shape (batch, seq_len)
```
I'm asking this to see if `transformers` can be used as a numerical baseline to verify other inference backends.
|
https://github.com/huggingface/transformers/issues/39565
|
closed
|
[] | 2025-07-21T21:49:05Z
| 2025-08-21T08:34:59Z
| 3
|
22quinn
|
huggingface/lerobot
| 1,564
|
How are Episode Stats used?
|
I'm looking to create a subset of an episode (e.g. seconds 2-4 of a 30-second episode), and wanted to know how episode_stats are used later on for training / inference.
Are they used to normalize model inputs, or are they used somewhere else as well?
For example, in modeling_act.py:
```
self.normalize_inputs = Normalize(
config.input_features, config.normalization_mapping, dataset_stats)
```
|
https://github.com/huggingface/lerobot/issues/1564
|
closed
|
[
"question",
"policies",
"processor"
] | 2025-07-21T19:06:21Z
| 2025-08-12T09:27:29Z
| null |
andlyu
|
huggingface/lerobot
| 1,561
|
will you release the libero ft&eval setting?
|
Hello, SmolVLA is wonderful work. I noticed that you fine-tuned it on **LIBERO** and evaluated it there, but I couldn't achieve the same or a similar success rate **(just 76%, much lower than your 96%)**.
**Did you use async inference on LIBERO?**
I suspect my hyperparameters differ from yours, so could you release the scripts (finetune.py & eval.py) or just share your fine-tuning & evaluation settings? Here is my email: 602225349@qq.com
Thanks in advance!
|
https://github.com/huggingface/lerobot/issues/1561
|
closed
|
[
"enhancement",
"question",
"policies"
] | 2025-07-21T13:57:13Z
| 2025-09-23T09:25:04Z
| null |
JuilieZ
|
huggingface/transformers
| 39,554
|
Why `is_causal` is not used in `flash_attention_forward` ?
|
I want to perform bidirectional attention in the Qwen3 model to train an embedding model, so I passed `is_causal=False` in the model `forward` (I manually added `is_causal` arguments in all `forward` method such as `Qwen3Model` and `Qwen3Attention` in`modeling_qwen3.py`):
```python
class Qwen3Attention(nn.Module):
"""Multi-headed attention from 'Attention Is All You Need' paper"""
...
def forward(
self,
hidden_states: torch.Tensor,
position_embeddings: tuple[torch.Tensor, torch.Tensor],
attention_mask: Optional[torch.Tensor],
past_key_value: Optional[Cache] = None,
cache_position: Optional[torch.LongTensor] = None,
is_causal: Optional[bool] = True, # I add is_causal here
**kwargs: Unpack[FlashAttentionKwargs],
) -> tuple[torch.Tensor, Optional[torch.Tensor], Optional[tuple[torch.Tensor]]]:
...
attn_output, attn_weights = attention_interface(
self,
query_states,
key_states,
value_states,
attention_mask,
dropout=0.0 if not self.training else self.attention_dropout,
scaling=self.scaling,
sliding_window=self.sliding_window, # diff with Llama
is_causal=is_causal, # and is_causal from the argument is passed to the attention_interface (e.g. `flash_attention_2`, `sdpa_attention_forward`)
**kwargs,
)
```
I can successfully change the causality of the attention in `sdpa_attention_forward`. However, I realized that it does not change the causality in the attention in `flash_attention_forward`. After diving into the implementation of `flash_attention_forward`, I found the reason in `flash_attention_forward` located at `transformers/integrations/flash_attention.py`:
```python
def flash_attention_forward(
module: torch.nn.Module,
query: torch.Tensor,
key: torch.Tensor,
value: torch.Tensor,
attention_mask: Optional[torch.Tensor],
dropout: float = 0.0,
scaling: Optional[float] = None,
sliding_window: Optional[int] = None,
softcap: Optional[float] = None,
**kwargs,
) -> tuple[torch.Tensor, None]:
...
# FA2 always relies on the value set in the module, so remove it if present in kwargs to avoid passing it twice
kwargs.pop("is_causal", None)
attn_output = _flash_attention_forward(
query,
key,
value,
attention_mask,
query_length=seq_len,
is_causal=module.is_causal, # here module is `Qwen3Attention`
dropout=dropout,
softmax_scale=scaling,
sliding_window=sliding_window,
softcap=softcap,
use_top_left_mask=_use_top_left_mask,
target_dtype=target_dtype,
attn_implementation=module.config._attn_implementation,
**kwargs,
)
```
As you can see, the `is_causal` argument is popped, and the `is_causal` of `Qwen3Attention` is used as the argument. Note that `Qwen3Attention.is_causal` is never changed, and its default value is `True`, so the `is_causal` argument passed into `_flash_attention_forward` will always be `True` regardless of any change.
After I add a line of code to alter the `Qwen3Attention.is_causal`, i.e. `self.is_causal = is_causal` before passing the arguments into `attention_interface`, I can change the causality of `flash_attention_forward`. So I would like to know if it is a feature or a bug? Thank you!!
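A sketch of that workaround applied from the outside, without editing modeling_qwen3.py: since the FA2 wrapper reads `module.is_causal` rather than the popped kwarg, flipping the attribute on each attention module changes the mask FA2 builds. Whether this is the intended, supported route is exactly the question here.
```python
import torch
from transformers import AutoModelForCausalLM

# Requires flash-attn to be installed for the flash_attention_2 implementation.
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen3-0.6B",
    attn_implementation="flash_attention_2",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Make every self-attention layer bidirectional before running the encoder-style forward pass.
for layer in model.model.layers:
    layer.self_attn.is_causal = False
```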
|
https://github.com/huggingface/transformers/issues/39554
|
closed
|
[
"Flash Attention"
] | 2025-07-21T12:08:00Z
| 2025-11-11T12:32:41Z
| 9
|
lucaswychan
|
huggingface/peft
| 2,660
|
Custom models LoRA
|
Is there any way to fine-tune models that are not in the support list or custom models?
Currently, many public models have their LLM parts from Qwen. Can LLaMA-Factory use the Qwen template and only fine-tune the LLM part? Thank you
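For what it's worth, PEFT's LoRA does not require the architecture to be on any support list; it only matches module names. A minimal sketch with a toy module standing in for "a custom model whose LLM part uses Qwen-style projection names" (the names are assumptions; print your model to find the real ones):
```python
import torch.nn as nn
from peft import LoraConfig, get_peft_model

class TinyBlock(nn.Module):
    """Toy stand-in for one transformer block of the LLM part."""
    def __init__(self, d=64):
        super().__init__()
        self.q_proj = nn.Linear(d, d)
        self.k_proj = nn.Linear(d, d)
        self.v_proj = nn.Linear(d, d)
        self.o_proj = nn.Linear(d, d)

    def forward(self, x):
        return self.o_proj(self.q_proj(x) + self.k_proj(x) + self.v_proj(x))

model = nn.Sequential(TinyBlock(), TinyBlock())

# Restrict LoRA to the attention projections of the LLM part; everything else stays frozen.
config = LoraConfig(r=8, lora_alpha=16, target_modules=["q_proj", "k_proj", "v_proj", "o_proj"])
peft_model = get_peft_model(model, config)
peft_model.print_trainable_parameters()
```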
|
https://github.com/huggingface/peft/issues/2660
|
closed
|
[] | 2025-07-21T11:52:30Z
| 2025-07-24T12:53:34Z
| 6
|
stillbetter
|
huggingface/lerobot
| 1,559
|
Is the current model framework suitable for using automatic mixed precision?
|
I saw that `.to(torch.float32)` and `.to(torch.bfloat16)` are used in many places in the Pi0 model code. I then implemented parallel training of Pi0 with accelerate and found that, if I try to use AMP, the code reports a dtype-mismatch error. I want to know whether the existing code is suitable for automatic mixed precision, and if not, how it should be modified.
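For reference, the pattern AMP expects (a generic sketch, not Pi0-specific): parameters stay in float32 and `torch.autocast` chooses the compute dtype, so hard `.to(torch.bfloat16)` casts inside the model are likely what collides with AMP and would need to be removed or guarded.
```python
import torch

model = torch.nn.Linear(512, 512).cuda()              # parameters remain float32
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

x = torch.randn(8, 512, device="cuda")
with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
    loss = model(x).pow(2).mean()                     # matmul runs in bfloat16 under autocast
loss.backward()                                       # gradients of float32 params stay float32
optimizer.step()
```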
|
https://github.com/huggingface/lerobot/issues/1559
|
open
|
[
"question",
"policies"
] | 2025-07-21T10:45:26Z
| 2025-08-12T09:27:59Z
| null |
xliu0105
|
huggingface/transformers
| 39,549
|
Is there plan to integrate ColQwen2.5 into Transformers?
|
### Model description
Is ColQwen2ForRetrieval integrated into the transformers library, and are there plans to add [ColQwen2.5](https://github.com/illuin-tech/colpali/blob/main/colpali_engine/models/qwen2_5/colqwen2_5/modeling_colqwen2_5.py) in the future?
### Open source status
- [x] The model implementation is available
- [x] The model weights are available
### Provide useful links for the implementation
https://github.com/illuin-tech/colpali/blob/main/colpali_engine/models/qwen2_5/colqwen2_5/modeling_colqwen2_5.py
https://github.com/huggingface/transformers/pull/38391
|
https://github.com/huggingface/transformers/issues/39549
|
closed
|
[
"New model"
] | 2025-07-21T10:08:47Z
| 2025-11-03T23:31:08Z
| 0
|
rebel-thkim
|
huggingface/diffusers
| 11,966
|
How about forcing the first and last block on device when groupoffloading is used?
|
**Is your feature request related to a problem? Please describe.**
When group offloading is enabled, offloading and onloading cannot be streamed between steps, and this is a really time-consuming problem.
**Describe the solution you'd like.**
Is it possible to add an option that forces the first and last block to stay on device, to avoid offloading and onloading them?
@a-r-r-o-w Could you please give some help? Thanks so much.
|
https://github.com/huggingface/diffusers/issues/11966
|
open
|
[
"contributions-welcome",
"group-offloading"
] | 2025-07-21T08:38:30Z
| 2025-12-02T15:30:23Z
| 13
|
seed93
|
huggingface/tokenizers
| 1,829
|
The initial_alphabet parameter of `BpeTrainer` does not allow entries of more than one character
|
Hi everyone,
I am working on Tamil and Sinhala, which are morphologically rich languages. In these languages a character is actually a combination of multiple Unicode codepoints (similar to emojis), so it would be greatly beneficial to initialize the BPE alphabet with graphemes instead of single characters. Is there any workaround I can use to initialize the BPE algorithm this way? Thanks in advance!!
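A small sketch of the limitation and one possible workaround (my assumption, not an official recommendation): `initial_alphabet` keeps only the first character of each string, but whole grapheme clusters can still be registered afterwards as added tokens, with the caveat that added tokens are matched outside the BPE model itself.
```python
from tokenizers import Tokenizer
from tokenizers.models import BPE
from tokenizers.pre_tokenizers import Whitespace
from tokenizers.trainers import BpeTrainer

grapheme = "\u0b95\u0bbe"  # Tamil "கா" = consonant + vowel sign (two codepoints, one grapheme)

tokenizer = Tokenizer(BPE(unk_token="[UNK]"))
tokenizer.pre_tokenizer = Whitespace()

# Multi-character strings in initial_alphabet are truncated to their first character.
trainer = BpeTrainer(vocab_size=1000, special_tokens=["[UNK]"], initial_alphabet=[grapheme])
tokenizer.train_from_iterator(["கா கம் காலம்"], trainer=trainer)

# Possible workaround: register whole graphemes as added tokens after training.
tokenizer.add_tokens([grapheme])
print(tokenizer.encode(grapheme).tokens)
```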
|
https://github.com/huggingface/tokenizers/issues/1829
|
open
|
[] | 2025-07-21T08:30:21Z
| 2025-07-21T08:30:21Z
| 0
|
vmenan
|
huggingface/lerobot
| 1,554
|
How to use local datasets to train and evaluate
|
Due to network issues, I want to use only local datasets during training and evaluation, and prevent huggingface from uploading data or retrieving datasets from the Hub. Is there a good solution?
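A sketch of a fully offline setup (the import path and argument names may differ slightly between lerobot versions, and the paths are hypothetical): put the Hub into offline mode and point the dataset at a local copy.
```python
import os

os.environ["HF_HUB_OFFLINE"] = "1"   # huggingface_hub refuses network calls and uses local files only

from lerobot.common.datasets.lerobot_dataset import LeRobotDataset  # path may differ by version

dataset = LeRobotDataset("my_user/my_dataset", root="/data/lerobot/my_dataset")
print(len(dataset))
```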
|
https://github.com/huggingface/lerobot/issues/1554
|
closed
|
[
"question",
"dataset"
] | 2025-07-21T07:54:07Z
| 2025-10-08T12:58:32Z
| null |
zym123321
|
pytorch/tutorials
| 3,481
|
[BUG] - Broken links of PyTorch Libraries(torchao, torchrec etc) on the right side of the tutorial index page
|
### Add Link
https://docs.pytorch.org/tutorials/index.html
### Describe the bug
Those links in the "PyTorch Libraries" section of the sidebar are broken: they should point to `https://docs.pytorch.org/ao` instead of `https://docs.ppytorch.org/ao`, and the same goes for the other libraries. I searched the codebase and it seems these broken links come from the cppdocs auto compilation. Is there a pointer to how I can get started on a fix PR? Thank you!
<img width="1850" height="918" alt="Image" src="https://github.com/user-attachments/assets/393df9fd-e0cd-405b-855f-bae2046e0ae4" />
[cppdocs repo:]( https://github.com/search?q=repo%3Apytorch%2Fcppdocs%20ppytorch&type=code)
<img width="2024" height="1295" alt="Image" src="https://github.com/user-attachments/assets/b4b8997c-58c7-46ad-a1b5-339d99eea862" />
### Describe your environment
MacOS,
Google Chrome
cc @svekars @sekyondaMeta @AlannaBurke
|
https://github.com/pytorch/tutorials/issues/3481
|
closed
|
[
"bug",
"website"
] | 2025-07-21T06:56:26Z
| 2025-07-22T15:46:47Z
| 2
|
sniper35
|
huggingface/optimum
| 2,324
|
AutoConfig.from_dict Missing in transformers==4.51.3 — Incompatibility with optimum==1.26.1
|
### System Info
```shell
I am running into a critical compatibility issue between optimum and recent versions of transformers.
❗ Error Summary
When using:
transformers==4.51.3
optimum==1.26.1
onnx==1.17.0
onnxruntime==1.20.0
The following runtime error is thrown when attempting to load an ONNX model using ORTModelForTokenClassification.from_pretrained:
AttributeError: type object 'AutoConfig' has no attribute 'from_dict'
This traces back to:
config = AutoConfig.from_pretrained(...)
# ↓ internally calls:
return CONFIG_MAPPING[pattern].from_dict(config_dict, **unused_kwargs)
However, in transformers>=4.48, the method AutoConfig.from_dict appears to have been deprecated or removed. This causes optimum to break at runtime when trying to load ONNX models.
📦 Package Versions
transformers - 4.51.3
optimum - 1.26.1
onnx - 1.17.0
onnxruntime - 1.20.0
torch - 2.2.6
Due to a security advisory, we're required to upgrade to transformers>=4.48. However, even with the latest optimum==1.26.1, it appears optimum is not yet updated for compatibility with changes introduced in recent transformers versions.
ASK:
Is support for transformers>=4.48 (particularly 4.51.3) planned in an upcoming optimum release?
Could this AutoConfig.from_dict dependency be refactored or conditionally patched to restore compatibility?
Is there a compatibility roadmap available between transformers and optimum for ONNX workflows?
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction (minimal, reproducible, runnable)
Use transformers==4.51.3 and optimum==1.26.1
Load an exported ONNX model using ORTModelForTokenClassification.from_pretrained(...)
Observe the AttributeError about AutoConfig.from_dict
### Expected behavior
When using optimum==1.26.1 with transformers>=4.48 (specifically 4.51.3), the following should work without error:
from optimum.onnxruntime import ORTModelForTokenClassification
model = ORTModelForTokenClassification.from_pretrained("path/to/onnx/model")
The model should load successfully using the ONNX Runtime backend.
Internally, AutoConfig.from_pretrained(...) should function correctly regardless of changes in the transformers API (e.g., deprecation/removal of from_dict).
ONNX workflows should remain compatible with newer transformers versions, allowing teams to benefit from critical updates and security patches without breaking ONNX integration.
|
https://github.com/huggingface/optimum/issues/2324
|
open
|
[
"bug"
] | 2025-07-21T06:04:58Z
| 2025-08-01T07:10:20Z
| 5
|
rratnakar09
|
huggingface/diffusers
| 11,964
|
KeyError when loading LoRA for Flux model: missing lora_unet_final_layer_adaLN_modulation_1 weights
|
I'm trying to run Overlay-Kontext-Dev-LoRA locally by loading the LoRA weights using the pipe.load_lora_weights() function. However, I encountered the following error during execution:
> KeyError: 'lora_unet_final_layer_adaLN_modulation_1.lora_down.weight'
```python
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import load_image

# Load the pipeline with a specific torch data type for GPU optimization
pipe = DiffusionPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Kontext-dev",
    torch_dtype=torch.bfloat16
)

# Move the entire pipeline to the GPU
pipe.to("cuda")

# Load LoRA weights (this will also be on the GPU)
pipe.load_lora_weights("ilkerzgi/Overlay-Kontext-Dev-LoRA")

prompt = "Place it"
input_image = load_image("img2.png")

# The pipeline will now run on the GPU
image = pipe(image=input_image, prompt=prompt).images[0]
image.save("output_image.png")
```
Environment:
diffusers version: 0.35.0.dev0
Python: 3.10
Running locally on an Ubuntu environment with an RTX 4090
> Additional Note:
> The model file size is also quite large. I may need to quantize it before running it on the 4090 to avoid out-of-memory issues.
>
> Would appreciate any help or suggestions on how to resolve the loading issue. Thank you!
|
https://github.com/huggingface/diffusers/issues/11964
|
open
|
[] | 2025-07-21T05:16:34Z
| 2025-07-21T09:14:00Z
| 1
|
NEWbie0709
|
huggingface/transformers
| 39,545
|
Is the new Intel–Weizmann speculative decoding algorithm integrated into Transformers?
|
Hi,
I recently read about a new speculative decoding algorithm developed by Intel Labs and the Weizmann Institute, which reportedly improves inference speed by up to 2.8×, even when using draft and target models with different vocabularies or architectures.
References:
- [Intel Newsroom](https://newsroom.intel.com/artificial-intelligence/intel-weizmann-institute-speed-ai-with-speculative-decoding-advance?utm_source=chatgpt.com)
- [CTech Article](https://www.calcalistech.com/ctechnews/article/h1z7pydlex)
Several sources (including Intel press releases and third-party writeups) claim that this algorithm has already been integrated into the Hugging Face Transformers library.
However, I haven’t found any reference to this new version in the official Transformers documentation
My Questions:
1. Has this Intel–Weizmann speculative decoding algorithm actually been integrated into transformers?
2. If so, where can I find documentation or usage examples for how to enable it?
Thanks in advance for your help! This looks like a powerful advancement, and I'd love to test it.
|
https://github.com/huggingface/transformers/issues/39545
|
closed
|
[] | 2025-07-21T02:47:48Z
| 2025-07-22T12:15:54Z
| 4
|
NEWbie0709
|
huggingface/lerobot
| 1,552
|
Support smolvla training on Intel GPU
|
The current script only supports `cuda`, `mps`, and `cpu`.
With PyTorch 2.7's Intel GPU support, once such a PyTorch build is installed, the Intel GPU could be utilized in the training script.
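A sketch of the device-selection change this would imply, assuming a PyTorch build with the XPU backend enabled (this is not lerobot's actual helper, just an illustration):
```python
import torch

def select_device(requested: str = "auto") -> torch.device:
    if requested != "auto":
        return torch.device(requested)
    if torch.cuda.is_available():
        return torch.device("cuda")
    if hasattr(torch, "xpu") and torch.xpu.is_available():   # Intel GPU backend
        return torch.device("xpu")
    if torch.backends.mps.is_available():
        return torch.device("mps")
    return torch.device("cpu")

print(select_device())
```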
|
https://github.com/huggingface/lerobot/issues/1552
|
open
|
[
"enhancement",
"question",
"policies"
] | 2025-07-21T01:47:38Z
| 2025-10-09T07:40:10Z
| null |
xiangyang-95
|
huggingface/transformers
| 39,542
|
ValueError: You cannot specify both decoder_input_ids and decoder_inputs_embeds at the same time
|
### System Info
- `transformers` version: 4.53.2
- Platform: **Ubuntu 22.04** Linux 5.15.0-139-generic
- **Python 3.10.18** + ipykernel 6.29.5
- Pytorch 2.7.1+cu118
### Who can help?
@ArthurZucker
@SunMarc
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
I want to build a new MT model with a **BERT-based encoder** and a **decoder from opus-mt-en-zh** (loaded as `MarianMTModel`), BUT when I execute `Trainer.train()`, it reports `ValueError: You cannot specify both decoder_input_ids and decoder_inputs_embeds at the same time`. Below is the code for my model and trainer.
Thanks for helping!
```Python
# ManchuBERT Encoder + Opus-MT-zh Decoder
import torch
from torch import nn
from transformers.modeling_outputs import Seq2SeqLMOutput
def get_extended_attention_mask(attention_mask, input_shape, device, dtype=torch.float32):
"""
attention_mask: [B, seq_len]
return: [B, 1, 1, seq_len]
"""
mask = attention_mask[:, None, None, :] # [B, 1, 1, seq_len]
mask = mask.to(dtype=dtype)
mask = (1.0 - mask) * -10000.0
return mask
class ManchuZhMT(nn.Module):
def __init__(self, bert, marian):
super().__init__()
self.decoder_embeddings = marian.model.decoder.embed_tokens
self.embeddings = bert.embeddings
self.encoder = bert.encoder
self.decoder = marian.model.decoder
self.lm_head = marian.lm_head
self.final_logits_bias = marian.final_logits_bias
self.config = marian.config
def forward(self,
input_ids=None,
attention_mask=None,
decoder_input_ids=None,
decoder_attention_mask=None,
labels=None,
**kwargs):
hidden_states = self.embeddings(input_ids=input_ids)
attention_mask = attention_mask.to(dtype=torch.float32)
extended_mask = get_extended_attention_mask(attention_mask, input_ids.shape, input_ids.device)
enc_out = self.encoder(hidden_states=hidden_states,
attention_mask=extended_mask,
return_dict=True)
dec_out = self.decoder(
input_ids=decoder_input_ids,
attention_mask=decoder_attention_mask,
encoder_hidden_states=enc_out.last_hidden_state,
encoder_attention_mask=extended_mask,
return_dict=True)
logits = self.lm_head(dec_out.last_hidden_state) + self.final_logits_bias
loss = None
if labels is not None:
loss_fct = nn.CrossEntropyLoss(ignore_index=-100)
loss = loss_fct(logits.view(-1, logits.size(-1)), labels.view(-1))
return Seq2SeqLMOutput(loss=loss, logits=logits)
def prepare_inputs_for_generation(self, *args, **kwargs):
return self.decoder.prepare_inputs_for_generation(*args, **kwargs)
def _prepare_encoder_decoder_kwargs_for_generation(self, *args, **kwargs):
return self.decoder._prepare_encoder_decoder_kwargs_for_generation(*args, **kwargs)
model = ManchuZhMT(manchu_model, chn_model)
print(model)
# freeze Decoder + LM Head
for p in model.decoder.parameters():
p.requires_grad = False
for p in model.lm_head.parameters():
p.requires_grad = False
```
```Python
# Add LoRA for Encoder
from peft import LoraConfig, get_peft_model, TaskType
num_layers = len(model.encoder.layer)
target_modules = []
for i in range(num_layers):
target_modules.extend([
f"encoder.layer.{i}.attention.self.query",
f"encoder.layer.{i}.attention.self.key",
f"encoder.layer.{i}.attention.self.value",
f"encoder.layer.{i}.attention.output.dense",
f"encoder.layer.{i}.intermediate.dense",
f"encoder.layer.{i}.output.dense",
])
lora_config = LoraConfig(
task_type=TaskType.SEQ_2_SEQ_LM,
target_modules=target_modules,
r=16,
lora_alpha=32,
lora_dropout=0.05,
bias="none",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```
```Python
# Start Train!
from transformers import Seq2SeqTrainer, Seq2SeqTrainingArguments
args = Seq2SeqTrainingArguments(
output_dir="./lora_with_bert",
per_device_train_batch_size=batch_size,
per_device_eval_batch_size=batch_size,
num_train_epochs=10,
learning_rate=3e-4,
fp16=True,
save_strategy="epoch",
predict_with_generate=True,
logging_steps=100,
report_to="none",
)
trainer = Seq2SeqTrainer(
model=model,
args=args,
train_dataset=tokenized_ds["train"],
eval_dataset=tokenized_ds["val"],
tokenizer=manchu_tok,
)
trainer.train()
trainer.save_model("./lora_with_bert/final")
```
### Expected behavior
|
https://github.com/huggingface/transformers/issues/39542
|
closed
|
[
"Usage",
"Good First Issue",
"trainer",
"bug"
] | 2025-07-21T01:06:27Z
| 2025-08-22T05:53:51Z
| 10
|
xjackzenvey
|
huggingface/transformers
| 39,551
|
InformerForPrediction [I would like to seek your opinions, everyone, How can I set the dynamic real features for prediction]
|
Here is the description cited from the docs of InformerForPrediction:
> future_time_features (torch.FloatTensor of shape (batch_size, prediction_length, num_features)) — Required time features for the prediction window, which the model internally will add to future_values. These could be things like “month of year”, “day of the month”, etc. encoded as vectors (for instance as Fourier features). These could also be so-called “age” features, which basically help the model know “at which point in life” a time-series is. Age features have small values for distant past time steps and increase monotonically the more we approach the current time step. Holiday features are also a good example of time features.
> These features serve as the “positional encodings” of the inputs. So contrary to a model like BERT, where the position encodings are learned from scratch internally as parameters of the model, the Time Series Transformer requires to provide additional time features. The Time Series Transformer only learns additional embeddings for static_categorical_features.
> Additional dynamic real covariates can be concatenated to this tensor, with the caveat that these features must be known at prediction time.
> The num_features here is equal to `config.num_time_features + config.num_dynamic_real_features`.
Hi, I have a question regarding inference in time series forecasting models.
When making predictions, how can I obtain or construct the dynamic_real_features for the future steps (i.e., for the prediction_length)?
More specifically, how should I concatenate the corresponding dynamic_real_features and time_features during inference?
Is it appropriate to use all-zero placeholders for the future dynamic_real_features?
Will this affect prediction performance, considering that during training the model has access to real values for these features over the full context + prediction window?
On a related note:
In time series forecasting, is it necessary for all timestamps in the input window to be equally spaced (e.g., every x minutes)?
Or can I use sequences with irregular time intervals, as long as the time order is preserved?
Thanks for your help!
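A small sketch of how the future-window features could be assembled, following the shapes in the quoted documentation; the zero placeholder for unknown covariates is an assumption to be validated, since its impact depends on how informative those covariates were during training.
```python
import torch

batch_size, prediction_length = 4, 24
num_time_features, num_dynamic_real_features = 2, 3   # must match the model config

# Calendar-style features are known ahead of time (month, day-of-week, age feature, ...).
future_calendar = torch.randn(batch_size, prediction_length, num_time_features)

# Dynamic real covariates that are NOT known at prediction time: zeros as a placeholder.
future_dynamic_real = torch.zeros(batch_size, prediction_length, num_dynamic_real_features)

future_time_features = torch.cat([future_calendar, future_dynamic_real], dim=-1)
print(future_time_features.shape)   # torch.Size([4, 24, 5]) == (batch, prediction_length, num_features)
```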
|
https://github.com/huggingface/transformers/issues/39551
|
closed
|
[] | 2025-07-20T11:38:50Z
| 2025-08-28T08:03:20Z
| null |
2004learner
|
pytorch/torchtitan
| 1,422
|
[Gemma3] Support?
|
Hi Authors,
Is there a plan for Gemme3 series?
Best,
Peter
|
https://github.com/pytorch/torchtitan/issues/1422
|
open
|
[] | 2025-07-20T03:22:02Z
| 2025-08-21T03:25:09Z
| 1
|
YHPeter
|
huggingface/diffusers
| 11,961
|
New Adapter/Pipeline Request: IT-Blender for Creative Conceptual Blending
|
## Model/Pipeline/Scheduler description
### Name of the model/pipeline/scheduler
"Image-and-Text Concept Blender" (IT-Blender), a diffusion adapter that blends visual concepts from a real reference image with textual concepts from a prompt in a disentangled manner. The goal is to enhance human creativity in design tasks.
### Project page & ArXiv link
Paper link: https://arxiv.org/pdf/2506.24085
The project website: https://imagineforme.github.io/
**(a lot of interesting feasible examples are in the project page.)**
</br>
<img width="2880" height="3159" alt="Image" src="https://github.com/user-attachments/assets/87607797-32a1-41a5-b5aa-69cd8406352c" />
### What is the proposed method?
IT-Blender is an adapter that works with existing models like SD and FLUX. Its core innovation is the **Blended Attention (BA)** module. This module modifies the standard self-attention layers. It uses a two-stream approach (a noisy stream for generation and a clean reference stream for the image) and introduces trainable parameters within an Image Cross-Attention (imCA) term to bridge the distributional shift between clean and noisy latents.
### Is the pipeline different from an existing pipeline?
Yes. The IT-Blender pipeline is distinct for a few reasons:
1. **Native Image Encoding**: It uses the diffusion model's own denoising network to encode the reference image by forwarding a clean version at "t=0". This avoids an external image encoder to better preserve details.
2. **Two-Stream Processing**: During training and inference, it processes a "noisy stream" for the text-guided generation and a "reference stream" for the clean visual concept image simultaneously.
3. **Blended Attention Integration**: The pipeline replaces standard self-attention modules with the new Blended Attention (BA) module, which is designed to physically separate textual and visual concept processing.
### Why is this method useful?
The method is particularly effective for creative tasks like product design, character design, and graphic design, as shown by the extensive examples in the paper and project page. We believe it would be a valuable and unique addition to the `diffusers` library.
### Open source status
- [x] The model implementation is available.
- [x] The model weights are available (Only relevant if addition is not a scheduler).
### Provide useful links for the implementation
**Demo page**: https://huggingface.co/spaces/WonwoongCho/IT-Blender
**GitHub page for inference**: https://github.com/WonwoongCho/IT-Blender
Note that we are using our own diffusers with a little bit of changes (`requirements.txt` in the github repo);
**Changed Diffusers Pipeline for FLUX**: https://github.com/WonwoongCho/diffusers/blob/main/src/diffusers/pipelines/flux/pipeline_flux.py
**Changed Diffusers Pipeline for SD1.5**: https://github.com/WonwoongCho/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py
|
https://github.com/huggingface/diffusers/issues/11961
|
open
|
[] | 2025-07-20T03:07:38Z
| 2025-07-20T03:08:06Z
| 0
|
WonwoongCho
|
huggingface/transformers
| 39,522
|
T5Gemma failing on provided example
|
### System Info
- `transformers` version: 4.53.2
- Platform: Linux-6.14.0-23-generic-x86_64-with-glibc2.41
- Python version: 3.13.3
- Huggingface_hub version: 0.33.4
- Safetensors version: 0.5.3
- Accelerate version: 1.8.1
- Accelerate config: - compute_environment: LOCAL_MACHINE
- distributed_type: NO
- mixed_precision: bf16
- use_cpu: False
- debug: False
- num_processes: 1
- machine_rank: 0
- num_machines: 1
- gpu_ids: all
- rdzv_backend: static
- same_network: True
- main_training_function: main
- enable_cpu_affinity: True
- downcast_bf16: no
- tpu_use_cluster: False
- tpu_use_sudo: False
- tpu_env: []
- dynamo_config: {'dynamo_backend': 'INDUCTOR'}
- DeepSpeed version: not installed
- PyTorch version (accelerator?): 2.7.1+cu128 (CUDA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA GeForce RTX 5060 Ti
### Who can help?
@ArthurZucker and @itazap
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Run the example from the T5Gemma docs page.
```
echo -e "Question: Why is the sky blue? Answer:" | transformers run --task text2text-generation --model google/t5gemma-s-s-ul2 --device 0
```
### Expected behavior
When I run I get:
```
File ".venv/lib/python3.13/site-packages/transformers/configuration_utils.py", line 209, in __getattribute__
return super().__getattribute__(key)
~~~~~~~~~~~~~~~~~~~~~~~~^^^^^
AttributeError: 'T5GemmaConfig' object has no attribute **'vocab_size'**
```
Indeed: `vocab_size` is a sub-attribute of the encoder/decoder configs, not a direct attribute of `T5GemmaConfig`.
|
https://github.com/huggingface/transformers/issues/39522
|
closed
|
[
"bug"
] | 2025-07-19T11:07:26Z
| 2025-08-27T07:51:08Z
| 7
|
jadermcs
|
pytorch/executorch
| 12,659
|
Fix bug in export recipe logic where quantization output is not being forwarded and reexport if quantized.
|
### 🚀 The feature, motivation and pitch
I've found a couple of issues with the original export recipes logic; its functionality is incomplete:
1. The output of the quantize stage is not propagated to the next stages.
2. When the quantize stage is run, we should re-export the model before lowering to edge.
### Alternatives
_No response_
### Additional context
_No response_
### RFC (Optional)
_No response_
cc @JacobSzwejbka @angelayi
|
https://github.com/pytorch/executorch/issues/12659
|
closed
|
[
"module: exir",
"triaged"
] | 2025-07-19T03:22:30Z
| 2025-07-23T21:48:14Z
| null |
abhinaykukkadapu
|
huggingface/lerobot
| 1,540
|
Controlling robot with text using SmolVLA
|
Is it possible to control the robot with text inputs? I thought that's what a VLA model was...
I cannot find any instructions on how to do this anywhere...
I found this https://huggingface.co/masato-ka/smolvla_block_instruction , but control_robot was split into multiple files recently - none of which seem to work.
|
https://github.com/huggingface/lerobot/issues/1540
|
open
|
[
"question",
"policies"
] | 2025-07-18T23:09:11Z
| 2025-08-12T09:35:59Z
| null |
drain-pipe
|
pytorch/tutorials
| 3,473
|
💡trace images are too small to see anything
|
### 🚀 Describe the improvement or the new tutorial
The trace images in https://docs.pytorch.org/tutorials/intermediate/pinmem_nonblock.html are not quite readable because they are massively scaled down. Is it possible to make them clickable/zoom-able?
<img width="901" height="1120" alt="Image" src="https://github.com/user-attachments/assets/b5b339a6-edbd-4d06-8003-236d4f60db35" />
I was able to view them via browser's open image in a new tab feature and then zoom, but this is very cumbersome.
This probably applies to some other tutorials as well if they contains trace snapshots.
thanks.
### Existing tutorials on this topic
https://docs.pytorch.org/tutorials/intermediate/pinmem_nonblock.html
### Additional context
_No response_
cc @svekars @sekyondaMeta @AlannaBurke
|
https://github.com/pytorch/tutorials/issues/3473
|
open
|
[
"website"
] | 2025-07-18T22:18:37Z
| 2025-07-18T22:34:43Z
| 0
|
stas00
|
huggingface/diffusers
| 11,956
|
Frequency-Decoupled Guidance (FDG) for diffusion models
|
FDG is a new method for applying CFG in the frequency domain. It improves generation quality at low CFG scales while inherently avoiding the harmful effects of high CFG values. It could be a nice addition to the guiders part of diffusers. The implementation details for FDG are available on page 19 of the paper.
https://huggingface.co/papers/2506.19713
|
https://github.com/huggingface/diffusers/issues/11956
|
closed
|
[
"help wanted",
"Good second issue",
"contributions-welcome",
"advanced",
"consider-for-modular-diffusers"
] | 2025-07-18T19:12:50Z
| 2025-08-07T05:51:03Z
| 5
|
Msadat97
|
pytorch/torchtitan
| 1,415
|
[Feature request] Use omegaconf or hydra for the config system
|
Is there a plan to use Omegaconf or Hydra for the configuration system?
The current .toml-based configuration system is simple but verbose: it does not support configuration inheritance or composition, which prevents config reuse.
If this is needed, I am interested in contributing an alternative configuration solution based on Omegaconf.
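For illustration, the kind of composition OmegaConf gives for free (a generic sketch, not tied to torchtitan's current schema):
```python
from omegaconf import OmegaConf

base = OmegaConf.create({"model": {"dim": 4096, "n_layers": 32}, "training": {"lr": 3e-4}})
experiment = OmegaConf.create({"model": {"n_layers": 16}, "training": {"lr": 1e-4}})

cfg = OmegaConf.merge(base, experiment)   # later configs override key-by-key
print(OmegaConf.to_yaml(cfg))
```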
|
https://github.com/pytorch/torchtitan/issues/1415
|
open
|
[] | 2025-07-18T18:28:34Z
| 2025-07-19T00:49:55Z
| 3
|
yzhao30
|
huggingface/datasets
| 7,689
|
BadRequestError for loading dataset?
|
### Describe the bug
Up until a couple days ago I was having no issues loading `Helsinki-NLP/europarl` and `Helsinki-NLP/un_pc`, but now suddenly I get the following error:
```
huggingface_hub.errors.BadRequestError: (Request ID: ...)
Bad request:
* Invalid input: expected array, received string * at paths * Invalid input: expected boolean, received string * at expand
✖ Invalid input: expected array, received string
→ at paths
✖ Invalid input: expected boolean, received string
→ at expand
```
I tried with both `4.0.0` and `3.5.1` since this dataset uses `trust_remote_code`, but I get the same error with both.
What can I do to load the dataset? I checked the documentation and GitHub issues here, but couldn't find a solution.
### Steps to reproduce the bug
```python
import datasets
ds = datasets.load_dataset("Helsinki-NLP/europarl", "en-fr", streaming=True, trust_remote_code=True)["train"]
```
### Expected behavior
That the dataset loads as it did a couple days ago.
### Environment info
- `datasets` version: 3.5.1
- Platform: Linux-4.18.0-513.24.1.el8_9.x86_64-x86_64-with-glibc2.28
- Python version: 3.11.11
- `huggingface_hub` version: 0.30.2
- PyArrow version: 20.0.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.6.1
|
https://github.com/huggingface/datasets/issues/7689
|
closed
|
[] | 2025-07-18T09:30:04Z
| 2025-07-18T11:59:51Z
| 17
|
WPoelman
|
huggingface/diffusers
| 11,951
|
Kontext model loading quantization problem
|
Hello, can Kontext currently be loaded with quantization? I only have a 4090 with 24 GB of VRAM, so the current fp16 loading causes OOM. Like Flux, can it be loaded with torchao or GGUF so that this model can run on a 4090?
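A sketch of one possible route (the 4-bit NF4 settings are assumptions, and whether this actually fits alongside the text encoders in 24 GB is untested here): quantize only the large transformer with bitsandbytes and keep the rest of the pipeline in bf16.
```python
import torch
from diffusers import BitsAndBytesConfig, DiffusionPipeline, FluxTransformer2DModel

repo = "black-forest-labs/FLUX.1-Kontext-dev"

quant = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type="nf4", bnb_4bit_compute_dtype=torch.bfloat16)
transformer = FluxTransformer2DModel.from_pretrained(
    repo, subfolder="transformer", quantization_config=quant, torch_dtype=torch.bfloat16
)

pipe = DiffusionPipeline.from_pretrained(repo, transformer=transformer, torch_dtype=torch.bfloat16)
pipe.enable_model_cpu_offload()   # keep idle components off the GPU
```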
|
https://github.com/huggingface/diffusers/issues/11951
|
closed
|
[] | 2025-07-18T03:20:48Z
| 2025-07-18T05:39:28Z
| 2
|
babyta
|
pytorch/executorch
| 12,627
|
How to build executorch for Cortex-A cpu
|
### 🚀 The feature, motivation and pitch
I want to run ExecuTorch on Cortex-A CPU devices.
How can I do this?
Thank you very much
### Alternatives
_No response_
### Additional context
_No response_
### RFC (Optional)
_No response_
|
https://github.com/pytorch/executorch/issues/12627
|
closed
|
[
"need-user-input",
"triaged"
] | 2025-07-18T01:34:51Z
| 2025-07-21T12:28:34Z
| null |
barbecacov
|
huggingface/transformers
| 39,484
|
Transformers still tries to use apex.amp which is no longer a thing in apex.
|
### System Info
```
root@12bb27e08b1b:/# pip show transformers
Name: transformers
Version: 4.52.3
```
trainer.py contains this:
```
if is_apex_available():
from apex import amp
```
Apex (built from source, as they recommend) no longer comes with amp.
How to reproduce?
1. install transformers
2. install apex
3. python `from trl import SFTTrainer`
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
How to reproduce?
1. install transformers
2. install apex
3. python `from trl import SFTTrainer`
### Expected behavior
There should not be `from apex import amp` in the code base
|
https://github.com/huggingface/transformers/issues/39484
|
closed
|
[
"bug"
] | 2025-07-17T16:43:14Z
| 2025-08-25T08:03:03Z
| 4
|
yselivonchyk
|
huggingface/datasets
| 7,688
|
No module named "distributed"
|
### Describe the bug
Hello, when I run `from datasets.distributed import split_dataset_by_node`, I always get the error "No module named 'datasets.distributed'" with different versions such as 4.0.0, 2.21.0, and so on. How can I solve this?
### Steps to reproduce the bug
1. pip install datasets
2. from datasets.distributed import split_dataset_by_node
### Expected behavior
expecting the command "from datasets.distributed import split_dataset_by_node" can be ran successfully
### Environment info
python: 3.12
|
https://github.com/huggingface/datasets/issues/7688
|
open
|
[] | 2025-07-17T09:32:35Z
| 2025-07-25T15:14:19Z
| 3
|
yingtongxiong
|
huggingface/alignment-handbook
| 220
|
A little question: why num examples is much less than the total amount of my training dataset?
|
I am using this repo to SFT a model, and I noticed the following:
I printed the total size of my training dataset, which is 7473:
`Number of raw training samples: 7473`
But during training, I find the log:
[INFO|trainer.py:2314] 2025-07-17 17:03:23,908 >> ***** Running training *****
[INFO|trainer.py:2315] 2025-07-17 17:03:23,908 >> Num examples = 698
[INFO|trainer.py:2316] 2025-07-17 17:03:23,908 >> Num Epochs = 3
[INFO|trainer.py:2317] 2025-07-17 17:03:23,908 >> Instantaneous batch size per device = 2
[INFO|trainer.py:2320] 2025-07-17 17:03:23,908 >> Total train batch size (w. parallel, distributed & accumulation) = 32
[INFO|trainer.py:2321] 2025-07-17 17:03:23,908 >> Gradient Accumulation steps = 4
[INFO|trainer.py:2322] 2025-07-17 17:03:23,908 >> Total optimization steps = 66
[INFO|trainer.py:2323] 2025-07-17 17:03:23,910 >> Number of trainable parameters = 7,612,756,480
I am using a machine with 8 A100. Could anyone explain it? I am afraid I didn't use the whole dataset but only 698 of 7473 samples to train...
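For what it's worth, the logged numbers are internally consistent with 698 post-processing examples, so the drop from 7473 raw samples to 698 most likely comes from a preprocessing step such as sequence packing rather than from data being silently discarded; a quick check:
```python
import math

num_examples, global_batch_size, epochs = 698, 32, 3
steps_per_epoch = math.ceil(num_examples / global_batch_size)   # 22
print(steps_per_epoch * epochs)                                 # 66, matching "Total optimization steps = 66"
```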
|
https://github.com/huggingface/alignment-handbook/issues/220
|
closed
|
[] | 2025-07-17T09:12:08Z
| 2025-07-23T23:30:33Z
| 3
|
Red-Scarff
|
pytorch/ao
| 2,566
|
FP8 PerRow quantization (CUDA capability>=9.0)
|
I found a description as below:
--------------------------------------------------------------------------------------------------------------
A8W8 Float8 Dynamic Quantization with Rowwise Scaling
# for torch 2.5+
from torchao.quantization import quantize_, PerRow, Float8DynamicActivationFloat8WeightConfig
quantize_(model, Float8DynamicActivationFloat8WeightConfig(granularity=PerRow()))
Per-row scaling is only supported for bfloat16 weight and activation. This API is only tested on H100. Hardware with CUDA compute capability 8.9 or greater is required.
----------------------------------------------------------------------------------------------------------------
which said "CUDA compute capability 8.9 or greater is required.".But actually, I found that PerRow() needs CUDA compute capability >=9.0, as in the code
-------------------------------------------------------------------------------------------------------------
File "/opt/conda/lib/python3.11/site-packages/torchao/quantization/quant_api.py", line 1475, in _normalize_granularity
assert is_sm_at_least_90() or is_MI300(), (
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError: PerRow quantization only works for CUDA>=9.0 and MI300+
----------------------------------------------------------------------------------------------------------
I use torchao==0.11.0, so is this a typo in the documentation, or is the code check wrong?
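A small sketch of how one might pick the granularity defensively until the docs and the runtime check agree (this assumes `PerTensor` is importable from `torchao.quantization` as in the float8 examples; adjust if your version exposes it elsewhere):
```python
import torch
from torchao.quantization import (
    Float8DynamicActivationFloat8WeightConfig,
    PerRow,
    PerTensor,
    quantize_,
)

model = torch.nn.Sequential(torch.nn.Linear(2048, 2048)).cuda().to(torch.bfloat16)

# The runtime assertion requires SM >= 9.0 (H100 class) for PerRow, despite the "8.9" wording in the docs.
major, minor = torch.cuda.get_device_capability()
granularity = PerRow() if (major, minor) >= (9, 0) else PerTensor()

quantize_(model, Float8DynamicActivationFloat8WeightConfig(granularity=granularity))
```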
|
https://github.com/pytorch/ao/issues/2566
|
open
|
[] | 2025-07-17T04:04:24Z
| 2025-07-17T18:26:55Z
| 2
|
zzlin-0629
|
pytorch/TensorRT
| 3,691
|
❓ [Question] How to understand the value of this project
|
## ❓ Question
I am sorry, I have not used this tool before. Since NVIDIA already releases a `tensorrt` library, and this project depends on that library, what is the value of this project? Is it safer to use this tool to convert PyTorch checkpoints directly to a TensorRT engine file, compared to the pytorch -> onnx -> tensorrt pipeline? I tried to convert my AMP-trained checkpoint to ONNX with no error at ONNX fp32, then used trtexec to convert the ONNX to a TensorRT engine file in fp16, but the generated engine file is buggy and cannot be used for inference. Can I use this package to convert a checkpoint directly to a TRT file without those bugs? Or, if there is a bug, will the conversion report which line of my PyTorch model code triggered it?
Polygraphy's report makes it too hard to trace back which line is the bad code.
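For context, the direct PyTorch-to-TensorRT path this project provides looks roughly like the sketch below (the model here is a torchvision stand-in; whether it avoids the particular FP16 bug seen via ONNX would still have to be tested):
```python
import torch
import torch_tensorrt
import torchvision

model = torchvision.models.resnet18(weights=None).eval().cuda()   # stand-in for your own module

trt_model = torch_tensorrt.compile(
    model,
    inputs=[torch_tensorrt.Input(shape=(1, 3, 224, 224))],
    enabled_precisions={torch.half},   # allow FP16 kernels
)

out = trt_model(torch.randn(1, 3, 224, 224, device="cuda"))
print(out.shape)
```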
## What you have already tried
<!-- A clear and concise description of what you have already done. -->
## Environment
> Build information about Torch-TensorRT can be found by turning on debug messages
- PyTorch Version (e.g., 1.0):
- CPU Architecture:
- OS (e.g., Linux):
- How you installed PyTorch (`conda`, `pip`, `libtorch`, source):
- Build command you used (if compiling from source):
- Are you using local sources or building from archives:
- Python version:
- CUDA version:
- GPU models and configuration:
- Any other relevant information:
## Additional context
<!-- Add any other context about the problem here. -->
|
https://github.com/pytorch/TensorRT/issues/3691
|
closed
|
[
"question"
] | 2025-07-17T03:47:19Z
| 2025-08-19T07:22:22Z
| null |
JohnHerry
|
huggingface/diffusers
| 11,945
|
Floating point exception with nightly PyTorch and CUDA
|
### Describe the bug
When running any code snippet using diffusers it fails with floating point exception, and doesn't print any traceback.
For example this one would cause the issue (the example of Stable Diffusion 3.5 medium):
```
import torch
from diffusers import StableDiffusion3Pipeline
pipe = StableDiffusion3Pipeline.from_pretrained("stabilityai/stable-diffusion-3.5-medium", torch_dtype=torch.bfloat16)
pipe = pipe.to("cuda")
image = pipe(
"A capybara holding a sign that reads Hello World",
num_inference_steps=40,
guidance_scale=4.5,
).images[0]
image.save("capybara.png")
```
The issue could be with upstream PyTorch or CUDA, but we'd need to identify what of Diffusers is causing it.
### Reproduction
Not too sure as it's my first time with Diffusers but as suggested by [John6666](https://discuss.huggingface.co/u/John6666/summary) any NVIDIA GeForce RTX 5000 series... In my case it's a 16gb 5060 Ti. Perhaps CUDA 575.57.08 with CUDA version 12.9 and/or PyTorch 2.9.0.dev20250716+cu129?
### Logs
```shell
Let me know how I can retrieve any logs you might need.
```
### System Info
`diffusers-cli env` also causes a Floating point exception, but here you have environment information:
**OS**: Debian 12
```
nvidia-smi
Wed Jul 16 15:58:48 2025
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 575.57.08 Driver Version: 575.57.08 CUDA Version: 12.9 |
|-----------------------------------------+------------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+========================+======================|
| 0 NVIDIA GeForce RTX 5060 Ti On | 00000000:01:00.0 On | N/A |
| 0% 42C P5 4W / 180W | 10MiB / 16311MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
+-----------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=========================================================================================|
| No running processes found |
+-----------------------------------------------------------------------------------------+
```
```
pip list
Package Version
------------------------ ------------------------
bitsandbytes 0.46.1
certifi 2025.7.14
charset-normalizer 3.4.2
diffusers 0.34.0
filelock 3.18.0
fsspec 2025.7.0
hf-xet 1.1.5
huggingface-hub 0.33.4
idna 3.10
importlib_metadata 8.7.0
Jinja2 3.1.6
MarkupSafe 3.0.2
mpmath 1.3.0
networkx 3.5
numpy 2.3.1
nvidia-cublas-cu12 12.9.1.4
nvidia-cuda-cupti-cu12 12.9.79
nvidia-cuda-nvrtc-cu12 12.9.86
nvidia-cuda-runtime-cu12 12.9.79
nvidia-cudnn-cu12 9.10.2.21
nvidia-cufft-cu12 11.4.1.4
nvidia-cufile-cu12 1.14.1.1
nvidia-curand-cu12 10.3.10.19
nvidia-cusolver-cu12 11.7.5.82
nvidia-cusparse-cu12 12.5.10.65
nvidia-cusparselt-cu12 0.7.1
nvidia-nccl-cu12 2.27.5
nvidia-nvjitlink-cu12 12.9.86
nvidia-nvshmem-cu12 3.3.9
nvidia-nvtx-cu12 12.9.79
packaging 25.0
pillow 11.2.1
pip 23.0.1
pytorch-triton 3.4.0+gitae848267
PyYAML 6.0.2
regex 2024.11.6
requests 2.32.4
safetensors 0.5.3
setuptools 66.1.1
sympy 1.14.0
torch 2.9.0.dev20250716+cu129
torchaudio 2.8.0.dev20250716+cu129
torchvision 0.24.0.dev20250716+cu129
tqdm 4.67.1
triton 3.3.1
typing_extensions 4.14.1
urllib3 2.5.0
zipp 3.23.0
```
Don't hesitate to tell me any other info you might need.
### Who can help?
_No response_
|
https://github.com/huggingface/diffusers/issues/11945
|
open
|
[
"bug"
] | 2025-07-17T03:16:02Z
| 2025-08-02T13:48:05Z
| 1
|
MxtAppz
|
huggingface/course
| 1,009
|
How Transformers solve tasks - ASR section refers to task using Whisper but task actually uses Wav2Vec2
|
The [Automatic speech recognition](https://huggingface.co/learn/llm-course/chapter1/5?fw=pt#automatic-speech-recognition) segment of Section 1 "Transformer Models" > "How 🤗 Transformers solve tasks" refers to
> Check out our complete [automatic speech recognition guide](https://huggingface.co/docs/transformers/tasks/asr) to learn how to finetune Whisper and use it for inference!
However the guide actually uses Wav2Vec2, not Whisper.
This is a dual request:
1. Update the segment in question to refer to Wav2Vec2
2. Update the task to use Whisper
|
https://github.com/huggingface/course/issues/1009
|
open
|
[] | 2025-07-16T23:25:55Z
| 2025-07-16T23:25:55Z
| null |
renet10
|
huggingface/diffusers
| 11,930
|
how to run convert_cosmos_to_diffusers.py correctly?
|
### Describe the bug
Hi. I have tried to convert the cosmos-transfer1 base model to diffusers using the "convert_cosmos_to_diffusers.py" script with the options --transformer_type Cosmos-1.0-Diffusion-7B-Video2World --vae_type CV8x8x8-1.0 --transformer_ckpt_path ../fsdp_edge_v1/iter_000016000_ema_model_only.pt --output_path ./convert_to_diffusers,
but I got this error:
```Traceback (most recent call last):
File "/home1/jovyan/workspace/cosmos-transfer1/diffusers/../convert_cosmos_to_diffusers.py", line 485, in <module>
transformer = convert_transformer(args.transformer_type, args.transformer_ckpt_path, weights_only)
File "/home1/jovyan/workspace/cosmos-transfer1/diffusers/../convert_cosmos_to_diffusers.py", line 358, in convert_transformer
transformer.load_state_dict(original_state_dict, strict=True, assign=True)
File "/opt/conda/envs/cosmos-transfer1/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2581, in load_state_dict
raise RuntimeError(
RuntimeError: Error(s) in loading state_dict for CosmosTransformer3DModel:
Missing key(s) in state_dict: "transformer_blocks.3.norm1.linear_1.weight", "transformer_blocks.3.norm1.linear_2.weight", "transformer_blocks.3.attn1.norm_q.weight", "transformer_blocks.3.attn1.norm_k.weight", "transformer_blocks.3.attn1.to_q.weight", "transformer_blocks.3.attn1.to_k.weight", "transformer_blocks.3.attn1.to_v.weight", "transformer_blocks.3.attn1.to_out.0.weight", "transformer_blocks.3.norm2.linear_1.weight", "transformer_blocks.3.norm2.linear_2.weight", "transformer_blocks.3.attn2.norm_q.weight", "transformer_blocks.3.attn2.norm_k.weight", "transformer_blocks.3.attn2.to_q.weight", "transformer_blocks.3.attn2.to_k.weight", "transformer_blocks.3.attn2.to_v.weight", "transformer_blocks.3.attn2.to_out.0.weight", "transformer_blocks.3.norm3.linear_1.weight", "transformer_blocks.3.norm3.linear_2.weight", "transformer_blocks.3.ff.net.0.proj.weight", "transformer_blocks.3.ff.net.2.weight", "transformer_blocks.4.norm1.linear_1.weight", "transformer_blocks.4.norm1.linear_2.weight", "transformer_blocks.4.attn1.norm_q.weight", "transformer_blocks.4.attn1.norm_k.weight", "transformer_blocks.4.attn1.to_q.weight", "transformer_blocks.4.attn1.to_k.weight", "transformer_blocks.4.attn1.to_v.weight", "transformer_blocks.4.attn1.to_out.0.weight", "transformer_blocks.4.norm2.linear_1.weight", "transformer_blocks.4.norm2.linear_2.weight", "transformer_blocks.4.attn2.norm_q.weight", "transformer_blocks.4.attn2.norm_k.weight", "transformer_blocks.4.attn2.to_q.weight", "transformer_blocks.4.attn2.to_k.weight", "transformer_blocks.4.attn2.to_v.weight", "transformer_blocks.4.attn2.to_out.0.weight", "transformer_blocks.4.norm3.linear_1.weight", "transformer_blocks.4.norm3.linear_2.weight", "transformer_blocks.4.ff.net.0.proj.weight", "transformer_blocks.4.ff.net.2.weight", "transformer_blocks.5.norm1.linear_1.weight", "transformer_blocks.5.norm1.linear_2.weight", "transformer_blocks.5.attn1.norm_q.weight", "transformer_blocks.5.attn1.norm_k.weight", "transformer_blocks.5.attn1.to_q.weight", "transformer_blocks.5.attn1.to_k.weight", "transformer_blocks.5.attn1.to_v.weight", "transformer_blocks.5.attn1.to_out.0.weight", "transformer_blocks.5.norm2.linear_1.weight", "transformer_blocks.5.norm2.linear_2.weight", "transformer_blocks.5.attn2.norm_q.weight", "transformer_blocks.5.attn2.norm_k.weight", "transformer_blocks.5.attn2.to_q.weight", "transformer_blocks.5.attn2.to_k.weight", "transformer_blocks.5.attn2.to_v.weight", "transformer_blocks.5.attn2.to_out.0.weight", "transformer_blocks.5.norm3.linear_1.weight", "transformer_blocks.5.norm3.linear_2.weight", "transformer_blocks.5.ff.net.0.proj.weight", "transformer_blocks.5.ff.net.2.weight", "transformer_blocks.6.norm1.linear_1.weight", "transformer_blocks.6.norm1.linear_2.weight", "transformer_blocks.6.attn1.norm_q.weight", "transformer_blocks.6.attn1.norm_k.weight", "transformer_blocks.6.attn1.to_q.weight", "transformer_blocks.6.attn1.to_k.weight", "transformer_blocks.6.attn1.to_v.weight", "transformer_blocks.6.attn1.to_out.0.weight", "transformer_blocks.6.norm2.linear_1.weight", "transformer_blocks.6.norm2.linear_2.weight", "transformer_blocks.6.attn2.norm_q.weight", "transformer_blocks.6.attn2.norm_k.weight", "transformer_blocks.6.attn2.to_q.weight", "transformer_blocks.6.attn2.to_k.weight", "transformer_blocks.6.attn2.to_v.weight", "transformer_blocks.6.attn2.to_out.0.weight", "transformer_blocks.6.norm3.linear_1.weight", "transformer_blocks.6.norm3.linear_2.weight", "transformer_blocks.6.ff.net.0.proj.weight", "transformer_blocks.6.ff.net.2.weight", 
"transformer_blocks.7.norm1.linear_1.weight", "transformer_blocks.7.norm1.linear_2.weight", "transformer_blocks.7.attn1.norm_q.weight", "transformer_blocks.7.attn1.norm_k.weight", "transformer_blocks.7.attn1.to_q.weight", "transformer_blocks.7.attn1.to_k.weight", "transformer_blocks.7.attn1.to_v.weight", "transformer_blocks.7.attn1.to_out.0.weight", "transformer_blocks.7.norm2.linear
|
https://github.com/huggingface/diffusers/issues/11930
|
open
|
[
"bug"
] | 2025-07-15T16:20:09Z
| 2025-07-15T16:24:47Z
| null |
dedoogong
|
huggingface/transformers
| 39,426
|
object detection: matching outputs.last_hidden_state with results
|
### Feature request
It seems to me that this would be possible with a small modification to the function post_process_object_detection:
```python
for score, label, box, index in zip(scores, labels, boxes, indexes):
    results.append(
        {
            "scores": score[score > threshold],
            "labels": label[score > threshold],
            "boxes": box[score > threshold],
            "indexes": index[score > threshold],
        }
    )
```
and then
`outputs.last_hidden_state[0][results[0]['indexes']]`
gives me the desired feature vectors.
Am I right, or is there a better way to obtain this matching?
Thanks for your help
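For reference, one way to recover the matching today, without modifying `post_process_object_detection`, is to re-apply the same per-query thresholding to the logits and use the resulting mask on `last_hidden_state` (a sketch with DETR; the threshold and checkpoint are just examples):
```python
import requests
import torch
from PIL import Image
from transformers import DetrForObjectDetection, DetrImageProcessor

processor = DetrImageProcessor.from_pretrained("facebook/detr-resnet-50")
model = DetrForObjectDetection.from_pretrained("facebook/detr-resnet-50")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

probs = outputs.logits.softmax(-1)[0, :, :-1]          # drop the "no object" class
keep = probs.max(-1).values > 0.9                       # same per-query thresholding as post-processing
query_features = outputs.last_hidden_state[0][keep]     # (num_detections, hidden_dim)
print(query_features.shape)
```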
### Motivation
I would like to use outputs.last_hidden_state as features for auxiliary tasks. So I need to know the label and the bounding box associated to one given vector of outputs.last_hidden_state
### Your contribution
I am not a top coder and do not know how to submit a PR
|
https://github.com/huggingface/transformers/issues/39426
|
open
|
[
"Feature request"
] | 2025-07-15T13:34:08Z
| 2025-07-22T11:08:23Z
| 5
|
fenaux
|
huggingface/peft
| 2,647
|
How can I merge the original model weights with LoRA weights?
|
I'm currently fine-tuning Qwen2.5_VL. Specifically, I used PEFT for LoRA fine-tuning on the linear layers of the LLM part. Meanwhile, I performed regular fine-tuning on other components like visual.merger and embed_tokens (with param.requires_grad set to True). The generated files are as follows:
<img width="946" height="691" alt="Image" src="https://github.com/user-attachments/assets/b863a12f-956b-4797-bbfa-769518e73c33" />
I exported pytorch_model.bin using zero_to_fp32.py. When I printed the weight keys of the pytorch_model.bin file, I noticed that the original weights and LoRA weights weren't merged. Here's an example:
```
base_model.model.model.language_model.layers.0.self_attn.q_proj.base_layer.weight: shape=(2048, 2048), dtype=torch.bfloat16
base_model.model.model.language_model.layers.0.self_attn.q_proj.base_layer.bias: shape=(2048,), dtype=torch.bfloat16
base_model.model.model.language_model.layers.0.self_attn.q_proj.lora_A.default.weight: shape=(8, 2048), dtype=torch.bfloat16
base_model.model.model.language_model.layers.0.self_attn.q_proj.lora_B.default.weight: shape=(2048, 8), dtype=torch.bfloat16
```
Could you tell me how to merge them? If I use
`model = model.merge_and_unload()`
I need the base_model. However, I no longer have the original base_model, and the original Qwen_2.5_VL model isn't suitable because apart from LoRA fine-tuning the linear layers, I also fine-tuned visual.merger and embed_tokens.
How can I solve this problem? Thank you!
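A minimal sketch of one way to do the merge, assuming the adapter directory saved during training (the one containing `adapter_config.json`) is still available; the Auto class and the paths here are placeholders:
```
import torch
from transformers import AutoModelForVision2Seq  # swap in the concrete Qwen2.5-VL class if the Auto mapping doesn't resolve it
from peft import PeftModel

# 1) Reload the original pretrained checkpoint the LoRA training started from.
base = AutoModelForVision2Seq.from_pretrained("Qwen/Qwen2.5-VL-3B-Instruct", torch_dtype=torch.bfloat16)

# 2) Re-attach the LoRA adapter structure so the module paths match base_model.model.*.lora_A/lora_B.
peft_model = PeftModel.from_pretrained(base, "path/to/adapter_dir")

# 3) Load the consolidated weights exported by zero_to_fp32.py. strict=False lets the
#    fully fine-tuned modules (visual.merger, embed_tokens) and the LoRA tensors all load.
state = torch.load("pytorch_model.bin", map_location="cpu")
missing, unexpected = peft_model.load_state_dict(state, strict=False)
print(f"missing: {len(missing)}, unexpected: {len(unexpected)}")

# 4) Fold the LoRA deltas into the base weights and drop the adapter wrappers.
merged = peft_model.merge_and_unload()
merged.save_pretrained("qwen2_5_vl_merged")
```
After `merge_and_unload()` the LoRA deltas are folded into plain linear weights, so the saved model can later be reloaded without PEFT.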
|
https://github.com/huggingface/peft/issues/2647
|
closed
|
[] | 2025-07-15T11:40:33Z
| 2025-08-23T15:03:44Z
| 4
|
guoguo1314
|
huggingface/transformers
| 39,421
|
Speculative Decoding (do_sample=False) gets different outputs
|
> @transcend-0 hey!
>
>
>
> The issue was solved in [#30068](https://github.com/huggingface/transformers/pull/30068). You can install transformers from `main` with the following line for the correct generation with assisted decoding:
>
>
>
> `!pip install --upgrade git+https://github.com/huggingface/transformers.git`
_Originally posted by @zucchini-nlp in [#30608](https://github.com/huggingface/transformers/issues/30608#issuecomment-2089846816)_
### **System Info**
Python 3.10.11
transformers 4.49.0
torch 2.6.0+cu124
### **Same Reproduction**
Target_Model = Qwen2.5-32B-Instruct
Draft_Model = Qwen2.5-7B-Instruct
`question = "Dienes are organic compounds with two adjacent double bonds in their structure, and they exhibit unique reactivity due to their conjugated pi-electron system. They play a significant role in organic chemistry and are involved in various chemical reactions and natural processes.\nAmong the given options which one is the possible reactant (A) for the given reaction also mention the correct sequence of the dienes according to their reactivity ( most reactive to least reactive) B.\nCyclohexene + A ---> 8,8-diiodobicyclo[4.2.0]octan-7-one\n(B) 1. 2,3-dimethylbuta-1,3-diene, 2. (2E,4E)-hexa-2,4-diene, 3. (2E,4E)-hexa-2,4-diene, 4. (2Z,4Z)-hexa-2,4-diene\n\n\nA. A = 2,2-diiodoethen-1-one, B = 3, 1, 2, 4\nB. A = 2,2-diiodoethen-1-one, B = 4, 2, 1, 3\nC. A = 4,4-diiodocyclobut-2-en-1-one, B = 3, 1, 2, 4\nD. A = 4,4-diiodocyclobut-2-en-1-one, B = 4, 2, 1, 3\n\n"`
`prompt = '<|im_start|>user' + question + 'Please reason step-by-step and put your choice letter without any other text with \\boxed{} in the end.'`
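The comparison is run roughly as follows (a sketch only; dtype, device placement, and generation length are assumptions), using the `prompt` defined above; the two generations it produces are shown below:
```
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-32B-Instruct")
target = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-32B-Instruct", torch_dtype=torch.bfloat16, device_map="auto"
)
draft = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-7B-Instruct", torch_dtype=torch.bfloat16, device_map="auto"
)

inputs = tokenizer(prompt, return_tensors="pt").to(target.device)

# Greedy decoding with the target model only.
target_ids = target.generate(**inputs, do_sample=False, max_new_tokens=1024)

# Speculative (assisted) decoding with the draft model; greedy output should match exactly.
assisted_ids = target.generate(
    **inputs, do_sample=False, max_new_tokens=1024, assistant_model=draft
)

print(tokenizer.decode(target_ids[0]) == tokenizer.decode(assisted_ids[0]))
```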
`['userDienes are organic compounds with two adjacent double bonds in their structure, and they exhibit unique reactivity due to their conjugated pi-electron system. They play a significant role in organic chemistry and are involved in various chemical reactions and natural processes.\nAmong the given options which one is the possible reactant (A) for the given reaction also mention the correct sequence of the dienes according to their reactivity ( most reactive to least reactive) B.\nCyclohexene + A ---> 8,8-diiodobicyclo[4.2.0]octan-7-one\n(B) 1. 2,3-dimethylbuta-1,3-diene, 2. (2E,4E)-hexa-2,4-diene, 3. (2E,4E)-hexa-2,4-diene, 4. (2Z,4Z)-hexa-2,4-diene\n\n\nA. A = 2,2-diiodoethen-1-one, B = 3, 1, 2, 4\nB. A = 2,2-diiodoethen-1-one, B = 4, 2, 1, 3\nC. A = 4,4-diiodocyclobut-2-en-1-one, B = 3, 1, 2, 4\nD. A = 4,4-diiodocyclobut-2-en-1-one, B = 4, 2, 1, 3\n\nPlease reason step-by-step and put your choice letter without any other text with \\boxed{} in the end. To solve this problem, we need to identify the reactant \\( A \\) that can react with cyclohexene to form 8,8-diiodobicyclo[4.2.0]octan-7-one. We also need to determine the correct sequence of the dienes according to their reactivity from most reactive to least reactive.\n\n### Step-by-Step Reasoning:\n\n1. **Identify the Product:**\n - The product is 8,8-diiodobicyclo[4.2.0]octan-7-one. This suggests that the reactant \\( A \\) must be a compound that can undergo a Diels-Alder reaction with cyclohexene to form the bicyclic structure and then iodination at the appropriate positions.\n\n2. **Reactant Identification:**\n - The reactant \\( A \\) should be a dienophile (a compound with a double bond that can participate in a Diels-Alder reaction). Among the given options, the possible candidates are:\n - 2,2-diiodoethen-1-one\n - 4,4-diiodocyclobut-2-en-1-one\n\n3. **Diels-Alder Reaction:**\n - Cyclohexene is a diene, and it will react with a dienophile to form a bicyclic structure. The dienophile should have a double bond that can react with the diene to form the desired product.\n - 2,2-diiodoethen-1-one has a double bond and iodine substituents, making it a suitable dienophile.\n - 4,4-diiodocyclobut-2-en-1-one also has a double bond but is more complex and less likely to form the desired product directly.\n\n4. **Sequence of Dienes According to Reactivity:**\n - The reactivity of dienes depends on the stability of the conjugated pi-electron system.\n - Generally, the order of reactivity from most reactive to least reactive is:\n 1. (2E,4E)-hexa-2,4-diene (most stable and reactive)\n 2. (2E,4E)-hexa-2,4-diene (same as above)\n 3. 2,3-dimethylbuta-1,3-diene (less stable due to steric hindrance)\n 4. (2Z,4Z)-hexa-2,4-diene (least stable due to cis configuration)\n\n5. **Matching Options:**\n - Option A: \\( A = 2,2 \\)-diiodoethen-1-one, B = 3, 1, 2, 4\n - Option B: \\( A = 2,2 \\)-diiodoethen-1-one, B = 4, 2, 1, 3\n - Option C: \\( A = 4,4 \\)-diiodocyclobut-2-en-1-one, B = 3, 1, 2, 4\n - Option D: \\( A = 4,4 \\)-diiodocyclobut-2-en-1-one, B = 4, 2, 1, 3\n\nGiven the correct sequence of dienes and the suitable dienophile, the correct option is:\n\n\\boxed{A}']`
- targetDecoding - Running time: 41.82 s
`['userDienes are organic compounds with two adjacent double bonds in thei
|
https://github.com/huggingface/transformers/issues/39421
|
closed
|
[] | 2025-07-15T11:36:31Z
| 2025-07-19T03:11:04Z
| 13
|
nighty8
|
pytorch/TensorRT
| 3,683
|
❓ [Question] HELP: dynamic shape of offset and input is not supported in aten_ops_embedding_bag converter
|
## offset and input with dynamic shapes are not supported
It fails when using TensorRT to compile an embedding-bag module with dynamic shapes in AOT mode.
What confuses me is whether the aten_ops_embedding_bag converter supports dynamic shapes for the offset and indices parameters.
The official test demo only covers the scenario where the weight has a dynamic shape.
However, during my tests, I found that a negative-dimensions error occurs when offset and input are set to dynamic shapes.
## Test Code Demo
```
import torch
import torch.nn as nn
import torch_tensorrt


class EmbeddingBagModel(nn.Module):
    def __init__(self, num_embeddings, embedding_dim, hidden_dim=128, mode='mean'):
        super().__init__()
        self.embedding_bag = nn.EmbeddingBag(
            num_embeddings=num_embeddings,
            embedding_dim=embedding_dim,
            mode=mode,
            sparse=False
        )
        nn.init.uniform_(self.embedding_bag.weight, -0.1, 0.1)
        self.mlp = nn.Sequential(
            nn.Linear(embedding_dim, hidden_dim),
            nn.ReLU(),
            # nn.BatchNorm1d(hidden_dim),
            nn.Linear(hidden_dim, 1)
        )
        self.sigmoid = nn.Sigmoid()

    def forward(self, input, offsets):
        embedded = self.embedding_bag(input, offsets)
        embedded = embedded.reshape(-1, 1, embedding_dim)  # uses the global embedding_dim below
        hidden = self.mlp(embedded)
        output = self.sigmoid(hidden)
        return output


# main
num_embeddings = 10000
embedding_dim = 64
hidden_dim = 128
batch_size = 8
seq_length = 4

model = EmbeddingBagModel(num_embeddings, embedding_dim, hidden_dim).cuda()
input_tensor = torch.randint(0, num_embeddings, (batch_size * seq_length,), dtype=torch.int32).cuda()
offsets_tensor = torch.arange(0, batch_size * seq_length, seq_length, dtype=torch.int32).cuda()
inputs = (input_tensor, offsets_tensor)
dynamic_shapes = {
    "input": {0: torch.export.Dim("dyn_dim_in", min=2, max=32)},
    "offsets": {0: torch.export.Dim("dyn_dim_off", min=2, max=32)},
}
fx_model = torch.export.export(model, inputs, dynamic_shapes=dynamic_shapes)
trt_model = torch_tensorrt.dynamo.compile(
    fx_model,
    inputs=inputs,
    enable_precisions=torch.float32,
    min_block_size=1
)
```
## Error log
```
File "/usr/local/lib/python3.12/dist-packages/torch_tensorrt/dynamo/_compiler.py", line 288, in compile
trt_gm = compile_module(
^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch_tensorrt/dynamo/_compiler.py", line 462, in compile_module
trt_module = convert_module(
^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch_tensorrt/dynamo/conversion/_conversion.py", line 142, in convert_module
interpreter_result = interpret_module_to_result(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch_tensorrt/dynamo/conversion/_conversion.py", line 121, in interpret_module_to_result
interpreter_result = interpreter.run()
^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch_tensorrt/dynamo/conversion/_TRTInterpreter.py", line 610, in run
self._construct_trt_network_def()
File "/usr/local/lib/python3.12/dist-packages/torch_tensorrt/dynamo/conversion/_TRTInterpreter.py", line 347, in _construct_trt_network_def
super().run()
File "/usr/local/lib/python3.12/dist-packages/torch/fx/interpreter.py", line 146, in run
self.env[node] = self.run_node(node)
^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch_tensorrt/dynamo/conversion/_TRTInterpreter.py", line 676, in run_node
trt_node: torch.fx.Node = super().run_node(n)
^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/fx/interpreter.py", line 203, in run_node
return getattr(self, n.op)(n.target, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch_tensorrt/dynamo/conversion/_TRTInterpreter.py", line 785, in call_function
return converter(self.ctx, target, args, kwargs, self._cur_node_name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch_tensorrt/dynamo/conversion/converter_utils.py", line 526, in convert_with_type_enforcement
return func(ctx, target, new_args, new_kwargs, name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch_tensorrt/dynamo/conversion/aten_ops_converters.py", line 313, in aten_ops_embedding_bag
return impl.embedding.embedding_bag(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch_tensorrt/dynamo/conversion/impl/embedding.py", line 401, in embedding_bag
return embedding_bag_with_ITensor_offsets(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages
|
https://github.com/pytorch/TensorRT/issues/3683
|
closed
|
[
"question"
] | 2025-07-15T09:01:39Z
| 2025-09-09T20:44:07Z
| null |
theflyfish
|
huggingface/lerobot
| 1,508
|
so101_dualarm_triplecam config to evaluate ACT policy?
|
I recently fine-tuned an ACT policy whose data came from 3 cameras (1 overhead + 2 wrist) and two SO101s. Then I tried to evaluate it, but noticed there is currently no config file to support this setup. Does this support exist, or will it be added soon?
|
https://github.com/huggingface/lerobot/issues/1508
|
open
|
[
"question",
"robots"
] | 2025-07-15T03:44:32Z
| 2025-08-12T09:30:41Z
| null |
sebastiandavidlee
|
huggingface/transformers
| 39,410
|
FP8 training support for Model Parallel / Tensor Parallel (MP/TP)
|
### Feature request
I receive the message "ValueError: The model you are trying to fine-tune is quantized with QuantizationMethod.FP8 but that quantization method do not support training. Please open an issue on GitHub: https://github.com/huggingface/transformers to request the support for training support for QuantizationMethod.FP8" when trying to fine-tune an FP8 model.
I have learned from the documentation that FP8 models can be trained with DDP, ZeRO, or FSDP. Is there a way to do it with MP/TP for huge FP8 models?
### Motivation
Enable finetuning huge fp8 models, like Qwen/Qwen3-235B-A22B-FP8
### Your contribution
I'm afraid it's too tough for me, but I'll do whatever I can if you need.
|
https://github.com/huggingface/transformers/issues/39410
|
open
|
[
"Feature request"
] | 2025-07-15T02:13:05Z
| 2025-07-15T13:30:27Z
| 2
|
edgeinfinity1
|
huggingface/transformers
| 39,409
|
TypeError: couldn't find storage object Float8_e4m3fnStorage - which version is needed for this?
|
I have tested many versions but can't find one that doesn't give this error.
```
!pip install bitsandbytes==0.45.0 --upgrade
!pip install insightface --upgrade
!pip install huggingface_hub==0.25.1 hf_transfer diffusers==0.31.0 transformers==4.36.0
!pip uninstall xformers triton --yes
!pip install torch==2.2.0+cu121 torchvision --index-url https://download.pytorch.org/whl/cu121
!pip install xformers==0.0.24 --index-url https://download.pytorch.org/whl/cu121
```
```
File "/kaggle/temp/InstantID/gradio_demo/web-ui-multicontrolnet.py", line 975, in generate_image
reload_pipe(model_input, model_dropdown, scheduler, adapter_strength_ratio, enable_LCM, depth_type, lora_model_dropdown, lora_scale,test_all_loras,single_lora)
File "/kaggle/temp/InstantID/gradio_demo/web-ui-multicontrolnet.py", line 654, in reload_pipe
pipe = load_model(_pretrained_model_folder, model_to_load)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/kaggle/temp/InstantID/gradio_demo/web-ui-multicontrolnet.py", line 528, in load_model
pipeline = StableDiffusionPipeline.from_pretrained(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/diffusers/pipelines/pipeline_utils.py", line 896, in from_pretrained
loaded_sub_model = load_sub_model(
^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/diffusers/pipelines/pipeline_loading_utils.py", line 704, in load_sub_model
loaded_sub_model = load_method(os.path.join(cached_folder, name), **loading_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/transformers/modeling_utils.py", line 4027, in from_pretrained
dtype_orig = cls._set_default_torch_dtype(torch_dtype)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/transformers/modeling_utils.py", line 1584, in _set_default_torch_dtype
torch.set_default_dtype(dtype)
File "/usr/local/lib/python3.11/dist-packages/torch/__init__.py", line 1009, in set_default_dtype
_C._set_default_dtype(d)
TypeError: couldn't find storage object Float8_e4m3fnStorage
```
|
https://github.com/huggingface/transformers/issues/39409
|
closed
|
[
"bug"
] | 2025-07-15T01:51:08Z
| 2025-08-02T12:06:59Z
| 1
|
FurkanGozukara
|
huggingface/datasets
| 7,682
|
Fail to cast Audio feature for numpy arrays in datasets 4.0.0
|
### Describe the bug
Casting features containing Audio for numpy arrays (done here with `ds.map(gen_sine, features=features)`) fails in version 4.0.0 but not in version 3.6.0.
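A condensed sketch of the failing pattern (the sine generation here is my own minimal stand-in for `gen_sine`); on 4.0.0 this is where the `AttributeError: 'list' object has no attribute 'T'` from the traceback below appears:
```python
import numpy as np
from datasets import Audio, Dataset, Features, Value

def gen_sine(example):
    # Synthesize one second of a 440 Hz sine wave as a numpy array.
    sr = 16_000
    t = np.linspace(0, 1, sr, endpoint=False)
    return {"audio": {"array": np.sin(2 * np.pi * 440.0 * t), "sampling_rate": sr}}

ds = Dataset.from_dict({"id": [0, 1, 2]})
features = Features({"id": Value("int64"), "audio": Audio(sampling_rate=16_000)})

# Works on datasets 3.6.0; fails while casting the Audio column on 4.0.0.
ds = ds.map(gen_sine, features=features)
```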
### Steps to reproduce the bug
The following `uv script` should be able to reproduce the bug in version 4.0.0
and pass in version 3.6.0 on a macOS Sequoia 15.5
```python
# /// script
# requires-python = ">=3.13"
# dependencies = [
# "datasets[audio]==4.0.0",
# "librosa>=0.11.0",
# ]
# ///
# NAME
# create_audio_dataset.py - create an audio dataset of sine waves
#
# SYNOPSIS
# uv run create_audio_dataset.py
#
# DESCRIPTION
# Create an audio dataset using the Hugging Face [datasets] library.
# Illustrates how to create synthetic audio datasets using the [map]
# datasets function.
#
# The strategy is to first create a dataset with the input to the
# generation function, then execute the map function that generates
# the result, and finally cast the final features.
#
# BUG
# Casting features with Audio for numpy arrays -
# done here with `ds.map(gen_sine, features=features)` fails
# in version 4.0.0 but not in version 3.6.0
#
# This happens both in cases where --extra audio is provided and where is not.
# When audio is not provided i've installed the latest compatible version
# of soundfile.
#
# The error when soundfile is installed but the audio --extra is not
# indicates that the array values do not have the `.T` property,
# whilst also indicating that the value is a list instead of a numpy array.
#
# Last lines of error report when for datasets + soundfile case
# ...
#
# File "/Users/luasantilli/.cache/uv/archive-v0/tc_5IhQe7Zpw8ZXgQWpnl/lib/python3.13/site-packages/datasets/features/audio.py", line 239, in cast_storage
# storage = pa.array([Audio().encode_example(x) if x is not None else None for x in storage.to_pylist()])
# ~~~~~~~~~~~~~~~~~~~~~~^^^
# File "/Users/luasantilli/.cache/uv/archive-v0/tc_5IhQe7Zpw8ZXgQWpnl/lib/python3.13/site-packages/datasets/features/audio.py", line 122, in encode_example
# sf.write(buffer, value["array"].T, value["sampling_rate"], format="wav")
# ^^^^^^^^^^^^^^^^
# AttributeError: 'list' object has no attribute 'T'
# ...
#
# For the case of datasets[audio] without explicitly adding soundfile, I get an FFmpeg
# error.
#
# Last lines of error report:
#
# ...
# RuntimeError: Could not load libtorchcodec. Likely causes:
# 1. FFmpeg is not properly installed in your environment. We support
# versions 4, 5, 6 and 7.
# 2. The PyTorch version (2.7.1) is not compatible with
# this version of TorchCodec. Refer to the version compatibility
# table:
# https://github.com/pytorch/torchcodec?tab=readme-ov-file#installing-torchcodec.
# 3. Another runtime dependency; see exceptions below.
# The following exceptions were raised as we tried to load libtorchcodec:
#
# [start of libtorchcodec loading traceback]
# FFmpeg version 7: dlopen(/Users/luasantilli/.cache/uv/archive-v0/RK3IAlGfiICwDkHm2guLC/lib/python3.13/site-packages/torchcodec/libtorchcodec_decoder7.dylib, 0x0006): Library not loaded: @rpath/libavutil.59.dylib
# Referenced from: <6DB21246-F28A-31A6-910A-D8F3355D1064> /Users/luasantilli/.cache/uv/archive-v0/RK3IAlGfiICwDkHm2guLC/lib/python3.13/site-packages/torchcodec/libtorchcodec_decoder7.dylib
# Reason: no LC_RPATH's found
# FFmpeg version 6: dlopen(/Users/luasantilli/.cache/uv/archive-v0/RK3IAlGfiICwDkHm2guLC/lib/python3.13/site-packages/torchcodec/libtorchcodec_decoder6.dylib, 0x0006): Library not loaded: @rpath/libavutil.58.dylib
# Referenced from: <BD3B44FC-E14B-3ABF-800F-BB54B6CCA3B1> /Users/luasantilli/.cache/uv/archive-v0/RK3IAlGfiICwDkHm2guLC/lib/python3.13/site-packages/torchcodec/libtorchcodec_decoder6.dylib
# Reason: no LC_RPATH's found
# FFmpeg version 5: dlopen(/Users/luasantilli/.cache/uv/archive-v0/RK3IAlGfiICwDkHm2guLC/lib/python3.13/site-packages/torchcodec/libtorchcodec_decoder5.dylib, 0x0006): Library not loaded: @rpath/libavutil.57.dylib
# Referenced from: <F06EBF8A-238C-3A96-BFBB-B34E0BBDABF0> /Users/luasantilli/.cache/uv/archive-v0/RK3IAlGfiICwDkHm2guLC/lib/python3.13/site-packages/torchcodec/libtorchcodec_decoder5.dylib
# Reason: no LC_RPATH's found
# FFmpeg version 4: dlopen(/Users/luasantilli/.cache/uv/archive-v0/RK3IAlGfiICwDkHm2guLC/lib/python3.13/site-packages/torchcodec/libtorchcodec_decoder4.dylib, 0x0006): Library not loaded: @rpath/libavutil.56.dylib
# Referenced from: <6E59F017-C703-3AF6-A271-6277DD5F8170> /Users/luasantilli/.cache/uv/archive-v0/RK3IAlGfiICwDkHm2guLC/lib/python3.13/site-packages/torchcodec/libtorchcodec_decoder4.dylib
# Reason: no LC_RPATH's found
# ...
#
# This is strange because the same error does not happen when using version
|
https://github.com/huggingface/datasets/issues/7682
|
closed
|
[] | 2025-07-14T18:41:02Z
| 2025-07-15T12:10:39Z
| 2
|
luatil-cloud
|
huggingface/lerobot
| 1,507
|
[PI0] Evaluation result on the metaworld
|
Has anyone tried training pi0 on the Metaworld benchmark? My evaluation results are relatively low (around 30%).
|
https://github.com/huggingface/lerobot/issues/1507
|
closed
|
[
"bug",
"question",
"policies",
"simulation"
] | 2025-07-14T14:56:38Z
| 2025-10-08T08:47:31Z
| null |
chenkang455
|
huggingface/transformers
| 39,401
|
Qwen3 tokenizer wrong offset_mapping
|
### System Info
transformers 4.53.2, Ubuntu 22.04.4, python 3.11.13
### Who can help?
@ArthurZucker and @itazap There must be a problem with the `offset_mapping` of the Qwen3 `tokenizer`. The starting position in the text of each token, except the first and the last, is one position behind where it should be. I compared it with BERT's `tokenizer`, which produces the expected result:
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
from transformers import AutoTokenizer, BertTokenizerFast

sample_text = 'A girl is styling her hair.'

bert_tokenizer = BertTokenizerFast.from_pretrained('google-bert/bert-base-cased')
bert_encoding = bert_tokenizer(
    text=sample_text, add_special_tokens=False, return_offsets_mapping=True
)
print(bert_encoding['offset_mapping'])

qwen_tokenizer = AutoTokenizer.from_pretrained('Qwen/Qwen3-Embedding-0.6B')
qwen_encoding = qwen_tokenizer(
    text=sample_text, add_special_tokens=False, return_offsets_mapping=True
)
print(qwen_encoding['offset_mapping'])
```
### Expected behavior
[(0, 1), (2, 6), (7, 9), (10, 17), (18, 21), (22, 26), (26, 27)]
[(0, 1), (1, 6), (6, 9), (9, 17), (17, 21), (21, 26), (26, 27)]
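To make the two mappings easier to compare, slicing the text with each pair shows exactly which span every token covers (derived directly from the offsets above):
```
print([sample_text[start:end] for start, end in bert_encoding['offset_mapping']])
# ['A', 'girl', 'is', 'styling', 'her', 'hair', '.']
print([sample_text[start:end] for start, end in qwen_encoding['offset_mapping']])
# ['A', ' girl', ' is', ' styling', ' her', ' hair', '.']
```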
|
https://github.com/huggingface/transformers/issues/39401
|
closed
|
[
"bug"
] | 2025-07-14T14:21:08Z
| 2025-07-16T09:59:35Z
| 4
|
contribcode
|
huggingface/lerobot
| 1,506
|
episode: None
|
When I run "python -m lerobot.scripts.train --dataset.root=./lerobot_datasets/my_robot_dataset/ --output_dir=./lerobot_datasets/outputs/ --policy.type=pi0 --dataset.repo_id=lerobot/tape --policy.push_to_hub=false", I got
‘’
'dataset': {'episodes': None,
'image_transforms': {'enable': False...
}
‘’.
Is this right?
|
https://github.com/huggingface/lerobot/issues/1506
|
open
|
[
"question",
"policies"
] | 2025-07-14T13:29:07Z
| 2025-08-12T09:31:16Z
| null |
LogSSim
|
huggingface/finetrainers
| 420
|
How to fine-tune Wan 2.1 with Context Parallelism?
|
I am trying to fine-tune the Wan 2.1 model and would like to leverage the Context Parallelism (CP) feature to manage memory and scale the training. I saw in the main README that `CP support` is listed as a key feature.
I have looked through the `examples/training` directory and the documentation, but I couldn't find a specific example or launch script demonstrating how to fine-tune the Wan model with Context Parallelism enabled.
Could you please provide some guidance or a minimal example on how to properly configure a training job for **Wan 2.1 with Context Parallelism**?
|
https://github.com/huggingface/finetrainers/issues/420
|
open
|
[] | 2025-07-14T06:55:39Z
| 2025-07-15T05:09:45Z
| null |
vviper25
|
huggingface/lerobot
| 1,503
|
LeRobot So100 and Groot N1.5 Model Multi-Robot Deployment Feasibility Inquiry
|
Hello, I am conducting various tests using LeRobot's So100 (robot arm) with Groot N1.5 for training.
I have some questions to ask.
**Main Question**
Is it possible to simultaneously apply a model trained with Groot N1.5 base on one robot to multiple robots of the same model?
**Question Background (Actual Experience)**
I had a model that was trained with Groot 1.5 base using data collected from So100. However, when one robot motor failed and was replaced, I had to recalibrate the entire system.
After applying the previously used model for inference, the robot did not operate properly.
I suspect this might be due to the basic position changing during the calibration process.
**Core Question**
Following this logic, does each robot of the same model require an individual model tailored to its specific calibration?
This question also relates to whether a single unified model can be used for inference and operation when deploying 100 robot arms in a factory setting.
I would appreciate your response.
|
https://github.com/huggingface/lerobot/issues/1503
|
open
|
[
"enhancement",
"question",
"policies",
"dataset"
] | 2025-07-14T05:55:44Z
| 2025-08-12T09:31:35Z
| null |
devedgar
|
pytorch/helion
| 303
|
RuntimeError: Tile(0) is not tracked with proxy for
|
Hi, I noticed the following when a tile is used in a function:
Code:
```python
import ast

import torch

import helion
import helion.language as hl
from helion.language import _decorators
from helion._compiler.inductor_lowering import CodegenState


@_decorators.api()
def func(
    tensor: torch.Tensor,
    tile: tuple[int, ...]
) -> torch.Tensor:
    raise NotInsideKernel


@_decorators.register_fake(func)
def _(
    tensor: torch.Tensor,
    tile: tuple[int, ...]
) -> torch.Tensor:
    return tensor


@_decorators.codegen(func)
def _(state: CodegenState) -> ast.AST:
    tensor = state.ast_arg(0)
    assert isinstance(tensor, ast.AST)
    return tensor


@helion.kernel(static_shapes=True)
def helion_func(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    for tile_m, tile_n in hl.tile((x.shape[0], x.shape[1])):
        x_tile = func(x, (tile_m, tile_n))


x = torch.randn(16, 16)
y = torch.randn(16, 16)
helion_func(x, y)
```
The above will print:
```python
InternalError: RuntimeError: Tile(0) (140637436543456)is not tracked with proxy for <torch.fx.experimental.proxy_tensor.PythonKeyTracer object at 0x7fe8b4647680>
```
Any chance you know how to fix this? Thanks again!
|
https://github.com/pytorch/helion/issues/303
|
closed
|
[
"question"
] | 2025-07-13T07:45:20Z
| 2025-08-25T21:25:22Z
| null |
HanGuo97
|