| repo (string, 147 classes) | number (int64, 1 to 172k) | title (string, 2 to 476 chars) | body (string, 0 to 5k chars) | url (string, 39 to 70 chars) | state (string, 2 classes) | labels (list, 0 to 9 items) | created_at (timestamp[ns, tz=UTC], 2017-01-18 18:50:08 to 2026-01-06 07:33:18) | updated_at (timestamp[ns, tz=UTC], 2017-01-18 19:20:07 to 2026-01-06 08:03:39) | comments (int64, 0 to 58, ⌀ = missing) | user (string, 2 to 28 chars) |
|---|---|---|---|---|---|---|---|---|---|---|
huggingface/transformers
| 38,918
|
Lack of IDE-Specific Authentication Instructions in Hugging Face "Quickstart" Documentation
|
Explanation:
I’m currently exploring the Transformers library and want to understand its architecture in order to make meaningful contributions. I started with the Quickstart page, particularly the setup section, which provides instructions for getting started with the Hugging Face Hub.
However, I noticed that the documentation appears to be primarily tailored for users working in Jupyter notebooks. The instructions for authentication (using notebook_login()) seem to assume that the user is running code within a notebook environment. As someone who is working in PyCharm (and possibly others working in VS Code or other IDEs), I found that there is no clear guidance for authenticating via these IDEs.
It would be helpful to explicitly mention how users working in an IDE like PyCharm or VS Code should authenticate. Specifically, using huggingface-cli for authentication in a non-notebook environment could be a good solution. Providing a simple, clear guide on how to authenticate via the CLI or within the IDE would greatly improve the documentation.
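For example, the documentation could show a minimal non-notebook flow like the following (a sketch using the `huggingface_hub` Python API; `HF_TOKEN` is just a placeholder for wherever the user stores their token):
```python
# Sketch of authenticating outside a notebook (e.g. from a PyCharm run configuration).
# Assumes `huggingface_hub` is installed; alternatively, run `huggingface-cli login`
# once in a terminal and the token is cached for subsequent runs.
import os
from huggingface_hub import login

login(token=os.environ["HF_TOKEN"])  # HF_TOKEN is a placeholder environment variable
```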
Suggestion:
I recommend updating the documentation to include a section specifically addressing authentication when working in IDEs like PyCharm or VS Code.
Please let me know if this suggestion makes sense or if you need any further clarification before I proceed with the update.
|
https://github.com/huggingface/transformers/issues/38918
|
closed
|
[] | 2025-06-19T17:16:32Z
| 2025-06-24T18:48:17Z
| 4
|
marcndo
|
huggingface/datasets
| 7,627
|
Creating a HF Dataset from lakeFS with S3 storage takes too much time!
|
Hi,
I’m new to HF datasets and I tried to create a dataset from data versioned in **lakeFS** _(with a **MinIO** S3 bucket as the storage backend)_.
I’m using about 30,000 PIL images from MNIST, yet it takes around 12 minutes to execute, which seems like a lot!
From what I understand, it loads the images into the cache and then builds the dataset.
– Please find below the execution screenshot –
Is there a way to optimize this, or am I doing something wrong?
Thanks!

|
https://github.com/huggingface/datasets/issues/7627
|
closed
|
[] | 2025-06-19T14:28:41Z
| 2025-06-23T12:39:10Z
| 1
|
Thunderhead-exe
|
huggingface/lerobot
| 1,351
|
Need help with dataset and training.
|
# What this is for
I was drawn in by SmolVLA and am new to smolvla_base, so I would like to ask a few questions before trying the model.
Several parts:
1) dataset
2) simulation
3) real world
## dataset
### Two cameras ?
I have read three datasets, including
https://huggingface.co/datasets/lerobot/svla_so101_pickplace
https://huggingface.co/datasets/Marlboro1998/starai02
and their structure shows:
videos/chunks/ contains two folders of .mp4 files, one per camera.
https://huggingface.co/datasets/unitreerobotics/Z1_DualArmStackBox_Dataset
I find that the data in the Unitree dataset uses only one camera.
Does this mean that two cameras are not strictly necessary?
**If one camera** is enough to build a dataset, where and how should I change the code to build the dataset and train with it?
**If two cameras are the minimum requirement**, is it possible to place them at arbitrary positions? For example, one in-hand and one somewhere else, because it can be hard to physically put a camera in exactly the same position every time (for some tasks).
### depth data?
I have a RealSense camera that provides depth data. How should I handle it in the dataset? Should I use only the color frames?
### video length
I have watched several videos in svla_so101_pickplace, and each is about 10 s long. I understand this is because such a short video contains one complete task.
What about a task that is long and complex? Should I break it down into n parts, so that I get n + 1 tasks (the sub-tasks plus the full task), and then train on those?
## simulation
### simulation env
I have some basic understanding of this part. I have used MuJoCo and Isaac Sim a few times and am just starting with LeRobot.
Is it possible to output to MuJoCo or Isaac Sim? I understand these two may not be directly related to LeRobot; sorry if I have anything wrong.
### simulation of different robot
This relates to training. How can I record a dataset for a custom robot? I have read some datasets, e.g. for Unitree, but how do I record in simulation with a custom robot?
I have not yet read the LeRobot documentation in depth, so if there is any doc that covers this, could you share it?
## real world
If I try to train with a different robot but with little data (because there is less community data and self-collected data), I think its performance would not be as good as the results in your paper. How much data do you think is necessary in such a situation (a robot different from those in the paper)?
Thanks a lot for your consideration. Forgive me if anything above is unclear.
|
https://github.com/huggingface/lerobot/issues/1351
|
closed
|
[
"question",
"policies",
"dataset"
] | 2025-06-19T04:03:43Z
| 2025-10-17T11:47:56Z
| null |
hbj52152
|
huggingface/candle
| 2,997
|
Implement Conv3D support for compatibility with Qwen-VL and similar models
|
Several vision-language models such as Qwen-VL and its variants make use of 3D convolution layers (Conv3D) in their architecture, especially for handling video or temporal spatial data. Currently, Candle does not support Conv3D operations, which makes it impossible to run or port such models natively.
In order to support these models and ensure broader compatibility with existing open-source architectures, it would be beneficial to implement Conv3D in Candle as a fundamental operation.
This will enable:
- Native execution of Qwen-VL-style models
- Proper handling of video or spatio-temporal data inputs
- Compatibility with pretrained weights relying on Conv3D layers
Looking forward to discussion and suggestions on how best to approach this implementation.
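For reference, here is a minimal illustration (shown in PyTorch, since that is where the pretrained weights come from; all sizes are made up) of the operation being requested:
```python
# Illustration only: the Conv3d patchify step used by Qwen-VL-style vision towers,
# which convolves over (frames, height, width). All sizes below are illustrative.
import torch
import torch.nn as nn

x = torch.randn(1, 3, 8, 224, 224)  # (batch, channels, frames, height, width)
conv3d = nn.Conv3d(in_channels=3, out_channels=1280,
                   kernel_size=(2, 14, 14), stride=(2, 14, 14))
patches = conv3d(x)
print(patches.shape)  # torch.Size([1, 1280, 4, 16, 16])
```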
|
https://github.com/huggingface/candle/issues/2997
|
open
|
[] | 2025-06-19T02:57:20Z
| 2025-10-10T16:51:20Z
| 1
|
maximizemaxwell
|
pytorch/torchrec
| 3,114
|
Which lightning strategy to use with torchrec optimizers?
|
Hi, thank you for this great work. I would like to know which [distributed strategy](https://github.com/Lightning-AI/pytorch-lightning/blob/76d3d22c5997398ffb5296cf500c723a176c0a06/src/lightning/pytorch/trainer/trainer.py#L95) to use with lightning trainer. I see two potential avenues:
1. DDP strategy: [following this example](https://github.com/pytorch/torchrec/blob/ab1cbe13833f51ace06f5075653ca1e16d937038/examples/bert4rec/bert4rec_main.py#L512-L524), I verified that the updates are not sparse, i.e., embeddings not used to compute the loss for the current batch were still updated when using Adam (due to momentum/weight decay)
2. Custom strategy for DMP: [when using DMP](https://github.com/pytorch/torchrec/blob/ab1cbe13833f51ace06f5075653ca1e16d937038/examples/bert4rec/bert4rec_main.py#L491-L507), I've verified the updates are sparse. However, AFAIK, there is not a DMP strategy for lightning, and so I would need to define a custom strategy.
Is it possible to make DDP work for sparse opt, and if not, is a custom strategy the best option?
MWE:
```python
import argparse
import os
import sys
from typing import Any, cast, Dict, List, Union
from fbgemm_gpu.split_embedding_configs import EmbOptimType
import torch
from torch import distributed as dist, nn, optim
import torch.utils.data as data_utils
from torch.nn.parallel import DistributedDataParallel as DDP
import torchrec
from torchrec.distributed.embeddingbag import EmbeddingBagCollectionSharder
from torchrec.distributed.model_parallel import DistributedModelParallel as DMP
from torchrec.distributed.types import ModuleSharder
from torchrec.optim.keyed import CombinedOptimizer, KeyedOptimizerWrapper
from torchrec.optim.optimizers import in_backward_optimizer_filter
from torchrec.sparse.jagged_tensor import KeyedJaggedTensor
from torchrec.modules.embedding_configs import ShardingType
from torchrec import EmbeddingBagCollection, EmbeddingBagConfig, PoolingType
class DataSet(torch.utils.data.IterableDataset):
def __init__(
self,
max_id: int,
max_seq_len: int
) -> None:
self.max_seq_len = max_seq_len
self.max_id = max_id
def __iter__(self):
while True:
len_ = torch.randint(1, self.max_seq_len + 1, (1, )).item()
yield torch.randint(0, self.max_id, (len_,))
class Model(torch.nn.Module):
def __init__(
self,
max_id: int,
emb_dim: int,
) -> None:
super().__init__()
self.emb_dim = emb_dim
item_embedding_config = EmbeddingBagConfig(
name="item_embedding",
embedding_dim=emb_dim,
num_embeddings=max_id,
feature_names=["item"],
weight_init_max=1.0,
weight_init_min=-1.0,
pooling=PoolingType.MEAN,
)
self.ebc = EmbeddingBagCollection(
tables=[item_embedding_config],
)
self.head = nn.Linear(emb_dim, 1)
def forward(self, x: KeyedJaggedTensor) -> torch.Tensor:
out = self.ebc(x)["item"].to_dense()
return self.head(out)
def parse_args(argv: List[str]) -> argparse.Namespace:
parser = argparse.ArgumentParser()
parser.add_argument(
"--mode",
type=str,
default="ddp",
help="dmp (distributed model parallel) or ddp (distributed data parallel)",
)
return parser.parse_args(argv)
def _to_kjt(seqs: torch.LongTensor, device: torch.device) -> KeyedJaggedTensor:
seqs_list = list(seqs)
lengths = torch.IntTensor([value.size(0) for value in seqs_list])
values = torch.cat(seqs_list, dim=0)
kjt = KeyedJaggedTensor.from_lengths_sync(
keys=["item"], values=values, lengths=lengths
).to(device)
return kjt
def get_embedding_weights(model: Union[DDP, DMP], x: List[torch.Tensor]):
emb_weights = [v.data.clone() for k, v in model.named_parameters() if "embedding" in k]
assert len(emb_weights) == 1
emb_weights = emb_weights[0]
x = torch.cat(x)
ids = torch.arange(len(emb_weights)).type_as(x)
used_mask = torch.isin(ids, x)
return emb_weights[used_mask], emb_weights[~used_mask]
def _train_one_epoch(
model: Union[DDP, DMP],
loader: data_utils.DataLoader,
device: torch.device,
optimizer: optim.Adam,
) -> None:
model.train()
if torch.cuda.is_available():
torch.cuda.set_device(dist.get_rank())
i = 0
NUM_ITER = 5
for batch in loader:
i += 1
batch = [x.to(device) for x in batch]
optimizer.zero_grad()
kjt = _to_kjt(batch, device)
loss = model(kjt).norm()
used_embs_pre, unused_embs_pre = get_embedding_weights(model, batch)
loss.backward()
optimizer.step()
used_embs_post, unused_embs_post = get_embedding_weights(model, batch)
diffs_used = torch.norm(used_embs_post - used_embs_pre).item()
diffs_unused = torch.norm(unused_embs_post - unused_embs_pre).item()
print(f"Iter {i
|
https://github.com/meta-pytorch/torchrec/issues/3114
|
open
|
[] | 2025-06-18T19:25:10Z
| 2025-06-19T06:04:22Z
| 0
|
JacobHelwig
|
huggingface/accelerate
| 3,633
|
how to save a model with FSDP2 ?
|
Hello everyone, I’m confused about how to save model weights using FSDP2. I keep running into OOM (out-of-memory) issues when trying to save a trained 8B model with FSDP2. Interestingly, memory is sufficient during training, but saving the model requires too much memory.
I would like each rank to save only its own weights (Maybe the OOM issue doesn't occur in this case?)
I’m using 8 A100-40GB GPUs, and I’d really appreciate your help.
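One thing I am considering trying (just a sketch based on my reading of the docs, not verified): with `fsdp_state_dict_type: SHARDED_STATE_DICT`, `accelerator.save_state()` should write per-rank shards instead of gathering the full state dict on one rank.
```python
# Sketch only (uses the `accelerator` object from the script below): save sharded
# per-rank checkpoints rather than a consolidated state dict, which is what seems
# to trigger the OOM in save_model.
accelerator.wait_for_everyone()
accelerator.save_state("./saved_models/tmp_sharded")  # each rank writes its own shard
```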
here is my envs:
```text
accelerate==1.7.0
torch==2.6.0+cu12.6
transformers==4.52.4
```
this is my accelerate config (FSDP2.yaml):
```yaml
compute_environment: LOCAL_MACHINE
debug: false
distributed_type: FSDP
downcast_bf16: 'no'
enable_cpu_affinity: false
fsdp_config:
fsdp_activation_checkpointing: false
fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
fsdp_cpu_ram_efficient_loading: true
fsdp_offload_params: false
fsdp_reshard_after_forward: true
fsdp_state_dict_type: SHARDED_STATE_DICT
fsdp_version: 2
machine_rank: 0
main_training_function: main
mixed_precision: bf16
num_machines: 1
num_processes: 8
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false
```
my script (demo.py):
```python
import os
import os.path as osp
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModelForCausalLM
from accelerate import Accelerator
class Mydataset(torch.utils.data.Dataset):
def __init__(self, data_length=32, tokenizer = None):
super().__init__()
self.data_length = data_length
self.tokenizer = tokenizer
self.input_str = 'this is a test'
self.data = tokenizer(self.input_str, return_tensors='pt', padding='max_length', max_length=32, padding_side='right')
def __len__(self):
return 10
def __getitem__(self, idx):
return {
'input_ids': self.data['input_ids'][0],
'attention_mask': self.data['attention_mask'][0]
}
if __name__ == '__main__':
accelerator = Accelerator()
model_path = "./pretrain/Qwen3-8B"
model = AutoModelForCausalLM.from_pretrained(model_path, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
dataset = Mydataset(tokenizer=tokenizer)
dataloader = torch.utils.data.DataLoader(dataset, batch_size=16, shuffle=True)
model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)
loss_fuc = torch.nn.CrossEntropyLoss()
model.train()
# training
for batch in dataloader:
input_ids = batch['input_ids']
attention_mask = batch['attention_mask']
labels = batch['input_ids'].clone()
outputs = model(input_ids=input_ids, attention_mask=attention_mask)
labels = nn.functional.pad(labels, (0, 1), value=-100)
shift_labels = labels[..., 1:].contiguous().view(-1)
accelerator.wait_for_everyone()
loss = loss_fuc(outputs.logits.view(-1, outputs.logits.shape[-1]), shift_labels)
accelerator.backward(loss)
optimizer.step()
optimizer.zero_grad()
print("training finished")
model.eval()
model_save_path = "./saved_models/tmp"
accelerator.save_model(model, model_save_path)
print("Done")
```
command:
```bash
accelerate launch --config_file ./accelerate_configs/FSDP2.yaml demo.py
```
|
https://github.com/huggingface/accelerate/issues/3633
|
closed
|
[] | 2025-06-18T11:41:05Z
| 2025-06-18T15:36:37Z
| null |
colinzhaoxp
|
huggingface/datasets
| 7,624
|
#Dataset Make "image" column appear first in dataset preview UI
|
Hi!
#Dataset
I’m currently uploading a dataset that includes an `"image"` column (PNG files), along with some metadata columns. The dataset is loaded from a .jsonl file. My goal is to have the "image" column appear as the first column in the dataset card preview UI on the :hugs: Hub.
However, at the moment, the `"image"` column is not the first—in fact, it appears last, which is not ideal for the presentation I’d like to achieve.
I have a couple of questions:
Is there a way to force the dataset card to display the `"image"` column first?
Is there currently any way to control or influence the column order in the dataset preview UI?
Does the order of keys in the .jsonl file or the features argument affect the display order?
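For context, this is the kind of reordering I would try before pushing (a sketch; `ds` stands for my dataset and the repo id is a placeholder, and I am not sure whether the preview UI respects it):
```python
# Sketch: put "image" first in the column order before pushing to the Hub.
# `ds` is assumed to be the Dataset loaded from the .jsonl file.
cols = ["image"] + [c for c in ds.column_names if c != "image"]
ds = ds.select_columns(cols)          # reorders (and keeps) the listed columns
ds.push_to_hub("my-user/my-dataset")  # placeholder repo id
```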
Thanks again for your time and help! :blush:
|
https://github.com/huggingface/datasets/issues/7624
|
closed
|
[] | 2025-06-18T09:25:19Z
| 2025-06-20T07:46:43Z
| 2
|
jcerveto
|
huggingface/agents-course
| 550
|
[QUESTION] Diagram of the multi-agent architecture
|
[Unit 2.1 Multi-Agent Systems](https://huggingface.co/learn/agents-course/unit2/smolagents/multi_agent_systems#multi-agent-systems) contains [an image](https://mermaid.ink/img/pako:eNp1kc1qhTAQRl9FUiQb8wIpdNO76eKubrmFks1oRg3VSYgjpYjv3lFL_2hnMWQOJwn5sqgmelRWleUSKLAtFs09jqhtoWuYUFfFAa6QA9QDTnpzamheuhxn8pt40-6l13UtS0ddhtQXj6dbR4XUGQg6zEYasTF393KjeSDGnDJKNxzj8I_7hLW5IOSmP9CH9hv_NL-d94d4DVNg84p1EnK4qlIj5hGClySWbadT-6OdsrL02MI8sFOOVkciw8zx8kaNspxnrJQE0fXKtjBMMs3JA-MpgOQwftIE9Bzj14w-cMznI_39E9Z3p0uFoA?type=png) depicting a diagram of the multi-agent architecture. In this image, the Manager Agent, which is typically responsible for task delegation, has direct access to a Code-Interpreter Tool. Would it be more reasonable in practice if there was a Code-Interpreter Agent between them?

|
https://github.com/huggingface/agents-course/issues/550
|
open
|
[
"question"
] | 2025-06-18T08:58:58Z
| 2025-06-18T08:58:58Z
| null |
st143575
|
pytorch/vision
| 9,110
|
RoIHeads.postprocess_detections boxes slicing error occurs when removing predictions with the background label
|
### 🐛 Describe the bug
**Bug Report: Incorrect Box Slicing in Faster R-CNN's postprocess_detections**
### Minimal Reproduction Code
```python
import torch
import torchvision
detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
data = torch.zeros((1, 3, 1080, 1920), dtype=torch.float32)
detections = detector(data)
```
### Description
The bug occurs in [`roi_heads.py` (line 701)](https://github.com/pytorch/vision/blob/main/torchvision/models/detection/roi_heads.py#L701) in the `postprocess_detections` function of `RoIHeads` when processing Faster R-CNN outputs. The current implementation incorrectly handles box dimension slicing when removing background class predictions.
### Problem Location
The problematic code segment:
```python
for boxes, scores, image_shape in zip(pred_boxes_list, pred_scores_list, image_shapes):
...
# remove predictions with the background label
boxes = boxes[:, 1:] # Incorrect slicing
scores = scores[:, 1:]
labels = labels[:, 1:]
...
```
### Root Cause
1. The boxes tensor has shape `[N, num_classes * 4]` (where each class has 4 coordinate values)
2. The current slicing `boxes[:, 1:]` incorrectly operates on the last dimension (class*coordinates) instead of just the class dimension
3. This causes misalignment between boxes, scores, and labels since they're being sliced differently

### Expected Behavior
The boxes tensor should first be reshaped to `[N, num_classes, 4]` before slicing to properly separate class and coordinate dimensions.
### Proposed Fix
```python
for boxes, scores, image_shape in zip(pred_boxes_list, pred_scores_list, image_shapes):
...
# remove predictions with the background label
boxes = boxes.reshape(-1, num_classes, 4) # Proper dimension separation
boxes = boxes[:, 1:, :] # Correct class dimension slicing
scores = scores[:, 1:]
labels = labels[:, 1:]
...
```
### Impact
The current implementation leads to:
1. Misaligned boxes and their corresponding scores/labels
2. Potentially incorrect final detection results
3. Silent failure without explicit errors
### Versions
branch: 6473b779bdb8ba02bab0fc9e0f4ef4661ebb632a
|
https://github.com/pytorch/vision/issues/9110
|
closed
|
[
"bug",
"question"
] | 2025-06-18T08:55:33Z
| 2025-09-04T14:52:39Z
| null |
FeiFanMoKe
|
pytorch/pytorch
| 156,191
|
Dynamo does not know how to trace method `__len__` of class `<unknown type>` with torch.logging calls
|
### 🐛 Describe the bug
Whenever we use any logging function, there is a graph break due to calling `__len__` on an unknown type. I dug into the logging source code and set a breakpoint, and the `root.handlers` object is definitely a standard list, but torch.compile isn't able to parse that.
I know that there is this change https://github.com/pytorch/pytorch/pull/139403 that allows us to ignore certain logging functions, but calling `logging.info` still forces us through this code path.
We use a ton of logging throughout our large training script and the graph breaks kill our performance. Any help resolving this would be great! Note: we don't actually care about seeing the logs in a torch.compiled graph, we already log everything once eagerly before compiling.
```python
import torch
import logging
import triton
torch._logging.set_logs(graph_breaks=True)
_NUM_ITERATIONS = 20
@torch.compile
def _logging_fn(x, y):
result = x
for _ in range(_NUM_ITERATIONS):
logging.info("Hello")
result += (x * y)
return result
# Benchmark
DEVICE = "cuda"
test_x = torch.randn(1000).to(DEVICE).to(torch.float32)
test_y = torch.randn(1000).to(DEVICE).to(torch.float32)
print(f"logging_fn: {triton.testing.do_bench(lambda: _logging_fn(test_x, test_y))}")
```
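For completeness, the workaround I am experimenting with in the meantime (just a sketch; my understanding is that Dynamo treats `torch.compiler.is_compiling()` as a constant during tracing, so the logging call is dropped from the compiled region instead of breaking the graph):
```python
import logging
import torch

_NUM_ITERATIONS = 20

@torch.compile
def _logging_fn_no_breaks(x, y):
    result = x
    for _ in range(_NUM_ITERATIONS):
        # Skipped under torch.compile; still logs in eager mode.
        if not torch.compiler.is_compiling():
            logging.info("Hello")
        result += x * y
    return result
```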
### Error logs
```
[__graph_breaks] Graph break in user code at /home/aboubezari/.conda/envs/torch-env2/lib/python3.10/logging/__init__.py:2127
[__graph_breaks] Graph Break Reason: Unsupported method call
[__graph_breaks] Explanation: Dynamo does not know how to trace method `__len__` of class `<unknown type>`
[__graph_breaks] Hint: Avoid calling `<unknown type>.__len__` in your code.
[__graph_breaks] Hint: Please report an issue to PyTorch.
```
### Versions
PyTorch version: 2.7.1+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: Could not collect
CMake version: version 3.16.3
Libc version: glibc-2.31
Python version: 3.10.0 (default, Mar 3 2022, 09:58:08) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.15.0-1083-gcp-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA A100-SXM4-40GB
Nvidia driver version: 535.247.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 12
On-line CPU(s) list: 0-11
Thread(s) per core: 2
Core(s) per socket: 6
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) CPU @ 2.20GHz
Stepping: 7
CPU MHz: 2200.166
BogoMIPS: 4400.33
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 192 KiB
L1i cache: 192 KiB
L2 cache: 6 MiB
L3 cache: 38.5 MiB
NUMA node0 CPU(s): 0-11
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe
|
https://github.com/pytorch/pytorch/issues/156191
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamo"
] | 2025-06-17T16:54:31Z
| 2025-06-17T19:27:01Z
| null |
aboubezari
|
huggingface/lerobot
| 1,337
|
how to work with ur robot, collect the data, and fine-tune the model?
|
https://github.com/huggingface/lerobot/issues/1337
|
closed
|
[
"question",
"policies",
"dataset"
] | 2025-06-17T09:51:16Z
| 2025-10-17T11:49:17Z
| null |
mmlingyu
|
|
huggingface/diffusers
| 11,730
|
Add `--lora_alpha` and metadata handling in training scripts follow up
|
With #11707, #11723 we pushed some small changes to the way we save and parse metadata for trained LoRAs, which also allow us to add a `--lora_alpha` arg to the Dreambooth LoRA training scripts, making LoRA alpha also configurable.
This issue is to ask for help from the community to bring these changes to the other training scripts.
Since this is an easy contribution, let's try to leave this issue for beginners and people that want to start learning how to contribute to open source projects 🤗
Updating list of scripts to contribute to:
- [ ] [train_dreambooth_lora_sdxl_advanced](https://github.com/huggingface/diffusers/blob/main/examples/advanced_diffusion_training/train_dreambooth_lora_sdxl_advanced.py)
- [x] [train_dreambooth_lora_sdxl](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/train_dreambooth_lora_sdxl.py)
- [x] [train_dreambooth_lora_sd3](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/train_dreambooth_lora_sd3.py)
- [x] [train_dreambooth_lora_sana](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/train_dreambooth_lora_sana.py)
- [ ] [train_dreambooth_lora_lumina2](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/train_dreambooth_lora_lumina2.py)
- [x] [train_dreambooth_lora_hidream](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/train_dreambooth_lora_hidream.py)
- [ ] [train_dreambooth_lora](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/train_dreambooth_lora.py)
If you want to contribute just answer to this issue with the one you want to do and tag me in the PR. Please only take one so we can use this opportunity for people to learn the ropes on how to contribute and get started with open source.
cc: @sayakpaul
|
https://github.com/huggingface/diffusers/issues/11730
|
closed
|
[
"good first issue",
"contributions-welcome"
] | 2025-06-17T09:29:24Z
| 2025-06-24T10:58:54Z
| 8
|
linoytsaban
|
huggingface/trl
| 3,605
|
How to convert my multiturn dialogue dataset?
|
I have created a multi-turn dialogue dataset. During training, the assistant's reply in each round needs to be based on the user's reply and the conversation history from previous rounds: first the user's reply is labeled, and then the corresponding response sentence is generated. In other words, the assistant's reply depends on the preceding multi-round dialogue, and the reward function is based on the label prediction and response sentence of the current round. How should this kind of dataset be handled?
#### Example
{'role': 'user', 'content': "hello, doctor, I can't sleep well"},
{'role': 'assistant', 'content': "userstate: sleep problems | useremotion: | response: Is it trouble falling asleep or poor sleep quality?"},
{'role': 'user', 'content': "All"},
{'role': 'assistant', 'content': "userstate: sleep problems | useremotion: irritable | assistant-strategy: Ask for details | response: How long has it lasted?"},
{'role': 'user', 'content': "About two months"},
......
A single round of user input alone is not enough to determine the user's state and emotions, but I would like the assistant's output to be evaluated for every round of user input.
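To make the question concrete, this is roughly how I imagine expanding one conversation into per-round examples (just a sketch, not an official TRL format; the `prompt`/`completion` key names are my own choice):
```python
# Sketch: one training example per assistant turn, where the prompt is the history up to
# (and including) the latest user turn and the completion is the assistant reply that the
# reward function should score. Key names are illustrative, not a TRL requirement.
conversation = [
    {"role": "user", "content": "hello, doctor, I can't sleep well"},
    {"role": "assistant", "content": "userstate: sleep problems | useremotion: | response: ..."},
    {"role": "user", "content": "All"},
    {"role": "assistant", "content": "userstate: sleep problems | useremotion: irritable | response: ..."},
]

examples = []
for i, turn in enumerate(conversation):
    if turn["role"] == "assistant":
        examples.append({
            "prompt": conversation[:i],  # full history before this assistant reply
            "completion": [turn],        # the reply to be labeled/scored this round
        })
```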
|
https://github.com/huggingface/trl/issues/3605
|
closed
|
[
"🏋 Reward"
] | 2025-06-17T09:07:47Z
| 2025-09-22T17:46:35Z
| null |
Miaoqinghong
|
huggingface/lerobot
| 1,333
|
SO-100 Follower: Severe wrist_roll motor instability causing unwanted rotation during teleoperation
|
## Problem Description
The SO-100 Follower robot arm experiences severe instability in the `wrist_roll` motor during teleoperation, causing unwanted and uncontrollable rotation that significantly impacts usability. The motor exhibits extreme sensitivity and appears to be completely out of control in the default configuration.
## Environment
- **Robot**: SO-100 Follower
- **LeRobot Version**: [Current version]
- **Hardware**: Feetech STS3215 servos
- **OS**: macOS
- **Python**: 3.10.4
## Quantitative Analysis
### Baseline Analysis (Default Configuration)
- **Data Collection**: 416.5 seconds, 24,894 data points
- **Standard Deviation**: **95.596** (extremely high)
- **Large Changes (>10.0)**: **242 occurrences**
- **Value Distribution**:
- Small values (|x|<5.0): **0%**
- Large values (|x|≥10.0): **100%** (completely uncontrolled)
### Motor Correlation Analysis
Strong correlations with other motors suggest cross-coupling issues:
1. **elbow_flex.pos**: -0.253 (negative correlation, highest impact)
2. **shoulder_lift.pos**: 0.203 (positive correlation)
3. **gripper.pos**: 0.167 (positive correlation)
4. **shoulder_pan.pos**: 0.124 (weak positive correlation)
5. **wrist_flex.pos**: 0.026 (minimal correlation)
### Trigger Pattern Analysis
When wrist_roll experiences large changes (242 instances), average changes in other motors:
- **elbow_flex.pos**: 1.970 (highest trigger)
- **wrist_flex.pos**: 2.092
- **shoulder_lift.pos**: 1.119
- **gripper.pos**: 0.585
- **shoulder_pan.pos**: 0.426
## Root Cause Investigation
### 1. Motor Configuration Issues
- Default P_Coefficient (16) appears too high for wrist_roll motor
- No deadzone filtering in default configuration
- Potential hardware-level noise or mechanical coupling
### 2. Cross-Motor Interference
- Strong negative correlation with elbow_flex suggests mechanical or electrical interference
- Movement of other motors triggers unwanted wrist_roll rotation
### 3. Control System Sensitivity
- Motor responds to minimal input changes
- No built-in filtering for noise or small movements
## Reproduction Steps
1. Set up SO-100 Follower with default configuration
2. Run teleoperation:
```bash
python -m lerobot.teleoperate \
--robot.type=so100_follower \
--robot.port=/dev/tty.usbserial-130 \
--robot.id=blue \
--teleop.type=so100_leader \
--teleop.port=/dev/tty.usbserial-110 \
--teleop.id=blue
```
3. Move any other motor (especially elbow_flex)
4. Observe unwanted wrist_roll rotation
## Attempted Solutions and Results
### 1. P Coefficient Reduction
**Implementation**: Reduced wrist_roll P_Coefficient from 16 to 4
**Result**: Improved standard deviation from 95.596 to 59.976 (37.3% improvement)
### 2. Deadzone Filtering
**Implementation**: Added deadzone threshold (5.0) to ignore small changes
**Result**: Partial improvement but problem persists
### 3. Advanced Filtering System
**Implementation**: Created comprehensive filtering with:
- Moving average filter
- Gripper-linked filter
- Combined filtering modes
**Result**: Reduced responsiveness but didn't eliminate core issue
### 4. Complete Disabling (Workaround)
**Implementation**: Force wrist_roll value to 0.0 at all times
**Result**: Eliminates problem but removes wrist_roll functionality
## Proposed Solutions
### Short-term (Workarounds)
1. **Lower P Coefficient**: Further reduce to 2 or 1
2. **Stronger Deadzone**: Increase threshold to 20.0+
3. **Motor Disabling**: Provide option to disable problematic motors
### Long-term (Root Cause Fixes)
1. **Hardware Investigation**: Check for:
- Cable interference/noise
- Mechanical coupling between joints
- Motor calibration issues
- Power supply stability
2. **Software Improvements**:
- Adaptive filtering based on motor correlations
- Cross-motor interference compensation
- Better default configurations for SO-100
3. **Configuration Options**:
- Motor-specific P/I/D coefficients
- Built-in filtering options
- Hardware-specific presets
## Additional Data Available
I have collected extensive analysis data including:
- Multiple log files with quantitative measurements
- Correlation analysis scripts and results
- Visualization graphs showing the problem
- Working implementations of various filtering approaches
## Impact
This issue severely impacts the usability of SO-100 Follower robots for:
- Teleoperation tasks
- Data collection for machine learning
- Precise manipulation requirements
The problem appears to be systemic rather than isolated to individual units, suggesting a configuration or design issue that affects the SO-100 platform generally.
## Request for Assistance
Given the complexity of this issue and its impact on SO-100 usability, I would appreciate:
1. Guidance on hardware-level debugging approaches
2. Insights from other SO-100 users experiencing similar issues
3. Potential firmware or configuration updates
4. Recommendations for permanen
|
https://github.com/huggingface/lerobot/issues/1333
|
open
|
[
"question",
"policies"
] | 2025-06-17T07:10:23Z
| 2025-12-05T12:17:16Z
| null |
TKDRYU104
|
huggingface/safetensors
| 624
|
Interest in Parallel Model Training and Xformers Saving Support (Bug?) (SOLVED)
|
### Feature request
I would like to request official support for xformers (link: https://github.com/facebookresearch/xformers) and parallel model training: https://huggingface.co/docs/transformers/v4.13.0/en/parallelism for the safetensor saving file format if this does not currently exist. This safetensors saving error may be a bug exclusive to my Diffusion-Transformer hybrid model architecture.
### Motivation
I had a problem when training a custom Diffusion-Transformer hybrid architecture with xformers and parallel model training. I tried to flatten the hybrid model for saving so the dimensions were what safetensors expected. However, safetensors seems to require the entire model to reside in one place when saving (rather than being split across devices for parallel training). I believe this may be a solvable error or bug? Thank you for your time.
### Your contribution
I am unsure how to suggest adding this feature into the safetensors project.
|
https://github.com/huggingface/safetensors/issues/624
|
closed
|
[] | 2025-06-17T03:20:15Z
| 2025-06-18T22:01:11Z
| 1
|
viasky657
|
huggingface/lerobot
| 1,330
|
Could you update the repository to enable the evaluation of SmolVLA's performance?
|
Could you update the repository to enable the evaluation of SmolVLA's performance?
|
https://github.com/huggingface/lerobot/issues/1330
|
closed
|
[
"question",
"policies"
] | 2025-06-17T02:38:22Z
| 2025-10-17T11:50:22Z
| null |
Pandapan01
|
huggingface/transformers
| 38,851
|
Should `compute_metrics` only run on the main process when doing DDP?
|
Hi, I want to know when doing training and evaluation on a multi-GPU setup (DDP using trainer and accelerate), does `compute_metrics` only need to be run on the main process?
The reason being that `trainer` itself already does `gather_for_metrics` ([here](https://github.com/huggingface/transformers/blob/v4.51-release/src/transformers/trainer.py#L4373)), which I suppose should collect all predictions (logits) and labels across processes, running `compute_metrics` from multiple processes again will be doing duplicated work, no?
To add:
I am using `batch_eval_metrics`, where I first noticed that if I run the training script (a modified version of `run_clm.py`) with `accelerate launch`, `compute_metrics` is always called multiple times, and the logits in each call's `EvalPrediction` have a batch size of `per_device_eval_batch_size` * the number of GPUs I am using.
|
https://github.com/huggingface/transformers/issues/38851
|
closed
|
[] | 2025-06-17T00:09:43Z
| 2025-07-25T08:02:33Z
| 2
|
TIE666
|
pytorch/xla
| 9,371
|
Failing `torch_xla._XLAC._xla_custom_call()` with `RuntimeError: Bad StatusOr access: UNIMPLEMENTED: No registered implementation for custom call to my_lib.my_op.default for platform CUDA`
|
## ❓ Questions and Help
During execution of `torch_xla.stablehlo.exported_program_to_stablehlo()`, it fails with `RuntimeError: Bad StatusOr access: UNIMPLEMENTED: No registered implementation for custom call to my_lib.my_op.default for platform CUDA`. For more context, `my_op` is registered under a custom library as follows
```python
import torch  # needed for torch.empty_like below
from torch.library import Library, impl
from torch.library import impl_abstract
MY_LIB = Library("my_lib", "DEF")
MY_LIB.define("my_op(Tensor t) -> Tensor")
@impl(f"{MY_LIB.ns}::my_op", "default")
def my_op(t):
return t
@impl_abstract(f"{MY_LIB.ns}::my_op")
def my_op_meta(t):
return torch.empty_like(t)
```
I am able to get the torch ExportedProgram and the `MY_LIB` namespace is allowed in the stablehlo graph as a custom op by specifying
```
StableHLOExportOptions(
custom_ops_allowed_in_graph={MY_LIB.ns}
)
```
It **seems** to me that if XLA does not attempt to execute the graph then the error is not thrown. I have a few questions here:
1. How can I get around this `RuntimeError`?
2. Does registering a custom op under torch library (the way I did in the first code snippet) not expose the implementation to XLA?
|
https://github.com/pytorch/xla/issues/9371
|
open
|
[
"bug",
"stablehlo"
] | 2025-06-16T21:01:05Z
| 2025-06-24T18:55:50Z
| 4
|
hsjts0u
|
pytorch/xla
| 9,366
|
PyTorch/XLA custom Triton kernel export to StableHLO
|
I'd like to export a model to StableHLO with a simple custom Triton kernel. Following the [guide here](https://docs.pytorch.org/xla/master/features/triton.html) on Pytorch/XLA with custom GPU kernels. However, I am encountering errors with the [torch.export](https://docs.pytorch.org/xla/master/features/stablehlo.html) where it seems like it is unable to run tracing due to the existence of the custom operations. How can I properly export my model with custom GPU kernel to StableHLO?
Error:
```
Traceback (most recent call last):
File "/root/test_code.py", line 73, in <module>
exported = export(model, (x,y))
File "/root/testing/lib64/python3.9/site-packages/torch/export/__init__.py", line 270, in export
return _export(
File "/root/testing/lib64/python3.9/site-packages/torch/export/_trace.py", line 1017, in wrapper
raise e
File "/root/testing/lib64/python3.9/site-packages/torch/export/_trace.py", line 990, in wrapper
ep = fn(*args, **kwargs)
File "/root/testing/lib64/python3.9/site-packages/torch/export/exported_program.py", line 114, in wrapper
return fn(*args, **kwargs)
File "/root/testing/lib64/python3.9/site-packages/torch/export/_trace.py", line 1880, in _export
export_artifact = export_func( # type: ignore[operator]
File "/root/testing/lib64/python3.9/site-packages/torch/export/_trace.py", line 1224, in _strict_export
return _strict_export_lower_to_aten_ir(
File "/root/testing/lib64/python3.9/site-packages/torch/export/_trace.py", line 1252, in _strict_export_lower_to_aten_ir
gm_torch_level = _export_to_torch_ir(
File "/root/testing/lib64/python3.9/site-packages/torch/export/_trace.py", line 560, in _export_to_torch_ir
gm_torch_level, _ = torch._dynamo.export(
File "/root/testing/lib64/python3.9/site-packages/torch/_dynamo/eval_frame.py", line 1432, in inner
result_traced = opt_f(*args, **kwargs)
File "/root/testing/lib64/python3.9/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/root/testing/lib64/python3.9/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
File "/root/testing/lib64/python3.9/site-packages/torch/_dynamo/eval_frame.py", line 465, in _fn
return fn(*args, **kwargs)
File "/root/testing/lib64/python3.9/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/root/testing/lib64/python3.9/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
File "/root/testing/lib64/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 1269, in __call__
return self._torchdynamo_orig_callable(
File "/root/testing/lib64/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 526, in __call__
return _compile(
File "/root/testing/lib64/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 924, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/root/testing/lib64/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 666, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/root/testing/lib64/python3.9/site-packages/torch/_utils_internal.py", line 87, in wrapper_function
return function(*args, **kwargs)
File "/root/testing/lib64/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 699, in _compile_inner
out_code = transform_code_object(code, transform)
File "/root/testing/lib64/python3.9/site-packages/torch/_dynamo/bytecode_transformation.py", line 1322, in transform_code_object
transformations(instructions, code_options)
File "/root/testing/lib64/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 219, in _fn
return fn(*args, **kwargs)
File "/root/testing/lib64/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 634, in transform
tracer.run()
File "/root/testing/lib64/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 2796, in run
super().run()
File "/root/testing/lib64/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 983, in run
while self.step():
File "/root/testing/lib64/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 895, in step
self.dispatch_table[inst.opcode](self, inst)
File "/root/testing/lib64/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 582, in wrapper
return inner_fn(self, inst)
File "/root/testing/lib64/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1692, in CALL_FUNCTION_KW
self.call_function(fn, args, kwargs)
File "/root/testing/lib64/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 830, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "/root/testing/lib64/python3.9/site-packages/torch/_dynamo/variables/functions.py",
|
https://github.com/pytorch/xla/issues/9366
|
open
|
[
"enhancement",
"xla:gpu",
"Triton"
] | 2025-06-16T18:28:42Z
| 2025-06-23T19:55:53Z
| 4
|
annabellej
|
huggingface/lerobot
| 1,324
|
Where is control_robot.py script?
|
The README's Walkthrough section mentions a script called control_robot.py. However, I cannot see it in the main branch.
|
https://github.com/huggingface/lerobot/issues/1324
|
closed
|
[] | 2025-06-16T15:57:34Z
| 2025-06-18T11:06:11Z
| null |
AbdElRahmanFarhan
|
huggingface/agents-course
| 547
|
[QUESTION] Possible mistake in transformers size in terms of parameters
|
Hey,
Thanks for the great course!
I have a question on what looks to me like an inconsistency.
In the [unit1/what-are-llms](https://huggingface.co/learn/agents-course/unit1/what-are-llms) section, when explaining the 3 types of transformers, in the Typical Size, we can see:
Decoders:
Typical Size: Billions (in the US sense, i.e., 10^9) of parameters
Seq2Seq (Encoder–Decoder)
Typical Size: Millions of parameters
It looks strange to me that a Seq2Seq transformer, which contains a Decoder within it, is smaller in Typical Size than a plain Decoder.
I would put
Seq2Seq (Encoder–Decoder)
Typical Size: Billions (in the US sense, i.e., 10^9) of parameters
Please tell me if there is something I misunderstood!
|
https://github.com/huggingface/agents-course/issues/547
|
open
|
[
"question"
] | 2025-06-16T14:43:29Z
| 2025-06-16T14:43:29Z
| null |
jonoillar
|
huggingface/transformers.js
| 1,341
|
FireFox compatible models
|
### Question
I am fairly new to everything here and kind of just vibe code while I learn JS, but I use Zen browser and enjoy making it more like Arc over my summer. I was wondering if it was possible to expose the native Firefox AI and be able to prompt it, which I was able to do [here](https://github.com/Anoms12/Firefox-AI-Testing.uc.mjs). I discovered the models through some [documentation](https://github.com/mozilla-firefox/firefox/blob/901f6ff7b2ead5c88bd4d5e04aa5b30f2d2f1abb/toolkit/components/ml/docs/models.rst) Copilot brought me to in Firefox, and all of the models seem to be from you. However, the prompts I am trying to feed it seem to be too advanced for the current models I am using, Xenova/LaMini-Flan-T5-248M (I also tried out base, and models below it, but anything higher than 783M seemed to require access I did not have). I was wondering if you knew of/had a good model for this prompt. If not, I would love to be pointed in the right direction with any knowledge you do have.
```
Analyze the following numbered list of tab data (Title, URL, Description) and assign a concise category (1-2 words, Title Case) for EACH tab.
Some tabs might logically belong to groups already present based on common domains or topics identified by keywords.
Tab Categorization Strategy:
1. For well-known platforms (GitHub, YouTube, Reddit, etc.), use the platform name as the category.
2. For content sites, news sites, or blogs, PRIORITIZE THE SEMANTIC MEANING OF THE TITLE over the domain.
3. Look for meaningful patterns and topics across titles to create logical content groups.
4. Use the domain name only when it's more relevant than the title content or when the title is generic.
BE CONSISTENT: Use the EXACT SAME category name for tabs belonging to the same logical group.
Input Tab Data:
{TAB_DATA_LIST}
---
Instructions for Output:
1. Output ONLY the category names.
2. Provide EXACTLY ONE category name per line.
3. The number of lines in your output MUST EXACTLY MATCH the number of tabs in the Input Tab Data list above.
4. DO NOT include numbering, explanations, apologies, markdown formatting, or any surrounding text like "Output:" or backticks.
5. Just the list of categories, separated by newlines.
---
Output:
```
If it was not clear, this is for a tab-grouping script. The community currently has Ollama, Gemini, and Mistral versions, but we want to make it as easy as possible, so this seemed like the next logical step.
Thank you for anything you can provide in advance. I love the project.
|
https://github.com/huggingface/transformers.js/issues/1341
|
open
|
[
"question"
] | 2025-06-16T12:43:39Z
| 2025-06-16T12:47:44Z
| null |
12th-devs
|
huggingface/lerobot
| 1,319
|
How to debug or inspect the health of Feetech servos in so101 setup?
|
Hi, I'm working with the `so101` robot and running into issues with the Feetech servos.
I would like to ask:
1. Are there any recommended tools or procedures for debugging Feetech servos?
2. How can I check the health of a servo (e.g. temperature, load, internal error)?
Any help or pointers would be greatly appreciated. Thanks!
|
https://github.com/huggingface/lerobot/issues/1319
|
open
|
[
"question",
"robots"
] | 2025-06-16T08:58:32Z
| 2025-08-12T10:01:41Z
| null |
DIMARIA123
|
huggingface/lerobot
| 1,318
|
How to use my own dataset to train pi0 or smolVLA
|
I have a dataset that I collected and converted to Lerobot format. This dataset has not been uploaded to huggingface. I want to use this dataset to train `pi0` or `smolvla`. How should I set it up?
I have tried to use only `dataset.root`, but it prompts that `dataset.repo_id` needs to be entered. What should I do?
|
https://github.com/huggingface/lerobot/issues/1318
|
closed
|
[
"question",
"policies"
] | 2025-06-16T08:40:50Z
| 2025-10-17T11:51:54Z
| null |
xliu0105
|
huggingface/lerobot
| 1,316
|
[Question] SmolVLA LIBERO / MetaWorld evaluation
|
Hello, thank you for open sourcing this wonderful repository. I have read the SmolVLA paper impressively and tried to run some evaluations.

In Section 4.5 of the paper, under Simulation Evaluation, it seems that you fine-tuned the SmolVLA baseline on the Franka Emika Panda and the Sawyer arm to perform evaluation on the LIBERO and MetaWorld benchmarks, respectively.
Could you elaborate on the details of the fine-tuning process? (which parameters were trained/frozen, optimizer, gradient steps, etc..)
I am planning to reproduce the results.
Thank you.
|
https://github.com/huggingface/lerobot/issues/1316
|
closed
|
[
"question",
"policies",
"simulation"
] | 2025-06-16T06:28:50Z
| 2025-12-10T22:11:17Z
| null |
tykim0507
|
huggingface/agents-course
| 546
|
[QUESTION] Can i solve this final assignment with free versions?
|
I would like to solve the final assignment, but I failed with free tools. I tried to take inspiration from the leaderboard toppers, but they used paid tools, and I can't pay for that. Any free roadmap or ideas?
|
https://github.com/huggingface/agents-course/issues/546
|
open
|
[
"question"
] | 2025-06-16T06:13:37Z
| 2025-06-16T06:13:37Z
| null |
mehdinathani
|
huggingface/datasets
| 7,617
|
Unwanted column padding in nested lists of dicts
|
```python
from datasets import Dataset
dataset = Dataset.from_dict({
"messages": [
[
{"a": "...",},
{"b": "...",},
],
]
})
print(dataset[0])
```
What I get:
```
{'messages': [{'a': '...', 'b': None}, {'a': None, 'b': '...'}]}
```
What I want:
```
{'messages': [{'a': '...'}, {'b': '...'}]}
```
Is there an easy way to automatically remove these auto-filled null/none values?
If not, I probably need a recursive none exclusion function, don't I?
Datasets 3.6.0
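If it helps to show what I mean, this is roughly the recursive clean-up I would otherwise write (a sketch, using the `dataset` from the snippet above):
```python
# Sketch: recursively drop the None values introduced by schema unification.
def drop_none(obj):
    if isinstance(obj, dict):
        return {k: drop_none(v) for k, v in obj.items() if v is not None}
    if isinstance(obj, list):
        return [drop_none(v) for v in obj]
    return obj

print(drop_none(dataset[0]))
# {'messages': [{'a': '...'}, {'b': '...'}]}
```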
|
https://github.com/huggingface/datasets/issues/7617
|
closed
|
[] | 2025-06-15T22:06:17Z
| 2025-06-16T13:43:31Z
| 1
|
qgallouedec
|
pytorch/torchtitan
| 1,301
|
Slow checkpoint saving time (6 mins to save an 8B model checkpoint in sync mode)
|
It takes ~6 minutes to save a checkpoint in non-async mode. Is this expected?
### Sync mode
```
[rank0]:[titan] 2025-06-15 21:31:48,968 - root - INFO - TensorBoard logging enabled. Logs will be saved at ./outputs/tb/20250615-2131
[rank0]:[titan] 2025-06-15 21:31:48,969 - root - INFO - CUDA capacity: NVIDIA H100 80GB HBM3 with 79.10GiB memory
[rank0]:[titan] 2025-06-15 21:31:49,083 - root - INFO - Model llama3 8B size: 8,030,261,248 total parameters
[rank0]:[titan] 2025-06-15 21:31:49,084 - root - INFO - Applied full activation checkpointing to the model
[rank0]:[titan] 2025-06-15 21:31:49,164 - root - INFO - Applied FSDP to the model
[rank0]:[titan] 2025-06-15 21:31:49,505 - root - INFO - Peak FLOPS used for computing MFU: 9.890e+14
[rank0]:[titan] 2025-06-15 21:31:49,505 - root - INFO - CUDA memory usage for model: 3.95GiB(4.99%)
[rank0]:[titan] 2025-06-15 21:31:49,535 - root - INFO - Checkpointing active. Checkpoints will be loaded from and saved to ./outputs/c
heckpoint
[rank0]:[titan] 2025-06-15 21:31:49,535 - root - INFO - Trainer is initialized with local batch size 1, global batch size 64, gradient
accumulation steps 8, sequence length 8192, total steps 1000 (warmup 40).
[rank0]:[titan] 2025-06-15 21:31:49,535 - root - INFO - Loading the checkpoint from assets/models/dcp/llama3.1-8B.
[rank0]:[titan] 2025-06-15 21:32:02,935 - root - INFO - [GC] GC collection for checkpoint loading. 0.01 seconds.
[rank0]:[titan] 2025-06-15 21:32:02,935 - root - INFO - Finished loading the checkpoint in 13.40 seconds.
[rank0]:[titan] 2025-06-15 21:32:02,935 - root - INFO - Training starts at step 1.
[rank0]:[titan] 2025-06-15 21:32:15,816 - root - INFO - step: 1 loss: 2.4292 memory: 29.18GiB(36.90%) tps: 2,452 tflops: 141.98
mfu: 14.36%
[rank0]:[titan] 2025-06-15 21:32:15,816 - root - INFO - Saving the checkpoint (or staging if async is enabled).
[rank0]:[titan] 2025-06-15 21:38:31,430 - root - INFO - [GC] GC collection invoked by checkpointer. 0.04 seconds.
[rank0]:[titan] 2025-06-15 21:38:31,431 - root - INFO - Finished saving the checkpoint (or staging if async is enabled)in 375.61 secon
ds.
[rank0]:[titan] 2025-06-15 21:38:31,431 - root - INFO - Synchronizing and adjusting timeout for all ProcessGroups to 0:01:40
[rank0]:[titan] 2025-06-15 21:40:09,439 - root - INFO - step: 10 loss: 2.3602 memory: 36.65GiB(46.33%) tps: 1,245 tflops: 72.12
mfu: 7.29%
```
### Async mode
```
rank0]:[titan] 2025-06-15 21:44:35,889 - root - INFO - step: 1 loss: 2.4292 memory: 29.18GiB(36.90%) tps: 2,327 tflops: 134.74 mfu: 13.62%
[rank0]:[titan] 2025-06-15 21:44:35,890 - root - INFO - Saving the checkpoint (or staging if async is enabled).
[rank0]:[titan] 2025-06-15 21:44:35,898 - root - INFO - [GC] GC collection invoked by checkpointer. 0.01 seconds.
[rank0]:[titan] 2025-06-15 21:44:47,661 - root - INFO - [GC] GC collection invoked by checkpointer. 0.00 seconds.
[rank0]:[titan] 2025-06-15 21:44:47,672 - root - INFO - Finished saving the checkpoint (or staging if async is enabled)in 11.78 seconds.
[rank0]:[titan] 2025-06-15 21:44:47,672 - root - INFO - Synchronizing and adjusting timeout for all ProcessGroups to 0:01:40
[rank0]:/home/ubuntu/code/thirdparty/torchtitan/.venv/lib/python3.13/site-packages/torch/distributed/checkpoint/filesystem.py:111: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
[rank0]: if tensor.storage().size() != tensor.numel():
[rank0]:[titan] 2025-06-15 21:46:26,319 - root - INFO - step: 10 loss: 2.3601 memory: 36.64GiB(46.33%) tps: 5,341 tflops: 309.34 mfu: 31.28%
```
Reproduction: check out https://github.com/pytorch/torchtitan/pull/1300 and run
```
CONFIG_FILE="./torchtitan/models/llama3/train_configs/llama3_8b.toml" uv run ./run_train.sh \
--model.tokenizer_path assets/tokenizer/Meta-Llama-3.1-8B-tokenizer.model \
--training.max_seq_len 131072 \
--checkpoint.initial_load_path "assets/models/dcp/llama3.1-8B" \
--profiling.no_enable
```
|
https://github.com/pytorch/torchtitan/issues/1301
|
closed
|
[
"question",
"module: checkpoint"
] | 2025-06-15T21:42:47Z
| 2025-06-23T16:34:52Z
| null |
vwxyzjn
|
huggingface/transformers.js
| 1,340
|
Audio-to-Audio task
|
### Question
Hi there.
I would like to know how to run **Audio-to-Audio models** with _transformers.js_.
I haven't had any success finding material about this. If there is currently no way, is there a schedule for adding it?
Thanks!
|
https://github.com/huggingface/transformers.js/issues/1340
|
open
|
[
"question"
] | 2025-06-15T17:58:54Z
| 2025-10-13T04:45:39Z
| null |
LuSrodri
|
huggingface/open-r1
| 677
|
Error from E2B executor: cannot access local variable 'sandbox' where it is not associated with a value
|
Hi there,
I encountered a bug while following the sandbox setup instructions exactly as provided. Here’s what I’m seeing:

Has anyone experienced this before? Any advice on how to resolve it would be greatly appreciated!
Thank you. : )
|
https://github.com/huggingface/open-r1/issues/677
|
closed
|
[] | 2025-06-14T19:08:22Z
| 2025-07-22T06:55:38Z
| null |
juyongjiang
|
pytorch/examples
| 1,355
|
`language_translation` has a typo which makes the loaded tgt tensor invalid
|
In the `_yield_tokens` implementation in `src/data.py`, the third argument `src` is expected to be `True` or `False`:
```
# Turns an iterable into a generator
def _yield_tokens(iterable_data, tokenizer, src):
# Iterable data stores the samples as (src, tgt) so this will help us select just one language or the other
index = 0 if src else 1
for data in iterable_data:
yield tokenizer(data[index])
```
But the argument actually passed is a `str` (e.g. 'de' or 'en'), which is always truthy, so `_yield_tokens` constructs the `tgt` vocab from `src` tokens, and the loaded tgt tensor is wrong:
```
tgt_vocab = build_vocab_from_iterator(
_yield_tokens(train_iterator, tgt_tokenizer, tgt_lang), <-- tgt_lang is 'de' or 'en'
min_freq=1,
specials=list(special_symbols.keys()),
special_first=True
```
Example of a wrong tgt tensor, with too many `0` values (which mean `unknown`):
```
tensor([[ 2, 2, 2, 2, 2, 2, 2, 2],
[ 0, 0, 0, 0, 0, 0, 0, 0],
[ 0, 0, 0, 0, 0, 0, 0, 0],
[ 0, 0, 0, 7, 0, 7, 0, 0],
[ 0, 0, 0, 0, 3425, 0, 0, 0],
[ 0, 0, 7, 0, 0, 0, 0, 0],
[ 0, 0, 0, 0, 0, 0, 0, 0],
[ 0, 0, 0, 0, 0, 0, 28, 0],
[ 7, 5, 0, 0, 0, 15, 5, 0],
[ 0, 3, 0, 0, 0, 0, 3, 0],
[ 0, 1, 5, 0, 5, 0, 1, 0],
[ 0, 1, 3, 0, 3, 0, 1, 5315],
[ 0, 1, 1, 0, 1, 0, 1, 0],
[ 5, 1, 1, 0, 1, 0, 1, 0],
[ 3, 1, 1, 0, 1, 0, 1, 5],
[ 1, 1, 1, 5, 1, 0, 1, 3],
[ 1, 1, 1, 3, 1, 5, 1, 1],
[ 1, 1, 1, 1, 1, 3, 1, 1]], device='cuda:0')
```
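One possible shape of a fix (an untested sketch, just to show the intent):
```python
# Untested sketch: make the language explicit instead of relying on a boolean named `src`,
# so passing 'de'/'en' cannot silently select the wrong side.
def _yield_tokens(iterable_data, tokenizer, lang, src_lang):
    # Samples are stored as (src, tgt); pick the side matching `lang`.
    index = 0 if lang == src_lang else 1
    for data in iterable_data:
        yield tokenizer(data[index])
```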
|
https://github.com/pytorch/examples/issues/1355
|
closed
|
[] | 2025-06-14T12:13:35Z
| 2025-06-16T13:55:52Z
| 0
|
zwzmzd
|
pytorch/xla
| 9,356
|
Transition torch_xla::ShardingSec to torch_xla::OpSharding
|
This is primarily for the sake of documentation and consistency.
|
https://github.com/pytorch/xla/issues/9356
|
open
|
[
"distributed",
"documentation"
] | 2025-06-13T23:07:34Z
| 2025-06-13T23:07:34Z
| 0
|
pgmoka
|
pytorch/TensorRT
| 3,571
|
❓ [Question] Can I export a serialized engine from Torch-TensorRT targeting TensorRT 10.3.0.26?
|
## ❓ Question
Hello, I am attempting to export a serialized engine from Torch-TRT. I require TensorRT version 10.3.0.26, as I am planning to use this engine with a Nvidia DeepStream container that requires that TensorRT version. I attempted to use torch-tensorrt==2.5.0, but this version is listed as using builtin TensorRT version 10.3.0, and did not work with the container. How would you recommend generating this .engine for this specific TensorRT version? Unfortunately, I cannot just use trtexec as the outputs of the trtexec model are incorrect.
I am assuming probably building from source, but the documentation at https://docs.pytorch.org/TensorRT/getting_started/installation.html appears a bit outdated, as there is no longer any WORKSPACE file as referenced in that install guide. Please advise, thank you!
## Environment
Container to be used on: https://catalog.ngc.nvidia.com/orgs/nvidia/containers/deepstream (deepstream-7.1-multiarch)
- PyTorch Version (e.g., 1.0): Any. I have tested 2.4/2.5/2.5.1.
- CPU Architecture: x86
- OS (e.g., Linux): Ubuntu 22.04
- How you installed PyTorch (`conda`, `pip`, `libtorch`, source): pip
- Build command you used (if compiling from source): NA
- Are you using local sources or building from archives: NA
- Python version: 3.10
- CUDA version: 12.6
- GPU models and configuration: A sample model can be found here: https://drive.google.com/file/d/1NukSOFFQwVGhZh6VrasjMBiKnLL8CHM9/view?usp=sharing
- Any other relevant information:
## Additional context
<!-- Add any other context about the problem here. -->
|
https://github.com/pytorch/TensorRT/issues/3571
|
closed
|
[
"question"
] | 2025-06-13T16:44:40Z
| 2025-06-16T20:06:15Z
| null |
geiche735
|
pytorch/torchtitan
| 1,291
|
Using the official HuggingFace script to convert DCP weights to HF format, the outputs are not human-readable
|
DCP -> torch (in PyTorch, see https://github.com/pytorch/torchtitan/blob/main/docs/checkpoint.md)
torch -> HF (from [HF](https://github.com/huggingface/transformers/blob/main/src/transformers/models/llama/convert_llama_weights_to_hf.py), although missing params.json if saved from DCP)

Does anyone have a working convert script or know how to fix this issue?
|
https://github.com/pytorch/torchtitan/issues/1291
|
closed
|
[
"module: checkpoint"
] | 2025-06-13T03:11:23Z
| 2025-07-02T07:18:01Z
| 7
|
guang11644331
|
huggingface/agents-course
| 536
|
[QUESTION] Llama-3.3-70B-Instruct model request denied
|
My request was denied for access to Llama-3.3-70B-Instruct model. However, it was accepted for the Llama 4 models. Is it possible that meta is limiting access after the release of Llama 4 in April?
Could the course be updated to reflect this change?
|
https://github.com/huggingface/agents-course/issues/536
|
open
|
[
"question"
] | 2025-06-12T00:29:48Z
| 2025-06-12T00:29:48Z
| null |
BookDisorder
|
pytorch/torchtitan
| 1,283
|
KV Replication for context parallel (Ring attention)
|
Hi,
For the llama3-8b model (which has GQA, with num_kv_heads=8, num_heads=32), I see the KV replication being done inside the Attention module in model.py
Will this lead to additional communication volume for ring attention (with passKV) wherein we'll be circulating 32 heads instead of 8?
Afaik flash attention kernels support GQA internally (ie, it accepts QKV with num_kv_heads < num_q_heads), so can we omit the KV replication in the attention module?
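For context, here is a minimal sketch (assuming PyTorch >= 2.5, where `scaled_dot_product_attention` accepts an `enable_gqa` flag) of attention consuming GQA-shaped KV directly, i.e. without first replicating the 8 KV heads to 32:
```python
import torch
import torch.nn.functional as F

bs, seq, head_dim = 2, 16, 128
q = torch.randn(bs, 32, seq, head_dim)  # num_q_heads = 32
k = torch.randn(bs, 8, seq, head_dim)   # num_kv_heads = 8
v = torch.randn(bs, 8, seq, head_dim)

# SDPA broadcasts the KV heads internally when enable_gqa=True
out = F.scaled_dot_product_attention(q, k, v, is_causal=True, enable_gqa=True)
print(out.shape)  # torch.Size([2, 32, 16, 128])
```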
Thanks!
|
https://github.com/pytorch/torchtitan/issues/1283
|
open
|
[
"question",
"module: context parallel"
] | 2025-06-11T22:18:04Z
| 2025-06-12T16:08:59Z
| null |
rghadia
|
huggingface/transformers.js
| 1,339
|
Model is cached, but still reloads from network?
|
### Question
I have this code in a React project :
```
import { env, pipeline } from "@xenova/transformers";
const model = await pipeline("translation", "Xenova/opus-mt-de-en");
let transText = await model("hallo, ich bin hier");
```
When I inspect the browser cache, I see the relevant files in "cache storage" (xenova-opus-mt-de-en...).
But when I reload, the network tab shows I am re-downloading it each time from cdn.jsdeliver.net.
How can I get it to use the cached version instead of making a network request?
|
https://github.com/huggingface/transformers.js/issues/1339
|
closed
|
[
"question"
] | 2025-06-11T16:19:26Z
| 2025-06-27T06:06:25Z
| null |
patrickinminneapolis
|
huggingface/peft
| 2,583
|
Lora transfer learning
|
Hello, I am training a LoRA model with the Flux Fill pipeline using diffusers+peft+accelerate. I already have a general-purpose LoRA model for my application, trained for 5k steps on a large dataset. Now I want to do transfer learning and fine-tune on a very small dataset, but I want to start from the previous LoRA model instead of training from scratch. How can I do it? My LoRA config is as follows. Currently I am using the `gaussian` method to initialize the LoRA model. Is there any way to start from the pretrained LoRA model without random initialization? Thanks in advance.
```
lora_config:
r: 256
lora_alpha: 256
init_lora_weights: "gaussian"
target_modules: "(.*x_embedder|.*(?<!single_)transformer_blocks\\.[0-9]+\\.norm1\\.linear|.*(?<!single_)transformer_blocks\\.[0-9]+\\.attn\\.to_k|.*(?<!single_)transformer_blocks\\.[0-9]+\\.attn\\.to_q|.*(?<!single_)transformer_blocks\\.[0-9]+\\.attn\\.to_v|.*(?<!single_)transformer_blocks\\.[0-9]+\\.attn\\.to_out\\.0|.*(?<!single_)transformer_blocks\\.[0-9]+\\.ff\\.net\\.2|.*single_transformer_blocks\\.[0-9]+\\.norm\\.linear|.*single_transformer_blocks\\.[0-9]+\\.proj_mlp|.*single_transformer_blocks\\.[0-9]+\\.proj_out|.*single_transformer_blocks\\.[0-9]+\\.attn.to_k|.*single_transformer_blocks\\.[0-9]+\\.attn.to_q|.*single_transformer_blocks\\.[0-9]+\\.attn.to_v|.*single_transformer_blocks\\.[0-9]+\\.attn.to_out)"
```
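For what it's worth, a minimal sketch of the direction I'm considering (names and paths are illustrative, and I'm not sure how this slots into the diffusers flux-fill training script): load the existing adapter with PEFT and keep it trainable, instead of creating a fresh randomly-initialized one.
```python
from peft import PeftModel

# base_model: the same base network the earlier general-purpose LoRA was trained on
# "path/to/general_purpose_lora": folder with the earlier adapter_config.json and weights
model = PeftModel.from_pretrained(base_model, "path/to/general_purpose_lora", is_trainable=True)
```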
|
https://github.com/huggingface/peft/issues/2583
|
closed
|
[] | 2025-06-11T12:00:25Z
| 2025-07-20T15:04:05Z
| 4
|
hardikdava
|
huggingface/transformers
| 38,750
|
Is it a good choice to early error when `output_attentions=True` and attn implementation not equal to `eager`
|
### System Info
Before this PR [38288](https://github.com/huggingface/transformers/pull/38288), the program would run smoothly even when we set `output_attentions=True` and the attn implementation is not `eager`, as it would fall back to eager mode. After this PR, it throws an error directly: [L342](https://github.com/huggingface/transformers/blob/main/src/transformers/configuration_utils.py#L342). I think it would be better to just raise a warning and fall back to `eager` attn. Is it possible to revert this, or to make a small direct change on top of this PR?
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
N/A
### Expected behavior
We want to make sure program can run without crash even we set `output_attentions=True` and attn implementation not equal to `eager`
|
https://github.com/huggingface/transformers/issues/38750
|
closed
|
[
"bug"
] | 2025-06-11T11:05:48Z
| 2025-06-25T08:00:06Z
| 2
|
kaixuanliu
|
huggingface/lerobot
| 1,262
|
use smolVLA, How to know the current task is completed
|
I use SmolVLA for a wiping task, but it keeps doing the task again and again. How can I tell when the task is completed? Thank you.
|
https://github.com/huggingface/lerobot/issues/1262
|
open
|
[
"question",
"policies"
] | 2025-06-11T08:48:03Z
| 2025-08-12T10:04:14Z
| null |
haoyankai
|
huggingface/transformers.js
| 1,338
|
Question about supporting Float16Array
|
### Question
I am trying transformers.js with WebGPU. The performance is great, but I found that transformers.js returns a Float32Array even when the model is quantized to `fp16`:
```javascript
const extractor = await pipeline(
"feature-extraction",
"bge-small-zh-v1.5",
{
device: "webgpu",
dtype: "fp16",
local_files_only: true,
},
);
// ...
const embeddings = await extractor(texts, {pooling: "mean", normalize: true});
console.log(embeddings.data);
// -> Float32Array(5120000) [...]
```
Since the model itself has only 16-bit precision, returning a Float32Array (instead of [Float16Array](https://caniuse.com/mdn-javascript_builtins_float16array), which is supported in the latest browsers) seems like a waste of performance. Is this understanding correct, and are there plans to support Float16Array for better performance? Thanks!
|
https://github.com/huggingface/transformers.js/issues/1338
|
open
|
[
"question"
] | 2025-06-11T07:29:19Z
| 2025-07-03T05:50:56Z
| null |
xmcp
|
huggingface/transformers
| 38,745
|
[Bug][InformerForPredict] The shape will cause a problem
|
### System Info
When I set `InformerConfig.input_size = 1`, I found a bug, but I don't know how to fix it.
- Function Name : `create_network_inputs`
```
time_feat = (
torch.cat(
(
past_time_features[:, self._past_length - self.config.context_length :, ...],
future_time_features,
),
dim=1,
)
if future_values is not None
else past_time_features[:, self._past_length - self.config.context_length :, ...]
)
print(self._past_length)
# target
if past_observed_mask is None:
past_observed_mask = torch.ones_like(past_values)
context = past_values[:, -self.config.context_length :]
observed_context = past_observed_mask[:, -self.config.context_length :]
_, loc, scale = self.scaler(context, observed_context)
inputs = (
(torch.cat((past_values, future_values), dim=1) - loc) / scale
if future_values is not None
else (past_values - loc) / scale
)
print(loc.shape, scale.shape, inputs.shape)
# static features
log_abs_loc = loc.abs().log1p() if self.config.input_size == 1 else loc.squeeze(1).abs().log1p()
log_scale = scale.log() if self.config.input_size == 1 else scale.squeeze(1).log()
print(f"log_abs_loc: {log_abs_loc.shape}, {log_scale.shape}")
print(time_feat.shape, self.config.input_size)
static_feat = torch.cat((log_abs_loc, log_scale), dim=1)
print(time_feat.shape, static_feat.shape)
if static_real_features is not None:
static_feat = torch.cat((static_real_features, static_feat), dim=1)
if static_categorical_features is not None:
embedded_cat = self.embedder(static_categorical_features)
static_feat = torch.cat((embedded_cat, static_feat), dim=1)
print(time_feat.shape, static_feat.shape)
expanded_static_feat = static_feat.unsqueeze(1).expand(-1, time_feat.shape[1], -1)
# all features
features = torch.cat((expanded_static_feat, time_feat), dim=-1)
# lagged features
subsequences_length = (
self.config.context_length + self.config.prediction_length
if future_values is not None
else self.config.context_length
)
lagged_sequence = self.get_lagged_subsequences(sequence=inputs, subsequences_length=subsequences_length)
lags_shape = lagged_sequence.shape
reshaped_lagged_sequence = lagged_sequence.reshape(lags_shape[0], lags_shape[1], -1)
if reshaped_lagged_sequence.shape[1] != time_feat.shape[1]:
raise ValueError(
f"input length {reshaped_lagged_sequence.shape[1]} and time feature lengths {time_feat.shape[1]} does not match"
)
# transformer inputs
transformer_inputs = torch.cat((reshaped_lagged_sequence, features), dim=-1)
return transformer_inputs, loc, scale, static_feat
```
As shown above, I added some `print` statements in the library to inspect the shapes; the error is:
```
Traceback (most recent call last):
File "/home/wjt/luck/FinalWork/alert_models/informer_based_model_3_cpu.py", line 820, in <module>
pipline.train_model()
File "/home/wjt/luck/FinalWork/alert_models/informer_based_model_3_cpu.py", line 466, in train_model
outputs = model(
File "/home/wjt/.conda/envs/luckluck/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/wjt/.conda/envs/luckluck/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
File "/home/wjt/.conda/envs/luckluck/lib/python3.9/site-packages/transformers/models/informer/modeling_informer.py", line 1844, in forward
outputs = self.model(
File "/home/wjt/.conda/envs/luckluck/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/wjt/.conda/envs/luckluck/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
File "/home/wjt/.conda/envs/luckluck/lib/python3.9/site-packages/transformers/models/informer/modeling_informer.py", line 1568, in forward
transformer_inputs, loc, scale, static_feat = self.create_network_inputs(
File "/home/wjt/.conda/envs/luckluck/lib/python3.9/site-packages/transformers/models/informer/modeling_informer.py", line 1386, in create_network_inputs
expanded_static_feat = static_feat.unsqueeze(1).expand(-1, time_feat.shape[1], -1)
RuntimeError: expand(torch.cuda.FloatTensor{[32, 1, 2, 1]}, size=[-1, 27, -1]): the number of sizes provided (3) must be greater or equal to the number of dimensions in the tensor (4)
```
- First
```
log_abs_loc = loc.abs().log1p() if self.config.input
|
https://github.com/huggingface/transformers/issues/38745
|
closed
|
[
"bug"
] | 2025-06-11T07:22:06Z
| 2025-07-20T11:41:45Z
| 11
|
2004learner
|
huggingface/transformers
| 38,740
|
[DOCS] Add `pruna` as optimization framework
|
### Feature request
Have a section on Pruna AI within the documentation. We did [a similar PR for diffusers](https://github.com/huggingface/diffusers/pull/11688) and thought it would be nice to show how to optimize transformers models too.
### Motivation
Have a section on Pruna AI within the documentation to show how to optimize LLMs for inference.
### Your contribution
We could do everything for the PR.
|
https://github.com/huggingface/transformers/issues/38740
|
open
|
[
"Feature request"
] | 2025-06-11T04:52:33Z
| 2025-07-16T08:56:52Z
| 8
|
davidberenstein1957
|
huggingface/sentence-transformers
| 3,390
|
How to create a customized model architecture that fits sentence-transformer's training framework?
|
I'd like to train a two-tower model that takes categorical features and float features in one tower, while the other tower just encodes a document using an off-the-shelf embedding model. The outputs from both towers are then fed into a sentence-transformers loss function. All the training configuration should reuse sentence-transformers' setup (loss function implementation, TrainingArguments, etc.) as much as possible.
Is this even feasible? I skimmed through the documentation and found this page (https://www.sbert.net/docs/sentence_transformer/usage/custom_models.html#structure-of-sentence-transformer-models), but the example there seems to create a new module only as part of a purely sequential model, with each module connected to the next.
Much appreciated!
|
https://github.com/huggingface/sentence-transformers/issues/3390
|
open
|
[] | 2025-06-11T03:07:42Z
| 2025-06-12T05:05:54Z
| null |
HuangLED
|
pytorch/examples
| 1,353
|
tensor_parallel_example.py and sequence_parallel_example.py
|
The primary difference between the two files is as follows. In the TP case, I only see 1 allreduce per iteration - is that expected? It seems to be the same as DDP! In the SP case, I see 1 allgather and 1 reduce-scatter per iteration.
```
# Custom parallelization plan for the model
sp_model = parallelize_module(
module=model,
device_mesh=device_mesh,
parallelize_plan={
"in_proj": ColwiseParallel(input_layouts=Shard(0)),
"out_proj": RowwiseParallel(output_layouts=Shard(0)),
},
)
# Custom parallelization plan for the model
tp_model = parallelize_module(
module=tp_model,
device_mesh=device_mesh,
parallelize_plan={
"in_proj": ColwiseParallel(),
"out_proj": RowwiseParallel(),
},
)
```
CommDebugMode also appears to show 1 allreduce in fwd and no allreduce in bwd.
```
FORWARD PASS [12/1864]
*c10d_functional.all_reduce: 1
BACKWARD PASS
ToyModel
*module type: class '__main__.ToyModel'
FORWARD PASS
*c10d_functional.all_reduce: 1
ToyModel.in_proj
*module type: class 'torch.nn.modules.linear.Linear'
*Parameter List
*weight: (Shard(dim=0),)
*bias: (Shard(dim=0),)
FORWARD PASS
**aten.addmm.default
shape: [torch.Size([32]), torch.Size([4, 10]), torch.Size([10, 32])]
sharding: [(Shard(dim=0),), (Replicate(),), (Shard(dim=1),)]
device mesh: DeviceMesh('cuda', [0, 1, 2, 3])
BACKWARD PASS
**aten.mm.default
shape: [torch.Size([32, 4]), torch.Size([4, 10])]
sharding: [(Shard(dim=0),), (Replicate(),)]
device mesh: DeviceMesh('cuda', [0, 1, 2, 3])
**aten.sum.dim_IntList
shape: [torch.Size([4, 32])]
sharding: [(Shard(dim=1),)]
device mesh: DeviceMesh('cuda', [0, 1, 2, 3])
**aten.add_.Tensor
shape: [torch.Size([32]), torch.Size([32])]
sharding: [(Shard(dim=0),), (Shard(dim=0),)]
device mesh: DeviceMesh('cuda', [0, 1, 2, 3])
**aten.add_.Tensor
shape: [torch.Size([32, 10]), torch.Size([32, 10])]
sharding: [(Shard(dim=0),), (Shard(dim=0),)]
device mesh: DeviceMesh('cuda', [0, 1, 2, 3])
ToyModel.relu
*module type: class 'torch.nn.modules.activation.ReLU'
FORWARD PASS
BACKWARD PASS
ToyModel.out_proj
*module type: class 'torch.nn.modules.linear.Linear'
*Parameter List
*weight: (Shard(dim=1),)
*bias: (Replicate(),)
FORWARD PASS
*c10d_functional.all_reduce: 1
**aten.addmm.default
shape: [torch.Size([5]), torch.Size([4, 32]), torch.Size([32, 5])]
sharding: [(Replicate(),), (Shard(dim=1),), (Shard(dim=0),)]
device mesh: DeviceMesh('cuda', [0, 1, 2, 3])
BACKWARD PASS
**aten.mm.default
shape: [torch.Size([4, 5]), torch.Size([5, 32])]
sharding: [(Replicate(),), (Shard(dim=1),)]
device mesh: DeviceMesh('cuda', [0, 1, 2, 3])
**aten.mm.default
shape: [torch.Size([5, 4]), torch.Size([4, 32])]
sharding: [(Replicate(),), (Shard(dim=1),)]
device mesh: DeviceMesh('cuda', [0, 1, 2, 3])
**aten.sum.dim_IntList
shape: [torch.Size([4, 5])]
sharding: [(Replicate(),)]
device mesh: DeviceMesh('cuda', [0, 1, 2, 3])
**aten.add_.Tensor
shape: [torch.Size([5]), torch.Size([5])]
sharding: [(Replicate(),), (Replicate(),)]
device mesh: DeviceMesh('cuda', [0, 1, 2, 3])
**aten.add_.Tensor
shape: [torch.Size([5, 32]), torch.Size([5, 32])]
sharding: [(Shard(dim=1),), (Shard(dim=1),)]
device mesh: DeviceMesh('cuda', [0, 1, 2, 3])
```
|
https://github.com/pytorch/examples/issues/1353
|
open
|
[] | 2025-06-11T01:10:08Z
| 2025-10-30T09:12:25Z
| 2
|
githubsgi
|
huggingface/lerobot
| 1,258
|
Leader Servo Numbering different from script to documentation
|
First thank you for sharing this amazing work!
I am initializing the servos for the arm leader and I noticed that the numbering for the Wrist Roll and Wrist Pitch are different from the documentation when I ran the script:

wrist_roll is set to 5 in the script but set to 4 in the documentation
wrist_flex is set to 4 in the script but set to 5 (assuming it is Wrist Pitch) in the documentation
I guess it is nothing to worry about?
|
https://github.com/huggingface/lerobot/issues/1258
|
open
|
[
"documentation",
"question"
] | 2025-06-10T21:03:03Z
| 2025-08-12T10:04:29Z
| null |
FaboNo
|
huggingface/transformers
| 38,733
|
GRPO per_device_eval_batch_size can't be set as 1, when there is only 1 GPU
|
`eval batch size must be evenly divisible by the number of generations per prompt.` When I only have one GPU, I cannot set `per_device_eval_batch_size=1` because there is no reasonable G to choose. Is it possible to automatically compute a value, similar to how the number of gradient accumulation steps is handled, to support this?
|
https://github.com/huggingface/transformers/issues/38733
|
closed
|
[] | 2025-06-10T14:58:11Z
| 2025-06-11T09:45:32Z
| 0
|
CasanovaLLL
|
huggingface/lerobot
| 1,254
|
[Feature Proposal] Planning a new user friendly simulation environment for new task and data collection
|
Hello and bonjour! First and foremost, I really wanted to thanks the team and community for making this wonderful repo. It really helps and guide beginner in this field. And I also wanted to contribute for the community.
Reading the issues here, I found a lot of people are trying to run without a physical robot. But with the current Aloha and Xarm simulation environments it is hard to configure and train a new task. So I was thinking of making a new env where we could do that.
Here is the main new feature:
- New sim env we can use as extra like Xarm, Aloha and Pusht in a new repo.
- Make a simple, game like GUI which enable controlling the manipulator with only keyboard and mouse. (Thinking of making a mini robot on html that can be controlled with mouse, z axis and gripper with keyboard)
- Make it compatible to recent official MuJoCo release for further [update](https://playground.mujoco.org/) and [extension](https://github.com/google-deepmind/mujoco_warp). (Planning to use [MJX](https://mujoco.readthedocs.io/en/stable/mjx.html)(RL compatible) model)
- Realtime inference using mujoco view.
I'm a beginner in this field, so it might be a hard task for me. But I thought this project might help quite a few people, and it would also be really fun to do. So I'll try my best.
What are your thoughts on this proposal? (Sorry if similar features already exist.)
If it is okay, I'll start to dig in.
|
https://github.com/huggingface/lerobot/issues/1254
|
open
|
[
"question",
"simulation"
] | 2025-06-10T12:36:13Z
| 2025-08-12T10:04:42Z
| null |
Bigenlight
|
huggingface/lerobot
| 1,252
|
Failed to sync read 'Present_Position' on ids=[2,3,4,6]after 1 tries. [TxRxResult] There is no status packet
|
My arm is a Koch arm. When I set the motor ids and baudrates, it reports the error:
Failed to sync read 'Present_Position' on ids=[2,3,4,6] after 1 tries. [TxRxResult] There is no status packet
|
https://github.com/huggingface/lerobot/issues/1252
|
open
|
[
"question",
"robots"
] | 2025-06-10T10:21:05Z
| 2025-09-01T02:24:25Z
| null |
huazai665
|
pytorch/torchtitan
| 1,278
|
[Qes] Is `torch.float32` as the default dtype when training?
|
I ran the example config and found that the parameter dtype of the model is `torch.float32`. I don't understand why this is used as the default dtype rather than half precision. Also, the only way I found to change it to half precision is to enable FSDP and set the mixed-precision dtype to half.
|
https://github.com/pytorch/torchtitan/issues/1278
|
closed
|
[] | 2025-06-10T09:30:35Z
| 2025-06-12T06:13:43Z
| 2
|
foreverlms
|
huggingface/lerobot
| 1,251
|
where is async inference
|
Hi, thanks for SmolVLA.
I have a question: **where is the async inference?**
The eval.py script does not seem intended for SmolVLA inference.
Hoping for an early reply; thanks in advance.
|
https://github.com/huggingface/lerobot/issues/1251
|
closed
|
[] | 2025-06-10T07:44:38Z
| 2025-06-30T11:35:25Z
| null |
JuilieZ
|
huggingface/transformers.js
| 1,336
|
node.js WebGPU compatibility and WASM performance in web enviornment
|
### Question
Hello!
I've been running some performance benchmarks on whisper models and noticed that the web environment (running in react renderer in electron, separate worker with WASM) produced slower transcription results than the python counterpart (e.g. 1400ms vs 400ms per batch) - both utilizing the same number of threads and data types.
node.js environment running with WASM was almost on par with python, but unfortunately it won't let me pick webgpu as device - only cpu and dml are supported.
The onnxruntime-node package does mention webgpu being supported so I was wondering if it will be available for transformers running in node.js environment.
And I'm also wondering if the performance drop using WASM in web environment is expected or if I'm doing something wrong.
|
https://github.com/huggingface/transformers.js/issues/1336
|
open
|
[
"question"
] | 2025-06-10T06:05:36Z
| 2025-06-11T06:53:35Z
| null |
devnarekm
|
huggingface/transformers
| 38,709
|
`get_video_features` in XCLIPModel always returns `pooled_output`
|
### System Info
https://github.com/huggingface/transformers/blob/f4fc42216cd56ab6b68270bf80d811614d8d59e4/src/transformers/models/x_clip/modeling_x_clip.py#L1376
Hi
The `get_video_features` function is hardcoded to always return the `pooled_output`. But sometimes, it might be beneficial to get the `last_hidden_state` instead. Can we fix this behavior?
Thanks
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```import av
import torch
import numpy as np
from transformers import AutoProcessor, AutoModel
from huggingface_hub import hf_hub_download
np.random.seed(0)
def read_video_pyav(container, indices):
'''
Decode the video with PyAV decoder.
Args:
container (`av.container.input.InputContainer`): PyAV container.
indices (`List[int]`): List of frame indices to decode.
Returns:
result (np.ndarray): np array of decoded frames of shape (num_frames, height, width, 3).
'''
frames = []
container.seek(0)
start_index = indices[0]
end_index = indices[-1]
for i, frame in enumerate(container.decode(video=0)):
if i > end_index:
break
if i >= start_index and i in indices:
frames.append(frame)
return np.stack([x.to_ndarray(format="rgb24") for x in frames])
def sample_frame_indices(clip_len, frame_sample_rate, seg_len):
'''
Sample a given number of frame indices from the video.
Args:
clip_len (`int`): Total number of frames to sample.
frame_sample_rate (`int`): Sample every n-th frame.
seg_len (`int`): Maximum allowed index of sample's last frame.
Returns:
indices (`List[int]`): List of sampled frame indices
'''
converted_len = int(clip_len * frame_sample_rate)
end_idx = np.random.randint(converted_len, seg_len)
start_idx = end_idx - converted_len
indices = np.linspace(start_idx, end_idx, num=clip_len)
indices = np.clip(indices, start_idx, end_idx - 1).astype(np.int64)
return indices
# video clip consists of 300 frames (10 seconds at 30 FPS)
file_path = hf_hub_download(
repo_id="nielsr/video-demo", filename="eating_spaghetti.mp4", repo_type="dataset"
)
container = av.open(file_path)
# sample 8 frames
indices = sample_frame_indices(clip_len=8, frame_sample_rate=1, seg_len=container.streams.video[0].frames)
video = read_video_pyav(container, indices)
processor = AutoProcessor.from_pretrained("microsoft/xclip-base-patch32")
model = AutoModel.from_pretrained("microsoft/xclip-base-patch32")
inputs = processor(
videos=list(video),
return_tensors="pt",
padding=True,
)
# forward pass
with torch.no_grad():
outputs = model.get_video_features(**inputs)
print(outputs.shape)
### Expected behavior
The `get_video_features` function should have the option to output the `last_hidden_state` as well.
|
https://github.com/huggingface/transformers/issues/38709
|
closed
|
[
"bug"
] | 2025-06-10T00:51:37Z
| 2025-07-18T08:02:50Z
| 4
|
Vishu26
|
huggingface/lerobot
| 1,242
|
SmolVLA Gym Simulation - Release?
|
Hello,
I've trained smolvla_base for 200K steps. I'm trying to run inference and visualize it like we do for aloha or pusht. Could anyone guide me on this?
I don't have a robot arm, so a Gym simulation is something I'm looking for; when will it be released?
|
https://github.com/huggingface/lerobot/issues/1242
|
closed
|
[
"question",
"policies",
"visualization"
] | 2025-06-09T13:05:38Z
| 2025-10-17T11:00:57Z
| null |
Jaykumaran
|
huggingface/smollm
| 78
|
How to continuously pretrain a VLM base model
|
As the title says.
How can I pretrain the VLM base model?
|
https://github.com/huggingface/smollm/issues/78
|
open
|
[
"Image",
"Video"
] | 2025-06-09T07:04:57Z
| 2025-07-29T12:50:50Z
| null |
allenliuvip
|
huggingface/text-generation-inference
| 3,259
|
Enable passing arguments to chat templates
|
### Feature request
I would like to enable passing parameters to a chat template when using the messages API. Something like:
```python
qwen3_model = HuggingFaceModel(...)
predictor = qwen3_model.deploy(...)
predictor.predict({
"messages": [
{"role": "system", "content": "You are a helpful assistant." },
{"role": "user", "content": "What is deep learning?"}
]
"template_args": { "enable_thinking": False }
})
```
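In the meantime, a possible client-side workaround sketch (assuming `transformers` is available on the client; the model id here is illustrative) is to render the chat template locally, since `apply_chat_template` forwards extra kwargs such as `enable_thinking` to the Jinja template, and then send the rendered prompt to the plain generate endpoint:
```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("Qwen/Qwen3-8B")  # illustrative model id
prompt = tok.apply_chat_template(
    [{"role": "user", "content": "What is deep learning?"}],
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=False,  # template kwarg, not a generation kwarg
)
# `prompt` can now be sent to the non-chat generate route of the deployed endpoint
```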
### Motivation
There are models with various custom arguments that can be passed to chat templates. For example, Qwen3 comes with `enable_thinking` parameter than can be either True or False, and CohereLabs c4ai-command-r-plus RAG chat template has a `citation_mode` flag that can be `accurate` or `fast`.
### Your contribution
Unfortunately, no. Do not know Rust beyond some basics.
|
https://github.com/huggingface/text-generation-inference/issues/3259
|
open
|
[] | 2025-06-09T06:04:27Z
| 2025-06-09T07:53:17Z
| 2
|
alexshtf
|
huggingface/datasets
| 7,600
|
`push_to_hub` is not concurrency safe (dataset schema corruption)
|
### Describe the bug
Concurrent processes modifying and pushing a dataset can overwrite each others' dataset card, leaving the dataset unusable.
Consider this scenario:
- we have an Arrow dataset
- there are `N` configs of the dataset
- there are `N` independent processes operating on each of the individual configs (e.g. adding a column, `new_col`)
- each process calls `push_to_hub` on their particular config when they're done processing
- all calls to `push_to_hub` succeed
- the `README.md` now has some configs with `new_col` added and some with `new_col` missing
Any attempt to load a config (using `load_dataset`) where `new_col` is missing will fail because of a schema mismatch between `README.md` and the Arrow files. Fixing the dataset requires updating `README.md` by hand with the correct schema for the affected config. In effect, `push_to_hub` is doing a `git push --force` (I found this behavior quite surprising).
We have hit this issue every time we run processing jobs over our datasets and have to fix corrupted schemas by hand.
Reading through the code, it seems that specifying a [`parent_commit`](https://github.com/huggingface/huggingface_hub/blob/v0.32.4/src/huggingface_hub/hf_api.py#L4587) hash around here https://github.com/huggingface/datasets/blob/main/src/datasets/arrow_dataset.py#L5794 would get us to a normal, non-forced git push, and avoid schema corruption. I'm not familiar enough with the code to know how to determine the commit hash from which the in-memory dataset card was loaded.
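For illustration, a minimal sketch of the non-forced push we have in mind, using `huggingface_hub` directly (the repo id and file paths are placeholders; the open question is where the equivalent `parent_commit` would be plumbed through inside `datasets`):
```python
from huggingface_hub import CommitOperationAdd, HfApi

api = HfApi()
repo_id = "my-org/my-dataset"  # placeholder

# Record the revision our in-memory dataset card was based on ...
base_sha = api.dataset_info(repo_id).sha

api.create_commit(
    repo_id=repo_id,
    repo_type="dataset",
    operations=[CommitOperationAdd(path_in_repo="README.md", path_or_fileobj="README.md")],
    commit_message="Add new_col to one config",
    parent_commit=base_sha,  # ... so a concurrent push fails fast instead of being overwritten
)
```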
### Steps to reproduce the bug
See above.
### Expected behavior
Concurrent edits to disjoint configs of a dataset should never corrupt the dataset schema.
### Environment info
- `datasets` version: 2.20.0
- Platform: Linux-5.15.0-118-generic-x86_64-with-glibc2.35
- Python version: 3.10.14
- `huggingface_hub` version: 0.30.2
- PyArrow version: 19.0.1
- Pandas version: 2.2.2
- `fsspec` version: 2023.9.0
|
https://github.com/huggingface/datasets/issues/7600
|
closed
|
[] | 2025-06-07T17:28:56Z
| 2025-07-31T10:00:50Z
| 4
|
sharvil
|
huggingface/lerobot
| 1,226
|
404 Not Found
|
[lerobot](https://github.com/huggingface/lerobot/tree/main)/[examples](https://github.com/huggingface/lerobot/tree/main/examples)
/10_use_so100.md/
This is supposed to be a tutorial but cannot be opened???
404 Not Found!!!
|
https://github.com/huggingface/lerobot/issues/1226
|
closed
|
[
"documentation",
"question"
] | 2025-06-07T09:02:37Z
| 2025-06-08T21:26:07Z
| null |
luk-e158
|
huggingface/transformers
| 38,656
|
Potential Memory Leak or Caching in Fast Image Processor
|
### System Info
Hi team,
Thank you for your great work on `transformers`!
While using the `AutoProcessor` with `use_fast=True`, I noticed that there seems to be a memory leak or possibly some form of persistent caching when processing images. Even after deleting the processor and clearing the CUDA cache, approximately 600MB of GPU memory remains occupied.
Here is a minimal reproducible example:
```python
from transformers import AutoProcessor
from PIL import Image
import time
import torch
import requests
from io import BytesIO
processor = AutoProcessor.from_pretrained(
"Qwen/Qwen2.5-VL-7B-Instruct",
use_fast=True,
trust_remote_code=False,
revision=None,
)
url = "https://github.com/sgl-project/sglang/blob/main/test/lang/example_image.png?raw=true"
response = requests.get(url)
images = [Image.open(BytesIO(response.content)).convert("RGB")]
result = processor(
text=[
"<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
"<|im_start|>user\nWhat’s in this image?<|vision_start|><|image_pad|><|vision_end|><|im_end|>\n"
"<|im_start|>assistant\n"
],
padding=True,
return_tensors="pt",
images=images,
device="cuda"
)
del result
del processor
torch.cuda.empty_cache()
print("You can now use nvidia-smi to observe GPU memory usage, which is around 600MB.")
while True:
time.sleep(60)
```
I’d like to kindly ask:
1. If this is due to caching, is there a way to control or disable the cache?
2. If this is an unintended memory leak, would it be possible to investigate and potentially fix it?
Thanks again for your help and time!
Best regards
### Who can help?
tokenizers: @ArthurZucker and @itazap
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
As provided above.
### Expected behavior
It would be great if caching could be made optional, or if there could be an option to avoid any GPU memory usage entirely.
|
https://github.com/huggingface/transformers/issues/38656
|
closed
|
[
"bug"
] | 2025-06-07T08:46:48Z
| 2025-08-12T13:02:37Z
| 8
|
yhyang201
|
huggingface/transformers
| 38,654
|
The visualization of image input in Qwen2.5-VL
|
The image input of Qwen2.5-VL is processed by the processor and then saved as a tensor in inputs['pixel_values'].
I tried to restore the image from the tensor in inputs['pixel_values'], but I found that the restored image patches were out of order.
So how can I restore the image from inputs['pixel_values'] properly?
For example, the origin input image is as follows.

And failed to restore from the inputs['pixel_values'].

|
https://github.com/huggingface/transformers/issues/38654
|
closed
|
[] | 2025-06-07T08:15:44Z
| 2025-06-10T09:04:04Z
| 2
|
Bytes-Lin
|
pytorch/pytorch
| 155,391
|
how to save the fx graph with output tensor shapes ?
|
### 🐛 Describe the bug
# When I use **f.write** to save the fx graph, it doesn't include output tensor shapes
> refer to https://www.doubao.com/chat/7948299479012098
```
with open("fx_graph.py", "w") as f:
f.write(graph_module.code)
```
* its dump is similar to
```
def forward(self, inputs_1, labels_1):
view = torch.ops.aten.view.default(inputs_1, [32, -1]); inputs_1 = None
_param_constant0 = self._param_constant0
t = torch.ops.aten.t.default(_param_constant0); _param_constant0 = None
_param_constant1 = self._param_constant1
...
```
# Comparing with **print(joint_graph._graph.python_code(root_module="self", verbose=True).src)**, we can see that output tensor shapes are included
* its dump is similar to
```
def forward(self, inputs_1: f32[32, 1, 784], labels_1: i64[32]):
# No stacktrace found for following nodes
view: f32[32, 784] = torch.ops.aten.view.default(inputs_1, [32, -1]); inputs_1 = None
_param_constant0 = self._param_constant0
t: f32[784, 64] = torch.ops.aten.t.default(_param_constant0); _param_constant0 = None
_param_constant1 = self._param_constant1
...
```
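For reference, one possible workaround sketch based on the `python_code` call above (I'm not sure it is the intended public API):
```python
# Write the verbose, shape-annotated rendering instead of graph_module.code
with open("fx_graph_with_shapes.py", "w") as f:
    f.write(graph_module.graph.python_code(root_module="self", verbose=True).src)
```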
### Versions
Python 3.10.14
torch 2.1.0
torch-npu 2.1.0.post6.dev20240716
torchaudio 2.1.0
torchvision 0.16.0
|
https://github.com/pytorch/pytorch/issues/155391
|
closed
|
[] | 2025-06-07T02:35:38Z
| 2025-06-07T02:58:25Z
| null |
vfdff
|
huggingface/lerobot
| 1,223
|
Does SmolVLA introduce an asynchronous inference stack decoupling perception and action prediction?
|
Why is this not yet implemented in the code?
|
https://github.com/huggingface/lerobot/issues/1223
|
closed
|
[
"question",
"policies"
] | 2025-06-07T01:23:24Z
| 2025-06-08T21:25:04Z
| null |
zmf2022
|
huggingface/transformers
| 38,650
|
Support of Qwen3 GGUF model
|
Hi, I am getting the following error when I want to use the GGUF model with Qwen3
"ValueError: GGUF model with architecture qwen3 is not supported yet."
I have the latest transformers and gguf-0.17.0
```
self.tokenizer = AutoTokenizer.from_pretrained(model_name, gguf_file= "Qwen3-0.6B-Q2_K_L.gguf",use_fast=True)
if self.tokenizer.pad_token is None:
self.tokenizer.pad_token = "<pad>"
self.tokenizer.add_special_tokens({"pad_token": "<pad>"})
self.tokenizer.padding_side = "left"
self.model = AutoModelForCausalLM.from_pretrained(
model_name,
gguf_file = "Qwen3-0.6B-Q2_K_L.gguf",
pad_token_id=self.tokenizer.pad_token_id,
trust_remote_code=True,
torch_dtype=torch.bfloat16,
device_map="auto",
)
```
How can I use the GGUF model of Qwen3 with transformers? Could you please add support for it?
Thanks!
|
https://github.com/huggingface/transformers/issues/38650
|
closed
|
[] | 2025-06-06T20:11:23Z
| 2025-07-15T08:02:59Z
| 2
|
Auth0rM0rgan
|
huggingface/diffusers
| 11,675
|
Error in loading the pretrained lora weights
|
Hi, I am using the script https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image_lora_sdxl.py to train a lora.
An error is raised on https://github.com/huggingface/diffusers/blob/73a9d5856f2d7ae3637c484d83cd697284ad3962/examples/text_to_image/train_text_to_image_lora_sdxl.py#L1314C9-L1314C52
```
Loading adapter weights from state_dict led to missing keys in the model: down_blocks.1.attentions.0.transformer_blocks.0.attn1.to_q.lora_A
.default_0.weight, down_blocks.1.attentions.0.transformer_blocks.0.attn1.to_q.lora_B.default_0.weight, ...
```
The difference between the keys in the saved lora weights and the "missing keys" mentioned above is "default_0". How can I resolve this problem?
diffusers 0.32.2
peft 0.15.2
|
https://github.com/huggingface/diffusers/issues/11675
|
closed
|
[] | 2025-06-06T17:09:45Z
| 2025-06-07T07:40:14Z
| 1
|
garychan22
|
huggingface/text-generation-inference
| 3,257
|
If using chat.completions, text+image inference returns incorrect output because of a template issue
|
### System Info
common to all platforms
### Information
- [ ] Docker
- [ ] The CLI directly
### Tasks
- [ ] An officially supported command
- [ ] My own modifications
### Reproduction
text-generation-launcher --model-id=llava-hf/llava-v1.6-mistral-7b-hf --max-input-tokens 4096 --max-batch-prefill-tokens 16384 --max-total-tokens 8192 --max-batch-size 4
client:
```
from openai import OpenAI
client = OpenAI(base_url="http://localhost:80/v1", api_key="-")
chat_completion = client.chat.completions.create(
model="tgi",
messages=[
{
"role": "user",
"content": [
{
"type": "image_url",
"image_url": {
"url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/rabbit.png"
},
},
{"type": "text", "text": "Whats in this image?"},
],
},
],
max_tokens=50,
temperature=0.0,
stream=False,
)
print(chat_completion)
```
### Expected behavior
incorrect output is
ChatCompletion(id='', choices=[Choice(finish_reason='stop', index=0, logprobs=None, message=ChatCompletionMessage(content=" I'm sorry, but I'm not sure what you're asking. Can you please provide more context or information about what you're looking for? ", refusal=None, role='assistant', audio=None, function_call=None, tool_calls=None))], created=1749197214, model='llava-hf/llava-v1.6-mistral-7b-hf', object='chat.completion', service_tier=None, system_fingerprint='3.3.1-dev0-native', usage=CompletionUsage(completion_tokens=35, prompt_tokens=8, total_tokens=43, completion_tokens_details=None, prompt_tokens_details=None))
|
https://github.com/huggingface/text-generation-inference/issues/3257
|
open
|
[] | 2025-06-06T13:06:20Z
| 2025-06-06T13:11:22Z
| 2
|
sywangyi
|
huggingface/nanotron
| 372
|
datatrove needs numpy>=2.0.0 but nanotron 0.4 requires numpy<2; how can this be fixed?
|
https://github.com/huggingface/nanotron/issues/372
|
open
|
[] | 2025-06-06T12:12:39Z
| 2025-11-22T14:44:01Z
| null |
lxyyang
|
|
pytorch/xla
| 9,303
|
RuntimeError: Runtime is already initialized. Do not use the XLA device before calling xmp.spawn.
|
## 🐛 Bug
```
-- Block 13 ALT: Direct xmp.spawn (Consolidated) ---
torch_xla and xmp imported for Block 13.
Defining hyperparameters for training function...
Hyperparameters for training function defined.
Setting XLA/TPU specific environment variables for xmp.spawn...
XRT_TPU_CONFIG already set: localservice;0;localhost:51011
Environment variables set.
Arguments tuple for xmp.spawn's target function prepared.
Set TPU_NUM_DEVICES = 8
Using nprocs = None (None = use all available devices) for xmp.spawn.
🚀 Launching TPU training directly via xmp.spawn with nprocs=None (auto-detect devices)...
❌❌❌ xmp.spawn FAILED: Runtime ALREADY initialized.
/tmp/ipykernel_10/3843059188.py:91: UserWarning: tpu_cores not found or invalid from Block 0/1. Defaulting to 8 for TPU v3-8.
warnings.warn("tpu_cores not found or invalid from Block 0/1. Defaulting to 8 for TPU v3-8.")
Traceback (most recent call last):
File "/tmp/ipykernel_10/3843059188.py", line 103, in <module>
xmp.spawn(
File "/usr/local/lib/python3.10/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 39, in spawn
return pjrt.spawn(fn, nprocs, start_method, args)
File "/usr/local/lib/python3.10/site-packages/torch_xla/_internal/pjrt.py", line 213, in spawn
run_multiprocess(spawn_fn, start_method=start_method)
File "/usr/local/lib/python3.10/site-packages/torch_xla/_internal/pjrt.py", line 145, in run_multiprocess
raise RuntimeError('Runtime is already initialized. Do not use the XLA '
RuntimeError: Runtime is already initialized. Do not use the XLA device before calling xmp.spawn.
Ensuring WandB run is finished...
Synced 5 W&B file(s), 0 media file(s), 0 artifact file(s) and 0 other file(s)
✅ Block 13 ALT Completed (Direct xmp.spawn Attempted).
```
## To Reproduce
I have been working on this problem for the past two weeks and I can't get my head around it; I really don't know what I am doing wrong.
My question: if you are using a TPU VM v3-8 in Kaggle, does it mean you can't run `!pip install "torch~=2.6.0" "torchvision~=0.21.0" "torch_xla[tpu]~=2.6.0" -f https://storage.googleapis.com/libtpu-releases/index.html --quiet` followed by `print("PyTorch/XLA installation attempt complete.\n")` in your Kaggle notebook?
Is there any particular way to install pytorch/xla? I initially started with notebook_launcher and accelerator from huggingface.
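For reference, a minimal sketch of the pattern the error message asks for: nothing imports or touches an XLA device before `xmp.spawn` is called (the function name and flags here are illustrative):
```python
import torch_xla.distributed.xla_multiprocessing as xmp

def _mp_fn(index, flags):
    # first point where the XLA runtime gets initialized
    import torch_xla.core.xla_model as xm
    device = xm.xla_device()
    print(f"process {index} running on {device}")

if __name__ == "__main__":
    xmp.spawn(_mp_fn, args=({},), start_method="fork")
```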
Steps to reproduce the behavior:
1.
2.
3.
## Expected behavior
## Environment
- Torch: 2.6.0+cu124
- TorchXLA: 2.6.0+libtpu
## Additional context
|
https://github.com/pytorch/xla/issues/9303
|
open
|
[
"question"
] | 2025-06-06T01:12:22Z
| 2025-06-10T23:04:30Z
| null |
pojoba02
|
pytorch/pytorch
| 155,242
|
Partitioner loses Inplace ops where source is constant
|
### 🐛 Describe the bug
If backward contains some constant compute, e.g. result of joint constant propagation:
```
POST_JOINT_CONST_FOLDING:graph():
237 %primals_1 : [num_users=1] = placeholder[target=primals_1]
238 %primals_2 : [num_users=2] = placeholder[target=primals_2]
239 %tangents_1 : [num_users=1] = placeholder[target=tangents_1]
240 %clone : [num_users=1] = call_function[target=torch.ops.aten.clone.default](args = (%primals_1,), kwargs = {})
241 %full_default : [num_users=1] = call_function[target=torch.ops.aten.full.default](args = ([2], 0.0), kwargs = {dtype: torch.float32, layout: torch.strided, device: cuda:0, pin_memory: False})
242 %add : [num_users=1] = call_function[target=torch.ops.aten.add.Tensor](args = (%full_default, 1), kwargs = {})
243 %mul_1 : [num_users=1] = call_function[target=torch.ops.aten.mul.Tensor](args = (%add, 1), kwargs = {})
244 %add_1 : [num_users=1] = call_function[target=torch.ops.aten.add.Tensor](args = (%primals_2, %mul_1), kwargs = {})
245 %copy_ : [num_users=0] = call_function[target=torch.ops.aten.copy_.default](args = (%primals_2, %add_1), kwargs = {})
246 return [clone, tangents_1, None]
```
This `add_1` will be counted as "Invalid" for the backward pass in the partitioner, and the `copy_` will not be captured at all.
Repro:
```
import torch
class Func(torch.autograd.Function):
@staticmethod
def forward(ctx, dummy, inplace_tensor, attach_gradient):
ctx.attach_gradient = attach_gradient
ctx.inplace_tensor = inplace_tensor
return dummy.clone()
@staticmethod
def backward(ctx, grad_output):
inplace_tensor = ctx.inplace_tensor
attach_gradient = ctx.attach_gradient
gradient_attachment = (grad_output * 0 + 1)
inplace_tensor.add_(1 * gradient_attachment)
return grad_output, None, None
def call(dummy, inplace_tensor, attach_gradient):
return Func.apply(dummy, inplace_tensor, attach_gradient)
compiled_call = torch.compile(call)
dummy = torch.randn((2,), requires_grad=True).to('cuda')
inplace_tensor = torch.zeros((2,), requires_grad=False).to('cuda')
print(f'Uncompiled')
loss = call(dummy, inplace_tensor, True).sum()
print(f'Pre backward inplace: {inplace_tensor}')
loss.backward()
print(f'Post backward inplace: {inplace_tensor}\n')
inplace_tensor.zero_()
print(f'Compiled no gradient attachment')
loss = compiled_call(dummy, inplace_tensor, True).sum()
print(f'COMPILED Pre backward inplace: {inplace_tensor}')
loss.backward()
print(f'COMPILED Post backward inplace: {inplace_tensor}\n')
inplace_tensor.zero_()
```
Result:
```
===== Joint graph 0 =====
/data/users/ivankobzarev/b/pytorch/torch/fx/_lazy_graph_module.py class joint_helper(torch.nn.Module):
def forward(self, primals, tangents):
primals_1: "f32[2][1]cuda:0"; primals_2: "f32[2][1]cuda:0"; tangents_1: "f32[2][1]cuda:0";
primals_1, primals_2, tangents_1, = fx_pytree.tree_flatten_spec([primals, tangents], self._in_spec)
# File: /home/ivankobzarev/task-inplace/r.py:23 in call, code: return Func.apply(dummy, inplace_tensor, attach_gradient)
clone: "f32[2][1]cuda:0" = torch.ops.aten.clone.default(primals_1); primals_1 = None
mul: "f32[2][1]cuda:0" = torch.ops.aten.mul.Tensor(tangents_1, 0)
add: "f32[2][1]cuda:0" = torch.ops.aten.add.Tensor(mul, 1); mul = None
mul_1: "f32[2][1]cuda:0" = torch.ops.aten.mul.Tensor(add, 1); add = None
add_1: "f32[2][1]cuda:0" = torch.ops.aten.add.Tensor(primals_2, mul_1); mul_1 = None
# No stacktrace found for following nodes
copy_: "f32[2][1]cuda:0" = torch.ops.aten.copy_.default(primals_2, add_1); primals_2 = add_1 = copy_ = None
return pytree.tree_unflatten([clone, tangents_1, None], self._out_spec)
INFO: aot_config id: 0, fw_metadata=ViewAndMutationMeta(input_info=[InputAliasInfo(is_leaf=False, mutates_data=False, mutates_metadata=False, mutations_hidden_from_autograd=True, mutations_under_no_grad_or_inference_mode=False, mutation_inductor_storage_resize=False, mutates_storage_metadata=False, requires_grad=True, keep_input_mutations=True), InputAliasInfo(is_leaf=True, mutates_data=False, mutates_metadata=False, mutations_hidden_from_autograd=True, mutations_under_no_grad_or_inference_mode=False, mutation_inductor_storage_resize=False, mutates_storage_metadata=False, requires_grad=False, keep_input_mutations=True)], output_info=[OutputAliasInfo(output_type=<OutputType.non_alias: 1>, raw_type=<class 'torch._subclasses.functional_tensor.FunctionalTensor'>, base_idx=None, dynamic_dims=set(), requires_grad=True, functional_tensor=None)], num_intermediate_bases=0, keep_input_mutations=True, traced_tangents=[FakeTensor(..., device='cuda:0', size=(2,))], subclass_inp_meta=[PlainTensorMeta(unwrapped_idx=0, memory_format=None), PlainTensorMeta(unwrapped_idx=1, memory_format=None)], subclass_fw_graph_out_meta=[PlainTensorMeta(unwrapped_idx=0, memory_format=None)], subclass_tangent_
|
https://github.com/pytorch/pytorch/issues/155242
|
closed
|
[
"triaged",
"module: correctness (silent)",
"module: aotdispatch"
] | 2025-06-05T17:34:38Z
| 2025-06-11T12:50:03Z
| null |
IvanKobzarev
|
huggingface/transformers
| 38,613
|
MDX Errors
|
### System Info
Ubuntu 24.04.2 LTS, CPython 3.11.12, transformers==4.53.0.dev0
@stevhliu I'm trying to contribute to the model cards. I forked the latest transformers, ran the scripts from the home page, and then went to the documents page. I'm having issues with the doc builder. I keep receiving the errors "ValueError: There was an error when converting docs/source/en/internal/generation_utils.md to the MDX format.
Unable to find generation.TFGreedySearchEncoderDecoderOutput in transformers. Make sure the path to that object is correct." and "Unable to find image_processing_utils_fast.BaseImageProcessorFast in transformers. Make sure the path to that object is correct."
I ran `pip install -e ".[docs]"` and saw this after installing everything: "warning: The package `transformers @ file://s` does not have an extra named `docs`"
I ran the doc builder and it worked as expected until I ran the doc-builder command `doc-builder build transformers docs/source/en/ --build_dir ~/tmp/test-build`.
Is there something I'm misunderstanding? In the meantime, is there a workaround so I can write the markdown for the card I have been assigned without running those scripts? Thank you!
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Ran install scripts on the Documents folder
### Expected behavior
To generate the docs
|
https://github.com/huggingface/transformers/issues/38613
|
closed
|
[
"bug"
] | 2025-06-05T14:19:45Z
| 2025-06-06T20:12:36Z
| 7
|
rileyafox
|
pytorch/ao
| 2,310
|
[Question] Combining QAT and Sparsity Training
|
First of all, thank you for all the time and effort invested in this project to make (large) models more accessible.
I am fairly new to optimizing my models using sparsity, and therefore, wanted to ask if my understanding of this library is correct.
In general, I would like to train my model using sparsity and QAT.
For QAT, I would follow this [guide](https://github.com/pytorch/ao/blob/main/torchao/quantization/qat/README.md#quantize_-api-recommended).
Now I am curious how to correctly use this together with sparsity.
I assume this `swap_linear_with_semi_sparse_linear(model, sparse_config)` is the correct snippet ([guide](https://github.com/pytorch/ao/tree/main/torchao/sparsity/training#quickstart)).
If I want to combine these two optimizations, what is the correct way to do so?
1. Train baseline
2. Train a sparse model
3. Train a sparse and quantization-aware model
Additionally, I found this statement
> A fully sparse 2:4 trained model exhibited a -0.5 pp accuracy drop; we were able to further reduce the accuracy loss to -0.1 pp by first training with 2:4 sparsity enabled and then switching over to normal dense training.
Does this mean adding a step?
4. Revert sparsity using `swap_semi_sparse_linear_with_linear(model)` and train
Lastly, the sparsity (`sparsify_`) and quantization (`quantize_`) transforms need to be 'applied'.
I would greatly appreciate your input on this.
|
https://github.com/pytorch/ao/issues/2310
|
closed
|
[
"question"
] | 2025-06-05T13:03:12Z
| 2025-06-20T12:37:47Z
| null |
CaptainDario
|
huggingface/diffusers
| 11,661
|
[BUG]: Using args.max_train_steps even if it is None in diffusers/examples/flux-control
|
### Describe the bug
Under [examples/flux-control](https://github.com/huggingface/diffusers/tree/main/examples/flux-control) there are two files showing how to fine-tune flux-control:
- [train_control_flux.py](https://github.com/huggingface/diffusers/blob/main/examples/flux-control/train_control_flux.py)
- [train_control_lora_flux.py](https://github.com/huggingface/diffusers/blob/main/examples/flux-control/train_control_lora_flux.py)
Both of them have a bug when args.max_train_steps is None:
Starting from [Line 905](https://github.com/huggingface/diffusers/blob/c934720629837257b15fd84d27e8eddaa52b76e6/examples/flux-control/train_control_flux.py#L905) we have the following code:
```.py
if args.max_train_steps is None:
len_train_dataloader_after_sharding = math.ceil(len(train_dataloader) / accelerator.num_processes)
num_update_steps_per_epoch = math.ceil(len_train_dataloader_after_sharding / args.gradient_accumulation_steps)
num_training_steps_for_scheduler = (
args.num_train_epochs * num_update_steps_per_epoch * accelerator.num_processes
)
else:
num_training_steps_for_scheduler = args.max_train_steps * accelerator.num_processes
lr_scheduler = get_scheduler(
args.lr_scheduler,
optimizer=optimizer,
num_warmup_steps=args.lr_warmup_steps * accelerator.num_processes,
num_training_steps=args.max_train_steps * accelerator.num_processes,
num_cycles=args.lr_num_cycles,
power=args.lr_power,
)
```
Note how the `if` checks whether `args.max_train_steps` is None, and in that case `num_training_steps_for_scheduler` gets prepared. However, in [Line 918](https://github.com/huggingface/diffusers/blob/c934720629837257b15fd84d27e8eddaa52b76e6/examples/flux-control/train_control_flux.py#L918) we use `args.max_train_steps`
```.py
num_training_steps=args.max_train_steps * accelerator.num_processes,
```
instead of the prepared `num_training_steps_for_scheduler`, causing the following error:
```.sh
num_training_steps=args.max_train_steps * accelerator.num_processes,
~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~
TypeError: unsupported operand type(s) for *: 'NoneType' and 'int'
```
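A minimal sketch of the fix I would expect, simply reusing the value the script already prepares in the `None` branch:
```.py
lr_scheduler = get_scheduler(
    args.lr_scheduler,
    optimizer=optimizer,
    num_warmup_steps=args.lr_warmup_steps * accelerator.num_processes,
    # use the prepared value, which is valid even when args.max_train_steps is None
    num_training_steps=num_training_steps_for_scheduler,
    num_cycles=args.lr_num_cycles,
    power=args.lr_power,
)
```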
### Reproduction
Training runs where the max_train_steps are not set, i.e.:
```.sh
accelerate launch train_control_lora_flux.py \
--pretrained_model_name_or_path="black-forest-labs/FLUX.1-dev" \
--dataset_name="raulc0399/open_pose_controlnet" \
--output_dir="pose-control-lora" \
--mixed_precision="bf16" \
--train_batch_size=1 \
--rank=64 \
--gradient_accumulation_steps=4 \
--gradient_checkpointing \
--use_8bit_adam \
--learning_rate=1e-4 \
--report_to="wandb" \
--lr_scheduler="constant" \
--lr_warmup_steps=0 \
--num_train_epochs=10 \
--validation_image="openpose.png" \
--validation_prompt="A couple, 4k photo, highly detailed" \
--offload \
--seed="0" \
--push_to_hub
```
### Logs
```shell
```
### System Info
Not relevant for the mentioned Bug.
### Who can help?
_No response_
|
https://github.com/huggingface/diffusers/issues/11661
|
closed
|
[
"bug"
] | 2025-06-05T07:18:06Z
| 2025-06-05T09:26:26Z
| 0
|
Markus-Pobitzer
|
huggingface/lerobot
| 1,203
|
Could you please upload the config.json file for smolvla?
|
Could you please upload the config.json file for smolvla? Thank you very much!
FileNotFoundError: config.json not found on the HuggingFace Hub in lerobot/smolvla_base
|
https://github.com/huggingface/lerobot/issues/1203
|
closed
|
[
"question"
] | 2025-06-05T06:59:12Z
| 2025-06-11T14:56:56Z
| null |
Pandapan01
|
huggingface/transformers
| 38,601
|
Contribute to Transformers on windows natively without WSL
|
### System Info
### System info
OS: Windows 11
Python: 3.13.3 and 3.10
Git: 2.49.0
CMake: 4.0.2
Msys64: Pacman v6.1.0 - libalpm v14.0.0
Pip: 25.1.1
Setuptools: 80.9.0
Visual studio C++ build tools
### NOTE: I followed the steps here: [Contribute to 🤗 Transformers](https://huggingface.co/docs/transformers/en/contributing). The system info above was already in place before following them, but let me walk through the steps again for additional context.
1- Forked the repo.
2- Cloned it
3- cd transformers (so I made sure I am in the right path, which is the root of the repo)
4- Switched to my own branch
5- Made a python virtual environment using python 3.10, then activated it
6- Made sure transformers isn't installed inside it
7- Installed PyTorch
8- Ran this command: `pip install -e ".[dev]"`
### NOTE: I tried making a requirements.txt and running `pip install -r requirements.txt`, but I got no output. I also tried installing onnx with pip, which succeeded, and then ran `pip install -e ".[dev]"` again, but nothing changed.
### NOTE 6/6/2025: I tried uv instead of a python venv; nothing worked. I tried deleting everything, including the system info items, and installing everything from the beginning; still nothing worked. I made a requirements.txt from what is in setup.py, installed it, and tried to run `pip install -e ".[dev]"`, but I hit the same issues again.
```
error: subprocess-exited-with-error
× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [11 lines of output]
...\setup.py:36: DeprecationWarning: Use shutil.which instead of find_executable
CMAKE = find_executable('cmake3') or find_executable('cmake')
...\setup.py:37: DeprecationWarning: Use shutil.which instead of find_executable
MAKE = find_executable('make')
fatal: not a git repository (or any of the parent directories): .git
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "<pip-setuptools-caller>", line 35, in <module>
File "...\setup.py", line 318, in <module>
raise FileNotFoundError("Unable to find " + requirements_file)
FileNotFoundError: Unable to find requirements.txt
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
```
### Who can help?
_No response_
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
`pip install -e ".[dev]"`
### Expected behavior
Being able to install transformers for contributing with no issue
|
https://github.com/huggingface/transformers/issues/38601
|
closed
|
[
"bug"
] | 2025-06-05T04:14:12Z
| 2025-07-27T08:02:54Z
| 4
|
ghost
|
pytorch/torchtitan
| 1,262
|
Checkpointer Feature Enhancements
|
This document tracks and describes the essential checkpointing features still to be added to TorchTitan.
- [ ] **Full `state_dict` saving**
- Support exporting the complete (unsharded) model `state_dict`; many existing formats only handle full `state_dict`.
- https://github.com/pytorch/torchtitan/pull/1219 is WIP to support this.
- Need removing FP8 tensor subclass from the `state_dict`.
- [x] **Model `state_dict` mapping**
  - Provide an interface for users/developers to plug in custom converters between TorchTitan’s `state_dict`/model definitions and other model definitions (e.g., Hugging Face models); see the sketch after this list.
- [x] **Hugging Face format saving**
- Depends on full `state_dict` export
- Optionally leverages the `model state_dict mapping` interface for users who require conversion
- Uses the Hugging Face API for saving
- [x] **Hugging Face format loading**
- Depends on the `model state_dict interface` as most use cases require conversion from other model definitions
- DCP already supports HF loading but needs tighter API integration and performance tuning (collaboration with DCP)
- [ ] **Enhanced checkpoint debugging & comparison tools**
- Provide APIs (e.g., per-tensor checksums or diff reports) to pinpoint mismatches in model state, optimizer state, etc.
- Streamline root-cause analysis when loaded checkpoints lead to unexpected accuracy changes
- [x] **Complete unit tests**
- Checkpointer has a lot of logic and branches. We can verify Checkpointer through Mock without using GPUs.
- [ ] **Decouple `state_dict` staging from checkpointing/DCP calls**
- Allow staging of the `state_dict` to CPU (or other targets) independently of DCP
- Enables downstream workflows (e.g., RL trainers or parameter servers) to consume staged state without invoking DCP
- [ ] **Remove the call to get_model_state_dict and get_optimizer_state_dict**
- While this originally is viewed as a BE project to demonstrate how to directly get model and optimizer state_dict with canonical FQNs, https://github.com/pytorch/torchtitan/pull/1280 actually depends on this enhancement.
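To make the `state_dict` mapping item above concrete, a minimal sketch of a key-renaming converter; the names are purely illustrative and not TorchTitan's actual interface:
```python
import torch

def convert_state_dict(state_dict: dict, key_map: dict) -> dict:
    """Rename FQNs according to key_map; keys without an entry pass through unchanged."""
    return {key_map.get(name, name): tensor for name, tensor in state_dict.items()}

# usage sketch: map a TorchTitan-style FQN onto a Hugging Face-style one
key_map = {"tok_embeddings.weight": "model.embed_tokens.weight"}
converted = convert_state_dict({"tok_embeddings.weight": torch.zeros(4, 4)}, key_map)
print(list(converted))  # ['model.embed_tokens.weight']
```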
|
https://github.com/pytorch/torchtitan/issues/1262
|
open
|
[
"enhancement",
"better engineering",
"module: checkpoint"
] | 2025-06-04T20:44:27Z
| 2025-08-21T03:20:05Z
| 3
|
fegin
|
huggingface/diffusers
| 11,657
|
Custom Wan diffusion Lora runs without error but doesn't apply effect and gives warning: No LoRA keys associated to WanTransformer3DModel found with the prefix='transformer'.
|
### Describe the bug
I run the diffusers pipeline using the standard process with a custom diffusers-trained LoRA:
pipe = WanPipeline.from_pretrained(model_id, vae=vae, torch_dtype=torch.bfloat16)
pipe.scheduler = scheduler
pipe.load_lora_weights("lora/customdiffusers_lora.safetensors")
etc...
it runs without error but the effect was not applied, and I see the following warning:
No LoRA keys associated to WanTransformer3DModel found with the prefix='transformer'. This is safe to ignore if LoRA state dict didn't originally have any WanTransformer3DModel related params. You can also try specifying `prefix=None` to resolve the warning. Otherwise, open an issue if you think it's unexpected: https://github.com/huggingface/diffusers/issues/new
Is there any config file I need to change for this to work? Thanks
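Not a fix, but a small inspection sketch that may help narrow this down: list which prefixes the LoRA state dict actually contains, to check whether any keys start with `transformer.` (the file path is the one from above):
```python
from safetensors.torch import load_file

state_dict = load_file("lora/customdiffusers_lora.safetensors")
prefixes = sorted({key.split(".", 1)[0] for key in state_dict})  # first path component of every key
print(prefixes)
print(next(iter(state_dict)))  # one full example key
```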
### Reproduction
N/A as a custom Lora
### Logs
```shell
```
### System Info
0.33, linux, python 3.10
### Who can help?
_No response_
|
https://github.com/huggingface/diffusers/issues/11657
|
closed
|
[
"bug"
] | 2025-06-04T19:50:14Z
| 2025-09-12T03:32:17Z
| 3
|
st-projects-00
|
huggingface/transformers
| 38,576
|
A local variable 'image_seq_length' leading to UnboundLocalError: cannot access local variable 'image_seq_length' where it is not associated with a value
|
### System Info
- `transformers` version: 4.52.3
- Platform: Linux-5.15.0-125-generic-x86_64-with-glibc2.35
- Python version: 3.12.2
- Huggingface_hub version: 0.32.2
- Safetensors version: 0.5.3
- Accelerate version: 0.26.0
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (GPU?): 2.6.0+cu124 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA GeForce RTX 4090
### Who can help?
_No response_
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
The code snippet is as follows:
```python
from transformers.utils.attention_visualizer import AttentionMaskVisualizer

visualizer = AttentionMaskVisualizer("meta-llama/Llama-2-7b-hf")
visualizer("Plants create energy through a process known as")
```
In the class `AttentionMaskVisualizer`, a local variable `image_seq_length` is set only in the first branch (lines 181-201) and later passed to the function call (line 232). In the text-only case that branch is not executed, which leads to `UnboundLocalError: cannot access local variable 'image_seq_length' where it is not associated with a value`.
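A minimal, self-contained sketch of the pattern behind the bug and the obvious fix; this is illustrative only, not the actual transformers source:
```python
def visualize(is_multimodal: bool):
    image_seq_length = None      # fix: give the variable a default before the branch
    if is_multimodal:
        image_seq_length = 256   # only the vision branch ever sets it
    # without the default above, a text-only call would raise UnboundLocalError here
    return image_seq_length

print(visualize(False))  # None instead of UnboundLocalError
```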
### Expected behavior
None
|
https://github.com/huggingface/transformers/issues/38576
|
closed
|
[
"bug"
] | 2025-06-04T09:06:04Z
| 2025-06-04T12:20:33Z
| null |
IceGiraffe
|
huggingface/lerobot
| 1,195
|
ros2_control support
|
Hello,
I was thinking that it would be great to use the robot with ros2_control:
- to test code developed with the ROS2 framework;
- for education purposes: the robot is great, easy and inexpensive to build (thank you for the work achieved), transportable in a case, etc.
Do you have any knowledge of an existing project ?
If not, would you be interested in this kind of implementation ?
Best,
Aline
|
https://github.com/huggingface/lerobot/issues/1195
|
open
|
[
"enhancement",
"question"
] | 2025-06-03T15:31:53Z
| 2025-11-27T16:30:08Z
| null |
baaluidnrey
|
huggingface/diffusers
| 11,648
|
how to load lora weight with fp8 transformer model?
|
Hi, I want to run FluxControlPipeline with an fp8 transformer, referencing the code at:
https://huggingface.co/docs/diffusers/api/pipelines/flux#quantization
```
import torch
from diffusers import BitsAndBytesConfig as DiffusersBitsAndBytesConfig, FluxTransformer2DModel, FluxControlPipeline
from transformers import BitsAndBytesConfig as BitsAndBytesConfig, T5EncoderModel
quant_config = BitsAndBytesConfig(load_in_8bit=True)
text_encoder_8bit = T5EncoderModel.from_pretrained(
"black-forest-labs/FLUX.1-dev",
subfolder="text_encoder_2",
quantization_config=quant_config,
torch_dtype=torch.float16,
)
quant_config = DiffusersBitsAndBytesConfig(load_in_8bit=True)
transformer_8bit = FluxTransformer2DModel.from_pretrained(
"black-forest-labs/FLUX.1-dev",
subfolder="transformer",
quantization_config=quant_config,
torch_dtype=torch.float16,
)
pipeline = FluxControlPipeline.from_pretrained(
"black-forest-labs/FLUX.1-dev",
text_encoder_2=text_encoder_8bit,
transformer=transformer_8bit,
torch_dtype=torch.float16,
device_map="balanced",
)
prompt = "a tiny astronaut hatching from an egg on the moon"
image = pipeline(prompt, guidance_scale=3.5, height=768, width=1360, num_inference_steps=50).images[0]
image.save("flux.png")
```
but when I load the LoRA after building the pipeline:
```
pipeline = FluxControlPipeline.from_pretrained(
"black-forest-labs/FLUX.1-dev",
text_encoder_2=text_encoder_8bit,
transformer=transformer_8bit,
torch_dtype=torch.float16,
device_map="balanced",
)
pipe.load_lora_weights("black-forest-labs/FLUX.1-Depth-dev-lora")
```
There is an error saying fp8 weights are not supported. How can I fix it?
|
https://github.com/huggingface/diffusers/issues/11648
|
open
|
[] | 2025-06-03T10:31:23Z
| 2025-06-19T12:37:35Z
| null |
Johnson-yue
|
huggingface/candle
| 2,986
|
How to reset gradient before each batch
|
In Pytorch, you would call `optimizer.zero_grad` to zero the gradients before every batch. How do you do this in candle?
|
https://github.com/huggingface/candle/issues/2986
|
open
|
[] | 2025-06-03T10:17:52Z
| 2025-06-03T10:17:52Z
| null |
lokxii
|
huggingface/transformers
| 38,544
|
Paligemma model card needs update
|
Hi,
I found a minor problem with the PaliGemma model card. How can I raise a PR to fix it? I am a first-time contributor. I have raised a PR; whom should I mention to review it?
https://huggingface.co/google/paligemma-3b-pt-896
|
https://github.com/huggingface/transformers/issues/38544
|
closed
|
[] | 2025-06-03T06:55:14Z
| 2025-07-14T16:23:52Z
| 7
|
punitvara
|
pytorch/torchtitan
| 1,257
|
Question about fixed std=0.02 initialization of `w1` in `moe.py`
|
Hi torchtitan team,
Thanks for the great work on this project! I had a question regarding a detail in the code at moe.py#L92
https://github.com/pytorch/torchtitan/blob/768cde131105bde624160029d808e94649faf0f4/torchtitan/experiments/llama4/model/moe.py#L92
I noticed that `w1` is initialized with a fixed standard deviation of 0.02, whereas `w2` and `w3` are initialized using a configurable `init_std` parameter. I’m wondering if this discrepancy is intentional, and if so, what the reasoning is behind using a hardcoded value for `w1`.
Would greatly appreciate any insights you could share!
Thanks again!
|
https://github.com/pytorch/torchtitan/issues/1257
|
open
|
[
"question",
"triage review"
] | 2025-06-03T04:06:53Z
| 2025-08-21T07:03:44Z
| null |
trestad
|
huggingface/transformers
| 38,541
|
`eager_attention_forward` and `repeat_kv` code duplication
|
I see the two functions appear in a lot of places in the code base. Shall we unify them into a single place?
And can we treat `eager_attention_forward` as another option in [`ALL_ATTENTION_FUNCTIONS`](https://github.com/huggingface/transformers/blob/main/src/transformers/modeling_utils.py#L6186)? Any concerns?
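For context, the duplicated helper in question looks roughly like this (a sketch of the llama-style implementation; treat the exact signature as indicative):
```python
import torch

def repeat_kv(hidden_states: torch.Tensor, n_rep: int) -> torch.Tensor:
    """Expand (batch, num_kv_heads, seq, head_dim) to (batch, num_kv_heads * n_rep, seq, head_dim)."""
    batch, num_kv_heads, slen, head_dim = hidden_states.shape
    if n_rep == 1:
        return hidden_states
    hidden_states = hidden_states[:, :, None, :, :].expand(batch, num_kv_heads, n_rep, slen, head_dim)
    return hidden_states.reshape(batch, num_kv_heads * n_rep, slen, head_dim)
```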
|
https://github.com/huggingface/transformers/issues/38541
|
closed
|
[] | 2025-06-03T00:57:16Z
| 2025-06-10T10:27:25Z
| 3
|
ChengLyu
|
pytorch/tutorials
| 3,373
|
[BUG] Running `make html-noplot` yields errors.
|
### Add Link
I ran the following command about 10 hours ago, around 12:20:00 utc and it gave me errors. (I am being specific about the time, because I was unable to find a release that I could point to).
`git clone --depth 1 https://github.com/pytorch/tutorials.git`
### Describe the bug
## What errors did you encounter?
```
generating gallery for beginner... [ 6%] saving_loading_models.py
Extension error (sphinx_gallery.gen_gallery):
Handler <function generate_gallery_rst at 0x000001C0A8C3AB00> for event 'builder-inited' threw an exception (exception: Can't pickle <function call_fn at 0x000001C088543010>: attribute lookup call_fn on __main__ failed)
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\Python\Python310\lib\multiprocessing\spawn.py", line 107, in spawn_main
new_handle = reduction.duplicate(pipe_handle,
File "C:\Python\Python310\lib\multiprocessing\reduction.py", line 79, in duplicate
return _winapi.DuplicateHandle(
OSError: [WinError 6] The handle is invalid
make: *** [html-noplot] Error 2
```
## What did you expect to happen?
As stated in the README.md file, I expected a basic html version of the tutorial to be built at `_build/html`
## Steps to Reproduce the error
1. Run the git command below
`git clone --depth 1 https://github.com/pytorch/tutorials.git`
2. Run `pip install -r .ci/docker/requirements.txt`. I am aware the instruction was to `pip install -r requirements.txt`. But
I keep encountering the errors below, so I improvised.
```
ERROR: Invalid requirement: '.ci/docker/requirements.txt': Expected package name at the start of dependency specifier
.ci/docker/requirements.txt
^ (from line 1 of requirements.txt)
```
3. Run `make html-noplot`. For this one, I used GnuWin32 make, which is what is available on Windows.
I noticed that this error is similar to the one raised by `re.compile('\\c')`. Being familiar with that scenario, I looked further and traced the error to the code [here](https://github.com/pytorch/tutorials/blob/20bf27e027d35a455d24469098f6d685547ff11d/.jenkins/get_sphinx_filenames.py#L13). I was able to move past this error by modifying my local version of the code to
`SPHINX_SHOULD_RUN = "|".join(get_files_for_sphinx()).replace('\\', '\\\\')`
(see the sketch after this list for a cleaner alternative). I do not feel confident in that change: the code was last modified 2 years ago, which means working tutorials have been built with it, and that makes me suspect something is wrong with my own setup. I resisted raising an issue earlier because I did not want to distract the developers maintaining this codebase, whose efforts do not go unnoticed.
4. Run `make html-noplot` once more.
The error [above](#what-errors-did-you-encounter) appears. Looking at the trace, I see `multiprocessing.py` in it, and I am not experienced with multi-process code. I would appreciate knowing what I have done wrong in my environment, because the code in this repository presumably works as tested.
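A hedged alternative to the manual backslash doubling from step 3: escape each path for use inside a regex, which handles Windows separators without hand-editing (assuming `get_files_for_sphinx()` returns plain path strings):
```python
import re

def build_should_run_pattern(files):
    """Join file paths into one alternation pattern, escaping regex metacharacters (incl. backslashes)."""
    return "|".join(re.escape(f) for f in files)

# a Windows-style path no longer produces an invalid escape such as '\c'
print(build_should_run_pattern([r"beginner_source\chatbot_tutorial.py"]))
```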
### Describe your environment
## Environment
* Python 3.10.5
* pip 25.1.1
* All commands were run in the top directory of the cloned repository
* All *pip-installing* was done in a fresh virtual environment created using **venv** and located in the top directory of the cloned repository. The command used for that was `python -m venv doc-env`.
* GPU (not cuda): Intel Iris Xe (Not sure this is relevant)
|
https://github.com/pytorch/tutorials/issues/3373
|
open
|
[
"bug",
"build issue"
] | 2025-06-02T23:37:26Z
| 2025-06-03T20:01:28Z
| 5
|
phonokoye
|
huggingface/chat-ui
| 1,843
|
can you make a release?
|
The current codebase is far ahead of the official release from November; could you stabilize and release the current code?
|
https://github.com/huggingface/chat-ui/issues/1843
|
open
|
[
"enhancement"
] | 2025-06-02T21:26:51Z
| 2025-07-21T20:44:03Z
| 1
|
antonkulaga
|
huggingface/transformers
| 38,527
|
Why do you remove sample_indices_fn for processor.apply_chat_template?
|
As shown in the picture, since 4.52 `processor.apply_chat_template` no longer supports `sample_indices_fn`, but the argument is still listed in the docs.
<img width="712" alt="Image" src="https://github.com/user-attachments/assets/e055d5f5-4800-4eb7-8054-0f41a9be5707" />
|
https://github.com/huggingface/transformers/issues/38527
|
closed
|
[] | 2025-06-02T12:34:23Z
| 2025-06-03T02:44:22Z
| 1
|
futrime
|
huggingface/optimum
| 2,284
|
Error when exporting DinoV2 with Registers
|
When trying :
` python -m scripts.convert --quantize --model_id facebook/dinov2-with-registers-small`
I Got :
`ValueError: Trying to export a dinov2-with-registers model, that is a custom or unsupported architecture, but no custom onnx configuration was passed as `custom_onnx_configs`. Please refer to https://huggingface.co/docs/optimum/main/en/exporters/onnx/usage_guides/export_a_model#custom-export-of-transformers-models for an example on how to export custom models. Please open an issue at https://github.com/huggingface/optimum/issues if you would like the model type dinov2-with-registers to be supported natively in the ONNX export.`
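For anyone hitting this before native support lands, here is a hedged sketch of the `custom_onnx_configs` route the error message points to, reusing the plain `ViTOnnxConfig` as a stand-in; this assumes the registers variant takes the same `pixel_values` input and is untested:
```python
from optimum.exporters.onnx import main_export
from optimum.exporters.onnx.model_configs import ViTOnnxConfig
from transformers import AutoConfig

model_id = "facebook/dinov2-with-registers-small"
config = AutoConfig.from_pretrained(model_id)

# reuse the ViT ONNX config as a stand-in for the unsupported architecture (assumption: same inputs/outputs)
onnx_config = ViTOnnxConfig(config=config, task="feature-extraction")

main_export(
    model_id,
    output="dinov2-with-registers-small-onnx",
    task="feature-extraction",
    custom_onnx_configs={"model": onnx_config},
)
```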
|
https://github.com/huggingface/optimum/issues/2284
|
closed
|
[
"Stale"
] | 2025-06-02T08:53:55Z
| 2025-07-04T02:16:54Z
| 1
|
elkizana
|
huggingface/agents-course
| 523
|
[QUESTION] The final quiz of Unit 1, always crashes with dataset not found
|
First, the **best way to get a response fast is to ask the community** in our Discord server: https://www.hf.co/join/discord
However, if you prefer you can ask here, please **be specific**.
Dataset 'agents-course/unit_1_quiz' doesn't exist on the Hub or cannot be accessed.
The full log is:
```
Traceback (most recent call last):
File "/home/user/app/app.py", line 28, in <module>
ds = load_dataset(EXAM_DATASET_ID, split="train")
File "/usr/local/lib/python3.10/site-packages/datasets/load.py", line 2129, in load_dataset
builder_instance = load_dataset_builder(
File "/usr/local/lib/python3.10/site-packages/datasets/load.py", line 1849, in load_dataset_builder
dataset_module = dataset_module_factory(
File "/usr/local/lib/python3.10/site-packages/datasets/load.py", line 1719, in dataset_module_factory
raise e1 from None
File "/usr/local/lib/python3.10/site-packages/datasets/load.py", line 1645, in dataset_module_factory
raise DatasetNotFoundError(f"Dataset '{path}' doesn't exist on the Hub or cannot be accessed.") from e
datasets.exceptions.DatasetNotFoundError: Dataset 'agents-course/unit_1_quiz' doesn't exist on the Hub or cannot be accessed.
```
Am I missing something trivial?
|
https://github.com/huggingface/agents-course/issues/523
|
open
|
[
"question"
] | 2025-06-02T07:58:01Z
| 2025-06-02T07:58:01Z
| null |
abcnishant007
|
huggingface/peft
| 2,563
|
Integrate Lily
|
### Feature request
This request proposes integrating Lily (Low-Rank Interconnected Adaptation across Layers), accepted to ACL 2025 Findings, into the PEFT library.
Paper: https://arxiv.org/pdf/2407.09946
Repo: https://github.com/yibozhong/lily
### Motivation
Lily aims to directly increase the rank of each individual adapter under the same parameter budget, since many papers show that higher ranks benefit PEFT performance. This is achieved by breaking LoRA's pair-of-A-and-B-per-layer constraint: instead of giving each layer a dedicated pair of A and B, we decouple all the Bs from the layers, and when adapting at each layer we use a weighted sum of these Bs as that layer's B. The weights are calculated by a lightweight trainable router, currently data-dependent.

Several points worth noting:
- The method looks somewhat similar to MosLoRA in structure, but it operates at the model level and the aim is to increase the individual rank of each adapter with dynamic adaptation.
- Currently in the paper, we use a data-dependent router, which makes it tricky to merge the weights. I do not observe notable inference latency, possibly due to the small model size, but an option for using a non-data-dependent router can be included to enable easy merging of the weights.
- The current As are still positioned at a fixed layer (using layer-wise sharing to reduce params). However, they can also be decoupled, simply by providing two routers for weighting the As and Bs respectively, rather than one router for the Bs as in the current setup. This is a more elegant design and shares the same principle as Lily. After I run quick experiments demonstrating its effectiveness, I can integrate this setup into my current code as Lily v2.
### Your contribution
Implement Lily, repo: https://github.com/yibozhong/lily.
|
https://github.com/huggingface/peft/issues/2563
|
closed
|
[] | 2025-06-02T07:23:30Z
| 2025-12-18T14:03:32Z
| 15
|
yibozhong
|
huggingface/lerobot
| 1,180
|
dataset training
|
How many episodes do you recommend recording in each file when training on the dataset? Can I create about 400 episodes by putting a different task in each episode? Or should I create data for the same task in each file and combine multiple files?
|
https://github.com/huggingface/lerobot/issues/1180
|
closed
|
[
"question",
"dataset"
] | 2025-06-01T15:59:47Z
| 2025-10-08T12:54:48Z
| null |
bruce577
|
huggingface/lerobot
| 1,177
|
[Question] Why using a kernel device for IP cameras?
|
I'm wondering why, when we have an IP camera (by using DroidCam on Android for instance), the team decided to plug the IP camera into a loopback device in `/dev/videoX` instead of directly reading the video stream in the code with Opencv `cv2.VideoCapture(url)`. I understand doing this allows controlling FPS & resolution which is not possible when `cv2.VideoCapture(url)` is used directly, however the downside is that you need to map the camera to a kernel device which becomes really cumbersome, especially when you need root access and when the device gets stuck in a weird state.
Why didn't the team simply read the video stream from `cv2.VideoCapture(url)` and then downsize it inside the code loop? (The only downside I found is that we can't get 30 fps if the stream outputs only 25 fps, but this shouldn't be a problem imo since `OpenCVCamera.read_loop` adds a 0.1 s latency which messes up the fps sync anyway.)
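For concreteness, the alternative being asked about would look roughly like this; the URL and target size are placeholders:
```python
import cv2

cap = cv2.VideoCapture("http://192.168.1.42:4747/video")  # placeholder DroidCam-style URL
while True:
    ok, frame = cap.read()
    if not ok:
        break
    frame = cv2.resize(frame, (640, 480))  # downscale inside the loop instead of via a loopback device
    # ... hand the frame to the rest of the robot loop
cap.release()
```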
|
https://github.com/huggingface/lerobot/issues/1177
|
closed
|
[
"question",
"robots",
"stale"
] | 2025-05-31T05:24:21Z
| 2025-12-31T02:35:18Z
| null |
godardt
|
pytorch/xla
| 9,272
|
Improve documentation for running benchmark unit tests
|
## 📚 Documentation
Currently, the `README.md` file in the `benchmarks/` directory only says to use `make -C ...` to run the unit tests for the benchmarking code. Python tests like `test_benchmark_model.py` are not run.
We need better instructions on how to run the python unit tests.
Currently, I have to add the `benchmarks/` dir to `$PYTHONPATH` so the tests can discover the Python packages they need, and then run them with `python test/benchmarks/test_benchmark_model.py`.
@ysiraichi may know a better way.
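The workaround described above, written out as a small driver; the paths assume the repo root as the working directory and this is only one way to set `PYTHONPATH`:
```python
import os
import subprocess
import sys

# put benchmarks/ on PYTHONPATH so the test can import the benchmarking helpers, then run it directly
env = dict(os.environ)
env["PYTHONPATH"] = os.path.abspath("benchmarks") + os.pathsep + env.get("PYTHONPATH", "")
subprocess.run([sys.executable, "test/benchmarks/test_benchmark_model.py"], env=env, check=True)
```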
|
https://github.com/pytorch/xla/issues/9272
|
open
|
[
"documentation",
"benchmarking"
] | 2025-05-30T22:01:17Z
| 2025-06-04T12:10:55Z
| 1
|
haifeng-jin
|
huggingface/transformers
| 38,501
|
torch.compile fails for gemma-3-1b-it
|
### System Info
- `transformers` version: 4.52.4
- Platform: Linux-6.15.0-1-MANJARO-x86_64-with-glibc2.41
- Python version: 3.12.8
- Huggingface_hub version: 0.32.3
- Safetensors version: 0.5.3
- Accelerate version: 1.7.0
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (GPU?): 2.7.0+cu126 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: no
- Using GPU in script?: yes
- GPU type: NVIDIA GeForce RTX 3090 Ti
### Who can help?
@ArthurZucker @gante
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
Running `TORCHDYNAMO_VERBOSE=1 TORCH_LOGS="+dynamo" uv run main.py` fails:
<details>
<summary>Minimal reproducible example</summary>
```python
import torch
from transformers import GemmaTokenizer, Gemma3ForCausalLM
ckpt = "google/gemma-3-1b-it"
model = Gemma3ForCausalLM.from_pretrained(
ckpt,
device_map="cuda:0",
torch_dtype=torch.bfloat16,
)
processor = GemmaTokenizer.from_pretrained(ckpt)
messages = [{"role": "user", "content": "What is 2^7-2^4??"}]
inputs = processor.apply_chat_template(
messages,
add_generation_prompt=True,
tokenize=True,
return_dict=True,
return_tensors="pt",
).to(model.device)
input_len = inputs["input_ids"].shape[-1]
# generate_fn = model.generate
generate_fn = torch.compile(model.generate, fullgraph=True)
generation = generate_fn(**inputs, max_new_tokens=100, do_sample=False)
generation = generation[0][input_len:]
decoded = processor.decode(generation, skip_special_tokens=True)
print(decoded)
```
</details>
<details>
<summary>Stack trace</summary>
Full paste: https://pastebin.com/V103pCWM
```
File "/tmp/gemma_torch/.venv/lib/python3.12/site-packages/torch/_dynamo/variables/builtin.py", line 2111, in call_deepcopy
unimplemented(f"copy.deepcopy {repr(x)}")
File "/tmp/gemma_torch/.venv/lib/python3.12/site-packages/torch/_dynamo/exc.py", line 439, in unimplemented
raise Unsupported(msg, case_name=case_name)
torch._dynamo.exc.Unsupported: copy.deepcopy UserDefinedObjectVariable(GenerationConfig)
from user code:
File "/tmp/gemma_torch/.venv/lib/python3.12/site-packages/torch/_dynamo/external_utils.py", line 70, in inner
return fn(*args, **kwargs)
File "/tmp/gemma_torch/.venv/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
File "/tmp/gemma_torch/.venv/lib/python3.12/site-packages/transformers/generation/utils.py", line 2354, in generate
generation_config, model_kwargs = self._prepare_generation_config(
File "/tmp/gemma_torch/.venv/lib/python3.12/site-packages/transformers/generation/utils.py", line 1744, in _prepare_generation_config
generation_config = copy.deepcopy(generation_config)
```
</details>
### Expected behavior
Compilation proceeds
|
https://github.com/huggingface/transformers/issues/38501
|
closed
|
[
"bug"
] | 2025-05-30T21:01:41Z
| 2025-06-02T20:45:54Z
| 6
|
InCogNiTo124
|
pytorch/xla
| 9,269
|
Torch model parameters as HLO constants
|
## ❓ Questions and Help
Hello, I am wondering if there is a way to bake model parameters into the produced HLO model as constants. For Torch-XLA it seems like model parameters are treated as additional input args which makes it difficult to port this into openxla/xla for execution in cpp. The HLO produced from Jax already has the model parameters as constants within the model. Is there a way to do something closer to Jax where I can save an HLO/StableHLO model from Torch with the model parameters being a part of the HLO and the only arguments being the true model inputs? Thanks!
|
https://github.com/pytorch/xla/issues/9269
|
open
|
[
"question"
] | 2025-05-30T20:18:01Z
| 2025-06-13T04:35:59Z
| null |
drewjenks01
|
huggingface/transformers
| 38,500
|
Unable to deploy Gemma 3 on AWS SageMaker due to lack of support in tranfomers release
|
Hi,
it seems that when I deploy the model:
```
huggingface_model = HuggingFaceModel(
model_data=model_s3_uri,
role=role,
transformers_version="4.49.0",
pytorch_version="2.6.0",
py_version="py312",
)
predictor = huggingface_model.deploy(
instance_type="ml.g5.48xlarge",
initial_instance_count=1,
endpoint_name="gemma-27b-inference",
container_startup_health_check_timeout=900
)
response = predictor.predict({
"inputs": "what can i do?"
})
print(response)
```
```
ModelError: An error occurred (ModelError) when calling the InvokeEndpoint operation: Received client error (400)
from primary with message "{
"code": 400,
"type": "InternalServerException",
"message": "The checkpoint you are trying to load has model type gemma3_text but Transformers does not
recognize this architecture. This could be because of an issue with the checkpoint, or because your version of
Transformers is out of date.\n\nYou can update Transformers with the command pip install --upgrade transformers.
```
Now, I know HuggingFaceModel doesn't support anything above 4.49.0, so if I try to use 4.50.0 it gives an error saying to use a supported version. The thing is, Gemma 3 is not available in 4.49, so how do I fix this? I have the trained model in my bucket; I just can't deploy it because of the transformers version. Is there a way to override the container that HuggingFaceModel uses so it picks up a newer transformers?
I did run the following, but the issue remains in SageMaker, because I cannot use this version with the HuggingFace container, as it doesn't support it:
pip install git+https://github.com/huggingface/transformers@v4.49.0-Gemma-3
|
https://github.com/huggingface/transformers/issues/38500
|
closed
|
[] | 2025-05-30T17:10:22Z
| 2025-07-08T08:02:37Z
| 2
|
ehrun32
|
huggingface/transformers
| 38,499
|
ModernBERT for MLM outputs incorrect hidden state shape.
|
### System Info
When using `ModernBertForMaskedLM` with `output_hidden_states=True`, the hidden states are not correctly padded when they are returned. A minimal example is included below:
```
import torch
from transformers import AutoTokenizer, ModernBertForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("answerdotai/ModernBERT-base")
model = ModernBertForMaskedLM.from_pretrained("answerdotai/ModernBERT-base").to("cuda")
inputs = tokenizer(
[
"The capital of France is <mask>.",
"The name of the first president of the united states is <mask>.",
],
padding=True,
return_tensors="pt",
).to("cuda")
with torch.no_grad():
outputs = model(**inputs, output_hidden_states=True)
print(inputs["attention_mask"].sum())
# >>> 26
print(outputs.hidden_states[-1].shape)
# >>> torch.Size([26, 768])
assert outputs.hidden_states[-1].shape == inputs["input_ids"].shape + (
model.config.hidden_size,
)
```
I'm using the following library versions:
- `transformers==4.48.2`
- `torch==2.6.0`
It appears that what is returned is the flattened (unpadded) version, as the tensor is 2D and the first dimension corresponds to the sum of the attention mask. This issue doesn't happen when using the non-MLM version.
I searched for ModernBERT and hidden states, looked at the recent commits, and didn't see any mention of this issue, but it might have been fixed in a newer version without it being obvious.
### Who can help?
@ArthurZucker
### Information
- [x] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Run the code provided in the issue with flash attention on a Cuda GPU.
### Expected behavior
The hidden states should have shape [batch size, max sequence length, model dim] but they have shape [unknown dim (I think the number of unpadded tokens), model dim].
|
https://github.com/huggingface/transformers/issues/38499
|
closed
|
[
"bug"
] | 2025-05-30T17:02:55Z
| 2025-07-08T08:02:39Z
| 2
|
jfkback
|
huggingface/lerobot
| 1,174
|
[Question] Multi-Rate Sensor and Discrete Event Handling in `lerobot`
|
Hello `lerobot` Team,
First off, huge thanks for building such an awesome open-source project!
I'm currently exploring `lerobot` for a project and have some critical questions regarding its data handling, specifically for multi-rate sensors and discrete events. My understanding from the README is that `lerobot` records at a fixed `fps`, creating a table with `fps * record_time` rows.
This leads to two primary concerns:
1. **Multi-Rate Sensors:**
Consider a sensor like an IMU operating at 1 kHz, while other sensors might run at much lower rates. To capture the IMU data without loss, the `fps` would need to be set extremely high, to match the highest-rate sensor. This implies:
* **Massive Data Redundancy:** A significant portion of rows would contain sparse information from the lower-rate sensors.
* **Recording Performance:** Could such a high `fps` and resulting data volume negatively impact recording performance, potentially making it infeasible to capture this type of data?
* **Storage Load:** This approach would also lead to very large dataset sizes.
Am I correct in this interpretation? If so, how does `lerobot` effectively manage multi-rate sensor data to mitigate these issues?
2. **Discrete Events:**
How are discrete events, such as keyboard presses/releases or joystick button presses, recorded into a `LeRobotDataset`? The current design of `LeRobotDataset`, particularly `__nextitem__` and `delta_timestamps`, seems to implicitly assume continuous data that can be interpolated. How does `lerobot` accommodate and represent these non-continuous, event-driven data points within its framework?
A quick response addressing these points would be incredibly helpful for our ongoing development.
Thanks for your time and insight!
|
https://github.com/huggingface/lerobot/issues/1174
|
open
|
[
"question",
"dataset"
] | 2025-05-30T09:04:13Z
| 2025-12-17T10:44:46Z
| null |
MilkClouds
|
huggingface/transformers
| 38,489
|
VLM reverse mapping logic in modeling_utils.py save_pretrained not doing anything?
|
### System Info
transformers version: 4.52.3
Platform: Ubuntu 24.04
Python version: 3.11.0
Huggingface_hub version: 0.32.2
Safetensors version: 0.5.3
Accelerate version: 1.7.0
Accelerate config: not found
DeepSpeed version: not installed
PyTorch version (GPU?): 2.7.0+cu126 (H100)
Tensorflow version (GPU?): not installed (NA)
Flax version (CPU?/GPU?/TPU?): not installed (NA)
Jax version: not installed
JaxLib version: not installed
Using distributed or parallel set-up in script?: No
Using GPU in script?: No
GPU type: NVIDIA H100
### Who can help?
@amyeroberts @zucchini-nlp
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
Borrowing the reverse key mapping logic from the `modeling_utils.py` `save_pretrained` method, as shown here:
https://github.com/huggingface/transformers/blob/main/src/transformers/modeling_utils.py#L3649
If we also use the Qwen2-VL model mappings for `Qwen2VLForConditionalGeneration` as an example,
and a sample of keys as shown below, to test the reversal logic:
```
import re
from transformers import Qwen2VLForConditionalGeneration
checkpoint_conversion_mapping = Qwen2VLForConditionalGeneration._checkpoint_conversion_mapping
checkpoint_keys = [
'model.language_model.layers.9.post_attention_layernorm.weight', # Should be remapped
'model.layers.9.self_attn.k_proj.bias', # Should not be remapped
'model.visual.blocks.0.attn.proj.bias', # Should be remapped
'visual.blocks.0.attn.proj.weight', # Should not be remapped
]
reverse_key_mapping = {v: k for k, v in checkpoint_conversion_mapping.items()}
for key in checkpoint_keys:
print(f"\nOperating on sample key: {key}:")
for pattern, replacement in reverse_key_mapping.items():
replacement = replacement.lstrip("^") # strip off un-needed chars and patterns
replacement = re.sub(r"\(.*?\)", "", pattern)
key, n_replace = re.subn(pattern, replacement, key)
print(f"pattern: {pattern}, replacement: {replacement}, resultant key: {key}")
# Early exit of the loop
if n_replace > 0:
print(f"Result: final mapped key is {key}")
break
else:
print(f"Result: no mappings performed")
```
returns the following output, in which no mapping reversal is performed even though it should be.
```
Operating on sample key: model.language_model.layers.9.post_attention_layernorm.weight:
pattern: model.visual, replacement: model.visual, resultant key: model.language_model.layers.9.post_attention_layernorm.weight
Result: no mappings performed
pattern: model.language_model, replacement: model.language_model, resultant key: model.language_model.layers.9.post_attention_layernorm.weight
Result: final mapped key is model.language_model.layers.9.post_attention_layernorm.weight
Operating on sample key: model.layers.9.self_attn.k_proj.bias:
pattern: model.visual, replacement: model.visual, resultant key: model.layers.9.self_attn.k_proj.bias
Result: no mappings performed
pattern: model.language_model, replacement: model.language_model, resultant key: model.layers.9.self_attn.k_proj.bias
Result: no mappings performed
Operating on sample key: model.visual.blocks.0.attn.proj.bias:
pattern: model.visual, replacement: model.visual, resultant key: model.visual.blocks.0.attn.proj.bias
Result: final mapped key is model.visual.blocks.0.attn.proj.bias
Operating on sample key: visual.blocks.0.attn.proj.weight:
pattern: model.visual, replacement: model.visual, resultant key: visual.blocks.0.attn.proj.weight
Result: no mappings performed
pattern: model.language_model, replacement: model.language_model, resultant key: visual.blocks.0.attn.proj.weight
Result: no mappings performed
```
### Expected behavior
The expected behavior should be such that we observe the following mapping:
```
model.language_model.layers.9.post_attention_layernorm.weight -> model.layers.9.post_attention_layernorm.weight
model.visual.blocks.0.attn.proj.bias-> visual.blocks.0.attn.proj.bias
model.layers.9.self_attn.k_proj.bias -> model.layers.9.self_attn.k_proj.bias (remains the same)
visual.blocks.0.attn.proj.weight -> visual.blocks.0.attn.proj.weight (remains the same)
```
This could be achieved by changing the reversal code inside the `for pattern, replacement in reverse_key_mapping.items():` loop to:
```
replacement = replacement.lstrip("^") # strip off un-needed chars and patterns
replacement = re.sub(r"\^?([^(?]+).*", r"\1", replacement)
key, n_replace = re.subn(pattern, replacement, key)
print(f"pattern: {pattern}, replacement: {replacement}, resultant key: {key}")
# Early exit of the loop
if n_replace > 0:
break
```
instead.
I could
|
https://github.com/huggingface/transformers/issues/38489
|
closed
|
[
"bug"
] | 2025-05-30T08:55:57Z
| 2025-05-30T13:08:58Z
| 6
|
rolandtannous
|
huggingface/diffusers
| 11,637
|
How to load lora weights in distributed applications?
|
If I want to use xDiT with 2 GPUs to run FluxControlPipeline inference, how should I do it?
I wrote an xFuserFluxControlPipeline class, but it cannot load the LoRA weights correctly:
the xFuserFluxTransformer on one GPU holds some of the parameters and the other GPU holds the rest.
How should I handle this?
|
https://github.com/huggingface/diffusers/issues/11637
|
open
|
[] | 2025-05-30T07:14:50Z
| 2025-06-03T10:15:51Z
| null |
Johnson-yue
|
huggingface/peft
| 2,558
|
GraLoRA support?
|
### Feature request
will the library support the [GraLoRA](https://arxiv.org/abs/2505.20355) technique?
### Motivation
GraLoRA addresses a fundamental limitation of LoRA: overfitting when the bottleneck is widened.
The technique seems to more closely approximate full fine-tuning; hybrid GraLoRA gets the best of both worlds, with LoRA benefiting from low-rank scenarios (16 or less) and GraLoRA from high-rank scenarios (16 to 128).
The authors have a modified peft library; would be nice to have support in the official library.
### Your contribution
I have limited time for the next two weeks. Then, I will be able to contribute.
But it should be very easy for the authors to port the implementation; most of it is in the [gralora](https://github.com/SqueezeBits/GraLoRA/tree/8dff8438c80969f5f11f23249fed62aac9d687e8/peft/src/peft/tuners/gralora) sub-package.
|
https://github.com/huggingface/peft/issues/2558
|
closed
|
[] | 2025-05-29T18:36:27Z
| 2025-07-15T15:04:20Z
| 10
|
DiTo97
|
huggingface/lerobot
| 1,171
|
sync_read.py
|
Hi, I am currently testing the functions in the STServo_Python folder to work with my STS3215 motors. When I run the sync_read.py script, I encounter an issue caused by the addParam(self, sts_id) function returning False. I tried several things, but I can't get past the error.
I made sure that the motor IDs are correct and that the motors are connected and powered. I'm using a GroupSyncRead object with a start_address of SCSCL_PRESENT_POSITION_L and data_length of 4. Still, addParam() fails, and the motor ID is not added to the list.
Does anyone know why this is happening or how to fix it?
Thanks in advance!
|
https://github.com/huggingface/lerobot/issues/1171
|
closed
|
[
"bug",
"question",
"robots",
"stale"
] | 2025-05-29T15:33:16Z
| 2025-12-31T02:35:19Z
| null |
Baptiste-le-Beaudry
|