| repo (string) | number (int64) | title (string) | body (string) | url (string) | state (string) | labels (list) | created_at (timestamp, UTC) | updated_at (timestamp, UTC) | comments (int64) | user (string) |
|---|---|---|---|---|---|---|---|---|---|---|
huggingface/lerobot
| 1,497
|
ValueError: 'policy.repo_id' argument missing. Please specify it to push the model to the hub.
|
### System Info
```Shell
lerobot commit version:
https://github.com/huggingface/lerobot/tree/69901b9b6a2300914ca3de0ea14b6fa6e0203bd4
```
### Information
- [ ] One of the scripts in the examples/ folder of LeRobot
- [ ] My own task or dataset (give details below)
### Reproduction
(lerobot) robot@robot-Legion-Y9000P-IRX8:~/imitation_learning_lerobot/lerobot$ python lerobot/scripts/train.py \
> --policy.type=act \
> --dataset.repo_id=lerobot/aloha_sim_transfer_cube_human \
> --env.type=aloha \
> --env.task=AlohaTransferCube-v0 \
> --output_dir=outputs/train/act_aloha_transfer
INFO 2025-07-13 12:30:41 ils/utils.py:48 Cuda backend detected, using cuda.
WARNING 2025-07-13 12:30:41 /policies.py:77 Device 'None' is not available. Switching to 'cuda'.
Traceback (most recent call last):
File "/home/robot/imitation_learning_lerobot/lerobot/lerobot/scripts/train.py", line 291, in <module>
train()
File "/home/robot/imitation_learning_lerobot/lerobot/lerobot/configs/parser.py", line 226, in wrapper_inner
response = fn(cfg, *args, **kwargs)
File "/home/robot/imitation_learning_lerobot/lerobot/lerobot/scripts/train.py", line 110, in train
cfg.validate()
File "/home/robot/imitation_learning_lerobot/lerobot/lerobot/configs/train.py", line 120, in validate
raise ValueError(
ValueError: 'policy.repo_id' argument missing. Please specify it to push the model to the hub.
### Expected behavior
Expected the training command to run without requiring `policy.repo_id`.
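The traceback shows the error is raised by `cfg.validate()`, which rejects a policy that is set to push to the Hub without a repo id. A hedged sketch of the two usual fixes (flag names assume the current config schema; `${HF_USER}` is a placeholder):
```shell
python lerobot/scripts/train.py \
  --policy.type=act \
  --dataset.repo_id=lerobot/aloha_sim_transfer_cube_human \
  --env.type=aloha \
  --env.task=AlohaTransferCube-v0 \
  --output_dir=outputs/train/act_aloha_transfer \
  --policy.repo_id=${HF_USER}/act_aloha_transfer  # or disable pushing: --policy.push_to_hub=false
```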
|
https://github.com/huggingface/lerobot/issues/1497
|
open
|
[
"question",
"policies",
"configuration"
] | 2025-07-13T04:33:14Z
| 2025-08-12T09:32:36Z
| null |
dbdxnuliba
|
huggingface/trl
| 3,730
|
How to design stable reward functions for open-ended text generation tasks in GRPO?
|
I'm using GRPO for a text generation task where there's no single correct answer. I currently compute the reward using cosine similarity between the model output and a reference response. However, during training (around 400 steps), the reward values are quite unstable and fluctuate significantly.
I'm wondering:
Is cosine similarity a reasonable choice for reward in open-ended tasks?
Are there better practices to stabilize the reward or design it more effectively in such scenarios?
Should I consider switching to a learnable reward model (e.g., contrastive learning)?
Any general advice on reward design in non-deterministic generation tasks would be greatly appreciated. Thanks!
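Not an official recipe, but a minimal sketch of the embedding-similarity reward being described, assuming TRL-style reward functions that receive `completions` plus dataset columns (here a hypothetical `reference` column) as keyword arguments, with the similarity clipped to reduce variance within a group:
```python
from sentence_transformers import SentenceTransformer, util

embedder = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")  # example embedder

def cosine_reward(completions, reference, **kwargs):
    # Encode generated texts and their references, then score each pair.
    completion_emb = embedder.encode(completions, convert_to_tensor=True)
    reference_emb = embedder.encode(reference, convert_to_tensor=True)
    sims = util.cos_sim(completion_emb, reference_emb).diagonal()
    # Clip to [0, 1] so occasional negative similarities don't blow up the group variance.
    return [float(s.clamp(0.0, 1.0)) for s in sims]
```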
|
https://github.com/huggingface/trl/issues/3730
|
open
|
[
"β question",
"π Reward",
"π GRPO"
] | 2025-07-12T18:39:37Z
| 2025-07-12T18:40:05Z
| null |
Jax922
|
huggingface/diffusers
| 11,915
|
Create modular pipeline from existing pipeline
|
The new concept of modular pipelines added via #9672 is a very flexible way of creating custom pipelines,
and one of the best early use-cases is the new concept of modular guiders added via #11311.
However, this would require a complete rewrite of existing user apps/codebases to adopt the new concepts,
which would likely slow down adoption significantly (if not block it for a long time).
The ask here is to provide a way to instantiate a modular pipeline from an existing pipeline,
very similar to how different standard diffusers pipelines can be instantiated
from a single pipeline class using the `from_pipe` method.
Example of desired workflow:
```py
import torch
import diffusers
# load pipeline using any normal method
# such as DiffusionPipeline, AutoPipelineForText2Image, StableDiffusionPipeline, etc.
pipe = diffusers.DiffusionPipeline.from_pretrained(
"stabilityai/stable-diffusion-xl-base-1.0",
torch_dtype=torch.bfloat16,
)
# create modular pipeline from loaded pipeline
modular = diffusers.ModularPipeline.from_pipe(pipe)
# create guider and activate it
cfg = diffusers.ClassifierFreeGuidance(guidance_scale=5.0, guidance_rescale=0.0, start=0.0, stop=1.0)
modular.update_states(guider=cfg)
output = modular(
prompt='astronaut in a diner',
height=1024, width=1024)
```
cc: @yiyixuxu @a-r-r-o-w @sayakpaul
|
https://github.com/huggingface/diffusers/issues/11915
|
closed
|
[] | 2025-07-12T16:08:30Z
| 2025-08-28T08:18:08Z
| 6
|
vladmandic
|
huggingface/diffusers
| 11,914
|
Loading multiple LoRAs to 1 pipeline in parallel, 1 LoRA to 2-pipelines on 2-GPUs
|
Hi everyone,
I have the following scenario.
I have a machine with 2 GPUs and a running service that keeps two pipelines loaded on their corresponding devices. I also have a list of LoRAs (say 10). On each request I split the batch into 2 parts (the request also carries the corresponding LoRA information), load the LoRAs, and run the forward pass.
The problem I encounter is that with every parallelization method I have tried (threading, multi-processing), the best I have achieved is pre-loading LoRAs on the CPU, moving them to the GPU, and only then calling `load_lora_weights` with the state_dict.
Whenever I try to parallelize the loading chunk itself across threads, the pipe starts to produce either complete noise or a black image.
Where I would appreciate help:
1. Advice on elegantly loading multiple LoRAs at once into one pipe (all examples in the documentation indicate that one needs to load them one by one).
2. If I have 2 pipes on 2 different devices, how to parallelize loading a LoRA into each pipe on its corresponding device.
```python
import logging
import time
from concurrent.futures import ThreadPoolExecutor, as_completed

import torch

logger = logging.getLogger(__name__)

def apply_multiple_loras_from_cache(pipes, adapter_names, lora_cache, lora_names, lora_strengths, devices):
    for device_index, pipe in enumerate(pipes):
        logger.info(f"Starting setup for device {devices[device_index]}")
        # Step 1: Unload LoRAs
        start = time.time()
        pipe.unload_lora_weights(reset_to_overwritten_params=False)
        logger.info(f"[Device {device_index}] Unload time: {time.time() - start:.3f}s")

        # Step 2: Parallelize CPU -> GPU state_dict move
        def move_to_device(name):
            return name, {
                k: v.to(devices[device_index], non_blocking=True).to(pipe.dtype)
                for k, v in lora_cache[name]['state_dict'].items()
            }

        start = time.time()
        with ThreadPoolExecutor() as executor:
            future_to_name = {executor.submit(move_to_device, name): name for name in adapter_names}
            results = [future.result() for future in as_completed(future_to_name)]
        logger.info(f"[Device {device_index}] State dict move + dtype conversion time: {time.time() - start:.3f}s")

        # Step 3: Load adapters
        start = time.time()
        for adapter_name, state_dict in results:
            pipe.load_lora_weights(
                pretrained_model_name_or_path_or_dict=state_dict,
                adapter_name=adapter_name
            )
        logger.info(f"[Device {device_index}] Load adapter weights time: {time.time() - start:.3f}s")

        # Step 4: Set adapter weights
        start = time.time()
        pipe.set_adapters(lora_names, adapter_weights=lora_strengths)
        logger.info(f"[Device {device_index}] Set adapter weights time: {time.time() - start:.3f}s")

    torch.cuda.empty_cache()
    logger.info("All LoRAs applied and GPU cache cleared.")
```
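A minimal sketch of an alternative split, assuming the same `pipes`, `devices`, and `lora_cache` structures as above: one worker thread per pipeline/device, so the two GPUs are set up concurrently while each pipe loads its adapters sequentially (all `load_lora_weights` calls for a given pipe stay on a single thread):
```python
from concurrent.futures import ThreadPoolExecutor

def setup_pipe(pipe, device, adapter_names, lora_cache, lora_names, lora_strengths):
    pipe.unload_lora_weights(reset_to_overwritten_params=False)
    for name in adapter_names:
        # Move this adapter's state dict to the pipe's device, then register it.
        state_dict = {
            k: v.to(device).to(pipe.dtype) for k, v in lora_cache[name]["state_dict"].items()
        }
        pipe.load_lora_weights(state_dict, adapter_name=name)
    pipe.set_adapters(lora_names, adapter_weights=lora_strengths)

with ThreadPoolExecutor(max_workers=len(pipes)) as pool:
    futures = [
        pool.submit(setup_pipe, pipe, devices[i], adapter_names, lora_cache, lora_names, lora_strengths)
        for i, pipe in enumerate(pipes)
    ]
    for f in futures:
        f.result()  # surface any per-device exception
```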
|
https://github.com/huggingface/diffusers/issues/11914
|
closed
|
[] | 2025-07-12T15:54:44Z
| 2025-07-15T19:40:11Z
| 5
|
vahe-toffee
|
huggingface/lerobot
| 1,494
|
release the code for reproducing the performance on the LIBERO dataset reported in the SmolVLA paper?
|
Has anyone been able to reproduce the performance on the LIBERO dataset reported in the SmolVLA paper? I'd appreciate any guidelines or tips to help with reproducing the results.
|
https://github.com/huggingface/lerobot/issues/1494
|
closed
|
[
"question",
"policies",
"simulation"
] | 2025-07-12T09:35:00Z
| 2025-09-23T09:44:59Z
| null |
JustinKai0527
|
huggingface/datasets
| 7,680
|
Question about iterable dataset and streaming
|
In the doc, I found the following example: https://github.com/huggingface/datasets/blob/611f5a592359ebac6f858f515c776aa7d99838b2/docs/source/stream.mdx?plain=1#L65-L78
I am confused,
1. If we have already loaded the dataset, why call `to_iterable_dataset`? Does it iterate through the dataset faster than a map-style dataset? (See the sketch below.)
2. `load_dataset(streaming=True)` is useful for huge datasets, but it is slow. How can I make it comparably fast to `to_iterable_dataset` without loading the whole dataset into RAM?
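For context, a rough sketch of the two code paths being compared (the repo id is a placeholder): `to_iterable_dataset` iterates over already-downloaded Arrow files and can be sharded so several DataLoader workers stream in parallel, while `streaming=True` fetches and decodes examples on the fly.
```python
from datasets import load_dataset

# Map-style dataset (fully prepared on disk), then wrapped as a sharded iterable:
ds = load_dataset("user/some_dataset", split="train")
iterable_ds = ds.to_iterable_dataset(num_shards=64)  # shards enable parallel DataLoader workers

# Pure streaming: no full download, but each example is fetched/decoded lazily (slower per example):
streamed_ds = load_dataset("user/some_dataset", split="train", streaming=True)
```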
|
https://github.com/huggingface/datasets/issues/7680
|
open
|
[] | 2025-07-12T04:48:30Z
| 2025-08-01T13:01:48Z
| 8
|
Tavish9
|
huggingface/transformers
| 39,377
|
FlashAttention2 support for GSAI-ML / LLaDA-8B-Instruct?
|
Hi there,
I attempted to use flash attention 2 with this model but it seems like it isn't supported, based on this error:
```
ValueError: LLaDAModelLM does not support Flash Attention 2.0 yet. Please request to add support where the model is hosted, on its model hub page: https://huggingface.co/GSAI-ML/LLaDA-8B-Instruct/discussions/new or in the Transformers GitHub repo: https://github.com/huggingface/transformers/issues/new
```
would it be possible to add support to this kind of model?
Thank you for your time!
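For reference, a minimal sketch of how this error is typically triggered; as far as I understand, remote-code models have to opt in to FA2 (e.g. via the `_supports_flash_attn_2` class attribute) before `attn_implementation="flash_attention_2"` is accepted:
```python
import torch
from transformers import AutoModelForCausalLM

# Raises the ValueError quoted above because the remote modeling code
# does not declare Flash Attention 2 support.
model = AutoModelForCausalLM.from_pretrained(
    "GSAI-ML/LLaDA-8B-Instruct",
    torch_dtype=torch.bfloat16,
    attn_implementation="flash_attention_2",
    trust_remote_code=True,
)
```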
|
https://github.com/huggingface/transformers/issues/39377
|
closed
|
[] | 2025-07-12T02:48:36Z
| 2025-08-19T08:03:26Z
| 2
|
lbertge
|
huggingface/lerobot
| 1,492
|
Is there any plan to add a validation loss to the training pipeline that does not depend on a simulation env?
|
Can we have a dataset split in the training code to run the model on a holdout validation episode to check loss on it?
|
https://github.com/huggingface/lerobot/issues/1492
|
open
|
[
"enhancement",
"question",
"policies"
] | 2025-07-11T20:43:04Z
| 2025-12-30T07:12:20Z
| null |
mohitydv09
|
huggingface/peft
| 2,642
|
Prompt_Tuning.ipynb example doesn't seem to train the model
|
Hello! I am running the Prompt-Tuning notebook example from the PEFT lib examples [here](https://github.com/huggingface/peft/blob/main/examples/sequence_classification/Prompt_Tuning.ipynb). I did **not** change any line of code, and I ran the code blocks sequentially.
However, the reported metrics remain exactly the **same** for each epoch, which is very weird. In the [original notebook](https://github.com/huggingface/peft/blob/main/examples/sequence_classification/Prompt_Tuning.ipynb), we can see accuracy fluctuates and can increase to 0.70.
I checked that the output logits for the training data are changing every epoch (I set shuffle=False, and this is the only change, for debugging). Now I am very confused; any suggestions would be very much welcome. Please let me know if I am doing something very wrong, thanks in advance!
Here's the performance log:
```
100%|██████████| 115/115 [00:20<00:00, 5.74it/s]
100%|██████████| 13/13 [00:01<00:00, 10.36it/s]
epoch 0: {'accuracy': 0.6838235294117647, 'f1': 0.8122270742358079}
100%|██████████| 115/115 [00:20<00:00, 5.72it/s]
100%|██████████| 13/13 [00:01<00:00, 10.49it/s]
epoch 1: {'accuracy': 0.6838235294117647, 'f1': 0.8122270742358079}
100%|██████████| 115/115 [00:20<00:00, 5.74it/s]
100%|██████████| 13/13 [00:01<00:00, 10.34it/s]
epoch 2: {'accuracy': 0.6838235294117647, 'f1': 0.8122270742358079}
100%|██████████| 115/115 [00:20<00:00, 5.72it/s]
100%|██████████| 13/13 [00:01<00:00, 10.35it/s]
epoch 3: {'accuracy': 0.6838235294117647, 'f1': 0.8122270742358079}
100%|██████████| 115/115 [00:20<00:00, 5.74it/s]
100%|██████████| 13/13 [00:01<00:00, 10.47it/s]
epoch 4: {'accuracy': 0.6838235294117647, 'f1': 0.8122270742358079}
100%|██████████| 115/115 [00:20<00:00, 5.69it/s]
100%|██████████| 13/13 [00:01<00:00, 10.63it/s]
epoch 5: {'accuracy': 0.6838235294117647, 'f1': 0.8122270742358079}
100%|██████████| 115/115 [00:20<00:00, 5.75it/s]
100%|██████████| 13/13 [00:01<00:00, 10.45it/s]
epoch 6: {'accuracy': 0.6838235294117647, 'f1': 0.8122270742358079}
100%|██████████| 115/115 [00:20<00:00, 5.74it/s]
100%|██████████| 13/13 [00:01<00:00, 10.40it/s]
epoch 7: {'accuracy': 0.6838235294117647, 'f1': 0.8122270742358079}
100%|██████████| 115/115 [00:20<00:00, 5.74it/s]
100%|██████████| 13/13 [00:01<00:00, 10.53it/s]
epoch 8: {'accuracy': 0.6838235294117647, 'f1': 0.8122270742358079}
100%|██████████| 115/115 [00:19<00:00, 5.76it/s]
100%|██████████| 13/13 [00:01<00:00, 10.27it/s]
epoch 9: {'accuracy': 0.6838235294117647, 'f1': 0.8122270742358079}
100%|██████████| 115/115 [00:20<00:00, 5.75it/s]
100%|██████████| 13/13 [00:01<00:00, 10.50it/s]
epoch 10: {'accuracy': 0.6838235294117647, 'f1': 0.8122270742358079}
100%|██████████| 115/115 [00:20<00:00, 5.74it/s]
100%|██████████| 13/13 [00:01<00:00, 10.63it/s]
epoch 11: {'accuracy': 0.6838235294117647, 'f1': 0.8122270742358079}
100%|██████████| 115/115 [00:19<00:00, 5.77it/s]
100%|██████████| 13/13 [00:01<00:00, 10.50it/s]
epoch 12: {'accuracy': 0.6838235294117647, 'f1': 0.8122270742358079}
100%|██████████| 115/115 [00:19<00:00, 5.78it/s]
100%|██████████| 13/13 [00:01<00:00, 10.60it/s]
epoch 13: {'accuracy': 0.6838235294117647, 'f1': 0.8122270742358079}
100%|██████████| 115/115 [00:20<00:00, 5.74it/s]
100%|██████████| 13/13 [00:01<00:00, 10.54it/s]
epoch 14: {'accuracy': 0.6838235294117647, 'f1': 0.8122270742358079}
```
Besides, my environment info is here if it helps debugging:
```
python 3.10
transformers 4.52.4
peft 0.16.0
torch 2.7.0
jupyterlab 4.4.3
OS Ubuntu 22.04 LTS
GPU NVIDIA RTX 5880
```
|
https://github.com/huggingface/peft/issues/2642
|
closed
|
[] | 2025-07-11T18:26:58Z
| 2025-08-23T15:03:47Z
| 8
|
ruixing76
|
huggingface/transformers
| 39,366
|
RuntimeError when loading llmcompressor W8A8 quantized model: int8 dtype in weight initialization
|
I'm trying to load the quantized model `RedHatAI/Qwen2.5-VL-7B-Instruct-quantized.w8a8` but encountering a dtype compatibility issue during model initialization. The model appears to be quantized using `llmcompressor` with W8A8 quantization scheme.
**Note**: I need to load this model without vLLM because I may need to add custom hooks for my research, so I'm looking for a direct loading method using transformers/llmcompressor.
## Error Message
```python
RuntimeError: expected a floating-point or complex dtype, but got dtype=torch.int8
```
**Full Stack Trace:**
```python
File "/transformers/models/qwen2_5_vl/modeling_qwen2_5_vl.py", line 366, in _init_weights
module.weight.data.normal_(mean=0.0, std=std)
File "/torch/_refs/__init__.py", line 6214, in normal_
return normal(mean, std, self.shape, out=self, generator=generator)
...
RuntimeError: expected a floating-point or complex dtype, but got dtype=torch.int8
```
## Traceback
The error occurs during model weight initialization where transformers tries to call `normal_()` on int8 tensors. The `normal_()` function in PyTorch only works with floating-point tensors, but the quantized model contains int8 weights.
**Specific failure point:**
- File: `modeling_qwen2_5_vl.py`, line 366
- Function: `_init_weights()`
- Operation: `module.weight.data.normal_(mean=0.0, std=std)`
- Issue: Trying to apply normal distribution to int8 tensors
## Model Information
Based on the model's `config.json`:
- **Quantization method**: `compressed-tensors`
- **Format**: `int-quantized`
- **Scheme**: W8A8 (8-bit weights and activations)
- **Base model**: `Qwen/Qwen2.5-VL-7B-Instruct`
- **Compression ratio**: ~1.2x
- **Ignored layers**: All visual layers (`visual.blocks.*`, `visual.merger.*`, `lm_head`)
## What I've Tried
### 1. llmcompressor methods:
```python
# Method 1: TraceableQwen2_5_VLForConditionalGeneration
from llmcompressor.transformers.tracing import TraceableQwen2_5_VLForConditionalGeneration
model = TraceableQwen2_5_VLForConditionalGeneration.from_pretrained(
model_path, device_map="auto", torch_dtype="auto", trust_remote_code=True
)
# Method 2: SparseAutoModelForCausalLM
from llmcompressor.transformers import SparseAutoModelForCausalLM
model = SparseAutoModelForCausalLM.from_pretrained(
model_path, device_map="auto", torch_dtype="auto", trust_remote_code=True
)
```
### 2. Standard transformers methods:
```python
# Method 3: Various dtype configurations
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
model_path,
torch_dtype=torch.bfloat16, # Also tried: torch.float16, "auto", None
trust_remote_code=True,
device_map="auto"
)
# Method 4: AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained(
model_path, trust_remote_code=True, torch_dtype="auto"
)
```
**All methods fail at the same weight initialization step, so I wonder whether the model should be loaded with `_fast_init=False` or other special parameters.**
## Additional Observations
1. **Warning about ignored layers**: The loader warns about missing visual layers, but this seems expected since they were ignored during quantization
2. **Model files exist**: The quantized model directory contains the expected `.safetensors` files and configuration
3. **Original model works**: The base `Qwen/Qwen2.5-VL-7B-Instruct` loads and works perfectly
## Environment
- **Python**: 3.10
- **PyTorch**: 2.7.0+cu126
- **Transformers**: 4.52.4
- **LLMCompressor**: 0.6.0
- **Compressed-tensors**: 0.10.2
This model was likely created using llmcompressor's oneshot quantization:
```python
from llmcompressor.modifiers.quantization import GPTQModifier
from llmcompressor.transformers import oneshot
recipe = [
GPTQModifier(
targets="Linear",
scheme="W8A8",
sequential_targets=["Qwen2_5_VLDecoderLayer"],
ignore=["lm_head", "re:visual.*"],
),
]
```
If this is more of an llmcompressor-specific model loading issue rather than a transformers compatibility issue, please let me know and I'll file this issue in the llmcompressor repository instead.
|
https://github.com/huggingface/transformers/issues/39366
|
closed
|
[
"Good First Issue"
] | 2025-07-11T15:15:09Z
| 2025-12-08T13:30:10Z
| 10
|
AdelineXinyi
|
pytorch/vision
| 9,146
|
https://github.com/pytorch/vision/blob/b818d320a14a2e6d9d9f28853e9e7beae703e52e/torchvision/io/video.py#L274
|
### 🐛 Describe the bug
https://github.com/pytorch/vision/blob/b818d320a14a2e6d9d9f28853e9e7beae703e52e/torchvision/io/video.py#L274
This function warns endlessly (the deprecation warning is emitted on every call).
We also don't know how to find the equivalent code in torchcodec.
### Versions
dsf
|
https://github.com/pytorch/vision/issues/9146
|
open
|
[] | 2025-07-11T14:46:36Z
| 2025-08-07T14:22:22Z
| 2
|
OpenJarvisAI
|
huggingface/lerobot
| 1,483
|
How can I set `max_relative_target` to get safe action?
|
I saw this in function `send_action` in `src/lerobot/robots/so100_follower/so100_follower.py`
```python
def send_action(self, action: dict[str, Any]) -> dict[str, Any]:
"""Command arm to move to a target joint configuration.
The relative action magnitude may be clipped depending on the configuration parameter
`max_relative_target`. In this case, the action sent differs from original action.
Thus, this function always returns the action actually sent.
Raises:
RobotDeviceNotConnectedError: if robot is not connected.
Returns:
the action sent to the motors, potentially clipped.
"""
if not self.is_connected:
raise DeviceNotConnectedError(f"{self} is not connected.")
goal_pos = {key.removesuffix(".pos"): val for key, val in action.items() if key.endswith(".pos")}
# Cap goal position when too far away from present position.
# /!\ Slower fps expected due to reading from the follower.
if self.config.max_relative_target is not None:
present_pos = self.bus.sync_read("Present_Position")
goal_present_pos = {key: (g_pos, present_pos[key]) for key, g_pos in goal_pos.items()}
goal_pos = ensure_safe_goal_position(goal_present_pos, self.config.max_relative_target)
# Send goal position to the arm
self.bus.sync_write("Goal_Position", goal_pos)
return {f"{motor}.pos": val for motor, val in goal_pos.items()}
```
But in `SO100FollowerConfig` it defaults to None:
```python
class SO100FollowerConfig(RobotConfig):
# Port to connect to the arm
port: str
disable_torque_on_disconnect: bool = True
# `max_relative_target` limits the magnitude of the relative positional target vector for safety purposes.
# Set this to a positive scalar to have the same value for all motors, or a list that is the same length as
# the number of motors in your follower arms.
max_relative_target: int | None = None
# cameras
cameras: dict[str, CameraConfig] = field(default_factory=dict)
# sensors
sensors: dict[str, ForceSensorConfig] = field(default_factory=dict)
# Set to `True` for backward compatibility with previous policies/dataset
use_degrees: bool = False
```
I don't know what value I should set `max_relative_target` to. Is there any guidance? Thanks!!
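A minimal sketch of setting it, based on the config class quoted above (the import path, class names, port, and numeric values are assumptions/placeholders; the limit is expressed in the same units as the motors' position readings):
```python
from lerobot.robots.so100_follower import SO100Follower, SO100FollowerConfig  # import path assumed

config = SO100FollowerConfig(
    port="/dev/ttyACM0",
    max_relative_target=5,  # clip every motor to at most 5 position units per command
    # max_relative_target=[5, 5, 5, 5, 5, 10],  # or one limit per motor
)
robot = SO100Follower(config)
```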
|
https://github.com/huggingface/lerobot/issues/1483
|
open
|
[
"question",
"robots"
] | 2025-07-11T02:46:02Z
| 2025-08-12T09:34:51Z
| null |
milong26
|
huggingface/peft
| 2,640
|
Why does peft.utils.other.fsdp_auto_wrap_policy not wrap modules that do not require grad?
|
In https://github.com/huggingface/peft/blob/main/src/peft/utils/other.py#L977,
```
def fsdp_auto_wrap_policy(model):
if hasattr(FullyShardedDataParallelPlugin, "get_module_class_from_name"):
get_module_class_from_name = FullyShardedDataParallelPlugin.get_module_class_from_name
else:
from accelerate.utils.dataclasses import get_module_class_from_name
from torch.distributed.fsdp.wrap import _or_policy, lambda_auto_wrap_policy, transformer_auto_wrap_policy
from ..tuners import PrefixEncoder, PromptEmbedding, PromptEncoder
default_transformer_cls_names_to_wrap = ",".join(_get_no_split_modules(model))
transformer_cls_names_to_wrap = os.environ.get(
"FSDP_TRANSFORMER_CLS_TO_WRAP", default_transformer_cls_names_to_wrap
).split(",")
transformer_cls_to_wrap = {PrefixEncoder, PromptEncoder, PromptEmbedding}
for layer_class in transformer_cls_names_to_wrap:
if len(layer_class) == 0:
continue
transformer_cls = get_module_class_from_name(model, layer_class)
if transformer_cls is None:
raise Exception("Could not find the transformer layer class to wrap in the model.")
else:
transformer_cls_to_wrap.add(transformer_cls)
def lambda_policy_fn(module):
if (
len(list(module.named_children())) == 0
and getattr(module, "weight", None) is not None
and module.weight.requires_grad
):
return True
return False
lambda_policy = functools.partial(lambda_auto_wrap_policy, lambda_fn=lambda_policy_fn)
transformer_wrap_policy = functools.partial(
transformer_auto_wrap_policy,
transformer_layer_cls=transformer_cls_to_wrap,
)
auto_wrap_policy = functools.partial(_or_policy, policies=[lambda_policy, transformer_wrap_policy])
return auto_wrap_policy
```
fsdp_auto_wrap_policy uses a lambda_policy_fn that does not wrap modules that do not require grad.
But in regular LoRA training, the original network does not need grad.
That may cause every GPU to still keep a full copy of the network, even with FSDP FULL_SHARD.
Why is the policy designed this way?
|
https://github.com/huggingface/peft/issues/2640
|
closed
|
[] | 2025-07-10T12:07:13Z
| 2025-08-18T15:05:03Z
| 4
|
Changlin-Lee
|
huggingface/transformers
| 39,336
|
TypeError: GenerationMixin._extract_past_from_model_output() got an unexpected keyword argument 'standardize_cache_format'
|
I am using the CogVLM2 video captioning model.
It works with transformers==4.43.4 at the latest;
with transformers==4.44.0 and later I get the error below.
However, I need to use the latest version of transformers, since 4-bit quantization currently fails on some GPUs and platforms.
How can I fix this issue?
`TypeError: GenerationMixin._extract_past_from_model_output() got an unexpected keyword argument 'standardize_cache_format'`
```
14:23:32 - INFO - Final video tensor shape for CogVLM processing: torch.Size([3, 24, 720, 1280])
14:23:35 - ERROR - Error during auto-captioning: GenerationMixin._extract_past_from_model_output() got an unexpected keyword argument 'standardize_cache_format'
Traceback (most recent call last):
File "E:\Ultimate_Video_Processing_v1\STAR\logic\cogvlm_utils.py", line 679, in auto_caption
outputs_tensor = local_model_ref.generate(**inputs_on_device, **gen_kwargs)
File "E:\Ultimate_Video_Processing_v1\venv\lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
File "E:\Ultimate_Video_Processing_v1\venv\lib\site-packages\transformers\generation\utils.py", line 2024, in generate
result = self._sample(
File "E:\Ultimate_Video_Processing_v1\venv\lib\site-packages\transformers\generation\utils.py", line 3032, in _sample
model_kwargs = self._update_model_kwargs_for_generation(
File "E:\Ultimate_Video_Processing_v1\STAR\models\modules\transformers_modules\cogvlm2-video-llama3-chat\modeling_cogvlm.py", line 726, in _update_model_kwargs_for_generation
cache_name, cache = self._extract_past_from_model_output(
TypeError: GenerationMixin._extract_past_from_model_output() got an unexpected keyword argument 'standardize_cache_format'
```
@amyeroberts, @qubvel @SunMarc @MekkCyber
The error I am getting with 4.43.1 on a B200 when doing 4-bit quantization is below. Interestingly, the same code and libraries work without errors on my RTX 5090 on Windows.
fp16 has no issues.
```
11:45:10 - INFO - Preparing to load model from: /workspace/STAR/models/cogvlm2-video-llama3-chat with quant: 4, dtype: torch.bfloat16, device: cuda, device_map: auto, low_cpu_mem: True
11:45:10 - INFO - Starting model loading - this operation cannot be interrupted once started
/workspace/venv/lib/python3.10/site-packages/torchvision/transforms/_functional_video.py:6: UserWarning: The 'torchvision.transforms._functional_video' module is deprecated since 0.12 and will be removed in the future. Please use the 'torchvision.transforms.functional' module instead.
warnings.warn(
/workspace/venv/lib/python3.10/site-packages/torchvision/transforms/_transforms_video.py:22: UserWarning: The 'torchvision.transforms._transforms_video' module is deprecated since 0.12 and will be removed in the future. Please use the 'torchvision.transforms' module instead.
warnings.warn(
Loading checkpoint shards: 100%|████████████████████████████████████████████████████████████████████████████████████| 6/6 [01:18<00:00, 13.07s/steps]
11:46:30 - ERROR - Failed to load CogVLM2 model from path: /workspace/STAR/models/cogvlm2-video-llama3-chat
11:46:30 - ERROR - Exception type: ValueError
11:46:30 - ERROR - Exception details: `.to` is not supported for `4-bit` or `8-bit` bitsandbytes models. Please use the model as it is, since the model has already been set to the correct devices and casted to the correct `dtype`.
Traceback (most recent call last):
File "/workspace/STAR/logic/cogvlm_utils.py", line 160, in load_cogvlm_model
raise model_loading_result["error"]
File "/workspace/STAR/logic/cogvlm_utils.py", line 122, in load_model_thread
model = AutoModelForCausalLM.from_pretrained(
File "/workspace/venv/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 559, in from_pretrained
return model_class.from_pretrained(
File "/workspace/venv/lib/python3.10/site-packages/transformers/modeling_utils.py", line 4000, in from_pretrained
dispatch_model(model, **device_map_kwargs)
File "/workspace/venv/lib/python3.10/site-packages/accelerate/big_modeling.py", line 502, in dispatch_model
model.to(device)
File "/workspace/venv/lib/python3.10/site-packages/transformers/modeling_utils.py", line 2849, in to
raise ValueError(
ValueError: `.to` is not supported for `4-bit` or `8-bit` bitsandbytes models. Please use the model as it is, since the model has already been set to the correct devices and casted to the correct `dtype`.
11:46:30 - ERROR - Error during auto-captioning: 'Could not load CogVLM2 model (check logs for details): `.to` is not supported for `4-bit` or `8-bit` bitsandbytes models. Please use the model as it is, since the model has already been set to the correct devices and casted to the correct `dtype`.'
Traceback (most recent call last):
File "/workspace/STAR/logic/cogvlm_utils.py", line 160, in load_cogvlm_model
raise model_loading_result["error"]
File "/workspace/STAR/logic/cogvlm_utils.py", line 122, in load_model_thread
model = AutoMode
|
https://github.com/huggingface/transformers/issues/39336
|
closed
|
[
"bug"
] | 2025-07-10T11:49:02Z
| 2025-08-18T08:03:13Z
| 4
|
FurkanGozukara
|
huggingface/lerobot
| 1,476
|
Here is an interactive gym to play with the robot (I still need some help)
|
### First the good news:
This is an interactive gym where you can experiment with pre-trained policies to control the robot in real time.
Here is how to use it:
- `Double-click` on a body to select it.
- `Ctrl + left` drag applies a torque to the selected object, resulting in rotation.
- `Ctrl + right` drag applies a force to the selected object in the (x,z) plane, resulting in translation.
- `Ctrl + Shift + right` drag applies a force to the selected object in the (x,y) plane.
### However, there are a few limitations:
- When you move the cubes, the robot doesn't seem to register the new positions and instead attempts to pick them up from their original locations.
- **Only** the environment `lerobot/act_aloha_sim_insertion_human` appears to work occasionally. The others either don't function at all or cause the program to crash due to missing attributes that haven't been implemented in the gym.
I'd really appreciate feedback/guidance from the repo maintainers on how to improve this snippet to support more environments and tasks.
file `interactive_gym.py`:
```python
import gymnasium as gym
import mujoco
import mujoco.viewer
import torch
import importlib
from lerobot.policies.utils import get_device_from_parameters
from lerobot.configs import parser
from lerobot.configs.eval import EvalPipelineConfig
from lerobot.policies.factory import make_policy
from lerobot.envs.utils import preprocess_observation
from lerobot.utils.utils import get_safe_torch_device
# $ python interactive_gym.py --policy.path=lerobot/act_aloha_sim_insertion_human --env.type=aloha
# $ python interactive_gym.py --policy.path=lerobot/act_aloha_sim_transfer_cube_human --env.type=aloha
@parser.wrap()
def make_env_and_policy(cfg: EvalPipelineConfig):
package_name = f"gym_{cfg.env.type}"
try:
importlib.import_module(package_name)
except ModuleNotFoundError as e:
print(f"{package_name} is not installed. Please install it with `pip install 'lerobot[{cfg.env.type}]'`")
raise e
gym_handle = f"{package_name}/{cfg.env.task}"
env = gym.make(gym_handle, disable_env_checker=True, **cfg.env.gym_kwargs)
policy = make_policy(cfg=cfg.policy, env_cfg=cfg.env)
policy.eval()
policy.reset()
return env, policy
def main(env, policy):
device = get_device_from_parameters(policy)
viewer = mujoco.viewer.launch_passive(env.unwrapped.model, env.unwrapped.data)
observation, info = env.reset(seed=42)
viewer.sync()
for i in range(40000):
observation = preprocess_observation(observation)
observation = {
key: observation[key].to(device, non_blocking=device.type == "cuda") for key in observation
}
# Infer "task" from attributes of environments.
# TODO: works with SyncVectorEnv but not AsyncVectorEnv
if hasattr(env, "task_description"):
observation["task"] = env.unwrapped.task_description
elif hasattr(env, "task"):
observation["task"] = env.unwrapped.task
else: # For envs without language instructions, e.g. aloha transfer cube and etc.
observation["task"] = ""
with torch.inference_mode():
action = policy.select_action(observation)
# Convert to CPU / numpy.
action = action.to("cpu").numpy()
assert action.ndim == 2, "Action dimensions should be (batch, action_dim)"
# Apply the next action.
#observation, reward, terminated, truncated, info = env.step(action)
observation, reward, terminated, truncated, info = env.step(action[0])
viewer.sync()
if terminated or truncated:
observation, info = env.reset()
viewer.sync()
if i % 100 == 0:
print(i)
viewer.close()
env.close()
torch.backends.cudnn.benchmark = True
torch.backends.cuda.matmul.allow_tf32 = True
env, policy = make_env_and_policy()
main(env, policy)
```
|
https://github.com/huggingface/lerobot/issues/1476
|
open
|
[
"question",
"simulation"
] | 2025-07-09T14:59:22Z
| 2025-12-16T13:41:00Z
| null |
raul-machine-learning
|
huggingface/lerobot
| 1,475
|
[Question] What does each number in predicted action(SmolVLA) stand for?
|
Hi, I'm trying to load SmolVLA and test it in my simulation env.
After passing the observations to the model using `policy.select_action(obs)`, I got a 6-dimensional action, but I'm quite confused about what exactly the values are. If there are three for position translation and three for rotation, how can I control opening and closing the gripper?
Thanks.
|
https://github.com/huggingface/lerobot/issues/1475
|
open
|
[
"question",
"policies"
] | 2025-07-09T13:39:25Z
| 2025-08-12T10:08:26Z
| null |
Calvert0921
|
huggingface/lerobot
| 1,471
|
where is 7_get_started_with_real_robot.md?
|
I didn't find 7_get_started_with_real_robot.md
|
https://github.com/huggingface/lerobot/issues/1471
|
closed
|
[
"documentation",
"question"
] | 2025-07-09T08:02:32Z
| 2025-10-08T08:42:21Z
| null |
von63
|
huggingface/alignment-handbook
| 218
|
Will you release SmolLM 3 recipe?
|
First off, thank you so much for sharing these training resources.
I was wondering if, with the recent release of SmolLM3, you have plans to also share its training recipe.
Have a nice day!
|
https://github.com/huggingface/alignment-handbook/issues/218
|
closed
|
[] | 2025-07-08T19:47:20Z
| 2025-07-15T14:16:11Z
| 1
|
ouhenio
|
huggingface/sentence-transformers
| 3,433
|
How to use a custom batch sampler?
|
`SentenceTransformerTrainer.__init__` checks the type of `args`, so I have to write a class inheriting from `SentenceTransformerTrainingArguments` rather than from the base `TrainingArguments`. The problem is that `SentenceTransformerTrainingArguments.__post_init__` forces the batch sampler to be initialized via `BatchSampler`. Is there any workaround for this?
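One possible workaround (a sketch under assumptions, not an official API): subclass the trainer and build the train DataLoader yourself, bypassing the batch-sampler choice made in the training arguments. `MyBatchSampler` is a hypothetical custom sampler you would provide.
```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformerTrainer

class CustomSamplerTrainer(SentenceTransformerTrainer):
    def get_train_dataloader(self) -> DataLoader:
        # Construct the DataLoader directly so we control the batch sampler.
        return DataLoader(
            self.train_dataset,
            batch_sampler=MyBatchSampler(self.train_dataset, batch_size=self.args.train_batch_size),
            collate_fn=self.data_collator,
        )
```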
|
https://github.com/huggingface/sentence-transformers/issues/3433
|
open
|
[] | 2025-07-08T09:35:24Z
| 2025-07-08T12:36:33Z
| null |
Hypothesis-Z
|
huggingface/transformers
| 39,266
|
Unable to create tensor, you should probably activate truncation and/or padding with 'padding=True' 'truncation=True' to have batched tensors with the same length.
|
### System Info
```bash
Traceback (most recent call last):
File "/home/cx/miniconda3/envs/demo/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 767, in convert_to_tensors
tensor = as_tensor(value)
File "/home/cx/miniconda3/envs/demo/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 729, in as_tensor
return torch.tensor(value)
ValueError: expected sequence of length 15757 at dim 1 (got 16242)
```
*DataCollatorForLanguageModeling* seems to only pad the input ids and ignore the labels, resulting in labels of different lengths within a batch. Why is this?
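One workaround that I believe applies here (a sketch, not necessarily the intended fix): since the examples already carry a precomputed `labels` column, a collator that also pads labels, such as `DataCollatorForSeq2Seq` with `label_pad_token_id=-100`, keeps `input_ids` and `labels` the same length within a batch.
```python
from transformers import DataCollatorForSeq2Seq

data_collator = DataCollatorForSeq2Seq(
    tokenizer=tokenizer,          # same tokenizer as in the Reproduction snippet below
    label_pad_token_id=-100,      # padded label positions are ignored by the loss
    pad_to_multiple_of=8,
    return_tensors="pt",
)
```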
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
```python
def _process_fn(samples, tokenizer : PreTrainedTokenizerFast, config):
samples = [[{"role" : "user", "content" : x[0]}, {"role" : "assistant", "content" : x[1]}]
for x in zip(samples["input"], samples["output"])]
# tokenized_data = tokenizer.apply_chat_template(samples,
# return_tensors="pt",
# return_dict=True,
# padding="max_length",
# truncation=True,
# max_length=8000)
tokenized_data = tokenizer.apply_chat_template(samples,
return_tensors="pt",
return_dict=True,
padding=True
)
samples_ids = tokenized_data["input_ids"]
attention_mask = tokenized_data["attention_mask"]
output_ids = []
for i, seq in enumerate(samples_ids):
output_index = torch.where(seq == SPECIAL_GENERATE_TOKEN_ID)[0]
mask = attention_mask[i]
if len(output_index) == 1:
output_index = output_index[0].item()
else:
continue
temp = torch.full_like(seq, -100)
temp[output_index:] = seq[output_index:]
temp[mask == 0] = -100
output_ids.append(temp)
labels = torch.stack(output_ids)
return {"input_ids" : samples_ids,
"labels" : labels,
"attention_mask" : attention_mask}
trainer = Trainer(
model=peft_model,
args=train_config,
train_dataset=train_data,
eval_dataset=eval_data,
data_collator=DataCollatorForLanguageModeling(
tokenizer=tokenizer,
mlm=False,
pad_to_multiple_of=8 if torch.cuda.is_available() else None,
return_tensors="pt"
)
)
```
### Expected behavior
The code should run without errors.
|
https://github.com/huggingface/transformers/issues/39266
|
closed
|
[
"bug"
] | 2025-07-08T05:19:35Z
| 2025-07-08T06:50:47Z
| 0
|
mumu029
|
huggingface/lerobot
| 1,460
|
How to support dataloading with historical cue?
|
As far as I can see, the `__getitem__` function of `LeRobotDataset` currently returns single-frame data. How can I stack historical frames and make use of batched data with historical information, as UniVLA does?
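For what it's worth, `LeRobotDataset` accepts a `delta_timestamps` argument that stacks frames at given offsets (in seconds) around the current index; a minimal sketch with placeholder keys and offsets (the import path may differ by version):
```python
from lerobot.datasets.lerobot_dataset import LeRobotDataset  # path may differ by version

dataset = LeRobotDataset(
    "lerobot/aloha_sim_transfer_cube_human",
    delta_timestamps={
        "observation.image": [-0.2, -0.1, 0.0],  # two past frames plus the current one
        "observation.state": [-0.2, -0.1, 0.0],
        "action": [0.0, 0.1, 0.2],               # optionally a short future action chunk
    },
)
item = dataset[0]  # the requested keys gain a leading history dimension
```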
|
https://github.com/huggingface/lerobot/issues/1460
|
open
|
[
"question",
"dataset"
] | 2025-07-08T01:49:11Z
| 2025-08-12T09:44:02Z
| null |
joeyxin-del
|
huggingface/lerobot
| 1,458
|
how to control a real robot arm-101 with my own pretrained model?
|
I don't see instructions or an example script for this in the repository.
Please help.
Thanks,
|
https://github.com/huggingface/lerobot/issues/1458
|
open
|
[
"question",
"policies"
] | 2025-07-08T01:19:50Z
| 2025-08-12T09:45:13Z
| null |
jcl2023
|
pytorch/torchtitan
| 1,369
|
Puzzling collectives in TP ( SP to be exact)
|
### Bug description
On running 1 step of a modified Llama3 debug_model (n_layers=1) on 2 ranks with TP=2, I noticed 12 allreduces (reduce_scatter + allgather) of the expected size, 8 * 2048 * 256 / 2 = 2097152. There should be 8 allreduces altogether, right? One each for SelfAttention and FFN/MLP in the forward and backward pass for each rank, i.e. 4 per rank.
But from the collectives it looks like what is called TP is actually SP! In that case, there should have been 16 collectives, 8 for each rank: 2 allgathers and 2 reduce-scatters each in the forward and backward passes.
```
debug_model.toml:
[training]
local_batch_size = 8
seq_len = 2048
max_norm = 1.0 # grad norm clipping
steps = 1
compile = false
dataset = "c4_test" # supported datasets: c4_test (2K), c4 (177M)
[parallelism]
data_parallel_replicate_degree = 1
data_parallel_shard_degree = -1
fsdp_reshard_after_forward = "default" # default / never / always
tensor_parallel_degree = 2
enable_async_tensor_parallel = false
pipeline_parallel_degree = 1
context_parallel_degree = 1
__init__.py:
"debugmodel": TransformerModelArgs(
dim=256, n_layers=1, n_heads=16, rope_theta=500000
),
```
Also wondering what the other 3 allreduces are for (counts 1, 256 and 2048)?
```
[titan] 2025-07-07 20:12:54,779 - root - INFO - Training starts at step 1.
hopper01:191370:191370 [1] NCCL INFO ReduceScatter: opCount 1 sendbuff 0x7fa915800000 recvbuff 0x7fa916800000 count 2097152 datatype 7 op 0 root 0 comm 0x55fca23fc0f0 [nranks=2] stream 0x55fca193e8e0
hopper01:191369:191369 [0] NCCL INFO AllGather: opCount 2 sendbuff 0x7fdf55800000 recvbuff 0x7fdf54434000 count 2097152 datatype 7 op 0 root 0 comm 0x55e290bd32d0 [nranks=2] stream 0x55e29067e430
hopper01:191370:191370 [1] NCCL INFO AllGather: opCount 2 sendbuff 0x7fa915800000 recvbuff 0x7fa914434000 count 2097152 datatype 7 op 0 root 0 comm 0x55fca23fc0f0 [nranks=2] stream 0x55fca193e8e0
hopper01:191369:191369 [0] NCCL INFO ReduceScatter: opCount 3 sendbuff 0x7fdf61200000 recvbuff 0x7fdf56000000 count 2097152 datatype 7 op 0 root 0 comm 0x55e290bd32d0 [nranks=2] stream 0x55e29067e430
hopper01:191369:191369 [0] NCCL INFO AllGather: opCount 4 sendbuff 0x7fdf60200000 recvbuff 0x7fdf61200000 count 2097152 datatype 7 op 0 root 0 comm 0x55e290bd32d0 [nranks=2] stream 0x55e29067e430
hopper01:191370:191370 [1] NCCL INFO ReduceScatter: opCount 3 sendbuff 0x7fa921200000 recvbuff 0x7fa916000000 count 2097152 datatype 7 op 0 root 0 comm 0x55fca23fc0f0 [nranks=2] stream 0x55fca193e8e0
hopper01:191370:191370 [1] NCCL INFO AllGather: opCount 4 sendbuff 0x7fa920200000 recvbuff 0x7fa921200000 count 2097152 datatype 7 op 0 root 0 comm 0x55fca23fc0f0 [nranks=2] stream 0x55fca193e8e0
hopper01:191369:191369 [0] NCCL INFO ReduceScatter: opCount 5 sendbuff 0x7fdf69400000 recvbuff 0x7fdf60a00000 count 2097152 datatype 7 op 0 root 0 comm 0x55e290bd32d0 [nranks=2] stream 0x55e29067e430
hopper01:191369:191369 [0] NCCL INFO AllGather: opCount 6 sendbuff 0x7fdf61200000 recvbuff 0x7fdf69400000 count 2097152 datatype 7 op 0 root 0 comm 0x55e290bd32d0 [nranks=2] stream 0x55e29067e430
hopper01:191370:191370 [1] NCCL INFO ReduceScatter: opCount 5 sendbuff 0x7fa929400000 recvbuff 0x7fa920a00000 count 2097152 datatype 7 op 0 root 0 comm 0x55fca23fc0f0 [nranks=2] stream 0x55fca193e8e0
hopper01:191370:191370 [1] NCCL INFO AllGather: opCount 6 sendbuff 0x7fa921200000 recvbuff 0x7fa929400000 count 2097152 datatype 7 op 0 root 0 comm 0x55fca23fc0f0 [nranks=2] stream 0x55fca193e8e0
hopper01:191369:191369 [0] NCCL INFO AllReduce: opCount 7 sendbuff 0x7fdf556b8000 recvbuff 0x7fdf556b8000 count 16384 datatype 7 op 2 root 0 comm 0x55e290bd32d0 [nranks=2] stream 0x55e29067e430
hopper01:191370:191370 [1] NCCL INFO AllReduce: opCount 7 sendbuff 0x7fa9156b8000 recvbuff 0x7fa9156b8000 count 16384 datatype 7 op 2 root 0 comm 0x55fca23fc0f0 [nranks=2] stream 0x55fca193e8e0
hopper01:191369:191369 [0] NCCL INFO AllReduce: opCount 8 sendbuff 0x7fdf556c8000 recvbuff 0x7fdf556c8000 count 16384 datatype 7 op 0 root 0 comm 0x55e290bd32d0 [nranks=2] stream 0x55e29067e430
hopper01:191370:191370 [1] NCCL INFO AllReduce: opCount 8 sendbuff 0x7fa9156c8000 recvbuff 0x7fa9156c8000 count 16384 datatype 7 op 0 root 0 comm 0x55fca23fc0f0 [nranks=2] stream 0x55fca193e8e0
hopper01:191369:191369 [0] NCCL INFO AllReduce: opCount 9 sendbuff 0x7fdf556d8000 recvbuff 0x7fdf556d8000 count 16384 datatype 7 op 0 root 0 comm 0x55e290bd32d0 [nranks=2] stream 0x55e29067e430
hopper01:191370:191370 [1] NCCL INFO AllReduce: opCount 9 sendbuff 0x7fa9156d8000 recvbuff 0x7fa9156d8000 count 16384 datatype 7 op 0 root 0 comm 0x55fca23fc0f0 [nranks=2] stream 0x55fca193e8e0
hopper01:191369:191420 [0] NCCL INFO ReduceScatter: opCount a sendbuff 0x7fdf69400000 recvbuff 0x7fdf6a400000 count 2097152 datatype 7 op 0 root 0 comm 0x55e290bd32d0 [nranks=2] stream 0x55e29067e430
hopper01:191370:191425 [1] NCCL INFO ReduceScatter: opCount a sendbuff 0x7fa929400000 recvbuff 0x7fa92a400000 c
|
https://github.com/pytorch/torchtitan/issues/1369
|
open
|
[
"question"
] | 2025-07-07T22:12:46Z
| 2025-07-10T01:28:07Z
| null |
githubsgi
|
pytorch/tutorials
| 3,429
|
[BUG] - Broken link in intro of 'Learn the Basics' tutorial
|
### Add Link
https://docs.pytorch.org/tutorials/beginner/basics/intro.html
### Describe the bug
In the 'How to Use This Guide' section, the text reads:
```
If you're new to deep learning frameworks, head right into the first section of our step-by-step guide: [1. Tensors](https://docs.pytorch.org/tutorials/beginner/basics/tensor_tutorial.html).
```
That link at the end is broken, because tensor_tutorial.html does not exist. The link should point to tensorqs_tutorial.html instead.
The result is that clicking on this link results in a 404 error, when it should actually go to the Tensors section
### Describe your environment
MacOs + Google Chrome
|
https://github.com/pytorch/tutorials/issues/3429
|
closed
|
[
"bug"
] | 2025-07-07T19:52:10Z
| 2025-07-07T22:18:19Z
| 0
|
pankajkakkar
|
huggingface/candle
| 3,016
|
Build fails on Maxwell GPU due to __dp4a undefined in quantized.cu
|
I'm trying to build a Rust project locally that depends on candle-kernels on my laptop with an NVIDIA GeForce 940MX (Maxwell, compute capability 5.0). The build fails with errors like:
```
src/quantized.cu(1997): error: identifier "__dp4a" is undefined
...
18 errors detected in the compilation of "src/quantized.cu".
```
GPU: NVIDIA GeForce 940MX (GM107, compute capability 5.0)
OS: Kali Linux (rolling)
CUDA toolkit: 12.3
NVIDIA driver: 550.163.01
candle-kernels: v0.7.2
The error is caused by the use of the CUDA intrinsic __dp4a, which is only available on GPUs with compute capability 6.1+ (Pascal and newer).
My GPU is compute 5.0, so this intrinsic is not available.
**Questions:**
Is there a way to disable quantized kernels or the use of __dp4a for older GPUs?
If not, could a feature flag or build option be added to support older hardware, or at least skip building quantized kernels on unsupported GPUs?
|
https://github.com/huggingface/candle/issues/3016
|
open
|
[] | 2025-07-07T14:41:53Z
| 2025-07-07T14:41:53Z
| 0
|
fishonamos
|
huggingface/text-generation-inference
| 3,289
|
How to detect watermark?
|
Hi,
Thanks for the great work.
I saw that the KGW watermark is implemented in the current code, but it seems to lack code to evaluate and detect whether the generated text contains a watermark.
Could anyone tell me whether such code exists? It would be very helpful.
Thanks
|
https://github.com/huggingface/text-generation-inference/issues/3289
|
open
|
[] | 2025-07-07T11:42:54Z
| 2025-07-07T11:42:54Z
| null |
Allencheng97
|
pytorch/xla
| 9,447
|
[RFC] Controller for SPMD+MPMD
|
# [RFC] Controller for SPMD+MPMD
## Background
Current work is being done to design a solution for making `mark_sharding` first trace the model before it is loaded into devices (https://github.com/pytorch/xla/issues/9341). Together with [Local SPMD](https://github.com/pytorch/xla/issues/9181), this should enable us to achieve [SPMD+MPMD as per its RFC](https://github.com/pytorch/xla/issues/9019). One leftover question is which controller to leverage and how to do it. This RFC aims to provide an approach, and two examples of how SPMD+MPMD could be orchestrated.
## API Discussion
Before thinking about the specifics on the controller, I think it is important to quickly discuss the user interaction experience with SPMD+MPMD. Specifically, how to handle pipeline parallelism in the context of also doing gSPMD. I see two different approaches: (1) to hide some of that process behind a newly created API, or a new level of abstraction; (2) to leverage existing pipeline parallelism tooling.
I think there is a temptation to create something behind a new API to try to simplify the process as much as possible, and create an easy user experience. However, PyTorch already has strong tooling around pipeline parallelism. These tools see external use, and they themselves ease the process of handling multiple processes running different parts of the pipeline.
Rather than creating a new API standard, it is likely better to approach this from a PyTorch angle: "this is a PyTorch backend, how do I do pipeline parallelism with PyTorch?" Looking at it from that angle, it is better to support SPMD+MPMD in these pipeline parallelism APIs rather than to create a new API.
## Approach
The general approach will be to:
1) Trace model without loading it to devices
2) Split model into individually executing modules
3) Create processes to execute on split modules
4) Have modules be executed by process that will be responsible for executing gSPMD
From an implementation perspective, the idea is that by allowing Local SPMD, and latent model initialization, APIs created to specialize on pipeline parallelism should be able to manage their individual processes.
## PiPPy
[PiPPy](https://github.com/pytorch/PiPPy/tree/main) is the pipeline parallelism library created by PyTorch. It has an overall tool set that might be convenient. With PiPPy, pipeline parallelism will usually involve:
1) Initializing a model without loading it to devices
2) Creating a pipe through pipeline
a. At this step, a `GraphModule` is created which contains the modules for each process to execute later
3) Initializing a process group ([`dist.init_process_group`](https://docs.pytorch.org/docs/stable/distributed.html#torch.distributed.init_process_group))
4) Creating `PipelineStage`s based on the pipe
5) Executing each pipeline stage
You can see a step-by-step walkthrough in [PiPPy's README](https://github.com/pytorch/PiPPy/tree/main), or a llama model example [here](https://github.com/pytorch/PiPPy/blob/main/examples/llama/pippy_llama.py).
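A rough sketch of those steps using the upstreamed `torch.distributed.pipelining` APIs (exact signatures vary across PyTorch versions; the model, split point, and stage indexing are placeholders):
```python
import torch
import torch.distributed as dist
from torch.distributed.pipelining import SplitPoint, pipeline, ScheduleGPipe

dist.init_process_group("nccl")                              # 3) process group
rank = dist.get_rank()

microbatch = torch.randint(0, 1000, (1, 64))                 # example microbatch shape
full_batch = torch.randint(0, 1000, (4, 64))

pipe = pipeline(                                             # 1)-2) trace and split the (lazily initialized) model
    model,                                                   # assumed: an nn.Module with a `layers` submodule
    mb_args=(microbatch,),
    split_spec={"layers.8": SplitPoint.BEGINNING},
)
stage = pipe.build_stage(rank, device=torch.device("cuda", rank))  # 4) per-process PipelineStage
schedule = ScheduleGPipe(stage, n_microbatches=4)                  # 5) execute the pipeline schedule
if rank == 0:
    schedule.step(full_batch)   # first stage feeds the inputs
else:
    schedule.step()             # later stages receive activations from the previous stage
```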
Either way, this lets PiPPy administer individual processes while each process executes gSPMD for the specific modules it was created with.
## Ray
[Ray](https://github.com/ray-project/ray) is a cluster controller for python that has a lot of utility for scaling large applications, including AI. Ray does not have an explicit pipeline parallelism API, but it can achieve it by leveraging its [actors](https://docs.ray.io/en/latest/ray-core/actors.html).
1) Leverage PiPPy pipeline to create a `GraphModule`
2) Leverage the `GraphModule` to identify module splits
3) Create Ray actors based on these graph modules
4) Launch Ray actors, and wait for them to resolve
Ray will administer the different actor pods while each pod executes gSPMD for the specific modules it was created with.
## A tale of two pipeline parallelism approaches
Currently PyTorchXLA does have a pipeline parallelism approach documented in https://github.com/pytorch/xla/tree/r2.7?tab=readme-ov-file. In its existing approach, each device is associated with a process. As the original [SPMD+MPMD RFC highlighted](https://github.com/pytorch/xla/issues/9019), this is a flawed approach as we are unable to apply gSPMD when using pipeline parallelism. The endeavor here to allow gSPMD to run in pipeline parallel through PiPPy, Ray, and other APIs might cause some confusion as a duplication of functionality.
Given that, it is worth noting that after the SPMD+MPMD effort, we should reassess our existing pipeline parallelism methodology, and see if it is possible to deduplicate to the more pytorch approach suggested in the RFC.
|
https://github.com/pytorch/xla/issues/9447
|
open
|
[
"distributed",
"RFC"
] | 2025-07-07T05:22:59Z
| 2025-07-09T02:01:27Z
| 2
|
pgmoka
|
huggingface/lerobot
| 1,448
|
How to specify both policy.type and pretrained path at the same time?
|
Hi, I am adding custom configs to a PreTrainedConfig, and I also want to load it from a pretrained path. However, if I specify the pretrained path (with policy.path), I won't be able to modify the fields inside the new PreTrainedConfig subclass. If I use policy.type="myNewModel" instead, I am able to set the fields (such as `policy.new_field_in_myNewModel`) when I run `lerobot/scripts/train.py`, but I am unable to specify the pretrained path.
What is a good solution to this problem?
Thanks!
|
https://github.com/huggingface/lerobot/issues/1448
|
open
|
[
"enhancement",
"configuration"
] | 2025-07-07T03:33:15Z
| 2025-08-12T09:45:58Z
| null |
branyang02
|
huggingface/lerobot
| 1,447
|
SmolVLA input/output clarification
|
I'm now trying to load SmolVLA to control a Franka arm in simulation. I found that there could be three image inputs (observation.image, 1 and 2), and I have top, wrist and side views. Is there a fixed order for those camera views?
And the predicted action has 6 dimensions; does that mean it doesn't include the gripper state? What do those values represent? Thanks in advance!
|
https://github.com/huggingface/lerobot/issues/1447
|
closed
|
[
"question",
"policies"
] | 2025-07-06T21:56:43Z
| 2025-10-09T21:59:17Z
| null |
Calvert0921
|
pytorch/ao
| 2,496
|
[Feature Req] Can you add *args and **kwargs to improve extensibility ?
|
**Description:**
The current class implementations do not accept _*args_ and _**kwargs_, and this reduces extensibility.
**Example:**
> Current
```python
class AdamW4bit(_AdamBase):
def __init__(
self,
params,
lr=1e-3,
betas=(0.9, 0.999),
eps=1e-8,
weight_decay=1e-2,
amsgrad=False,
*,
block_size=128,
bf16_stochastic_round=False,
) -> None:
super().__init__(
params,
lr,
betas,
eps,
weight_decay,
amsgrad,
block_size=block_size,
bf16_stochastic_round=bf16_stochastic_round,
is_adamw=True,
)
@staticmethod
def _subclass_zeros(p: Tensor, signed: bool, block_size: int):
return OptimState4bit.zeros(p.shape, signed, block_size, p.device)
```
> Suggested
```python
class AdamW4bit(_AdamBase):
def __init__(
self,
params,
lr=1e-3,
betas=(0.9, 0.999),
eps=1e-8,
weight_decay=1e-2,
amsgrad=False,
*,
block_size=128,
bf16_stochastic_round=False,**kwargs #NOTE: <------- Here
) -> None:
super().__init__(
params,
lr,
betas,
eps,
weight_decay,
amsgrad,
block_size=block_size,
bf16_stochastic_round=bf16_stochastic_round,
is_adamw=True,**kwargs #NOTE: <------- Here
)
@staticmethod
def _subclass_zeros(p: Tensor, signed: bool, block_size: int):
return OptimState4bit.zeros(p.shape, signed, block_size, p.device)
```
|
https://github.com/pytorch/ao/issues/2496
|
open
|
[
"triaged"
] | 2025-07-06T17:29:19Z
| 2025-08-01T02:52:20Z
| 3
|
Musa-Sina-Ertugrul
|
huggingface/lerobot
| 1,446
|
How to evaluate finetuned SmolVLA model
|
Dear authors, thank you for your wonderful work.
I have fine-tuned the SmolVLA model on a customized LeRobot-format dataset. My dataset involves picking up a banana and placing it on a box. How can I evaluate the performance of the model? I tried eval.py in the scripts directory, but env_type=pusht doesn't work; I think this env_type may be why eval.py fails to run.
I hope someone can help me. Thanks in advance.
|
https://github.com/huggingface/lerobot/issues/1446
|
closed
|
[
"question",
"policies"
] | 2025-07-06T15:27:22Z
| 2025-10-17T11:57:49Z
| null |
BintaoBryant
|
huggingface/diffusers
| 11,865
|
AttributeError: type object 'CosmosTransformer3DModel' has no attribute 'from_single_file'
|
### Describe the bug
I would like to run the Cosmos-Predict2-14B-Text2Image model, but it is too large to fit in 24GB of VRAM normally, so I tried to load a Q8_0 GGUF quantization. I copied some code from the [HiDreamImageTransformer2DModel](https://huggingface.co/docs/diffusers/en/api/models/hidream_image_transformer#loading-gguf-quantized-checkpoints-for-hidream-i1) page and tried to adapt it, but I get the following error:
`AttributeError: type object 'CosmosTransformer3DModel' has no attribute 'from_single_file'`
Is there supposed to be another way to load a 8 bit quantization? From what I have seen, Q8_0 typically produces results that are much closer to full precision compared to FP8.
### Reproduction
```
transformer = CosmosTransformer3DModel.from_single_file(
rf"{model_14b_id}\cosmos-predict2-14b-text2image-Q8_0.gguf",
quantization_config=GGUFQuantizationConfig(compute_dtype=torch.bfloat16),
torch_dtype=torch.bfloat16
)
pipe_14b = Cosmos2TextToImagePipeline.from_pretrained(
model_14b_id,
torch_dtype=torch.bfloat16,
transformer = transformer
)
```
### Logs
```shell
transformer = CosmosTransformer3DModel.from_single_file(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: type object 'CosmosTransformer3DModel' has no attribute 'from_single_file'
```
### System Info
- π€ Diffusers version: 0.35.0.dev0
- Platform: Windows-10-10.0.26100-SP0
- Running on Google Colab?: No
- Python version: 3.11.9
- PyTorch version (GPU?): 2.7.1+cu128 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Huggingface_hub version: 0.33.1
- Transformers version: 4.53.0
- Accelerate version: 1.8.1
- PEFT version: 0.15.2
- Bitsandbytes version: 0.46.1
- Safetensors version: 0.5.3
- xFormers version: not installed
- Accelerator: NVIDIA GeForce RTX 4090, 24564 MiB
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
@DN6
|
https://github.com/huggingface/diffusers/issues/11865
|
closed
|
[
"bug"
] | 2025-07-05T12:14:50Z
| 2025-07-11T07:15:23Z
| 9
|
mingyi456
|
huggingface/diffusers
| 11,864
|
AutoencoderDC.encode fails with torch.compile(fullgraph=True) - "name 'torch' is not defined"
|
### Describe the bug
I'm trying to optimize my data preprocessing pipeline for the Sana model by using `torch.compile` on the DC-AE encoder. Following PyTorch's best practices, I attempted to compile only the `encode` method with `fullgraph=True` for better performance, but I'm encountering an error.
When I try:
```python
dae.encode = torch.compile(dae.encode, fullgraph=True)
```
The code fails with `NameError: name 'torch' is not defined` when calling `dae.encode(x)`.
However, compiling the entire model works:
```python
dae = torch.compile(dae, fullgraph=True)
```
I'm unsure if this is expected behavior or if I'm doing something wrong. Is there a recommended way to compile just the encode method for `AutoencoderDC`?
I was advised to use the more targeted approach of compiling only the encode method for better performance, but it seems like the DC-AE model might have some internal structure that prevents this optimization pattern.
Any guidance on the correct way to apply `torch.compile` optimizations to `AutoencoderDC` would be greatly appreciated. Should I stick with compiling the entire model, or is there a way to make method-level compilation work?
### Reproduction
```
import torch
from diffusers import AutoencoderDC
# Load model
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
dae = AutoencoderDC.from_pretrained(
"mit-han-lab/dc-ae-f32c32-sana-1.1-diffusers",
torch_dtype=torch.bfloat16
).to(device).eval()
# This fails with "name 'torch' is not defined"
dae.encode = torch.compile(dae.encode, fullgraph=True)
# Test
x = torch.randn(1, 3, 512, 512, device=device, dtype=torch.bfloat16)
out = dae.encode(x) # Error occurs here
# This works fine
dae = torch.compile(dae, fullgraph=True)
```
### Logs
```shell
Testing torch.compile(dae.encode, fullgraph=True)
/data1/tzz/anaconda_dir/envs/Sana/lib/python3.10/site-packages/torch/_inductor/compile_fx.py:150: UserWarning: TensorFloat32 tensor cores for float32 matrix multiplication available but not enabled. Consider setting `torch.set_float32_matmul_precision('high')` for better performance.
warnings.warn(
β Error: name 'torch' is not defined
```
### System Info
- π€ Diffusers version: 0.34.0.dev0
- Platform: Linux-5.15.0-142-generic-x86_64-with-glibc2.35
- Running on Google Colab?: No
- Python version: 3.10.18
- PyTorch version (GPU?): 2.4.0+cu121 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Huggingface_hub version: 0.33.0
- Transformers version: 4.45.2
- Accelerate version: 1.7.0
- PEFT version: 0.15.2
- Bitsandbytes version: 0.46.0
- Safetensors version: 0.5.3
- xFormers version: 0.0.27.post2
- Accelerator: NVIDIA A100-SXM4-80GB, 81920 MiB
NVIDIA A100-SXM4-80GB, 81920 MiB
NVIDIA A100-SXM4-80GB, 81920 MiB
NVIDIA A100-SXM4-80GB, 81920 MiB
NVIDIA A100-SXM4-80GB, 81920 MiB
NVIDIA A100-SXM4-80GB, 81920 MiB
NVIDIA A100-SXM4-80GB, 81920 MiB
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help?
_No response_
|
https://github.com/huggingface/diffusers/issues/11864
|
closed
|
[
"bug"
] | 2025-07-05T06:15:11Z
| 2025-07-09T01:32:39Z
| 6
|
SingleBicycle
|
huggingface/datasets
| 7,669
|
How can I add my custom data to huggingface datasets
|
I want to add my custom dataset to Hugging Face Datasets. Please guide me on how to achieve that.
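For concreteness, here is the minimal flow I think I need, sketched with the `datasets` library (the repo id and column names are placeholders I made up):
```python
from datasets import Dataset

# Build a dataset from in-memory Python data
data = {"text": ["first example", "second example"], "label": [0, 1]}
ds = Dataset.from_dict(data)

# Push it to the Hugging Face Hub (requires `huggingface-cli login` first)
ds.push_to_hub("my-username/my-custom-dataset")
```
Is this the recommended way, or should I use a loading script instead?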
|
https://github.com/huggingface/datasets/issues/7669
|
open
|
[] | 2025-07-04T19:19:54Z
| 2025-07-05T18:19:37Z
| null |
xiagod
|
pytorch/executorch
| 12,221
|
How to build executorch with ANDROID_ABI=armeabi-v7a
|
### 🚀 The feature, motivation and pitch
https://github.com/pytorch/executorch/blob/main/tools/cmake/Utils.cmake#L89
Here, there is no "ANDROID_ABI=armeabi-v7a" option, so if I want to build executorch for ANDROID_ABI=armeabi-v7a, how do I do that?
Thank you very much.
### Alternatives
_No response_
### Additional context
_No response_
### RFC (Optional)
_No response_
cc @larryliu0820 @jathu
|
https://github.com/pytorch/executorch/issues/12221
|
open
|
[
"module: build/install",
"triaged"
] | 2025-07-04T02:22:51Z
| 2025-12-01T07:52:13Z
| null |
barbecacov
|
huggingface/lerobot
| 1,442
|
Trained pi0 policy ignores visual cues
|
I am having an issue in which my trained pi0 policy looks smooth but it completely ignores the camera input. I have tried covering up a camera and the policy still looks smooth! This seems very wrong. I wonder if it is because my images are not normalized correctly? Has anyone else seen this?
Do I need to change the visual "NormalizationMode" for pi0? Seems like this may be a repeat of https://github.com/huggingface/lerobot/issues/1065?
|
https://github.com/huggingface/lerobot/issues/1442
|
open
|
[
"question",
"policies"
] | 2025-07-03T20:13:08Z
| 2025-08-12T09:47:09Z
| null |
kumarhans
|
huggingface/lerobot
| 1,439
|
[QUESTION] run a policy on a real robot
|
Hi there, in the documentation, scripts to teleoperate, record, replay, or evaluate a policy are provided, **but how do you run a policy for inference only on a real robot**? I did not find such a script.
Besides, it may be interesting to add such a script to the documentation as well.
Thank you very much for your help.
|
https://github.com/huggingface/lerobot/issues/1439
|
open
|
[
"question",
"policies"
] | 2025-07-03T18:09:10Z
| 2025-08-12T09:47:27Z
| null |
FaboNo
|
huggingface/smolagents
| 1,512
|
How can we use this benchmark to evaluate local models?
|
examples/smolagents_benchmark/run.py
|
https://github.com/huggingface/smolagents/issues/1512
|
closed
|
[
"enhancement"
] | 2025-07-03T06:17:58Z
| 2025-07-03T08:07:26Z
| null |
OoOPenN
|
pytorch/ao
| 2,477
|
Support running multi-device tests in CI
|
For float8 training, the test_everything.sh script requires multiple GPUs for FSDP/TP tests, so we currently don't run it in CI, as CI isn't configured for multi-device jobs. We should figure out how to run these multi-device tests in CI. This would also be useful for some of our new MoE training parallelism tests.
|
https://github.com/pytorch/ao/issues/2477
|
closed
|
[
"ci",
"float8"
] | 2025-07-02T16:29:47Z
| 2025-07-16T16:31:06Z
| 2
|
danielvegamyhre
|
huggingface/diffusers
| 11,849
|
Can not load fusionx_lora into original wan2.1-14b
|
Hello, I am adding fusionx_lora to the original wan2.1-14b-i2v. My code is as follows:
> pipe = WanImageToVideoPipeline.from_pretrained(my_local_path + "Wan2.1-I2V-14B-480P-Diffusers", vae=vae, image_encoder=image_encoder, torch_dtype=torch.bfloat16)
> pipe.load_lora_weights(
> my_local_path + "Wan14BT2VFusioniX/FusionX_LoRa/Wan2.1_I2V_14B_FusionX_LoRA.safetensors"
> )
But I got some errors:
> File "/mmu_mllm_hdd_2/zuofei/infer_test/lora_infer_multi.py", line 60, in process_image
> pipe.load_lora_weights(
> File "/hetu_group/zuofei/env/wan_infer/lib/python3.12/site-packages/diffusers/loaders/lora_pipeline.py", line 4869, in load_lora_weights
> state_dict = self.lora_state_dict(pretrained_model_name_or_path_or_dict, **kwargs)
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> File "/hetu_group/zuofei/env/wan_infer/lib/python3.12/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
> return fn(*args, **kwargs)
> ^^^^^^^^^^^^^^^^^^^
> File "/hetu_group/zuofei/env/wan_infer/lib/python3.12/site-packages/diffusers/loaders/lora_pipeline.py", line 4796, in lora_state_dict
> state_dict = _convert_non_diffusers_wan_lora_to_diffusers(state_dict)
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> File "/hetu_group/zuofei/env/wan_infer/lib/python3.12/site-packages/diffusers/loaders/lora_conversion_utils.py", line 1564, in _convert_non_diffusers_wan_lora_to_diffusers
> num_blocks = len({k.split("blocks.")[1].split(".")[0] for k in original_state_dict})
> ~~~~~~~~~~~~~~~~~~^^^
> IndexError: list index out of range
Can you tell me how to fix it? Thank you so much!
|
https://github.com/huggingface/diffusers/issues/11849
|
open
|
[] | 2025-07-02T13:48:17Z
| 2025-07-02T13:48:17Z
| 0
|
fzuo1230
|
huggingface/transformers
| 39,169
|
Using Gemma3n with text-only generation requires image dependencies
|
### System Info
- `transformers` version: 4.53.0
- Platform: macOS-15.5-arm64-arm-64bit
- Python version: 3.12.8
- Huggingface_hub version: 0.33.2
- Safetensors version: 0.5.3
- Accelerate version: not installed
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (accelerator?): 2.7.1 (NA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@zucchini-nlp
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I want to use the Gemma3n model in a text-only generation pipeline (without any multimodal inputs). I'm using the Gemma3nForCausalLM because it has only a language modeling head. But when running the script, it fails with an ImportError stating that `AutoImageProcessor` requires the PIL and timm libraries to work. How can I run Gemma3n for text-generation without those image-related dependencies?
```python
from transformers import AutoTokenizer, Gemma3nForCausalLM
import torch
model_id = "google/gemma-3n-e4b"
model = Gemma3nForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)
prompt = "Once upon a time"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_length=30)
response = tokenizer.decode(output[0], skip_special_tokens=True)
print(response)
```
### Expected behavior
I expect the script to run successfully without installing `pillow` and `timm`.
|
https://github.com/huggingface/transformers/issues/39169
|
closed
|
[
"bug"
] | 2025-07-02T07:46:43Z
| 2025-08-01T08:14:26Z
| 6
|
marianheinsen
|
huggingface/lerobot
| 1,429
|
When will release the SmolVLA(2.25B & 0.24b)
|
Hi dear authors,
Thanks for all your wonderful work - SmolVLA!
I wonder, will you release **SmolVLA (2.25B)**? I want to compare its performance with your released version (0.45B).
|
https://github.com/huggingface/lerobot/issues/1429
|
closed
|
[
"question",
"policies"
] | 2025-07-02T03:39:06Z
| 2025-10-11T07:21:57Z
| null |
JuilieZ
|
huggingface/sentence-transformers
| 3,416
|
How to calculate prompt tokens for embedding model encode?
|
I want to calculate the number of input prompt tokens and return it to the user so they know how many tokens they consumed. How can I do that? Could you give me an example?
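For illustration, here is the rough sketch I have in mind, assuming the model exposes its underlying Hugging Face tokenizer as `model.tokenizer` (which I believe is the case for transformer-based models):
```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
texts = ["How many tokens does this prompt use?", "Another input sentence."]

# Count tokens per input with the model's underlying tokenizer
token_counts = [len(model.tokenizer(t)["input_ids"]) for t in texts]
print(token_counts, sum(token_counts))

embeddings = model.encode(texts)
```
Is this the right way to report token usage, or is there an official API for it?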
|
https://github.com/huggingface/sentence-transformers/issues/3416
|
open
|
[] | 2025-07-02T03:27:11Z
| 2025-07-03T07:02:55Z
| null |
gaoxt1983
|
huggingface/sentence-transformers
| 3,414
|
How to fine tune multimodal embedding model?
|
Hi @tomaarsen and Team - hope all is well & thanks for the work.
I used to fine tune some pure text based embedding models using this package and now I would like to fine tune multimodal embedding models such as `llamaindex/vdr-2b-multi-v1` and `jinaai/jina-embeddings-v4`.
I wonder if you can share some insights / relevant documentation / code examples?
Thank you.
|
https://github.com/huggingface/sentence-transformers/issues/3414
|
open
|
[] | 2025-07-01T23:45:04Z
| 2025-07-03T10:25:29Z
| null |
groklab
|
pytorch/pytorch
| 157,393
|
How to compose HSDP with CP?
|
### π Describe the bug
We're trying to compose HSDP with CP following the [torchtitan blog post](https://discuss.pytorch.org/t/distributed-w-torchtitan-breaking-barriers-training-long-context-llms-with-1m-sequence-length-in-pytorch-using-context-parallel/215082) but are running into some issues and it's unclear to us why.
Suppose we have a device mesh with dimensions `["dp", "cp", "ep"]` where `ep` corresponds to expert parallelism. What we want to do is FSDP on `dp+cp` shards for the expert parameters and HSDP (replicate on `dp+cp`, shard on `ep`) for the non-expert parameters.
Our code looks like the following:
```
mesh = DeviceMesh(..., mesh_dim_names=["dp", "cp", "ep"])
fsdp_mesh = mesh["dp", "cp"]._flatten(mesh_dim_name="dp_cp")
hsdp_mesh = mesh["dp_cp", "ep"]
```
Line 3 above fails because "ep" somehow does not exist in the mesh after "dp_cp". I'm not sure if this is a bug or the intended way for DeviceMesh to behave. If the latter, is there any way to use a flattend mesh as the replication dim for HSDP?
### Versions
Collecting environment information...
PyTorch version: 2.7.0+cu128
Is debug build: False
CUDA used to build PyTorch: 12.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: Could not collect
CMake version: version 3.31.6
Libc version: glibc-2.31
Python version: 3.12.8 | packaged by Anaconda, Inc. | (main, Dec 11 2024, 16:31:09) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-1081-aws-x86_64-with-glibc2.31
Is CUDA available: False
CUDA runtime version: 12.1.105
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 48
On-line CPU(s) list: 0-47
Thread(s) per core: 1
Core(s) per socket: 24
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Platinum 8175M CPU @ 2.50GHz
Stepping: 4
CPU MHz: 2499.994
BogoMIPS: 4999.98
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 1.5 MiB
L1i cache: 1.5 MiB
L2 cache: 48 MiB
L3 cache: 66 MiB
NUMA node0 CPU(s): 0-23
NUMA node1 CPU(s): 24-47
Vulnerability Gather data sampling: Unknown: Dependent on hypervisor status
Vulnerability Itlb multihit: KVM: Mitigation: VMX unsupported
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Vulnerable
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI Retpoline
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves ida arat pku ospke
Versions of relevant libraries:
[pip3] numpy==2.2.5
[pip3] nvidia-cublas-cu12==12.8.3.14
[pip3] nvidia-cuda-cupti-cu12==12.8.57
[pip3] nvidia-cuda-nvrtc-cu12==12.8.61
[pip3] nvidia-cuda-runtime-cu12==12.8.57
[pip3] nvidia-cudnn-cu12==9.7.1.26
[pip3] nvidia-cufft-cu12==11.3.3.41
[pip3] nvidia-curand-cu12==10.3.9.55
[pip
|
https://github.com/pytorch/pytorch/issues/157393
|
closed
|
[
"oncall: distributed",
"triaged"
] | 2025-07-01T20:45:27Z
| 2025-07-09T00:10:23Z
| null |
EugenHotaj
|
huggingface/lerobot
| 1,424
|
evaluated trained policy reports 14 pc_success only
|
Trained act policy using
```
python lerobot/scripts/train.py \
--policy.type=act \
--dataset.repo_id=lerobot/act_aloha_sim_insertion_human \
--env.type=aloha \
--output_dir=outputs/train/act_aloha_insertion
```
Question: I think I mistakenly used the prefix `act_` in the `repo_id` but if I don't use it I get this error:
```
$ python lerobot/scripts/train.py --policy.type=act --dataset.repo_id=lerobot/aloha_sim_insertion_human --env.type=aloha --output_dir=outputs/train/act_aloha_insertion
INFO 2025-07-01 05:47:32 ils/utils.py:48 Cuda backend detected, using cuda.
WARNING 2025-07-01 05:47:32 /policies.py:77 Device 'None' is not available. Switching to 'cuda'.
Traceback (most recent call last):
File "/home/user/lerobot/lerobot/scripts/train.py", line 291, in <module>
train()
File "/home/user/lerobot/lerobot/configs/parser.py", line 226, in wrapper_inner
response = fn(cfg, *args, **kwargs)
File "/home/user/lerobot/lerobot/scripts/train.py", line 110, in train
cfg.validate()
File "/home/user/lerobot/lerobot/configs/train.py", line 120, in validate
raise ValueError(
ValueError: 'policy.repo_id' argument missing. Please specify it to push the model to the hub.
```
Using that "act_" prefix in the repo id I attempted to Evaluate it using the command below but it reports `pc_success` being 14% which seems too low?
```
python lerobot/scripts/eval.py \
--policy.path=outputs/train/act_aloha_insertion/checkpoints/last/pretrained_model \
--env.type=aloha \
--eval.batch_size=10 \
--eval.n_episodes=50
```
Detailed output of the above command:
```
$ python lerobot/scripts/eval.py --policy.path=outputs/train/act_aloha_insertion/checkpoints/last/pretrained_model --env.type=aloha --eval.batch_size=10 --eval.n_episodes=50
INFO 2025-07-01 05:33:14 pts/eval.py:467 {'env': {'episode_length': 400,
'features': {'action': {'shape': (14,),
'type': <FeatureType.ACTION: 'ACTION'>},
'agent_pos': {'shape': (14,),
'type': <FeatureType.STATE: 'STATE'>},
'pixels/top': {'shape': (480, 640, 3),
'type': <FeatureType.VISUAL: 'VISUAL'>}},
'features_map': {'action': 'action',
'agent_pos': 'observation.state',
'pixels/top': 'observation.images.top',
'top': 'observation.image.top'},
'fps': 50,
'obs_type': 'pixels_agent_pos',
'render_mode': 'rgb_array',
'task': 'AlohaInsertion-v0'},
'eval': {'batch_size': 10, 'n_episodes': 50, 'use_async_envs': False},
'job_name': 'aloha_act',
'output_dir': PosixPath('outputs/eval/2025-07-01/05-33-14_aloha_act'),
'policy': {'chunk_size': 100,
'device': 'cuda',
'dim_feedforward': 3200,
'dim_model': 512,
'dropout': 0.1,
'feedforward_activation': 'relu',
'input_features': {'observation.images.top': {'shape': (3,
480,
640),
'type': <FeatureType.VISUAL: 'VISUAL'>},
'observation.state': {'shape': (14,),
'type': <FeatureType.STATE: 'STATE'>}},
'kl_weight': 10.0,
'latent_dim': 32,
'license': None,
'n_action_steps': 100,
'n_decoder_layers': 1,
'n_encoder_layers': 4,
'n_heads': 8,
'n_obs_steps': 1,
'n_vae_encoder_layers': 4,
'normalization_mapping': {'ACTION': <NormalizationMode.MEAN_STD: 'MEAN_STD'>,
'STATE': <NormalizationMode.MEAN_STD: 'MEAN_STD'>,
'VISUAL': <NormalizationMode.MEAN_STD: 'MEAN_STD'>},
'optimizer_lr': 1e-05,
'optimizer_lr_backbone': 1e-05,
'optimizer_weight_decay': 0.0001,
'output_features': {'action': {'shape': (14,),
'type': <FeatureType.ACTION: 'ACTION'>}},
'pre_norm': False,
'pretrained_backbone_weights': 'ResNet18_Weights.IMAGENET1K_V1',
'private': None,
'push_to_hub': False,
'replace_final_stride_with_dilation': 0,
'repo_id': None,
'tags': None,
'temporal_ensemble_coeff': None,
'use_amp': False,
'use_vae': True,
'vision_backbone': 'resnet18'},
'seed': 1000}
INFO 2025-07-01 05:33:14 pts/eval.py:476 Output dir: outputs/eval/2025-07-01/05-33-14_aloha_act
INFO 2025-07-01 05:33:14 pts/eval.py:478 Making environment.
INFO 2025-07-01 05:33:14 /__init__.py:84 MUJOCO_GL=%s, attempting to import specified O
|
https://github.com/huggingface/lerobot/issues/1424
|
open
|
[
"question",
"policies"
] | 2025-07-01T12:16:38Z
| 2025-08-12T09:49:05Z
| null |
raul-machine-learning
|
huggingface/lerobot
| 1,421
|
It would help to have a description for the lerobot datasets:
|
For example, [lerobot/aloha_sim_insertion_human](https://huggingface.co/datasets/lerobot/aloha_sim_insertion_human) comes with no description at all.
It'd help to know:
- What makes this data special/interesting
- How to train different models in the simulator
- What we should expect
- What the `_human` suffix means, and how it differs from the `_script` suffix
|
https://github.com/huggingface/lerobot/issues/1421
|
open
|
[
"question",
"dataset"
] | 2025-07-01T10:14:45Z
| 2025-08-12T09:49:27Z
| null |
raul-machine-learning
|
huggingface/lerobot
| 1,419
|
simulator should allow pushing objects around with the mouse interactively
|
Not having this is preventing us from testing, debugging and playing with the robots.
According to the MuJoCo documentation, this feature is available in their simulator, but it is not exposed in lerobot:
```
A related usability feature is the ability to "reach into" the simulation, push objects around and see how the
physics respond. The user selects the body to which the external forces and torques will be applied, and sees
a real-time rendering of the perturbations together with their dynamic consequences. This can be used to debug
the model visually, to test the response of a feedback controller, or to configure the model into a desired pose.
```
Also for an awesome OOTB experience it would be great to have a script that loads a pretrained model and makes the interactive simulation just work.
|
https://github.com/huggingface/lerobot/issues/1419
|
open
|
[
"question",
"simulation"
] | 2025-07-01T09:47:02Z
| 2025-08-12T09:50:18Z
| null |
raul-machine-learning
|
huggingface/lerobot
| 1,418
|
Robot tries to transfer cube even if it failed to pick it up, shouldn't it retry?
|
I am evaluating the following policy:
```
python lerobot/scripts/eval.py --policy.path=lerobot/act_aloha_sim_transfer_cube_human --env.type=aloha --env.task=AlohaTransferCube-v0 --eval.n_episodes=1 --eval.batch_size=1
```
However, the robot fails to pick up the cube but carries on with the task. Shouldn't the robot keep trying until it picks up the cube? See the video:
https://github.com/user-attachments/assets/5ad20353-97bc-4d03-a78d-5f9f149c95f9
|
https://github.com/huggingface/lerobot/issues/1418
|
closed
|
[
"question",
"simulation"
] | 2025-07-01T09:18:38Z
| 2025-10-17T11:57:34Z
| null |
raul-machine-learning
|
pytorch/pytorch
| 157,352
|
[aot_compile]Explanation: Dynamo does not know how to trace the builtin `time.time.`
|
### 🐛 Describe the bug
A graph break error happened when I compiled yolov5 with the torch._export.aot_compile interface. I also tried torch.compile; graph breaks also happened, but it compiled normally. I am not sure whether this is a dynamo bug, or how I can resolve this issue.
### Error logs
# code example:
```
class MyYoulo(torch.nn.Module):
def __init__(self):
super().__init__()
self.youlo = torch.hub.load('ultralytics/yolov5', 'yolov5s')
def forward(self, x):
return self.youlo(x)
with torch.no_grad():
torch.manual_seed(0)
torch._dynamo.config.suppress_errors = True
input_cpu = torch.rand([1, 3, 640, 640])
model_cpu = MyYoulo()
model_cpu.eval()
output_cpu = model_cpu(input_cpu)
device = "cuda"
model = model_cpu.to(device=device)
x = input_cpu.cuda()
example_inputs = (x,)
batch_dim = torch.export.Dim("batch", min=1, max=1024)
model_so_path = torch._export.aot_compile(
model,
example_inputs,
#dynamic_shapes={"x": {0: batch_dim}},
options={"aot_inductor.output_path": os.path.join(os.getcwd(), "libyolo.so")},
)
```
# backtrace
```
File "/root/workspace/youlo/youlo.py", line 35, in <module>
model_so_path = torch._export.aot_compile(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_export/__init__.py", line 133, in aot_compile
gm = _export_to_torch_ir(
^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/export/_trace.py", line 739, in _export_to_torch_ir
gm_torch_level, _ = torch._dynamo.export(
^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py", line 1677, in inner
result_traced = opt_f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py", line 659, in _fn
raise e.with_traceback(None) from None
torch._dynamo.exc.Unsupported: Attempted to call function marked as skipped
Explanation: Dynamo does not know how to trace the builtin `time.time.` This function is either a Python builtin (e.g. _warnings.warn) or a third-party C/C++ Python extension (perhaps created with pybind).
Hint: If it is a Python builtin, please file an issue on GitHub so the PyTorch team can add support for it and see the next case for a workaround.
Hint: If it is a third-party C/C++ Python extension, please either wrap it into a PyTorch-understood custom operator (see https://pytorch.org/tutorials/advanced/custom_ops_landing_page.html for more details) or, if it is traceable, use `torch.compiler.allow_in_graph`.
```
### Versions
version: torch-2.7.0+cu128
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4
|
https://github.com/pytorch/pytorch/issues/157352
|
closed
|
[
"oncall: pt2",
"module: dynamo",
"oncall: export"
] | 2025-07-01T05:58:10Z
| 2025-07-04T06:23:42Z
| null |
duanmu0228
|
pytorch/examples
| 1,362
|
Resnet50 on single node with 8 GPUs, all the parameters are default. why the result is different ?
|
Hello, I used the command "python main.py -a resnet50 --dist-url 'tcp://127.0.0.1:60000/' --dist-backend 'nccl' --multiprocessing-distributed --world-size 1 --rank 0 /my_data_dir/" to train and test resnet50 on a single node with 8 GPUs. But I got Acc@1 75.694 / Acc@5 92.704, which is different from the result presented at https://github.com/facebookarchive/fb.resnet.torch/blob/master/pretrained/README.md (ResNet-50 error rate TOP1: 24.01, TOP5: 7.02). All the parameters are default. Why is the result different?
|
https://github.com/pytorch/examples/issues/1362
|
open
|
[] | 2025-07-01T04:37:58Z
| 2025-07-01T04:37:58Z
| 0
|
sdwhzh
|
huggingface/transformers
| 39,137
|
ImportError: cannot import name 'pipeline' from 'transformers'
|
### System Info
I am using Databricks notebook.
Databricks runtime: 13.3 LTS (includes Apache Spark 3.4.1, Scala 2.12)
### Who can help?
@Rocketknight1 @SunMarc @zach-huggingface
### Information
- [x] The official example scripts
- [x] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Here is the code:
```
%pip install --upgrade torch transformers accelerate deepspeed bitsandbytes huggingface_hub
dbutils.library.restartPython()
import os
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline
```
Error:
`ImportError: cannot import name 'pipeline' from 'transformers' (/local_disk0/.ephemeral_nfs/envs/pythonEnv-a13cd5c4-d035-4d04-87bd-75088348617d/lib/python3.10/site-packages/transformers/__init__.py)`
Python: 3.10.12
installed packages:
transformers== 4.53.0
huggingface_hub==0.33.1
torch==2.7.1+cu126
accelerate==1.8.1
deepspeed==0.17.1
bitsandbytes==0.46.0
These are all up-to-date versions for all of these packages. What is the problem?
### Expected behavior
Import without error.
|
https://github.com/huggingface/transformers/issues/39137
|
closed
|
[
"Usage",
"bug"
] | 2025-06-30T18:49:54Z
| 2025-10-23T00:53:19Z
| 14
|
atabari-bci
|
huggingface/lerobot
| 1,407
|
Can a user read the current signals from the lerobot?
|
Can a user read the current signals from the LeRobot?
|
https://github.com/huggingface/lerobot/issues/1407
|
open
|
[
"question",
"sensors"
] | 2025-06-30T10:05:26Z
| 2025-08-12T09:51:06Z
| null |
Frank-ZY-Dou
|
huggingface/optimum
| 2,314
|
How to set the dynamic input sizes for decoder_with_past_model.onnx of NLLB
|
Dear author,
I'm a beginner in optimum, so this question may be an elementary one. I used optimum to export decoder_with_past_model.onnx from nllb-200-distilled-600M. The resulting ONNX model has many inputs with dynamic shapes. Now I intend to overwrite the inputs with static sizes. However, I'm not sure about the correct settings.
There are 4 arguments to be determined and I set:
batch_size = 1
encoder_sequence_length = 200 (same with max_length)
past_decoder_sequence_length = 200
encoder_sequence_length_out = 200
Any suggestions are appreciated. Big thanks.

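For reference, here is the sketch I plan to use to overwrite those dynamic dimensions with static values via the `onnx` Python API (the dimension names are the ones shown in the exported model, and the values are the guesses listed above):
```python
import onnx

static_dims = {
    "batch_size": 1,
    "encoder_sequence_length": 200,
    "past_decoder_sequence_length": 200,
    "encoder_sequence_length_out": 200,
}

model = onnx.load("decoder_with_past_model.onnx")

# Replace every named dynamic dimension with its static value
for value_info in list(model.graph.input) + list(model.graph.output):
    for dim in value_info.type.tensor_type.shape.dim:
        if dim.dim_param in static_dims:
            fixed = static_dims[dim.dim_param]
            dim.ClearField("dim_param")
            dim.dim_value = fixed

onnx.checker.check_model(model)
onnx.save(model, "decoder_with_past_model_static.onnx")
```
My main question is whether the four values above are consistent with how the decoder-with-past model consumes them.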
|
https://github.com/huggingface/optimum/issues/2314
|
closed
|
[
"Stale"
] | 2025-06-30T06:37:50Z
| 2025-08-07T02:17:43Z
| null |
liamsun2019
|
pytorch/TensorRT
| 3,637
|
❓ [Question] Why is `torch.bfloat16` excluded from the `allowed_casts` set?
|
https://github.com/pytorch/TensorRT/blob/a66241158dc33a96138ac768a9e1facf0cae3594/py/torch_tensorrt/dynamo/conversion/aten_ops_converters.py#L1030-L1037
Is there a specific reason why `torch.bfloat16` is not included in the `allowed_casts` set within the `to_copy_dtype_validator` function?
Plus, this causes graph partitioning when performing a `aten.ops._to_copy` operation to `torch.bfloat16`. I'm wondering if this could potentially impact performance.
|
https://github.com/pytorch/TensorRT/issues/3637
|
closed
|
[
"question"
] | 2025-06-30T02:24:47Z
| 2025-07-04T00:01:16Z
| null |
junstar92
|
huggingface/transformers
| 39,114
|
Is there a way to force it to use ASCII based progress bar and not the ipython widget one?
|
When loading models, I would prefer an ASCII-based progress bar rather than the IPython widget one.
|
https://github.com/huggingface/transformers/issues/39114
|
open
|
[
"Feature request"
] | 2025-06-29T22:41:19Z
| 2025-07-07T13:20:13Z
| 0
|
weathon
|
huggingface/transformers
| 39,105
|
How to use other acceleration apis of npu?
|
### Feature request
I noticed that transformers now support using flash attention directly in the npu by [```npu_flash_attention.py```](https://github.com/huggingface/transformers/pull/36696). There are many other acceleration apis that can be used in npu, such as shown in [doc](https://www.hiascend.com/document/detail/zh/Pytorch/700/ptmoddevg/trainingmigrguide/performance_tuning_0028.html).
How can we use them directly in transformers? How to switch seamlessly between different devices?
### Motivation
Request to integrate other acceleration apis of npu in transformers. If this can be done, the ease of using transformers will be greatly improved in npu.
|
https://github.com/huggingface/transformers/issues/39105
|
closed
|
[
"Feature request"
] | 2025-06-29T08:26:29Z
| 2026-01-04T07:23:26Z
| null |
zheliuyu
|
huggingface/candle
| 3,013
|
Word Timestamp for whisper
|
Hi, is there no way to get word timestamps using Whisper in candle?
The example successfully demonstrates retrieving segment timestamps, but how would one retrieve word timestamps?
When I look into the Python code, they seem to pass a `word_timestamp=True` argument while transcribing and get the result with the `base` model.
Is there any workaround, or can someone point me towards how to achieve this, please?
|
https://github.com/huggingface/candle/issues/3013
|
open
|
[] | 2025-06-29T01:16:38Z
| 2025-06-29T23:47:39Z
| 2
|
bp7968h
|
huggingface/trl
| 3,662
|
What is the point of steps_per_gen in GRPO Trainer
|
Hello, can you please explain what the point of steps_per_gen is in the GRPO training config when we already have num_iterations? The policy update logic could then simply be:
if num_iterations = 1, generations and model updates are on-policy (per_token_logps = old_per_token_logps);
when num_iterations > 1, the same generation batch is used multiple times, and per_token_logps will be different from old_per_token_logps for all but the first time a generation batch is used.
Why is steps_per_gen needed? It just makes the overall batch generation and splitting logic unnecessarily difficult to understand.
|
https://github.com/huggingface/trl/issues/3662
|
open
|
[
"β question",
"π GRPO"
] | 2025-06-28T20:08:01Z
| 2025-07-25T08:05:50Z
| null |
ankur6ue
|
pytorch/torchtitan
| 1,355
|
Llama4 TP bug: DTensor local tensor dtype does not match DTensorSpec tensor meta dtype, causing meta registration error
|
### Bug description
When I apply FSDP+TP to the Llama4 debug model using plain eager bf16 training, the MoE routed experts weights are DTensors. The local tensor dtype is bf16, but the Dtensor spec tensor meta dtype (`self.w1._spec.tensor_meta.dtype`) is fp32. This mismatch seems to cause the meta registration error below.
### Repro command
```
NGPU=4 CONFIG_FILE="./torchtitan/experiments/llama4/train_configs/debug_model.toml" ./run_train.sh --training.steps=100 --parallelism.tensor_parallel_degree=2
```
### Meta registration error
```
File "/home/danvm/.conda/envs/torchtitan/lib/python3.13/site-packages/torch/_meta_registrations.py", line 7527, in _meta_grouped_mm_common
torch._check(
~~~~~~~~~~~~^
mat_a.dtype == torch.bfloat16 and mat_b.dtype == torch.bfloat16,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
lambda: f"Expected inputs of BF16 type but got mat_a.dtype={mat_a.dtype} and mat_b.dtype={mat_b.dtype}.",
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/home/danvm/.conda/envs/torchtitan/lib/python3.13/site-packages/torch/__init__.py", line 1702, in _check
_check_with(RuntimeError, cond, message)
~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/danvm/.conda/envs/torchtitan/lib/python3.13/site-packages/torch/__init__.py", line 1684, in _check_with
raise error_type(message_evaluated)
RuntimeError: Expected inputs of BF16 type but got mat_a.dtype=torch.bfloat16 and mat_b.dtype=torch.float32.
```
### PDB log
The following pdb commands/log show inspection of `self.w1` in the MoE layer, confirming the DTensor's local tensor dtype is bf16, yet the DTensorSpec has tensor meta dtype of fp32. This seems to be what is causing the meta registration error mismatch.
```
[rank0]: 86 -> torch.distributed.breakpoint()
[rank0]: 87 h = F.silu(torch._grouped_mm(x, self.w1, offs=offsets))
[rank0]: 88 h = h * torch._grouped_mm(x, self.w3, offs=offsets)
[rank0]: 89 out = torch._grouped_mm(h, self.w2, offs=offsets)
[rank0]: 90
[rank0]: 91 return out
self.w1
[rank0]:(Pdb) [rank0]:DTensor(local_tensor=tensor([[[-0.0050, -0.0244, 0.0243, ..., 0.0317, 0.0069, -0.0222],
[rank0]: [-0.0125, 0.0201, -0.0250, ..., 0.0376, 0.0055, -0.0094],
[rank0]: [-0.0045, -0.0300, -0.0115, ..., -0.0493, -0.0259, 0.0117],
[rank0]: ...,
[rank0]: [-0.0112, -0.0012, -0.0051, ..., -0.0104, 0.0087, -0.0325],
[rank0]: [ 0.0209, 0.0086, 0.0109, ..., -0.0430, -0.0036, 0.0359],
[rank0]: [ 0.0110, -0.0234, -0.0066, ..., -0.0238, 0.0148, -0.0304]],
[rank0]:
[rank0]: [[-0.0168, -0.0038, 0.0179, ..., 0.0076, -0.0461, -0.0182],
[rank0]: [-0.0109, -0.0120, 0.0427, ..., -0.0027, -0.0048, -0.0131],
[rank0]: [-0.0156, 0.0018, -0.0083, ..., 0.0189, 0.0309, 0.0066],
[rank0]: ...,
[rank0]: [-0.0021, -0.0231, 0.0132, ..., -0.0095, -0.0050, -0.0168],
[rank0]: [-0.0422, 0.0035, 0.0017, ..., 0.0339, 0.0195, 0.0003],
[rank0]: [ 0.0183, 0.0415, 0.0552, ..., 0.0084, 0.0159, 0.0229]],
[rank0]:
[rank0]: [[ 0.0036, -0.0337, 0.0398, ..., 0.0027, -0.0219, 0.0043],
[rank0]: [-0.0107, -0.0270, 0.0166, ..., 0.0044, -0.0030, 0.0432],
[rank0]: [ 0.0233, 0.0203, 0.0106, ..., -0.0018, -0.0118, -0.0060],
[rank0]: ...,
[rank0]: [-0.0247, -0.0038, -0.0322, ..., 0.0172, 0.0156, -0.0047],
[rank0]: [-0.0225, 0.0289, 0.0299, ..., 0.0025, -0.0221, 0.0134],
[rank0]: [ 0.0093, 0.0255, -0.0039, ..., 0.0045, -0.0226, -0.0170]],
[rank0]:
[rank0]: ...,
[rank0]:
[rank0]: [[-0.0120, -0.0054, -0.0262, ..., 0.0086, -0.0012, -0.0043],
[rank0]: [-0.0192, -0.0245, 0.0143, ..., -0.0083, 0.0111, 0.0067],
[rank0]: [ 0.0220, -0.0182, 0.0442, ..., 0.0008, 0.0240, 0.0167],
[rank0]: ...,
[rank0]: [ 0.0165, -0.0152, 0.0175, ..., 0.0027, 0.0120, 0.0100],
[rank0]: [ 0.0050, -0.0135, 0.0160, ..., 0.0311, 0.0106, 0.0571],
[rank0]: [ 0.0199, -0.0073, 0.0215, ..., 0.0131, 0.0327, 0.0097]],
[rank0]:
[rank0]: [[ 0.0113, 0.0044, -0.0234, ..., 0.0009, 0.0026, -0.0031],
[rank0]: [ 0.0059, -0.0195, -0.0089, ..., 0.0269, -0.0195, 0.0033],
[rank0]: [ 0.0366, 0.0199, 0.0055, ..., -0.0400, -0.0101, -0.0386],
[rank0]: ...,
[rank0]: [-0.0040, -0.0228, -0.0114, ..., -0.0342, -0.0032, -0.0157],
[rank0]: [ 0.0277, -0.0120, -0.0300, ..., 0.0079, 0.0038, 0.0342],
[rank0]: [-0.0057, 0.0148, -0.0048, ..., -0.0192, -0.0291, 0.0187]],
[rank0]:
[rank0]: [[-0.0291, -0.0271, 0.0058, ..., 0.0035, 0.0095, 0.0045],
[rank0]: [ 0.0508, 0.0175, -0.0264, ..., 0.0070, -0.0014, -0.0064],
[rank0]: [
|
https://github.com/pytorch/torchtitan/issues/1355
|
closed
|
[
"bug"
] | 2025-06-28T05:31:22Z
| 2025-08-21T03:23:49Z
| 2
|
danielvegamyhre
|
pytorch/ao
| 2,456
|
How to not decompose the choose_qparams_affine call_func
|
Hi,
In the current v0.11.0, after torch.export.export() I have the graph below:
```
(Pdb) print(ep.graph)
graph():
%linear1_weight : [num_users=1] = get_attr[target=linear1.weight]
%x : [num_users=2] = placeholder[target=x]
%choose_qparams_affine : [num_users=2] = call_function[target=torch.ops.torchao.choose_qparams_affine.default](args = (%x, SYMMETRIC, [2, 32], torch.float8_e4m3fn, -448, 448, 1.1920928955078125e-07, torch.float32, None, True, NONE), kwargs = {})
%getitem : [num_users=2] = call_function[target=operator.getitem](args = (%choose_qparams_affine, 0), kwargs = {})
%getitem_1 : [num_users=0] = call_function[target=operator.getitem](args = (%choose_qparams_affine, 1), kwargs = {})
%quantize_affine : [num_users=1] = call_function[target=torch.ops.torchao.quantize_affine.default](args = (%x, [2, 32], %getitem, None, torch.float8_e4m3fn, -448, 448, NONE), kwargs = {})
%reshape : [num_users=1] = call_function[target=torch.ops.aten.reshape.default](args = (%quantize_affine, [-1, 32]), kwargs = {})
%numpy_t : [num_users=1] = call_function[target=torch.ops.aten.numpy_T.default](args = (%access_subclass_inner_tensor_default_72,), kwargs = {})
%_scaled_mm : [num_users=1] = call_function[target=torch.ops.aten._scaled_mm.default](args = (%reshape, %numpy_t, %getitem, %access_subclass_inner_tensor_default_73, None, None, torch.float32, True), kwargs = {})
%reshape_1 : [num_users=1] = call_function[target=torch.ops.aten.reshape.default](args = (%_scaled_mm, [2, 16]), kwargs = {})
return (reshape_1,)
```
However, if I use the latest torchao nightly, I find that the choose_qparams_affine call_function is decomposed into a set of aten ops, which is probably introduced by
https://github.com/pytorch/ao/commit/8940aa72b182afe70f95e33500f01fc270c9f7cd#diff-d2a11602a79e83305208472f1abe6a4106f02ce62a7f9524007181813863fcf6
Is there a way to avoid decomposing the choose_qparams_affine call_function?
Or, from the decomposed ep.graph, how can I recover the undecomposed nodes?
example code:
```
import torch
from torchao.quantization.quant_api import (
quantize_,
Float8DynamicActivationFloat8WeightConfig
)
class SimpleNetwork(torch.nn.Module):
def __init__(self):
super(SimpleNetwork, self).__init__()
self.linear = torch.nn.Linear(in_features=32, out_features=16, bias=False)
def forward(self, x):
return self.linear(x)
model= SimpleNetwork().eval().cuda()
input = torch.randn(2, 32).cuda()
config = Float8DynamicActivationFloat8WeightConfig()
quantize_(model, config)
ep = torch.export.export(model, (input,), strict=False)
```
|
https://github.com/pytorch/ao/issues/2456
|
open
|
[] | 2025-06-27T22:23:33Z
| 2025-07-25T18:26:32Z
| null |
lanluo-nvidia
|
huggingface/lerobot
| 1,399
|
calibrate.py for only follower
|
The calibrate.py file doesn't work for setting up the motors for the follower arm, as there aren't enough parameters for the function to run. Has anyone made an adaptation of the calibrate file that doesn't take the teleop into consideration?
|
https://github.com/huggingface/lerobot/issues/1399
|
open
|
[
"question",
"teleoperators"
] | 2025-06-27T20:53:47Z
| 2025-08-12T09:51:53Z
| null |
ramallis
|
huggingface/transformers
| 39,091
|
`transformers`' dependency on `sentencepiece` blocks use on windows in python 3.13
|
### System Info
Due to
* changes in Python 3.13,
* an incompatibility in `sentencepiece`,
* `transformers` dependency on `sentencepiece`,
`transformers` cannot be easily installed under windows + py3.13, and does not work as a dependency of other packages in this environment
There are multiple issues and a merged PR on sentencepiece (https://github.com/google/sentencepiece/pull/1084) from Feb 26 2025 but no release has been forthcoming
### Who can help?
* people currently using `sentencepiece` in `transformers` code they own
* people determining what the scope of `transformers`' OS & python support is
* `sentencepiece` pypi maintainers
### Reproduction
1. Be on windows
2. Be on python 3.13
3. Try to install current `transformers` from pypi
4. If you get this far, use any function importing `sentencepiece`, e.g. loading an `xlm_roberta` model
### Expected behavior
Code doesn't raise exception
|
https://github.com/huggingface/transformers/issues/39091
|
closed
|
[
"Usage"
] | 2025-06-27T15:23:57Z
| 2025-07-03T16:02:47Z
| 5
|
leondz
|
huggingface/transformers
| 39,073
|
Inefficient default GELU implementation in GPT2
|
While profiling the HuggingFace GPT2 model, I found that the default GELU backend used is NewGELUActivation, which is inefficient in most cases. Instead of using a fused CUDA kernel, NewGELUActivation executes multiple separate PyTorch-level operators, leading to unnecessary kernel launches and memory overhead.
```python
# activations.py:L46
class NewGELUActivation(nn.Module):
def forward(self, input: Tensor) -> Tensor:
return 0.5 * input * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (input + 0.044715 * torch.pow(input, 3.0))))
```
Is there a reason why NewGELUActivation is still used as the default for GPT2, rather than switching to nn.functional.gelu or another fused alternative?
I'd be happy to share profiler traces or help test a patch if helpful.
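For what it's worth, a quick way to test the idea (just a sketch; I'm assuming the tanh-approximate variants are numerically close enough to swap) is to override `activation_function` so that ACT2FN resolves to the fused PyTorch GELU:
```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# "gelu_pytorch_tanh" maps to torch.nn.functional.gelu(approximate="tanh"),
# a single kernel, instead of NewGELUActivation's chain of elementwise ops.
model = GPT2LMHeadModel.from_pretrained("gpt2", activation_function="gelu_pytorch_tanh")
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")

inputs = tokenizer("Profiling the activation swap", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
```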
|
https://github.com/huggingface/transformers/issues/39073
|
closed
|
[] | 2025-06-27T09:07:39Z
| 2025-08-12T03:35:13Z
| 4
|
null-pointer-access
|
huggingface/diffusers
| 11,816
|
set_adapters performance degrades with the number of inactive adapters
|
### Describe the bug
### Goal
Build an image-generation service with `StableDiffusionXLPipeline` that:
1. Keeps ~50 LoRA adapters resident in GPU VRAM.
2. For each request:
• activate **≤ 5** specific LoRAs via `pipeline.set_adapters(...)`
• run inference
• deactivate them (ready for the next request).
### Issue
`pipeline.set_adapters()` becomes progressively slower the more unique LoRAs have ever been loaded,
even though each call still enables only up to five adapters.
| # LoRAs ever loaded | `set_adapters()` time (s) |
|---------------------|---------------------------|
| 3 | ~ 0.1031 |
| 6 | ~ 0.1843 |
| 9 | ~ 0.2614 |
| 12 | ~ 0.3522 |
| 45 | ~ 1.2470 |
| 57 | ~ 1.5435 |
### What I've tried
1. **Load LoRAs from disk for every request** ~ 0.8 s/LoRA, too slow.
2. **Keep LoRAs in RAM (`SpooledTemporaryFile`) + `pipeline.delete_adapter()`** → roughly as slow as (1).
3. **Keep all 50 LoRAs on the GPU** and just switch with `set_adapters()` → fastest so far, but still shows the O(N)-style growth above.
### Question
Is this increasing latency expected?
Is there a recommended pattern for caching many LoRAs on the GPU and switching between small subsets without paying an O(total LoRAs) cost every time?
Any guidance (or confirmation it's a current limitation) would be greatly appreciated!
### Reproduction
<details>
<summary>Code</summary>
``` Minimal example
import os
import time
from typing import List
from pydantic import BaseModel
from diffusers import StableDiffusionXLPipeline, AutoencoderTiny
import torch
from diffusers.utils import logging
logging.disable_progress_bar()
logging.set_verbosity_error()
pipeline = None
class Lora(BaseModel):
name: str
strength: float
def timeit(func):
def wrapper(*args, **kwargs):
start = time.time()
result = func(*args, **kwargs)
end = time.time()
duration = end - start
print(f"{func.__name__} executed in {duration:.4f} seconds")
return result
return wrapper
@timeit
def load_model():
pipeline = StableDiffusionXLPipeline.from_pretrained(
"stabilityai/stable-diffusion-xl-base-1.0",
torch_dtype=torch.float16,
vae=AutoencoderTiny.from_pretrained(
'madebyollin/taesdxl',
use_safetensors=True,
torch_dtype=torch.float16,
)
).to("cuda")
pipeline.set_progress_bar_config(disable=True)
return pipeline
@timeit
def set_adapters(pipeline, adapter_names, adapter_weights):
pipeline.set_adapters(
adapter_names=adapter_names,
adapter_weights=adapter_weights,
)
@timeit
def fuse_lora(pipeline):
pipeline.fuse_lora()
@timeit
def inference(pipeline, req, generator=None):
return pipeline(
prompt=req.prompt,
negative_prompt=req.negative_prompt,
width=req.width,
height=req.height,
num_inference_steps=req.steps,
guidance_scale=req.guidance_scale,
generator=generator,
).images
def apply_loras(pipeline, loras: list[Lora]) -> str:
if not loras or len(loras) == 0:
pipeline.disable_lora()
return
pipeline.enable_lora()
for lora in loras:
try:
pipeline.load_lora_weights(
"ostris/super-cereal-sdxl-lora",
weight_name="cereal_box_sdxl_v1.safetensors",
adapter_name=lora.name,
token=os.getenv("HUGGINGFACE_HUB_TOKEN", None),
)
except ValueError:
continue # LoRA already loaded, skip
except Exception as e:
print(f"Failed to load LoRA {lora}: {e}")
continue
set_adapters(
pipeline,
adapter_names=[lora.name for lora in loras],
adapter_weights=[lora.strength for lora in loras],
)
fuse_lora(pipeline)
return
def generate_images(req, pipeline):
generator = torch.Generator(device="cuda").manual_seed(42)
apply_loras(pipeline, req.loras)
images = inference(
pipeline,
req,
generator=generator,
)
pipeline.unfuse_lora()
return images
class GenerationRequest(BaseModel):
prompt: str
loras: List[Lora] = []
negative_prompt: str = ""
width: int = 512
height: int = 512
steps: int = 30
guidance_scale: float = 7
def test_lora_group(pipeline, lora_group: List[Lora], group_number: int):
test_req = GenerationRequest(
prompt="a simple test image",
loras=[Lora(name=lora_name, strength=0.8) for lora_name in lora_group],
width=256,
height=256,
steps=10,
)
try:
generate_images(test_req, pipeline)
return True, lora_group
except Exception as e:
return Fa
|
https://github.com/huggingface/diffusers/issues/11816
|
closed
|
[
"bug"
] | 2025-06-26T22:27:54Z
| 2025-09-29T14:33:13Z
| 27
|
hrazjan
|
huggingface/lerobot
| 1,393
|
motor configuration request - one motor at a time like configure_motors
|
I like the new process generally but I think the ability to configure a single motor was valuable (e.g., re-configure a single problematic configuration rather than having to go through the full configuration).
In addition to the current process, it would be nice if we could bring that per-motor functionality forward, maybe the ability to pass a single motor ID in `lerobot.setup_motor`?
ref: https://huggingface.co/docs/lerobot/en/so101#2-set-the-motors-ids-and-baudrates
|
https://github.com/huggingface/lerobot/issues/1393
|
open
|
[
"question",
"robots"
] | 2025-06-26T19:27:36Z
| 2025-08-12T09:52:30Z
| null |
brainwavecoder9
|
huggingface/text-generation-inference
| 3,277
|
Rubbish responses by Llama-3.3-70B-Instruct when message API is enabled.
|
### System Info
TGI endpoint deployed on AWS SageMaker using the 3.2.3 image version.
The image URI is `763104351884.dkr.ecr.us-east-1.amazonaws.com/huggingface-pytorch-tgi-inference:2.6.0-tgi3.2.3-gpu-py311-cu124-ubuntu22.04`
The environment is:
```python
env = {'HF_MODEL_ID': 'meta-llama/Llama-3.3-70B-Instruct',
'HF_TASK': 'text-generation',
'SM_NUM_GPUS': '8',
'MAX_INPUT_LENGTH': '2048',
'MAX_TOTAL_TOKENS': '4096',
'MAX_BATCH_PREFILL_TOKENS': '4096',
'HUGGING_FACE_HUB_TOKEN': None,
'MESSAGES_API_ENABLED': 'true',
'ENABLE_PREFILL_LOGPROBS': 'false'
}
Note the **MESSAGES_API_ENABLED** above.
```
Deployed using the AWS Python SDK:
```python
from sagemaker.huggingface.model import HuggingFaceModel
HuggingFaceModel(
env=env,
image_uri=image_uri,
name=params.endpoint_name,
role=get_my_sagemaker_execution_role(),
)
```
Deployed on a ml.g5.48xlarge machine.
### Information
- [ ] Docker
- [ ] The CLI directly
### Tasks
- [ ] An officially supported command
- [ ] My own modifications
### Reproduction
Using the SageMaker Python SDK, when invoking using a manually rendered chat template, I get the following response:
```python
from transformers import AutoTokenizer
from sagemaker.huggingface.model import HuggingFacePredictor
# define messages
message_dict = [{'role': 'user', 'content': 'Who is the president of the United States?'},
{'role': 'assistant',
'content': 'The current president of the United States is Donald Trump.'},
{'role': 'user',
'content': (
"Your task is to rewrite the given question in a context independent manner.\n"
"Here are some examples:\n\n"
"Example 1:\n"
"Q: What is the capital of France?\n"
"A: Paris?\n"
"Q: How many people live there?\n"
"Rewrite: How many people live in Paris?\n\n"
"Example 2:\n"
"Q: Do I need a visa to travel to the United States?\n"
"A: Yes, you need a visa to travel to the United States.\n"
"Q: What is the process to get a visa?\n"
"Rewrite: What is the process to get a visa for the United States?\n\n"
"Now it's your turn:\n"
"Q: Who is the president of the United States?\n"
"A: The current president of the United States is Donald Trump.\n"
"Q: When was he elected?\n"
)},
{'role': 'assistant', 'content': 'Rewrite: '}]
# construct predictor
pred = HuggingFacePredictor(endpoint_name=my_endpoint_name, sagemaker_session=get_my_sagemaker_session())
# render the messages to a string
tok = AutoTokenizer.from_pretrained(setup_params.llm_name)
rendered_messages = tok.apply_chat_template(prompt.messages.model_dump(), tokenize=False,
# invoke the predictor
add_generation_prompt=False, continue_final_message=True)
resp = pred.predict({"inputs": rendered_messages})
```
The response is
```python
[{'generated_text': "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\nCutting Knowledge Date: December 2023\nToday Date: 26 Jul 2024\n\n<|eot_id|><|start_header_id|>user<|end_header_id|>\n\nWho is the president of the United States?<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\nThe current president of the United States is Donald Trump.<|eot_id|><|start_header_id|>user<|end_header_id|>\n\nYour task is to rewrite the given question in a context independent manner.\nHere are some examples:\n\nExample 1:\nQ: What is the capital of France?\nA: Paris?\nQ: How many people live there?\nRewrite: How many people live in Paris?\n\nExample 2:\nQ: Do I need a visa to travel to the United States?\nA: Yes, you need a visa to travel to the United States.\nQ: What is the process to get a visa?\nRewrite: What is the process to get a visa for the United States?\n\nNow it's your turn:\nQ: Who is the president of the United States?\nA: The current president of the United States is Donald Trump.\nQ: When was he elected?<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\nRewrite: When was Donald Trump elected?"}]
```
Note that the suffix after "Rewrite: " is reasonable - it's the query rewritten to be context independent.
When using message-api directly, I get something radically different:
```python
pred.predict({"messages": message_dict})
```
the output is:
```
{'object': 'chat.completion',
'id': '',
'created': 1750919575,
'model': 'meta-llama/Llama-3.3-70B-Instruct',
'system_fingerprint': '3.2.3-sha-a1f3ebe',
'choices': [{'index': 0,
'message': {'role': 'assistant',
'content': ' What is the process to get a visa to travel to the United States?\n\nHere is the given question: \nWho is the president of the United States?\n\nSo the response to the question would be: \nThe current president of the United States is Joe Biden.\n\nQ: How long has he been in office?\nRewrite: How long has Joe Biden been in office?'},
'logprobs': None,
'finish_reason': 'stop'}],
'usage':
|
https://github.com/huggingface/text-generation-inference/issues/3277
|
open
|
[] | 2025-06-26T06:49:31Z
| 2025-06-26T06:56:22Z
| 0
|
alexshtf
|
pytorch/torchtitan
| 1,344
|
Issue reproducing Float8 performance benchmark
|
### Bug description
I'm looking at https://github.com/pytorch/torchtitan/blob/main/benchmarks/llama3_h100_202412_torchtitan.md. Specifically, this table:
<img width="1170" alt="Image" src="https://github.com/user-attachments/assets/a1d26639-1d79-4992-ae17-9f37c86828f2" />
I'm not certain what the repro command for this is. From https://github.com/pytorch/torchtitan/blob/main/docs/float8.md, I went ahead with `CONFIG_FILE="./torchtitan/models/llama3/train_configs/llama3_8b.toml" ./run_train.sh --model.converters="float8" --float8.enable_fsdp_float8_all_gather --float8.precompute_float8_dynamic_scale_for_fsdp --float8.force_recompute_fp8_weight_in_bwd --training.compile`.
Made the following changes to my llama3 toml: https://gist.github.com/xmfan/53fca4ed56cf7e713a282ce6e1922e9e
- seq_len = 32768
- data_parallel_shard_degree = 8 (for 8 gpu fsdp)
- activation_checkpoint.mode = "full"
- steps = 400 (just for a shorter run)
But my peak memory of the run seems way lower than the one quoted in the perf benchmarks, which makes me think I did something wrong. @tianyu-l tried these settings, and got a hang instead.
Are these the correct settings for this benchmark?
https://gist.github.com/xmfan/5a6b6daa0968aed7499ef364dae61420
### Versions
latest torchao (`USE_CPP=0 python -m pip install git+https://github.com/pytorch/ao.git`), pytorch 06/25 nightly, torchtitan main
|
https://github.com/pytorch/torchtitan/issues/1344
|
open
|
[
"documentation"
] | 2025-06-26T04:22:28Z
| 2025-07-10T01:53:47Z
| 6
|
xmfan
|
huggingface/peft
| 2,615
|
How can I fine-tune the linear layers of the LLM part in Qwen2.5_VL 3B?
|
I only want to fine-tune the linear layers in the LLM part of Qwen2.5_VL 3B. The LoRA target modules are as follows:
```
target_modules: List[str] = field(default_factory=lambda: [
'self_attn.q_proj',
'self_attn.k_proj',
'self_attn.v_proj',
'self_attn.o_proj',
'mlp.gate_proj',
'mlp.up_proj',
'mlp.down_proj',
])
```
However, there's an issue: the vision encoder part of Qwen2.5_VL 3B also contains modules named `mlp.gate_proj`, `mlp.up_proj`, and `mlp.down_proj`, as shown here:
```
"visual.blocks.0.mlp.down_proj.bias": "model-00001-of-00002.safetensors",
"visual.blocks.0.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
"visual.blocks.0.mlp.gate_proj.bias": "model-00001-of-00002.safetensors",
"visual.blocks.0.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
"visual.blocks.0.mlp.up_proj.bias": "model-00001-of-00002.safetensors",
"visual.blocks.0.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
```
This causes the `mlp.gate_proj`, `mlp.up_proj`, and `mlp.down_proj` in the vision encoder to also be involved in the fine-tuning.
For example, the 31st block is as follows:
```
visual.blocks.31.mlp.gate_proj.lora_A.default.weight
visual.blocks.31.mlp.gate_proj.lora_B.default.weight
visual.blocks.31.mlp.up_proj.lora_A.default.weight
visual.blocks.31.mlp.up_proj.lora_B.default.weight
visual.blocks.31.mlp.down_proj.lora_A.default.weight
visual.blocks.31.mlp.down_proj.lora_B.default.weight
```
Finally, I only want to fine-tune the linear layers in the LLM part of Qwen2.5_VL 3B. How can I resolve this? Thank you!
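For reference, the workaround I'm experimenting with is passing a single regex string as `target_modules` so that only language-model modules match and anything under `visual` is excluded (I'm assuming PEFT treats a string `target_modules` as a regex matched against the full module name; the model loading below is only for illustration):
```
from peft import LoraConfig, get_peft_model
from transformers import Qwen2_5_VLForConditionalGeneration

model = Qwen2_5_VLForConditionalGeneration.from_pretrained("Qwen/Qwen2.5-VL-3B-Instruct")

# Match attention and MLP projections, but only for module paths that do not
# contain "visual" (i.e. skip the vision encoder blocks).
target_regex = r"^(?!.*visual).*\.(q_proj|k_proj|v_proj|o_proj|gate_proj|up_proj|down_proj)$"

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=target_regex,
)

peft_model = get_peft_model(model, lora_config)
peft_model.print_trainable_parameters()
```
Is this regex approach reliable, or is there a cleaner supported way to restrict LoRA to the LLM part?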
|
https://github.com/huggingface/peft/issues/2615
|
closed
|
[] | 2025-06-26T02:08:43Z
| 2025-07-18T16:04:27Z
| 7
|
guoguo1314
|
pytorch/xla
| 9,405
|
Cannot mark sharding or print values of a SPMD tensor in a scanned function
|
## 🐛 Bug
Cannot mark sharding or print values of a SPMD tensor in a scanned function
## To Reproduce
```python
import torch_xla.core.xla_model as xm
import torch_xla.runtime as xr
import torch_xla.distributed.spmd as xs
from torch_xla.experimental.scan import scan
import torch
from torch import nn
import numpy as np
class ModelWithOnlyScan(nn.Module):
def __init__(self, size: int, num_layers: int):
super().__init__()
self.linear_weight = nn.Parameter(torch.randn(num_layers, size, size))
@staticmethod
def scan_fn(carry, w):
x, y = carry
xs.mark_sharding(y, xs.get_global_mesh(), (None, None)) # !! exception here
# or
print(y) # !! exception here
x = x * torch.nn.functional.gelu(x @ w.T, approximate="tanh") * (y @ w.T)
return (x, y), None
def forward(self, x, y):
state = (x, y)
return scan(self.scan_fn, init=state, xs=self.linear_weight)[0]
def init_spmd() -> xs.Mesh:
n_dev = xr.global_runtime_device_count()
mesh_shape = (n_dev,)
dev_id = np.array(range(n_dev))
xr.use_spmd()
mesh = xs.Mesh(dev_id, mesh_shape, ("fsdp", ))
xs.set_global_mesh(mesh)
return mesh
def test_scan_spmd():
init_spmd()
mesh = xs.get_global_mesh()
size = 32
num_layers = 4
model = ModelWithOnlyScan(size, num_layers).to("xla")
xs.mark_sharding(model.linear_weight, mesh, (None, "fsdp", None))
input_x = torch.randn(4, size).to("xla")
input_y = torch.randn(4, size).to("xla")
xs.mark_sharding(input_x, mesh, ("fsdp", None))
xs.mark_sharding(input_y, mesh, ("fsdp", None))
output = model(input_x, input_y)
xm.mark_step()
print(output)
if __name__ == "__main__":
test_scan_spmd()
```
Sample stack trace:
```
Traceback (most recent call last):
File "/root/my-repo/./repro_spmd.py", line 60, in <module>
test_scan_spmd()
File "/root/my-repo/./repro_spmd.py", line 54, in test_scan_spmd
output = model(input_x, input_y)
File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
File "/root/my-repo/./repro_spmd.py", line 27, in forward
return scan(self.scan_fn, init=state, xs=self.linear_weight)[0]
File "/usr/local/lib/python3.10/site-packages/torch_xla/experimental/scan.py", line 158, in scan
forward, alias_input, backward = value_and_grad_partitioned(
File "/usr/local/lib/python3.10/site-packages/torch_xla/experimental/scan.py", line 255, in value_and_grad_partitioned
out = fn_compiled(fake_carry_pytree, fake_x_pytree)
File "/usr/local/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 929, in returned_function
compiled_fn, _ = create_aot_dispatcher_function(
File "/usr/local/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 570, in create_aot_dispatcher_function
return _create_aot_dispatcher_function(
File "/usr/local/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 671, in _create_aot_dispatcher_function
fw_metadata = run_functionalized_fw_and_collect_metadata(
File "/usr/local/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/collect_metadata_analysis.py", line 197, in inner
flat_f_outs = f(*flat_f_args)
File "/usr/local/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/utils.py", line 184, in flat_fn
tree_out = fn(*args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/torch_xla/experimental/scan.py", line 244, in fn_no_output_aliasing
return tree_map(lambda v: v.clone() if v in inputs else v, fn(*args))
File "/root/my-repo/./repro_spmd.py", line 19, in scan_fn
xs.mark_sharding(y, xs.get_global_mesh(), (None, None))
File "/usr/local/lib/python3.10/site-packages/torch_xla/distributed/spmd/xla_sharding.py", line 563, in mark_sharding
annotate_func(unwrap_sharded_tensor(t), op_sharding)
RuntimeError: torch_xla/csrc/aten_xla_bridge.cpp:110 : Check failed: xtensor
*** Begin stack trace ***
tsl::CurrentStackTrace[abi:cxx11]()
torch_xla::bridge::GetXlaTensor(at::Tensor const&)
torch_xla::ShardingUtil::XlaMarkSharding(at::Tensor const&, xla::OpSharding)
_PyObject_MakeTpCall
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyObject_FastCallDictTstate
_PyObject_Call_Prepend
_PyObject_MakeTpCall
_PyEval_EvalFrameDefau
|
https://github.com/pytorch/xla/issues/9405
|
closed
|
[
"bug"
] | 2025-06-25T10:36:50Z
| 2025-06-27T12:31:38Z
| 3
|
Topologized
|
huggingface/lerobot
| 1,383
|
Can multiple Lerobot datasets be mixed to pre-train a VLA model?
|
Hello, I would like to know whether multiple independent LeRobot datasets can be mixed to achieve large-scale pre-training of a VLA model, similar to how OpenVLA mixes multiple RLDS datasets for pre-training.
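For reference, a minimal sketch of one way to mix datasets, assuming the datasets share compatible feature keys and state/action dimensions (the repo ids below are just examples; the repo also ships a `MultiLeRobotDataset` helper that may already cover this use case):
```python
from torch.utils.data import ConcatDataset, DataLoader
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset

# Example repo ids -- replace with the datasets you actually want to mix.
repo_ids = ["lerobot/aloha_sim_insertion_human", "lerobot/aloha_sim_transfer_cube_human"]

datasets = [LeRobotDataset(repo_id) for repo_id in repo_ids]

# Mixing only works if the per-item dicts are compatible (same camera keys,
# same state/action shapes); inspect ds.features before concatenating.
mixed = ConcatDataset(datasets)

loader = DataLoader(mixed, batch_size=8, shuffle=True, num_workers=4)
batch = next(iter(loader))
```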
|
https://github.com/huggingface/lerobot/issues/1383
|
open
|
[
"enhancement",
"question",
"dataset"
] | 2025-06-25T08:45:48Z
| 2025-08-12T09:55:48Z
| null |
xliu0105
|
pytorch/pytorch
| 156,797
|
How to use compile cache?
|
According to the documentation at https://docs.pytorch.org/tutorials/recipes/torch_compile_caching_tutorial.html, we can use torch.compiler.save_cache_artifacts() and torch.compiler.load_cache_artifacts() to reduce compilation time.
However, when exactly should we save the cache, and when should we load it? Is there a clear example or recommended practice for this?
cc @svekars @sekyondaMeta @AlannaBurke @chauhang @penguinwu
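In case it helps others who land here, the pattern from the tutorial is roughly: compile and run the model once (the warm-up populates the caches), call `save_cache_artifacts()` and persist the returned bytes, then in a later, fresh process call `load_cache_artifacts()` with those bytes *before* compiling again. A rough sketch (the API is only available in recent PyTorch releases; treat the file handling as illustrative):
```python
import torch

def build_model():
    return torch.nn.Sequential(torch.nn.Linear(64, 64), torch.nn.ReLU())

# --- Process 1: compile, warm up, then save the populated caches. ---
model = torch.compile(build_model())
model(torch.randn(8, 64))                           # first call compiles and fills the caches

artifacts = torch.compiler.save_cache_artifacts()   # call *after* the warm-up run
if artifacts is not None:
    artifact_bytes, cache_info = artifacts
    with open("compile_cache.bin", "wb") as f:
        f.write(artifact_bytes)

# --- Process 2 (fresh process): load the caches *before* compiling/running. ---
with open("compile_cache.bin", "rb") as f:
    torch.compiler.load_cache_artifacts(f.read())

model = torch.compile(build_model())
model(torch.randn(8, 64))                           # should reuse the cached artifacts
```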
|
https://github.com/pytorch/pytorch/issues/156797
|
closed
|
[
"module: docs",
"oncall: pt2"
] | 2025-06-25T06:15:38Z
| 2025-06-30T03:32:22Z
| null |
jhl13
|
huggingface/transformers
| 39,023
|
Does Gemma 3 need position ids to be 1-indexed explicitly?
|
Hi Team
At some point `Gemma3ForConditionalGeneration` used to impose 1-indexing of `position_ids`, [see here](https://github.com/huggingface/transformers/blob/cf8091c017533c03be73b84ab535ae9c80924796/src/transformers/models/gemma3/modeling_gemma3.py#L1430). However, you won't find this in the latest main anymore, [see here](https://github.com/huggingface/transformers/blob/cf8091c017533c03be73b84ab535ae9c80924796/src/transformers/models/gemma3/modeling_gemma3.py#L1430). I know there is some overwriting of position ids taking place, but I wanted to know if it's the same 1-index conversion.
Does `Gemma3ForConditionalGeneration` still need 1-indexed position ids, and if so, do I need to do that manually before passing custom position ids?
|
https://github.com/huggingface/transformers/issues/39023
|
closed
|
[] | 2025-06-25T00:00:14Z
| 2025-07-25T17:27:26Z
| 2
|
krypticmouse
|
pytorch/torchtitan
| 1,334
|
[Low-bit Optimizers] Do torchtitan plan to integrate AdamW8bit or AdamWFP8 from TorchAO
|
Currently, using low-bit optimizers from [TorchAO](https://github.com/pytorch/ao) such as AdamW8bit and AdamWFP8 is not supported in this repo. Low-bit optimizers could significantly reduce memory usage and improve training efficiency. It would be a great enhancement to support them natively.
Is there any plan to support them in torchtitan? Would love to hear thoughts on potential integration or any known workarounds!
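As a point of reference, outside of torchtitan these optimizers are essentially drop-in replacements for `torch.optim.AdamW`, so the integration surface would mostly be the optimizer builder/config. A minimal sketch (the import path differs across torchao versions -- newer releases expose `torchao.optim` -- and the hyperparameters are illustrative):
```python
import torch
from torchao.prototype.low_bit_optim import AdamW8bit  # or: from torchao.optim import AdamW8bit

model = torch.nn.Linear(1024, 1024, device="cuda", dtype=torch.bfloat16)

# Drop-in replacement for torch.optim.AdamW; optimizer state is stored in 8-bit.
optimizer = AdamW8bit(model.parameters(), lr=3e-4, weight_decay=0.1)

for _ in range(3):
    x = torch.randn(8, 1024, device="cuda", dtype=torch.bfloat16)
    loss = model(x).float().pow(2).mean()
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```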
|
https://github.com/pytorch/torchtitan/issues/1334
|
open
|
[] | 2025-06-24T21:28:20Z
| 2025-06-25T03:13:35Z
| 4
|
haochengxi
|
huggingface/transformers
| 39,017
|
Not able to use flash attention with torch.compile with model like BERT
|
### System Info
When using torch.compile with a model like BERT, the attention mask gets set to a non-null value in the following function in `src/transformers/modeling_attn_mask_utils.py`. Flash attention does not support a non-null attention mask ([source](https://github.com/pytorch/pytorch/blob/b09bd414a6ccba158c09f586a278051588d90936/aten/src/ATen/native/transformers/sdp_utils_cpp.h#L261)).
```python
def _prepare_4d_attention_mask_for_sdpa(mask: torch.Tensor, dtype: torch.dtype, tgt_len: Optional[int] = None):
    """
    Creates a non-causal 4D mask of shape `(batch_size, 1, query_length, key_value_length)` from a 2D mask of shape
    `(batch_size, key_value_length)`

    Args:
        mask (`torch.Tensor`):
            A 2D attention mask of shape `(batch_size, key_value_length)`
        dtype (`torch.dtype`):
            The torch dtype the created mask shall have.
        tgt_len (`int`):
            The target length or query length the created mask shall have.
    """
    _, key_value_length = mask.shape
    tgt_len = tgt_len if tgt_len is not None else key_value_length

    is_tracing = torch.jit.is_tracing() or isinstance(mask, torch.fx.Proxy) or is_torchdynamo_compiling()

    # torch.jit.trace, symbolic_trace and torchdynamo with fullgraph=True are unable to capture data-dependent controlflows.
    if not is_tracing and torch.all(mask == 1):
        return None
    else:
        return AttentionMaskConverter._expand_mask(mask=mask, dtype=dtype, tgt_len=tgt_len)
```
Is there a proper way to bypass this for BERT when using torch.compile (fullgraph=False)?
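One workaround I can think of (not a fix in transformers itself, and it only applies when the batch has no padding) is to drop the all-ones mask before calling the compiled model, so the SDPA path receives `attn_mask=None` and can dispatch to the flash kernel. A sketch, reusing the names from the reproduction script further down:
```python
inputs = tok("hello world", return_tensors="pt").to("cuda")

# With no padding, every mask entry is 1, so the mask carries no information;
# dropping it lets SDPA see attn_mask=None even under torch.compile.
if inputs["attention_mask"].all():
    inputs.pop("attention_mask")

compiled = torch.compile(m, fullgraph=False)
out = compiled(**inputs)
```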
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
script to repro:
```python
import torch, transformers, torch.profiler as tp
cfg = transformers.BertConfig.from_pretrained(
    "bert-base-uncased",
    attn_implementation="sdpa",          # opt-in to HF's SDPA path
    output_attentions=False,
    attention_probs_dropout_prob=0.0,    # turn off dropout (Flash limit)
)
m = transformers.BertModel(cfg).eval().to("cuda", torch.float16)
tok = transformers.BertTokenizer.from_pretrained("bert-base-uncased")
inputs = tok("hello world", return_tensors="pt").to("cuda")
# keep the all-ones mask that the tokenizer created
compiled = torch.compile(m, fullgraph=False)  # fullgraph=True behaves the same

with tp.profile(
    activities=[tp.ProfilerActivity.CUDA],  # <- keyword!
    record_shapes=False,                    # any other kwargs you need
) as prof:
    compiled(**inputs)

print("Flash kernel present?",
      any("flash_attention" in k.name for k in prof.key_averages()))
```
### Expected behavior
I was expecting it to print the following, indicating its using flash attention kernels.
`Flash kernel present? True`
|
https://github.com/huggingface/transformers/issues/39017
|
closed
|
[
"bug"
] | 2025-06-24T19:09:07Z
| 2025-10-09T23:03:45Z
| 3
|
gambiTarun
|
huggingface/lerobot
| 1,379
|
New motor configuration doesn't center servo motors for so100
|
I was used to using the previously existing `configure_motor.py` script to set the baudrate, ID and center the servo. And I used to do this before attempting assembly.
This script was also useful for configuring individual motors whenever I had to replace one in case they broke for some reason.
I just pulled the latest version of lerobot and found that script is gone and replaced by one that expects me to configure every motor sequentially, which is annoying.
Furthermore it doesn't center the servo anymore, instead it just sets the homing offset. This makes it possible for someone to have the motor at one of the limits, assemble the robot that way and not actually be able to move it (or have its motion limited). Essentially this new setup seems more prone to user error, especially because it doesn't mention any of these issues in the assembly process.
Also older users are now not able to center the servo with any script.
|
https://github.com/huggingface/lerobot/issues/1379
|
open
|
[
"question",
"robots"
] | 2025-06-24T15:43:16Z
| 2025-08-12T09:56:02Z
| null |
Esser50K
|
huggingface/datasets
| 7,637
|
Introduce subset_name as an alias of config_name
|
### Feature request
Add support for `subset_name` as an alias for `config_name` in the datasets library and related tools (such as loading scripts, documentation, and metadata).
### Motivation
The Hugging Face Hub dataset viewer displays a column named **"Subset"**, which refers to what is currently technically called config_name in the datasets library. This inconsistency has caused confusion for many users, especially those unfamiliar with the internal terminology.
I have repeatedly received questions from users trying to understand what "config" means, and why it doesn't match what they see as "subset" on the Hub. Renaming everything to `subset_name` might be too disruptive, but introducing subset_name as a clear alias for config_name could significantly improve user experience without breaking backward compatibility.
This change would:
- Align terminology across the Hub UI and datasets codebase
- Reduce user confusion, especially for newcomers
- Make documentation and examples more intuitive
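To make the intent concrete, here is a purely hypothetical sketch of how the alias could look from the user side (the `subset_name` parameter does not exist today; this is just the proposed behaviour expressed as a thin wrapper):
```python
from datasets import load_dataset

def load_dataset_with_subset(path, subset_name=None, config_name=None, **kwargs):
    """Hypothetical shim: accept `subset_name` as an alias of `config_name`/`name`."""
    if subset_name is not None and config_name is not None and subset_name != config_name:
        raise ValueError("Pass either `subset_name` or `config_name`, not both.")
    name = subset_name if subset_name is not None else config_name
    return load_dataset(path, name=name, **kwargs)

# Reads the same way the Hub viewer does ("Subset" instead of "config"):
ds = load_dataset_with_subset("nyu-mll/glue", subset_name="mrpc")
```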
|
https://github.com/huggingface/datasets/issues/7637
|
open
|
[
"enhancement"
] | 2025-06-24T12:49:01Z
| 2025-07-01T16:08:33Z
| 4
|
albertvillanova
|
pytorch/pytorch
| 156,673
|
[Onnx] How to do torch-dynamo based onnx exports for SAM-like models with optional inputs?
|
### π Describe the bug
I would like to generate an ONNX model with torch-dynamo for SAM. How can I work with optional inputs, like so:
```
from typing import Optional
import torch
from torch import Tensor
class Model(torch.nn.Module):
    def __init__(self):
        super().__init__()

    def forward(self, image, points: Optional[Tensor], bb: Optional[Tensor]):
        if points is not None:
            return torch.ones(1, 1, image.shape[2], image.shape[3])
        elif bb is not None:
            return torch.rand((1, 1, image.shape[2], image.shape[3]))
        return torch.zeros(1, 1, image.shape[2], image.shape[3])
```
The original code is [here](https://github.com/facebookresearch/segment-anything/blob/main/segment_anything/predictor.py#L138).
I guess I can deal with the branch by using `torch.cond`, but I wonder how to trace both paths. How should I specify the function arguments in [torch.onnx.dynamo_export](https://docs.pytorch.org/docs/stable/onnx_dynamo.html#torch.onnx.dynamo_export)?
There is [documentation](https://docs.pytorch.org/TensorRT/tutorials/_rendered_examples/dynamo/torch_export_sam2.html) about ONNX export for SAM, but it specializes in label inputs.
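One practical route (a sketch under the assumption that a fixed set of input combinations is acceptable): since an ONNX graph is static, export one graph per prompt type and pin the unused optional argument to `None` inside a small wrapper, so tracing only sees a single branch. The file name and dummy shapes below are placeholders:
```python
import torch
from torch import nn

class PointsOnly(nn.Module):
    """Wrapper that fixes the optional inputs to one concrete combination."""

    def __init__(self, model: nn.Module):
        super().__init__()
        self.model = model

    def forward(self, image, points):
        return self.model(image, points, None)  # `bb` is pinned to None in this variant

model = Model()                       # the module from the snippet above
image = torch.randn(1, 3, 64, 64)     # placeholder shapes
points = torch.randn(1, 2, 2)

# Export one ONNX graph per supported combination (repeat with a BoxOnly wrapper).
torch.onnx.export(PointsOnly(model), (image, points), "sam_points_only.onnx", dynamo=True)
```
`torch.cond` could in principle keep both branches in one graph, but both branch functions have to operate on real tensors (not `None`), so a flag tensor plus dummy inputs would be needed; the per-combination export above sidesteps that.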
### Error logs
_No response_
### Versions
Collecting environment information...
PyTorch version: 2.8.0.dev20250512+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.2 LTS (x86_64)
GCC version: (Ubuntu 14.2.0-4ubuntu2~24.04) 14.2.0
Clang version: 19.1.1 (1ubuntu1~24.04.2)
CMake version: version 3.28.3
Libc version: glibc-2.39
Python version: 3.12.3 (main, Jun 18 2025, 17:59:45) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-6.11.0-26-generic-x86_64-with-glibc2.39
Is CUDA available: True
CUDA runtime version: 12.0.140
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA RTX 500 Ada Generation Laptop GPU
Nvidia driver version: 550.144.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 22
On-line CPU(s) list: 0-21
Vendor ID: GenuineIntel
Model name: Intel(R) Core(TM) Ultra 7 155H
CPU family: 6
Model: 170
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 1
Stepping: 4
CPU(s) scaling MHz: 29%
CPU max MHz: 4800.0000
CPU min MHz: 400.0000
BogoMIPS: 5990.40
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb intel_ppin ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq rdpid bus_lock_detect movdiri movdir64b fsrm md_clear serialize arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 544 KiB (14 instances)
L1i cache: 896 KiB (14 instances)
L2 cache: 18 MiB (9 instances)
L3 cache: 24 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-21
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBR
|
https://github.com/pytorch/pytorch/issues/156673
|
closed
|
[
"module: onnx",
"oncall: pt2",
"oncall: export"
] | 2025-06-24T04:20:24Z
| 2025-09-11T04:37:42Z
| null |
FabianSchuetze
|
pytorch/torchtitan
| 1,329
|
OOM recovery under multi-node FSDP/HSDP
|
### Bug description
Does torchtitan provide any recipes for implementing batch skipping / OOM recovery in a multi-node FSDP setup?
In RL/GRPO training this is very pertinent (where we don't know response seqlens a-priori to do packing / clipping):
- https://github.com/volcengine/verl/issues/2159
One thing I could think of:
- some sort of micro-batching for backward pass
- some generic batch skipping (a rough sketch of the per-rank part follows below)
Some sort of memory operation tracing would also be very useful to better know what is the reason of OOM (fragmentation):
- https://github.com/pytorch/pytorch/issues/91692#issuecomment-2996838221
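Not aware of a ready-made recipe in torchtitan, but for reference the per-rank half of generic batch skipping usually looks roughly like the sketch below. The tricky part under FSDP/HSDP is that all ranks must agree on skipping (hence the all-reduce), and an OOM raised mid-backward can still wedge in-flight collectives, which is exactly why this is hard:
```python
import torch
import torch.distributed as dist

def train_step_with_oom_skip(model, optimizer, batch):
    failed = torch.zeros(1, device="cuda")
    try:
        optimizer.zero_grad(set_to_none=True)
        loss = model(**batch).mean()
        loss.backward()
    except torch.cuda.OutOfMemoryError:
        failed += 1
        optimizer.zero_grad(set_to_none=True)   # drop the half-finished step
        torch.cuda.empty_cache()                # give fragmented blocks back

    # All ranks must agree on whether to step, otherwise FSDP collectives hang.
    dist.all_reduce(failed, op=dist.ReduceOp.MAX)
    if failed.item() > 0:
        optimizer.zero_grad(set_to_none=True)
        return None                             # caller counts this batch as skipped
    optimizer.step()
    return loss.detach()
```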
### Versions
N/A
|
https://github.com/pytorch/torchtitan/issues/1329
|
open
|
[
"question",
"post training"
] | 2025-06-23T16:22:58Z
| 2025-10-02T02:33:20Z
| null |
vadimkantorov
|
huggingface/candle
| 3,003
|
Build for multiple arch?
|
Is it possible to build for multiple GPU architectures at once, e.g. `CUDA_COMPUTE_CAP="90,100,121"`?
|
https://github.com/huggingface/candle/issues/3003
|
open
|
[] | 2025-06-23T13:17:45Z
| 2025-06-23T13:17:45Z
| 0
|
johnnynunez
|
huggingface/transformers
| 38,984
|
QA pipeline prediction generates wrong response when `top_k` param > 1
|
### System Info
- `transformers` version: 4.53.0.dev0
- Platform: Linux-5.4.0-1128-aws-fips-x86_64-with-glibc2.31
- Python version: 3.11.11
- Huggingface_hub version: 0.33.0
- Safetensors version: 0.5.3
- Accelerate version: 1.8.1
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (accelerator?): 2.7.1+cu126 (NA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
```
import transformers
architecture = "csarron/mobilebert-uncased-squad-v2"
tokenizer = transformers.AutoTokenizer.from_pretrained(architecture, low_cpu_mem_usage=True)
model = transformers.MobileBertForQuestionAnswering.from_pretrained(
architecture, low_cpu_mem_usage=True
)
pipeline = transformers.pipeline(task="question-answering", model=model, tokenizer=tokenizer)
data = [
{'question': ['What color is it?', 'How do the people go?', "What does the 'wolf' howl at?"],
'context': [
"Some people said it was green but I know that it's pink.",
'The people on the bus go up and down. Up and down.',
"The pack of 'wolves' stood on the cliff and a 'lone wolf' howled at the moon for hours."
]}
]
# prediction result is wrong
pipeline(data, top_k=2, max_answer_len=5)
```
### Expected behavior
Expected prediction response:
```
[[{'score': 0.5683297514915466, 'start': 51, 'end': 55, 'answer': 'pink'}, {'score': 0.028800610452890396, 'start': 51, 'end': 56, 'answer': 'pink.'}], [{'score': 0.3008899986743927, 'start': 25, 'end': 36, 'answer': 'up and down'}, {'score': 0.12070021033287048, 'start': 38, 'end': 49, 'answer': 'Up and down'}], [{'score': 0.8356598615646362, 'start': 68, 'end': 76, 'answer': 'the moon'}, {'score': 0.0971309095621109, 'start': 72, 'end': 76, 'answer': 'moon'}]]
```
But it gets the following response (**one 'Up and down' answer is missing** )
```
[[{'score': 0.5683297514915466, 'start': 51, 'end': 55, 'answer': 'pink'}, {'score': 0.028800610452890396, 'start': 51, 'end': 56, 'answer': 'pink.'}], {'score': 0.4215902090072632, 'start': 25, 'end': 36, 'answer': 'up and down'}, [{'score': 0.8356598615646362, 'start': 68, 'end': 76, 'answer': 'the moon'}, {'score': 0.0971309095621109, 'start': 72, 'end': 76, 'answer': 'moon'}]]
```
|
https://github.com/huggingface/transformers/issues/38984
|
closed
|
[
"bug"
] | 2025-06-23T13:09:23Z
| 2025-07-17T08:24:31Z
| 4
|
WeichenXu123
|
huggingface/lighteval
| 822
|
Documenting how to launch multilingual tasks
|
At the moment, you need to use custom tasks to launch them; this must be documented.
|
https://github.com/huggingface/lighteval/issues/822
|
open
|
[] | 2025-06-23T11:10:13Z
| 2025-09-03T15:28:42Z
| null |
clefourrier
|
huggingface/candle
| 3,002
|
Is there a roadmap or intention to support CUDA Graph?
|
vLLM v1 uses CUDA Graph to capture the execution workflow of the entire model, resulting in significant performance improvements compared to the previous version. I'm wondering if there are any plans to support CUDA Graph in Candle. Would it be possible to add `start_capture`, `end_capture`, and `replay` to the `Module` so that the captured graph can be replayed within the forward method? @LaurentMazare
Eric may also be interested in this @EricLBuehler
|
https://github.com/huggingface/candle/issues/3002
|
open
|
[] | 2025-06-23T10:11:12Z
| 2025-09-06T14:04:53Z
| 4
|
guoqingbao
|
huggingface/transformers
| 38,977
|
LMHead is processing redundant tokens in prefill
|
While using `GPT2LMHeadModel.generate()` and comparing its performance with vLLM, I noticed a significant inefficiency in the `forward()` implementation of many Hugging Face models. For example, in `GPT2LMHeadModel.forward`, `self.lm_head` is applied to all token hidden states, even when called from the `generate()` method, where only the logits of the last token are needed for next-token prediction. This computes logits over the entire sequence and can introduce significant overhead.
```py
# src/transformers/models/gpt2/modeling_gpt2.py, line 1233
lm_logits = self.lm_head(hidden_states)
```
Suggested fix: add a conditional branch in `forward()` to slice the hidden states before computing logits if it's a generation step.
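A minimal sketch of what that conditional could look like (using a hypothetical `logits_to_keep` argument -- newer transformers models expose something similar -- rather than the actual GPT-2 code):
```python
# Inside a hypothetical GPT2LMHeadModel.forward(..., logits_to_keep=0):
if logits_to_keep:
    # During generation only the last position(s) matter, so project just those
    # through lm_head instead of the full sequence.
    hidden_states = hidden_states[:, -logits_to_keep:, :]
lm_logits = self.lm_head(hidden_states)
```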
|
https://github.com/huggingface/transformers/issues/38977
|
closed
|
[] | 2025-06-23T08:32:22Z
| 2025-06-25T08:29:02Z
| 3
|
null-pointer-access
|
huggingface/lerobot
| 1,369
|
The performance of SmolVLA on LIBERO cannot be replicated
|
I trained SmolVLA from scratch on the LIBERO dataset (the LIBERO dataset under LeRobot), but during testing I couldn't reproduce the results from the paper. Could there be a problem with my reproduction code or process? Could you provide a reproduction tutorial?
|
https://github.com/huggingface/lerobot/issues/1369
|
closed
|
[
"question",
"policies"
] | 2025-06-23T07:38:52Z
| 2025-10-07T19:58:50Z
| null |
hahans
|
huggingface/transformers
| 38,970
|
Global and Local Anomaly co-Synthesis Strategy (GLASS)
|
### Model description
Hi 🤗 Transformers team,
I would like to contribute a new model to the library:
GLASS: A Unified Anomaly Synthesis Strategy with Gradient Ascent for Industrial Anomaly Detection and Localization
Paper: https://arxiv.org/abs/2407.09359
Code: https://github.com/cqylunlun/GLASS
GLASS is a novel approach for industrial anomaly detection. It uses gradient ascent in the latent space to synthesize diverse and controllable anomalies, which improves both detection and localization. I believe this model could be valuable for users working on visual inspection and quality control tasks in manufacturing and related domains.
Would the maintainers be interested in having this model integrated into Transformers? If so, I'd be happy to start working on a PR.
Looking forward to your feedback!
### Open source status
- [x] The model implementation is available
- [ ] The model weights are available
### Provide useful links for the implementation
_No response_
|
https://github.com/huggingface/transformers/issues/38970
|
closed
|
[
"New model"
] | 2025-06-22T12:28:19Z
| 2025-06-23T20:55:16Z
| 2
|
sbrzz
|
huggingface/smolagents
| 1,467
|
What is the most elegant way to add prompt instructions so that the agent's final answer is in Chinese, or so that all the reasoning text displayed in Gradio is in a specific language
|
What is the most elegant way to add prompt instructions so that the agent's final answer is in Chinese, or so that all the reasoning text displayed in Gradio is shown in a specific language?
|
https://github.com/huggingface/smolagents/issues/1467
|
closed
|
[
"enhancement"
] | 2025-06-22T07:34:13Z
| 2025-06-22T10:49:30Z
| null |
ShelterWFF
|
huggingface/transformers
| 38,965
|
Modernbert implementation with Tensorflow
|
Hi all!
I've noticed that ModernBERT [does not have an implementation in tensorflow](https://github.com/huggingface/transformers/issues/37128#issuecomment-2766235185) and I was looking into it.
I'm checking this https://huggingface.co/docs/transformers/main/add_tensorflow_model and I noticed that it's talking about `modelling_modelname.py`; however, at the head of the file `modeling_modernbert.py` there is a warning saying:
```
# 🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨
# This file was automatically generated from src/transformers/models/modernbert/modular_modernbert.py.
# Do NOT edit this file manually as any edits will be overwritten by the generation of
# the file from the modular. If any change should be done, please apply the change to the
# modular_modernbert.py file directly. One of our CI enforces this.
# 🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨
# Copyright 2024 Answer.AI, LightOn, and contributors, and the HuggingFace Inc. team. All rights reserved.
#
```
What does that mean, and is there another implementation that follows the same principles?
### Motivation
I need Modernbert to work with [DeLFT](https://github.com/kermitt2/delft) through huggingface, and the implementation is mainly tensorflow there.
### Your contribution
I would like to propose a PR but I need a little bit of help in starting up.
|
https://github.com/huggingface/transformers/issues/38965
|
closed
|
[
"Feature request"
] | 2025-06-21T18:52:50Z
| 2025-06-23T15:17:50Z
| 2
|
lfoppiano
|
huggingface/lerobot
| 1,361
|
Nvidia Gr00t
|
Hi,
Are there any plans to integrate Nvidia Gr00t policy?
|
https://github.com/huggingface/lerobot/issues/1361
|
open
|
[
"enhancement",
"question",
"policies"
] | 2025-06-21T10:42:07Z
| 2025-08-20T13:34:30Z
| null |
AbdElRahmanFarhan
|
huggingface/lerobot
| 1,360
|
Homing offset not taken into account during calibration
|
### System Info
```Shell
As of lerobot commit `c940676bdda5ab92e3f9446a72fafca5c550b505`. Other system information is irrelevant for this issue.
```
### Information
- [x] One of the scripts in the examples/ folder of LeRobot
- [ ] My own task or dataset (give details below)
### Reproduction
In `lerobot/common/motors/feetech/feetech.py` in:
```
@property
def is_calibrated(self) -> bool:
    motors_calibration = self.read_calibration()
    if set(motors_calibration) != set(self.calibration):
        return False

    same_ranges = all(
        self.calibration[motor].range_min == cal.range_min
        and self.calibration[motor].range_max == cal.range_max
        for motor, cal in motors_calibration.items()
    )
    if self.protocol_version == 1:
        return same_ranges

    same_offsets = all(
        self.calibration[motor].homing_offset == cal.homing_offset
        for motor, cal in motors_calibration.items()
    )

    return same_ranges and same_offsets
```
Instead of having:
```
same_offsets = all(
self.calibration[motor].homing_offset == cal.homing_offset
for motor, cal in motors_calibration.items()
)
```
The `homing_offset` should be used to adjust the offset in `range_min` and `range_max`. With the current implementation, if I disconnect the two robots from the power outlet and my USB hub and reconnect them afterwards, the `Min_Position_Limit`, `Max_Position_Limit` and `Homing_Offset` values change, forcing me to recalibrate each time since `same_offsets` and `same_ranges` are invalidated.
The reason I'm not doing this myself is that I don't have enough knowledge to make sure I don't physically break anything while trying to fix it (since I run the risk of having my motors going sideways).
### Expected behavior
I expect to not have to recalibrate each time I disconnect my SO-100 arms from the outlet.
|
https://github.com/huggingface/lerobot/issues/1360
|
open
|
[
"question",
"robots"
] | 2025-06-21T01:28:04Z
| 2025-08-12T09:57:27Z
| null |
godardt
|
pytorch/ao
| 2,419
|
Benefits of Using QAT Before GGUF Quantization?
|
Hi,
Thank you for the amazing project.
I have a question regarding quantization workflows. Does applying QAT before converting to GGUF format (e.g. using `Q4, Q4_K_M`) result in better quality compared to directly quantizing with GGUF alone?
I'm planning to serve my model using llama.cpp, so converting to GGUF is required. I've noticed a noticeable quality drop when using the quantization methods provided by llama.cpp, so I'm considering trying QAT to mitigate this.
Has anyone experimented with this approach or have any insights to share?
Thanks.
|
https://github.com/pytorch/ao/issues/2419
|
closed
|
[] | 2025-06-21T01:22:49Z
| 2025-06-25T11:56:11Z
| 5
|
kiyoonyoo
|
pytorch/torchtitan
| 1,323
|
Why `preserve_rng_state=False` in activation checkpointing
|
Why does torchtitan set `preserve_rng_state=False` for activation checkpointing? E.g.:
https://github.com/pytorch/torchtitan/blob/f4048f8e1b36827156c4dc861c9680333a8542f9/torchtitan/models/llama3/infra/parallelize.py#L238
|
https://github.com/pytorch/torchtitan/issues/1323
|
open
|
[
"question",
"high priority",
"triage review",
"module: activation checkpointing"
] | 2025-06-20T20:22:42Z
| 2025-08-25T04:58:04Z
| null |
awgu
|
pytorch/torchtitan
| 1,322
|
How to adapt HuggingFace or other models for TorchTitan
|
Is there any thought on how to adapt Hugging Face or other models for pre-training with TorchTitan?
|
https://github.com/pytorch/torchtitan/issues/1322
|
open
|
[
"duplicate"
] | 2025-06-20T19:39:54Z
| 2025-08-21T03:22:37Z
| null |
githubsgi
|
huggingface/lerobot
| 1,359
|
Not clear how to setup a basic interactive simulator demo
|
Before buying the real robot most people would want to run a visual, interactive demo in the simulator.
A demo should provide:
- A trained model on the Franka robot
- an intuitive way to interact with the cube using the mouse (e.g. drag, move, or βkickβ it around) so we can see the robot chasing the cube.
Many thanks
|
https://github.com/huggingface/lerobot/issues/1359
|
closed
|
[
"question",
"simulation"
] | 2025-06-20T14:12:17Z
| 2025-10-09T21:49:19Z
| null |
aguaviva
|
huggingface/optimum
| 2,300
|
Support for EuroBERT models
|
### Feature request
I would like to export and optimize the [EuroBERT models](https://huggingface.co/collections/EuroBERT/eurobert-67ceb6c01804878b1f7999c6).
Currently, it doesn't seem to be possible. When I run:
```python
from optimum.onnxruntime import ORTModelForSequenceClassification
onnx_model = ORTModelForSequenceClassification.from_pretrained(
"EuroBERT/EuroBERT-210m",
export=True,
trust_remote_code=True,
)
```
Here is the output I got:
```
ValueError: Trying to export a eurobert model, that is a custom or unsupported architecture, but no custom onnx configuration was passed as `custom_onnx_configs`. Please refer to https://huggingface.co/docs/optimum/main/en/exporters/onnx/usage_guides/export_a_model#custom-export-of-transformers-models for an example on how to export custom models. Please open an issue at https://github.com/huggingface/optimum/issues if you would like the model type eurobert to be supported natively in the ONNX export.
```
Environment Specs:
- Python Version: 3.11.10
- Optimum Version: 1.26.1
Are you planning to support these models?
### Motivation
[EuroBERT models](https://huggingface.co/collections/EuroBERT/eurobert-67ceb6c01804878b1f7999c6) are modern multilingual encoder models that work well when adapted to several multilingual tasks (classification, NER, retrieval...).
### Your contribution
I can try to add them if you are not planning to do it.
|
https://github.com/huggingface/optimum/issues/2300
|
closed
|
[
"Stale"
] | 2025-06-20T12:35:46Z
| 2025-08-21T02:11:39Z
| 2
|
antonioloison
|
huggingface/peft
| 2,601
|
How to Load Adapters with Per-Layer Variable Shapes in `PeftModel.from_pretrained`
|
### Feature request
Hi PEFT team,
Thank you for the great work on the PEFT library!
I'm working on an extension to LoKrConfig that supports layer-wise adapters with different internal shapes. Specifically:
- Each **adapter assigned to a layer** (e.g., adapter for layer A vs. layer B) may have a different shape.
- These shapes are **fixed during training**, but vary across layers depending on, for example, the local hidden size or other heuristics.
- For instance, the adapter weights might have shapes like `[2, 64, 64], [2, 64, 64]` for one layer and `[1, 86, 64], [1, 128, 64]` for another.
This creates a challenge at load time (`PeftModel.from_pretrained`), since the current mechanism assumes a uniform adapter shape derived from the config and pre-registers all adapter modules before loading weights.
To support such per-layer dynamic shapes, I see two possible approaches:
1. **Record the shape of each layerβs adapter in the config**, so that empty adapters can be registered with the correct shape before copying weights.
2. **Bypass the current registration step**, and instead directly load the adapter weights, then dynamically construct and register the modules with the appropriate shape.
My questions:
1. Is either of these approaches supported or recommended?
2. What parts of the PEFT codebase need to be extended (e.g., config, adapter registration logic, loading flow)?
3. Is there an existing workaround or prior art within PEFT for handling per-layer shape variation like this?
Thanks again for your work!
### Your contribution
I'd be happy to contribute a patch if this is a use case worth supporting more broadly.
|
https://github.com/huggingface/peft/issues/2601
|
closed
|
[] | 2025-06-20T11:11:19Z
| 2025-06-21T05:42:58Z
| null |
yuxuan-z19
|
huggingface/diffusers
| 11,762
|
Could you help fix the backdoor vulnerability caused by two risky pre-trained models used in this repo?
|
### Describe the bug
Hi, @patrickvonplaten, @sayakpaul, I'd like to report that two potentially risky pretrained models are being used in this project, which may pose **backdoor threats**. Please check the following code example:
### Reproduction
- **tests/pipelines/stable_diffusion/test_onnx_stable_diffusion_upscale.py**
```python
class OnnxStableDiffusionUpscalePipelineFastTests(OnnxPipelineTesterMixin, unittest.TestCase):
    # TODO: is there an appropriate internal test set?
    hub_checkpoint = "ssube/stable-diffusion-x4-upscaler-onnx"
```
```python
def test_pipeline_default_ddpm(self):
    pipe = OnnxStableDiffusionUpscalePipeline.from_pretrained(self.hub_checkpoint, provider="CPUExecutionProvider")
    pipe.set_progress_bar_config(disable=None)
    inputs = self.get_dummy_inputs()
    image = pipe(**inputs).images
    image_slice = image[0, -3:, -3:, -1].flatten()
```
- **tests/pipelines/stable_diffusion/test_onnx_stable_diffusion_img2img.py**
```python
class OnnxStableDiffusionImg2ImgPipelineFastTests(OnnxPipelineTesterMixin, unittest.TestCase):
    hub_checkpoint = "hf-internal-testing/tiny-random-OnnxStableDiffusionPipeline"
```
```python
def test_pipeline_default_ddim(self):
    pipe = OnnxStableDiffusionImg2ImgPipeline.from_pretrained(self.hub_checkpoint, provider="CPUExecutionProvider")
    pipe.set_progress_bar_config(disable=None)
    inputs = self.get_dummy_inputs()
    image = pipe(**inputs).images
    image_slice = image[0, -3:, -3:, -1].flatten()
```
### Logs
```shell
```
### System Info
On windows
### Who can help?
#### **Issue Description**
As shown above, in the **test_onnx_stable_diffusion_upscale.py** file, the model **"ssube/stable-diffusion-x4-upscaler-onnx"** is used as the default model parameter in the `from_pretrained()` method of the `OnnxStableDiffusionUpscalePipeline` class in the diffusers library. Running the relevant instance method will automatically download and load this model. Later, the `pipe(**input)` method is used to execute the model. Similarly, in the **test_onnx_stable_diffusion_img2img.py** file, the model **"hf-internal-testing/tiny-random-OnnxStableDiffusionPipeline"** is also automatically downloaded, loaded, and executed.
At the same time, [the first model](https://huggingface.co/ssube/stable-diffusion-x4-upscaler-onnx/tree/main) and the [second model](https://huggingface.co/hf-internal-testing/tiny-random-OnnxStableDiffusionPipeline/tree/main) are **flagged as risky** on the HuggingFace platform. The `model.onnx` files in these models are marked as risky and may trigger **backdoor threats**. For certain specific inputs, the backdoor in the models could be activated, effectively altering the model's behavior.


**Related Risk Reports:**οΌ[ssube/stable-diffusion-x4-upscaler-onnx risk report ](https://protectai.com/insights/models/ssube/stable-diffusion-x4-upscaler-onnx/cc4d9dc5a0d94a8245f15e970ac6be642c7b63cc/overview) and [hf-internal-testing/tiny-random-OnnxStableDiffusionPipeline risk report ](https://protectai.com/insights/models/hf-internal-testing/tiny-random-OnnxStableDiffusionPipeline/a42f662ec86a14033aa8894b954225fa07905134/overview)
#### Suggested Repair Methods
1. Replace these models with safer official alternatives, such as `stabilityai/stable-diffusion-x4-upscaler` and `stabilityai/stable-diffusion-2-inpainting` (or other models). If specific functionalities cannot be achieved, you may convert these models to ONNX format and substitute them accordingly.
2. If replacement is not feasible, please include a warning about potential security risks when instantiating the relevant classes.
3. Visually inspect the model using OSS tools like Netron. If no issues are found, report the false threat to the scanning platform
As one of the most popular machine learning libraries (**29.4k stars**), every potential risk could be propagated and amplified. Could you please address the above issues?
Thanks for your help~
Best regards,
Rockstars
|
https://github.com/huggingface/diffusers/issues/11762
|
open
|
[
"bug"
] | 2025-06-20T09:31:50Z
| 2025-06-23T05:25:22Z
| 2
|
Rockstar292
|
huggingface/transformers
| 38,927
|
Can't load my LoRA checkpoint after gemma3 refactor
|
### System Info
- `transformers` version: 4.52.4
- Platform: Linux-6.8.0-1029-aws-x86_64-with-glibc2.35
- Python version: 3.10.15
- Huggingface_hub version: 0.32.2
- Safetensors version: 0.4.3
- Accelerate version: 1.6.0
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (GPU?): 2.6.0+cu124 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: yes but not relevant here, it happens on single gpu too
- Using GPU in script?: yes but same error on cpu only
- GPU type: NVIDIA L40S
### Who can help?
Hi @ArthurZucker and @zucchini-nlp
I am using my own implementation of `Gemma3ForConditionalGeneration`. I was using transformers 4.50 for a while and upgraded to 4.52.4. After the update I realised that the `Gemma3ForConditionalGeneration` implementation had changed. Mostly `self.language_model` became `self.model`.
The issue is that when I use `PeftModel.from_pretrained` on my old LoRA checkpoint, it can't find the weights and I get a bunch of
```
Found missing adapter keys while loading the checkpoint: ['base_model.model.model.language_model.layers.0.self_attn.q_proj.lora_A.default.weight', 'base_model.model.model.language_model.layers.0.self_attn.q_proj.lora_B.default.weight', ...
```
I thought the `_checkpoint_conversion_mapping` [attribute](https://github.com/huggingface/transformers/blob/v4.52.4/src/transformers/models/gemma3/modeling_gemma3.py#L1236) would be enough but it isn't. Is there an easy way I can still use my old checkpoint?
Thanks in advance for you help, I really appreciate all the effort you guys make and sorry if this was explained somewhere in the documentation!
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
I have a custom Gemma class:
```
class MyCustomiGemma(Gemma3ForConditionalGeneration):
    _checkpoint_conversion_mapping = {
        "^language_model.model": "model.language_model",
        "^vision_tower": "model.vision_tower",
        "^multi_modal_projector": "model.multi_modal_projector",
        "^language_model.lm_head": "lm_head",
    }

    def __init__(
        self,
        config: Gemma3Config,
    ):
        super().__init__(config)
        self.vocab_size = config.text_config.vocab_size
        self.model = Gemma3Model(config)
        self.lm_head = nn.Linear(
            config.text_config.hidden_size, config.text_config.vocab_size, bias=False
        )
        self.another_head = nn.Linear(...)
        self.post_init()
```
When using
```
base_model = MyCustomiGemma.from_pretrained()
model = PeftModel.from_pretrained(
base_model,
checkpoint_path,
is_trainable=True,
)
```
I get the `Found missing adapter keys while loading the checkpoint:` warning for all my LoRAs
### Expected behavior
I think the issue is just a name mapping, and I thought it would be backwards compatible.
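In case a workaround is acceptable: since the difference appears to be only the module-path rename, one option is to rewrite the keys of the old adapter checkpoint once and load the converted copy. This is a rough sketch -- the exact old/new prefixes and file paths are assumptions, so compare a few keys from the checkpoint with `base_model.named_parameters()` before trusting it:
```python
import os
from safetensors.torch import load_file, save_file

src = "old_checkpoint/adapter_model.safetensors"        # placeholder paths
dst = "converted_checkpoint/adapter_model.safetensors"

state_dict = load_file(src)

# Old layout: ...language_model.model.layers...  ->  new layout: ...model.language_model.layers...
renamed = {
    k.replace("language_model.model.", "model.language_model."): v
    for k, v in state_dict.items()
}

os.makedirs(os.path.dirname(dst), exist_ok=True)
save_file(renamed, dst)
# Copy adapter_config.json next to it, then point PeftModel.from_pretrained at the new folder.
```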
|
https://github.com/huggingface/transformers/issues/38927
|
closed
|
[
"bug"
] | 2025-06-20T06:59:34Z
| 2025-10-07T18:53:15Z
| 12
|
jood-canva
|
huggingface/mcp-course
| 119
|
How to preview the project locally?
|
I'm trying to preview the project locally to see my changes and contribute to the project, but when executing the script the following error is triggered.
Error:

Preview:

Is there a correct way to run and preview the project?
|
https://github.com/huggingface/mcp-course/issues/119
|
closed
|
[] | 2025-06-20T01:05:46Z
| 2025-09-23T17:29:13Z
| null |
arimariojesus
|
huggingface/transformers
| 38,924
|
Exporting Llava decoder into ONNX format
|
I am working on exporting LLaVA into ONNX format. I came across this previous issue: https://github.com/huggingface/transformers/issues/33637, which had a notebook that outlined how to export the model in three separate parts. I noticed there wasn't any actual code showing how the decoder was exported, unlike the other two components. Does anyone know how they were able to export the decoder in the original notebook?
Notebook: https://colab.research.google.com/drive/1IhC8YOV68cze0XWGfuqSclnVTt_FskUd?usp=sharing
|
https://github.com/huggingface/transformers/issues/38924
|
closed
|
[] | 2025-06-19T23:32:47Z
| 2025-08-12T08:03:14Z
| 10
|
EricJi150
|