| repo | number | title | body | url | state | labels | created_at | updated_at | comments | user |
|---|---|---|---|---|---|---|---|---|---|---|
huggingface/candle
| 2,974
|
Any good first issues a newcomer could tackle?
|
Hey! I've been using this crate for a while now and would love to start contributing back. I noticed that your issues aren't labelled; who should I contact, or do you have a list of issues that would be a good fit for me?
|
https://github.com/huggingface/candle/issues/2974
|
open
|
[] | 2025-05-29T04:19:18Z
| 2025-05-30T18:25:37Z
| 3
|
Heidar-An
|
pytorch/torchtitan
| 1,237
|
[Bug] Potential bugs in "_grouped_mm" in Llama4 MoE codes
|
### Bug description
I encountered NaN loss values when running Llama 4 MoE experimental codes.
The errors come from [here](https://github.com/pytorch/torchtitan/blob/ed2bbc07dda35ce26187bb0d743115381e884b35/torchtitan/experiments/llama4/model/moe.py#L85-L87).
As far as I know, `offsets` is defined as `torch.cumsum(num_local_tokens_per_expert)`, while `x` (`routed_input`) is permuted to a shape of `original_shape + num_experts * ALIGN_SIZE_M`.
Thus, there is a mismatch between `x.shape[0]` and `offsets[-1]`.
I'm not sure which expert the redundant rows of `x` are assigned to in `_grouped_mm`.
I believe the expected behavior is that their outputs should always be 0, because those rows are filled with zeros.
But `_grouped_mm` sometimes produces large values, and the first index of the outputs ends up with `inf` elements ([here](https://github.com/pytorch/torchtitan/blob/ed2bbc07dda35ce26187bb0d743115381e884b35/torchtitan/experiments/llama4/model/moe.py#L322)).
### How to Reproduce?
1. I used [Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B) tokenizer.
2. I used `debug_model.toml`, but with different batch size and seq_len in 1 H200 GPU. Here is the running script:
```
torchrun --nnodes 1 --nproc_per_node 1 ./torchtitan/train.py \
--job.config_file ./torchtitan/experiments/llama4/train_configs/debug_model.toml --job.dump_folder ./outputs/250528_grouped_mm_debug \
--profiling.save_traces_folder profile_trace --comm.trace_buf_size 0 --checkpoint.folder ./checkpoints/250528_grouped_mm_debug --checkpoint.interval 13000 \
--training.steps 114440 --training.batch_size 1 --training.seq_len 2048 \
--metrics.log_freq 100 --lr_scheduler.warmup_steps 1000 --optimizer.lr 6e-4 \
--parallelism.data_parallel_shard_degree 1 --parallelism.tensor_parallel_degree 1
```
3. Add `x = x.to(torch.bfloat16)` and `..., dtype=torch.bfloat16)` for `self.w1`, `self.w2`, and `self.w3`, since a single-GPU run automatically uses torch.float32 in the code and `_grouped_mm` requires the tensors to be in bfloat16 on the GPU.
4. I used `pdb` to get intermediate outputs one by one.
### Results and Expected Behaviors.
Routed outputs sometimes show the following results (at the first step or a few steps later):
```
offsets : tensor([ 176, 416, 736, 992, 1296, 1584, 1840, 2096], device='cuda:0', dtype=torch.int32)
x.shape : torch.Size([2176, 256])
h = F.silu(torch._grouped_mm(x, self.w1, offs=offsets)) :
tensor([[ 3.7598e-02, -9.3262e-02, 1.3965e-01, ..., -1.7822e-02,
-2.2949e-02, 2.0020e-02],
[ 1.1572e-01, 2.2461e-01, 3.1641e-01, ..., 8.6060e-03,
-5.3711e-02, -2.7100e-02],
[ 1.4551e-01, 2.1973e-02, 1.3086e-01, ..., -2.5269e-02,
3.7354e-02, -1.5503e-02],
...,
[-0.0000e+00, 2.9297e-02, -0.0000e+00, ..., 5.2246e-02,
7.7462e+18, -1.8066e-02],
[ 2.8531e+26, 5.1025e-02, -0.0000e+00, ..., 1.1670e-01,
3.2028e-28, 1.5076e-02],
[ 6.3348e+26, 3.8818e-02, 4.0250e+01, ..., -2.8229e-03,
2.4844e-32, -8.6670e-03]], device='cuda:0', dtype=torch.bfloat16,
grad_fn=<SiluBackward0>)
h = h * torch._grouped_mm(x, self.w3, offs=offsets)
tensor([[-1.8692e-03, -2.8992e-03, 1.6327e-03, ..., -1.5564e-03,
-1.0681e-02, 5.1022e-05],
[-5.5237e-03, 6.0425e-03, 1.0864e-02, ..., 9.8419e-04,
3.0396e-02, -4.2152e-04],
[-1.6785e-03, -4.5776e-04, -2.0142e-03, ..., 1.0193e-02,
-4.6082e-03, -1.3733e-04],
...,
[ 0.0000e+00, 1.2054e-03, -0.0000e+00, ..., -2.5177e-03,
3.5863e+11, -1.7548e-03],
[ -inf, 6.3705e-04, 0.0000e+00, ..., 9.5825e-03,
-2.9000e+02, 3.2234e-04],
[ 8.4410e+07, 4.0588e-03, -1.0379e+31, ..., 3.7432e-05,
1.2387e-07, -1.3733e-03]], device='cuda:0', dtype=torch.bfloat16,
grad_fn=<MulBackward0>)
out = torch._grouped_mm(h, self.w2, offs=offsets)
tensor([[ 6.3782e-03, 4.0894e-03, -1.3672e-02, ..., -8.4839e-03,
-2.8229e-03, -3.9978e-03],
[-1.9379e-03, -4.6387e-03, 8.5449e-03, ..., -4.8523e-03,
-4.4861e-03, -1.4114e-03],
[-3.1128e-03, -2.5177e-03, -3.4332e-03, ..., 1.3062e-02,
-6.7139e-03, -7.6904e-03],
...,
[-1.6251e-03, -1.3279e-10, -7.3787e+19, ..., -5.1659e-10,
-3.8780e+34, -3.5834e-10],
[ 4.7055e+34, -1.6735e-09, 6.0889e+18, ..., -1.1205e-09,
7.1024e+24, 3.1287e-10],
[-2.4087e-21, -2.1682e-09, 3.0898e+20, ..., 2.9831e-09,
2.4898e-30, 5.5297e-10]], device='cuda:0', dtype=torch.bfloat16,
grad_fn=<GroupedMmBackward0>)
```
We expect the tensors at sequence positions 2096 to 2176 to always be zero.
This causes the hidden states to contain NaN values, which eventually leads to NaN loss values.
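For reference, a minimal sketch of the behavior I would expect for the padded region, assuming the `offsets` naming from above (the helper name is hypothetical, not torchtitan code): rows at or beyond `offsets[-1]` belong to the alignment padding and should be zero so they cannot contribute `inf`/NaN values.
```python
import torch

def zero_padded_rows(out: torch.Tensor, offsets: torch.Tensor) -> torch.Tensor:
    # Rows >= offsets[-1] are alignment padding and should not carry values.
    num_valid = int(offsets[-1].item())
    mask = torch.arange(out.shape[0], device=out.device) < num_valid
    return out * mask.unsqueeze(-1).to(out.dtype)
```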
### Versions
Python 3.13 with the following packages:
```
absl-py==2.2.2
aiohappyeyeballs==2.6.1
aiohttp==3.11.18
aiosignal==1.3.2
annotated-types==0.7.0
ast
|
https://github.com/pytorch/torchtitan/issues/1237
|
closed
|
[] | 2025-05-29T00:07:09Z
| 2025-07-08T04:54:37Z
| 8
|
raymin0223
|
pytorch/xla
| 9,259
|
need an incremental build script
|
## 🚀 Feature
After making a small change to the source code, we should be able to do an incremental build that only rebuilds the affected targets. We need to document how to do that. It may require writing a script that can be easily invoked.
## Motivation
Currently we recommend that developers run https://github.com/pytorch/xla/blob/master/scripts/build_developer.sh to rebuild after a change. However, this script does a full rebuild (even though it may benefit from build caching), making it unnecessarily slow.
We should have a smart build script (e.g. based on make and/or bazel) that skips rebuilding the things that haven't changed.
|
https://github.com/pytorch/xla/issues/9259
|
closed
|
[
"tech debt",
"build"
] | 2025-05-28T23:15:38Z
| 2025-05-30T01:30:56Z
| 4
|
zhanyong-wan
|
huggingface/xet-core
| 358
|
How can I make snapshot_download resume after errors? Errors have become very common
|
Whenever an error happens and I run the same code again, it starts from 0.
It is a Xet-enabled repo and hf_xet is installed.
I really need a resume feature.
Here is my entire code:
```python
from huggingface_hub import snapshot_download
import os
import argparse

def download_models(target_dir=None):
    """
    Download models from HuggingFace hub to specified directory

    Args:
        target_dir (str, optional): Target directory for downloads.
            If None, uses current working directory
    """
    # Set repo ID
    repo_id = "MonsterMMORPG/Kohya_Train"
    # Use provided target dir or default to current working directory
    download_dir = target_dir if target_dir else os.getcwd()
    # Create target directory if it doesn't exist
    os.makedirs(download_dir, exist_ok=True)
    try:
        snapshot_download(
            local_dir=download_dir,
            repo_id=repo_id
        )
        print(f"\nDOWNLOAD COMPLETED to: {download_dir}")
        print("Check folder content for downloaded files")
    except Exception as e:
        print(f"Error occurred during download: {str(e)}")

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description='Download models from HuggingFace hub')
    parser.add_argument('--dir', type=str, help='Target directory for downloads', default=None)
    args = parser.parse_args()
    download_models(args.dir)
```
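For reference, a minimal retry sketch I would consider, assuming that re-running `snapshot_download` with the same `local_dir` skips files that are already complete (the retry loop itself is illustrative, not a huggingface_hub feature):
```python
import time
from huggingface_hub import snapshot_download

def download_with_retries(repo_id: str, local_dir: str, max_attempts: int = 5) -> None:
    # Each retry re-invokes snapshot_download; files already present in local_dir/cache
    # should be skipped, so only the missing pieces are fetched again.
    for attempt in range(1, max_attempts + 1):
        try:
            snapshot_download(repo_id=repo_id, local_dir=local_dir)
            return
        except Exception as err:  # illustrative; narrow the exception type in real code
            print(f"Attempt {attempt} failed: {err}")
            time.sleep(min(2 ** attempt, 60))
    raise RuntimeError("Download did not complete after retries")
```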
|
https://github.com/huggingface/xet-core/issues/358
|
closed
|
[
"enhancement"
] | 2025-05-28T22:30:19Z
| 2025-11-20T17:08:35Z
| null |
FurkanGozukara
|
pytorch/xla
| 9,256
|
Docs build issues errors / warnings on duplicate labels (anchors)
|
Docs build indicates that the docs have duplicate labels (aka anchors). These predate the recent changes to MyST, but now that we have standardized on the same tooling as upstream PyTorch, we should start fixing them. Here is an example output. Note that you have to manually clean by deleting the build directory to force a full rebuild.
```
(nightly311) yho_google_com@t1v-n-50ea3a23-w-0:/mnt/disks/yho/pytorch/xla/docs$ ./docs_build.sh
Obtaining pytorch_sphinx_theme from git+https://github.com/pytorch/pytorch_sphinx_theme.git#egg=pytorch_sphinx_theme (from -r requirements.txt (line 4))
Updating ./src/pytorch-sphinx-theme clone
Running command git fetch -q --tags
Running command git reset --hard -q 4125c834e1aa0945fde6ef58ff2f77f7abedc460
Installing build dependencies ... done
Checking if build backend supports build_editable ... done
Getting requirements to build editable ... done
Preparing editable metadata (pyproject.toml) ... done
Requirement already satisfied: sphinx==5.0.0 in /mnt/disks/yho/miniconda/envs/nightly311/lib/python3.11/site-packages (from -r requirements.txt (line 3)) (5.0.0)
Requirement already satisfied: sphinxcontrib.katex==0.8.6 in /mnt/disks/yho/miniconda/envs/nightly311/lib/python3.11/site-packages (from -r requirements.txt (line 8)) (0.8.6)
Requirement already satisfied: sphinx-copybutton==0.5.0 in /mnt/disks/yho/miniconda/envs/nightly311/lib/python3.11/site-packages (from -r requirements.txt (line 13)) (0.5.0)
Requirement already satisfied: myst-parser==0.18.1 in /mnt/disks/yho/miniconda/envs/nightly311/lib/python3.11/site-packages (from -r requirements.txt (line 15)) (0.18.1)
Requirement already satisfied: myst-nb==0.16 in /mnt/disks/yho/miniconda/envs/nightly311/lib/python3.11/site-packages (from -r requirements.txt (line 18)) (0.16.0)
Requirement already satisfied: sphinxcontrib-applehelp in /mnt/disks/yho/miniconda/envs/nightly311/lib/python3.11/site-packages (from sphinx==5.0.0->-r requirements.txt (line 3)) (2.0.0)
Requirement already satisfied: sphinxcontrib-devhelp in /mnt/disks/yho/miniconda/envs/nightly311/lib/python3.11/site-packages (from sphinx==5.0.0->-r requirements.txt (line 3)) (2.0.0)
Requirement already satisfied: sphinxcontrib-jsmath in /mnt/disks/yho/miniconda/envs/nightly311/lib/python3.11/site-packages (from sphinx==5.0.0->-r requirements.txt (line 3)) (1.0.1)
Requirement already satisfied: sphinxcontrib-htmlhelp>=2.0.0 in /mnt/disks/yho/miniconda/envs/nightly311/lib/python3.11/site-packages (from sphinx==5.0.0->-r requirements.txt (line 3)) (2.1.0)
Requirement already satisfied: sphinxcontrib-serializinghtml>=1.1.5 in /mnt/disks/yho/miniconda/envs/nightly311/lib/python3.11/site-packages (from sphinx==5.0.0->-r requirements.txt (line 3)) (2.0.0)
Requirement already satisfied: sphinxcontrib-qthelp in /mnt/disks/yho/miniconda/envs/nightly311/lib/python3.11/site-packages (from sphinx==5.0.0->-r requirements.txt (line 3)) (2.0.0)
Requirement already satisfied: Jinja2>=2.3 in /mnt/disks/yho/miniconda/envs/nightly311/lib/python3.11/site-packages (from sphinx==5.0.0->-r requirements.txt (line 3)) (3.1.6)
Requirement already satisfied: Pygments>=2.0 in /mnt/disks/yho/miniconda/envs/nightly311/lib/python3.11/site-packages (from sphinx==5.0.0->-r requirements.txt (line 3)) (2.19.1)
Requirement already satisfied: docutils<0.19,>=0.14 in /mnt/disks/yho/miniconda/envs/nightly311/lib/python3.11/site-packages (from sphinx==5.0.0->-r requirements.txt (line 3)) (0.18.1)
Requirement already satisfied: snowballstemmer>=1.1 in /mnt/disks/yho/miniconda/envs/nightly311/lib/python3.11/site-packages (from sphinx==5.0.0->-r requirements.txt (line 3)) (3.0.1)
Requirement already satisfied: babel>=1.3 in /mnt/disks/yho/miniconda/envs/nightly311/lib/python3.11/site-packages (from sphinx==5.0.0->-r requirements.txt (line 3)) (2.17.0)
Requirement already satisfied: alabaster<0.8,>=0.7 in /mnt/disks/yho/miniconda/envs/nightly311/lib/python3.11/site-packages (from sphinx==5.0.0->-r requirements.txt (line 3)) (0.7.16)
Requirement already satisfied: imagesize in /mnt/disks/yho/miniconda/envs/nightly311/lib/python3.11/site-packages (from sphinx==5.0.0->-r requirements.txt (line 3)) (1.4.1)
Requirement already satisfied: requests>=2.5.0 in /mnt/disks/yho/miniconda/envs/nightly311/lib/python3.11/site-packages (from sphinx==5.0.0->-r requirements.txt (line 3)) (2.32.3)
Requirement already satisfied: packaging in /mnt/disks/yho/miniconda/envs/nightly311/lib/python3.11/site-packages (from sphinx==5.0.0->-r requirements.txt (line 3)) (25.0)
Requirement already satisfied: markdown-it-py<3.0.0,>=1.0.0 in /mnt/disks/yho/miniconda/envs/nightly311/lib/python3.11/site-packages (from myst-parser==0.18.1->-r requirements.txt (line 15)) (2.2.0)
Requirement already satisfied: mdit-py-plugins~=0.3.1 in /mnt/disks/yho/miniconda/envs/nightly311/lib/python3.11/site-packages (from myst-parser==0.18.1->-r requirements.txt (line 15)) (0.3.5)
Requirement already satisfied: pyyaml in /mnt/disks/yho/miniconda/envs/nig
```
|
https://github.com/pytorch/xla/issues/9256
|
closed
|
[
"documentation"
] | 2025-05-28T19:02:14Z
| 2025-07-16T22:48:17Z
| 1
|
yaoshiang
|
huggingface/transformers
| 38,452
|
Memory saving by upcasting logits for only non-ignored positions
|
### Feature request
In [`loss_utils.py`](https://github.com/huggingface/transformers/blob/main/src/transformers/loss/loss_utils.py), logits are upcast to float32 for some losses. This can waste memory when some labels are `ignore_index`. This is especially true for fine-tuning cases where one chooses to calculate the loss only on the completion: the labels for prompt tokens are kept at -100, so upcasting those logits is unnecessary. We can instead call `logits.float()` after we have our final labels. This would be especially useful for `ForCausalLMLoss`, as that seems to be the most likely use case.
### Motivation
When fine-tuning a causal LM, one can choose to calculate the loss only on the completion, thus setting the labels for prompt tokens to -100. Upcasting the logits at those positions when calculating the loss is not needed, and avoiding it can save memory. The most likely use case is `ForCausalLMLoss`.
### Your contribution
An example for `ForCausalLMLoss`:
```python
def ForCausalLMLoss(
    logits,
    labels,
    vocab_size: int,
    num_items_in_batch: Optional[int] = None,
    ignore_index: int = -100,
    shift_labels: Optional[torch.Tensor] = None,
    **kwargs,
) -> torch.Tensor:
    # Don't upcast yet
    # logits = logits.float()

    if shift_labels is None:
        # Shift so that tokens < n predict n
        labels = nn.functional.pad(labels, (0, 1), value=ignore_index)
        shift_labels = labels[..., 1:].contiguous()

    # Flatten the tokens
    logits = logits.view(-1, vocab_size)
    shift_labels = shift_labels.view(-1)

    # Upcast to float if we need to compute the loss to avoid potential precision issues
    # Now that we have our final labels, take only the useful logits and then upcast
    logits = logits[shift_labels != ignore_index]
    shift_labels = shift_labels[shift_labels != ignore_index]
    logits = logits.float()

    # Enable model parallelism
    shift_labels = shift_labels.to(logits.device)
    # Calculate loss on truncated logits and labels
    loss = fixed_cross_entropy(logits, shift_labels, num_items_in_batch, ignore_index, **kwargs)
    return loss
```
We can do something similar in `ForMaskedLMLoss` on line 83 instead of 77. `ForTokenClassification` does not take `ignore_index` as an argument but we can still do the same here because `fixed_cross_entropy` does take `ignore_index`.
Another alternative would be to move the upcasting inside `fixed_cross_entropy`, but a few losses don't upcast at all, so that might change or break existing behavior.
Let me know if this change sounds good. I can submit a PR.
|
https://github.com/huggingface/transformers/issues/38452
|
open
|
[
"Feature request"
] | 2025-05-28T18:58:52Z
| 2025-05-29T12:38:15Z
| 1
|
harshit2997
|
huggingface/speech-to-speech
| 163
|
how to use this with Livekit Agent?
|
how to use this with Livekit Agent?
|
https://github.com/huggingface/speech-to-speech/issues/163
|
open
|
[] | 2025-05-28T18:27:11Z
| 2025-05-28T18:27:11Z
| null |
Arslan-Mehmood1
|
huggingface/transformers
| 38,448
|
num_items_in_batch is larger than the actual number of useful tokens when computing the loss
|
```python
def fixed_cross_entropy(source, target, num_items_in_batch: int = None, ignore_index: int = -100, **kwargs):
```
I checked the shapes of the inputs and found the following:
```
In [1]: logits.shape
Out[1]: torch.Size([4, 896, 152064])
In [2]: labels.shape
Out[2]: torch.Size([4, 896])
In [3]: num_items_in_batch
Out[3]: 4390
```
Why is 4390 > 4 * 896 = 3584?
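For reference, a small sketch of how to count the tokens that actually contribute to the loss for a single batch (assuming `labels` as shown above):
```python
num_valid = (labels != -100).sum().item()  # tokens that are not ignore_index
print(num_valid, labels.numel())           # compare against num_items_in_batch (4390)
```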
|
https://github.com/huggingface/transformers/issues/38448
|
closed
|
[] | 2025-05-28T15:28:05Z
| 2025-05-31T02:30:07Z
| 4
|
SHIFTTTTTTTT
|
huggingface/transformers
| 38,435
|
[i18n-ro] Translating docs to Romanian
|
Hi!
Let's bring the documentation to all the Romanian-speaking community 🌐
Who would want to translate? Please follow the 🤗 [TRANSLATING guide](https://github.com/huggingface/transformers/blob/main/docs/TRANSLATING.md). Here is a list of the files ready for translation. Let us know in this issue if you'd like to translate any, and we'll add your name to the list.
Some notes:
* Please translate using an informal tone (imagine you are talking with a friend about transformers 🤗).
* Please translate in a gender-neutral way.
* Add your translations to the folder called `<languageCode>` inside the [source folder](https://github.com/huggingface/transformers/tree/main/docs/source).
* Register your translation in `<languageCode>/_toctree.yml`; please follow the order of the [English version](https://github.com/huggingface/transformers/blob/main/docs/source/en/_toctree.yml).
* Once you're finished, open a pull request and tag this issue by including #issue-number in the description, where issue-number is the number of this issue. Please ping @stevhliu for review.
* 🙋 If you'd like others to help you with the translation, you can also post in the 🤗 [forums](https://discuss.huggingface.co/).
## Get Started section
- [ ] [index.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/index.md) (in progress, [see](https://github.com/zero-point/transformers/tree/add_ro_translation_to_readme))
- [ ] [quicktour.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/quicktour.md)
- [ ] [installation.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/installation.md).
## Tutorial section
- [ ] [pipeline_tutorial.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/pipeline_tutorial.md)
- [ ] [autoclass_tutorial.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/autoclass_tutorial.md)
- [ ] [preprocessing.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/preprocessing.md)
- [ ] [training.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/training.md)
- [ ] [accelerate.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/accelerate.md)
- [ ] [model_sharing.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_sharing.md)
- [ ] [multilingual.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/multilingual.md)
<!--
Keep on adding more as you go 🔥
-->
|
https://github.com/huggingface/transformers/issues/38435
|
open
|
[
"WIP"
] | 2025-05-28T12:01:48Z
| 2025-05-28T15:53:39Z
| 2
|
zero-point
|
huggingface/transformers
| 38,428
|
[Question] The logic of data sampler in data parallel.
|
Hi, thanks for your attention.
When reading the source code of transformers, I cannot understand the implementation of `_get_train_sampler` in `trainer.py`. Why is the default data sampler `RandomSampler` rather than `DistributedSampler`? How does the trainer handle the sampler for data parallelism?
reference code: https://github.com/huggingface/transformers/blob/main/src/transformers/trainer.py#L975
|
https://github.com/huggingface/transformers/issues/38428
|
closed
|
[] | 2025-05-28T08:49:13Z
| 2025-07-06T08:02:36Z
| 3
|
kxzxvbk
|
pytorch/TensorRT
| 3,536
|
❓ [Question] Do you have any plan to release v2.6.1 ?
|
## ❓ Question
Hello, Torch-TensorRT team,
I'd like to ask if there are any plans to release a patch version, such as v2.6.1.
The current release (v2.6.0) includes a `breakpoint()` call left in [the code](https://github.com/pytorch/TensorRT/blob/v2.6.0-rc3/py/torch_tensorrt/dynamo/conversion/custom_ops_converters.py#L57), which halts execution and makes the release unusable in production environments unless modified manually or installed from the source. Since `Torch-TensorRT` tightly couples with a specific TensorRT version and PyTorch version, there's currently no alternative.
A quick patch release would be greatly appreciated.
Thanks for your great work.
|
https://github.com/pytorch/TensorRT/issues/3536
|
closed
|
[
"question"
] | 2025-05-28T08:37:18Z
| 2025-06-03T04:50:48Z
| null |
junstar92
|
huggingface/transformers
| 38,425
|
Can not load TencentBAC/Conan-embedding-v2
|
### System Info
Description
When attempting to load the “Conan-embedding-v2” model directly via transformers.AutoModel.from_pretrained, I get a ValueError indicating that the repo’s config.json lacks a model_type key. This prevents the Transformers library from inferring which model class to instantiate.
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
from transformers import AutoModel
model = AutoModel.from_pretrained("TencentBAC/Conan-embedding-v2")
```
```
ValueError: Unrecognized model in TencentBAC/Conan-embedding-v2.
Should have a `model_type` key in its config.json, or contain one of the following strings in its name: albert, bart, bert, …, whisper, xlnet, …
```
### Expected behavior
AutoModel.from_pretrained("TencentBAC/Conan-embedding-v2") should load the model automatically, or at minimum provide guidance on how to set the correct model_type.
|
https://github.com/huggingface/transformers/issues/38425
|
closed
|
[
"bug"
] | 2025-05-28T08:21:23Z
| 2025-05-28T14:58:03Z
| 1
|
shanekao-sks
|
huggingface/accelerate
| 3,596
|
How to distribute the model into multiple GPUs using accelerate?
|
I have 4 GPUs. If I only use a single GPU to train the model, an OutOfMemoryError is raised. How can I distribute the model across all 4 GPUs with accelerate to avoid the OutOfMemoryError?
|
https://github.com/huggingface/accelerate/issues/3596
|
closed
|
[] | 2025-05-28T06:27:08Z
| 2025-05-28T14:06:18Z
| null |
GeorgeCarpenter
|
huggingface/candle
| 2,971
|
Enhance the usability of the tensor struct
|
Hello,
I’m currently learning how to use Candle with the book Dive into Deep Learning, but implementing the code in Candle. I noticed that Candle is missing some practical utility functions, such as:
* The Frobenius norm
* dot product (vector or matrix dot product)
* matrix-vector multiplication
While these functions aren’t overly complex to implement manually, having them natively supported by the Tensor struct would significantly improve usability.
I’ve tried adding some of these functions myself to extend Candle’s functionality (to make it more user-friendly).
|
https://github.com/huggingface/candle/issues/2971
|
closed
|
[] | 2025-05-28T03:41:44Z
| 2025-05-29T07:41:02Z
| 1
|
ssfdust
|
huggingface/transformers.js
| 1,323
|
Cannot get the SAM model running like in example
|
### Question
I've found that transformers.js supports SAM as written in 2.14.0 release notes.
https://github.com/huggingface/transformers.js/releases/tag/2.14.0
I'm running the code on a M1 mac in a Brave browser.
After adapting the example script, I can see in my browser console that the model is loaded and the browser is working on it.
<img width="1129" alt="Image" src="https://github.com/user-attachments/assets/fd256c77-62f5-4da2-a44c-cbb022333789" />
But then it suddenly crashes with the following error:
```
transformers.js:11821 Uncaught Error: An error occurred during model execution: "Missing the following inputs: input_points, input_labels.
```
**My adapted code looks like this:**
````javascript
// using version 3.5.1
import {AutoProcessor, RawImage, SamModel} from "./node_modules/@huggingface/transformers/dist/transformers.js";
const model = await SamModel.from_pretrained('Xenova/slimsam-77-uniform');
const processor = await AutoProcessor.from_pretrained('Xenova/slimsam-77-uniform');
const img_url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/corgi.jpg';
const raw_image = await RawImage.read(img_url);
const input_points = [[[340, 250]]] // 2D localization of a window
const inputs = await processor(raw_image, input_points);
const outputs = await model(inputs); /// Error happens here
const masks = await processor.post_process_masks(outputs.pred_masks, inputs.original_sizes, inputs.reshaped_input_sizes);
console.log(masks);
// [
// Tensor {
// dims: [ 1, 3, 410, 614 ],
// type: 'bool',
// data: Uint8Array(755220) [ ... ],
// size: 755220
// }
// ]
const scores = outputs.iou_scores;
console.log(scores);
// Tensor {
// dims: [ 1, 1, 3 ],
// type: 'float32',
// data: Float32Array(3) [
// 0.8350210189819336,
// 0.9786665439605713,
// 0.8379436731338501
// ],
// size: 3
// }
````
Markup:
````html
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
</head>
<body>
<h1>SAM DEMO</h1>
<script src="main.js" type="module">
</script>
<pre id="pre"></pre>
</body>
</html>
````
Could you give me a hint about what the issue is here, or what I need to change, e.g. due to major version changes?
Thanks so much :-)
|
https://github.com/huggingface/transformers.js/issues/1323
|
closed
|
[
"question"
] | 2025-05-27T20:01:49Z
| 2025-11-29T12:32:29Z
| null |
BernhardBehrendt
|
pytorch/tutorials
| 3,367
|
💡 [REQUEST] - Proposal: Add Tutorial on Differentiable Decision Forests (DNDF-style)
|
### 🚀 Describe the improvement or the new tutorial
### Proposal: Add a Tutorial/Documentation Example on Differentiable Decision Forests
**Overview**
This is a proposal to add a well-documented example or tutorial demonstrating a *Differentiable Decision Forest* model in PyTorch — inspired by the Deep Neural Decision Forests paper (Kontschieder et al., ICCV 2015).
The goal is not to introduce a new `torch.nn` module, but rather to show how such a model can be implemented using native PyTorch operations in a transparent and educational way.
**Why This?**
- Combines the interpretability of decision trees with the feature learning power of neural networks.
- Uses soft routing (sigmoid decisions) and learnable leaf distributions (softmax) to allow end-to-end backpropagation.
- Offers an alternative to traditional ensembles or black-box classifiers, especially for tabular and hybrid domains.
**What the Tutorial Would Include**
- Overview of the model structure (CNN → decision trees)
- How to implement soft decisions and routing probabilities (μ) with PyTorch ops like `sigmoid`, `softmax`, `einsum`, `gather`, etc. (see the short sketch after this list)
- Joint optimization of routing and leaf distributions
- Training on MNIST or tabular datasets
- Emphasis on "Simple over Easy" — no custom abstractions
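To make the soft-routing item above concrete, here is a minimal sketch for a single depth-2 tree (illustrative only; the names are mine, not from the paper or any planned tutorial code):
```python
import torch

def soft_routing(features: torch.Tensor, decision_weights: torch.Tensor) -> torch.Tensor:
    """features: (batch, d); decision_weights: (d, 3) for the 3 inner nodes of a depth-2 tree.

    Returns mu: (batch, 4) routing probabilities over the 4 leaves (rows sum to 1).
    """
    d = torch.sigmoid(features @ decision_weights)  # soft "go left" probability per inner node
    mu = torch.stack([
        d[:, 0] * d[:, 1],              # left, left
        d[:, 0] * (1 - d[:, 1]),        # left, right
        (1 - d[:, 0]) * d[:, 2],        # right, left
        (1 - d[:, 0]) * (1 - d[:, 2]),  # right, right
    ], dim=1)
    return mu
```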
**Reference**
- [Kontschieder et al., Deep Neural Decision Forests, ICCV 2015](https://www.microsoft.com/en-us/research/wp-content/uploads/2016/06/ICCV15_DeepNDF_main.pdf)
**Final Note**
This is not a request to add this as a built-in PyTorch module — in fact, that might go against PyTorch's *Simple over Easy* philosophy.
Instead, this would be best suited as a community-contributed tutorial or example in the official [PyTorch Tutorials](https://github.com/pytorch/tutorials) repository or documentation site.
Extended Note
I'm currently in the middle of university exams and may not be able to actively contribute for a few weeks — but I’d be very interested in helping develop the tutorial afterwards.
### Existing tutorials on this topic
_No response_
### Additional context
_No response_
|
https://github.com/pytorch/tutorials/issues/3367
|
open
|
[
"tutorial-proposal"
] | 2025-05-27T10:01:23Z
| 2025-07-02T15:00:18Z
| 6
|
Tunahanyrd
|
huggingface/chat-ui
| 1,836
|
Search feature tasks
|
We implemented a first version of the search chat feature in #1823; there are still some todos if people feel like tackling them:
- [ ] Right now we only return the N most relevant snippets; we would need to return all matching conversations and implement infinite loading & pagination. The building blocks already exist in `NavMenu.svelte`; they need to be ported over.
- [ ] It would be nice to show, below the conversation title, a small sample of text that matches the search query, so we can see why it matched; right now we only show the title.
|
https://github.com/huggingface/chat-ui/issues/1836
|
closed
|
[
"enhancement",
"help wanted",
"front",
"back"
] | 2025-05-27T08:17:44Z
| 2025-06-02T14:30:40Z
| 7
|
nsarrazin
|
huggingface/transformers
| 38,396
|
Can I disable all CI workflows in my fork of Transformers?
|
After I synced the `main` branch of Transformers in my fork, GitHub keeps running CI workflows and they fail. Can I disable them? Thanks.
|
https://github.com/huggingface/transformers/issues/38396
|
closed
|
[] | 2025-05-27T04:44:07Z
| 2025-05-28T18:06:31Z
| 2
|
ChengLyu
|
huggingface/doc-builder
| 564
|
How to ignore some line when applying style?
|
I have this in my code:
```python
expected_output = textwrap.dedent("""\
╭────────────────────── Step 42 ───────────────────────╮
│ ┏━━━━━━━━━━━━┳━━━━━━━━━━━━━━┳━━━━━━━━━━━━━┳━━━━━━━━┓ │
│ ┃ Prompt ┃ Completion ┃ Correctness ┃ Format ┃ │
│ ┡━━━━━━━━━━━━╇━━━━━━━━━━━━━━╇━━━━━━━━━━━━━╇━━━━━━━━┩ │
│ │ The sky is │ blue. │ 0.12 │ 0.79 │ │
│ ├────────────┼──────────────┼─────────────┼────────┤ │
│ │ The sun is │ in the sky. │ 0.46 │ 0.10 │ │
│ └────────────┴──────────────┴─────────────┴────────┘ │
╰──────────────────────────────────────────────────────╯
""")
```
And it gets reformatted into this:
```python
expected_output = textwrap.dedent("""\
╭────────────────────── Step 42 ───────────────────────╮ │ ┏━━━━━━━━━━━━┳━━━━━━━━━━━━━━┳━━━━━━━━━━━━━┳━━━━━━━━┓
│ │ ┃ Prompt ┃ Completion ┃ Correctness ┃ Format ┃ │ │ ┡━━━━━━━━━━━━╇━━━━━━━━━━━━━━╇━━━━━━━━━━━━━╇━━━━━━━━┩ │ │
│ The sky is │ blue. │ 0.12 │ 0.79 │ │ │ ├────────────┼──────────────┼─────────────┼────────┤ │ │ │ The sun is
│ in the sky. │ 0.46 │ 0.10 │ │ │ └────────────┴──────────────┴─────────────┴────────┘ │
╰──────────────────────────────────────────────────────╯
""")
```
is there a way to avoid this?
|
https://github.com/huggingface/doc-builder/issues/564
|
open
|
[] | 2025-05-26T21:58:08Z
| 2025-05-26T21:59:13Z
| null |
qgallouedec
|
huggingface/safetensors
| 609
|
Properties data
|
### Feature request
Please add properties for the content of safetensor files.
(Which can be read without the requirement to load the whole thing ...)
### Motivation
Imagine renaming all your safetensors files to numeric values from 1.safetensors to n.safetensors, where n is the number of such files you have.
Now try to find out what is inside, for example:
- Model type (checkpoint, lora, ip-adapter-files, anything else)
- Application type (SD1, SD2, SD3, SDXL, FLUX, Audio, Video and more)
- Original name
- Version
- and more ...
A safetensors file is like a package without any description: there is something inside, but you have no way to see what it is.
What users are missing is the package label that tells them what's inside, like anything in a warehouse. When you go shopping, such a label tells you the name, the producer's name, the weight, and usually something about the ingredients.
It would be very useful, if a safetensor package could do this too.
### Your contribution
I just have the idea.
I don't know how to PR ...
|
https://github.com/huggingface/safetensors/issues/609
|
closed
|
[] | 2025-05-26T20:06:13Z
| 2025-06-16T12:13:08Z
| 2
|
schoenid
|
huggingface/open-r1
| 660
|
How to control the number of responses per query for each benchmark?
|
Hi, thank you for the great work!
In the README, I noticed that you mention the use of different numbers of responses per query for estimating pass@1 across benchmarks. For example:
Benchmark | Number of responses per query
-- | --
AIME 2024 | 64
MATH-500 | 4
GPQA Diamond | 8
LiveCodeBench | 16
However, I'm unable to find where in the code or CLI these values are configured. When running the following example:
```
NUM_GPUS=1
MODEL=deepseek-ai/{model_name}
MODEL_ARGS="model_name=$MODEL,dtype=bfloat16,max_model_length=32768,gpu_memory_utilization=0.8,data_parallel_size=$NUM_GPUS,generation_parameters={max_new_tokens:32768,temperature:0.6,top_p:0.95}"
OUTPUT_DIR=data/evals/$MODEL
lighteval vllm $MODEL_ARGS "lighteval|aime24|0|0" \
--use-chat-template \
--output-dir $OUTPUT_DIR
```
Does this automatically sample 64 responses per query for AIME24, as indicated in the table? Or do I need to explicitly specify the number of responses? If so, how can I pass that parameter through the CLI?
|
https://github.com/huggingface/open-r1/issues/660
|
open
|
[] | 2025-05-26T14:38:15Z
| 2025-05-27T15:32:50Z
| null |
Zoeyyao27
|
huggingface/transformers
| 38,377
|
Why are the model classes in unit tests imported from the transformers package instead of directly from the model file? Is there any special consideration?
|
### Feature request
Take the Qwen3MoE unit test as an example:
```python
if is_torch_available():
    import torch

    from transformers import (
        Qwen3MoeForCausalLM,
        Qwen3MoeForQuestionAnswering,
        Qwen3MoeForSequenceClassification,
        Qwen3MoeForTokenClassification,
        Qwen3MoeModel,
    )
```
Why not this:
```python
from src.transformers.models.qwen3_moe.modeling_qwen3_moe import (
    Qwen3MoeForCausalLM,
    Qwen3MoeForQuestionAnswering,
    Qwen3MoeForSequenceClassification,
    Qwen3MoeForTokenClassification,
    Qwen3MoeModel,
)
```
### Motivation
Unit tests should guard their own code files
### Your contribution
No PR has been submitted yet
|
https://github.com/huggingface/transformers/issues/38377
|
open
|
[
"Feature request"
] | 2025-05-26T11:41:19Z
| 2025-05-26T11:41:19Z
| 0
|
ENg-122
|
huggingface/transformers
| 38,375
|
Unable to run run_instance_segmentation_no_trainer with HF Accelerate
|
### System Info
I am trying to run the [examples/pytorch/instance-segmentation/run_instance_segmentation_no_trainer.py](https://github.com/huggingface/transformers/blob/d1b92369ca193da49f9f7ecd01b08ece45c2c9aa/examples/pytorch/instance-segmentation/run_instance_segmentation_no_trainer.py) with HF Accelerate. I was able to run the other Trainer API example successfully, but the No Trainer (Accelerate) version is facing the following bug.
This is using the `4.52.0.dev0` version. The only change I've made was to set epochs=2.
The following error arose. When prompting ChatGPT for more information, it suggested the issues below, but I have no idea what the root cause could be. No other related issues were found and the docs bot was not working. I would appreciate advice on how to run this example script, as I hope to adapt it for my task.
| **Category** | **Potential Issue** | **Explanation** | **Recommended Fix** |
|----------------------------|--------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------|
| **Model Config Mismatch** | Mismatch in `num_labels` vs checkpoint (81 vs 3) | Causes some layers (e.g., `class_predictor`) to be randomly initialized, might desync ranks | Set `config.num_labels = 3` **before** loading the model or use a matching checkpoint |
| **DDP Desynchronization** | Different logic across ranks (e.g., `if rank == 0:` doing extra things) | All ranks must call collectives in the same order and time | Ensure logic is **identical** across all ranks |
| **Evaluation in DDP** | Evaluation logic not synchronized | Can cause hanging during collective ops like `all_gather` | Skip evaluation for non-zero ranks or use `if rank == 0:` carefully |
| **GPU Communication** | NCCL timeout or deadlock due to driver/hardware/GIL issues | Long-running or stuck collectives cause watchdog termination | Set env vars: `NCCL_BLOCKING_WAIT=1`, `NCCL_ASYNC_ERROR_HANDLING=1`, and reduce batch size if needed |
| **Distributed Setup** | Improper `accelerate` or `torchrun` configuration | One process might be behaving incorrectly | Test with single GPU first: `CUDA_VISIBLE_DEVICES=0 accelerate launch --num_processes=1 ...` |
| **Deprecated Args** | `_max_size` passed to `Mask2FormerImageProcessor` | Harmless, but messy | Remove `_max_size` from processor initialization |
| **Resource Overload** | GPU memory, bandwidth, or CPU bottleneck | Can indirectly cause slowdowns or crashes | Monitor with `nvidia-smi`, lower batch size, reduce `num_workers` |
Error message below:
```
loading weights file model.safetensors from cache at /home/jiayi/.cache/huggingface/hub/models--facebook--mask2former-swin-tiny-coco-instance/snapshots/22c4a2f15dc88149b8b8d9f4d42c54431fbd66f6/model.safetensors
Instantiating SwinBackbone model under default dtype torch.float32.
All model checkpoint weights were used when initializing Mask2FormerForUniversalSegmentation.
Some weights of Mask2FormerForUniversalSegmentation were not initialized from the model checkpoint at facebook/mask2former-swin-tiny-coco-instance and are newly initialized because the shapes did not match:
- class_predictor.bias: found shape torch.Size([81]) in the checkpoint and torch.Size([3]) in the model instantiated
- class_predictor.weight: found shape torch.Size([81, 256]) in the checkpoint and torch.Size([3, 256]) in the model instantiated
- criterion.empty_weight: found shape torch.Size([81]) in the checkpoint and torch.Size([3]) in the model instantiated
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
/raid/jiayi/safety_barrier_breach/mask2former_hf/venv/lib/python
```
|
https://github.com/huggingface/transformers/issues/38375
|
closed
|
[
"bug"
] | 2025-05-26T10:23:04Z
| 2025-07-05T08:03:07Z
| 3
|
gohjiayi
|
huggingface/huggingface_hub
| 3,117
|
How to download Hugging Face model files (organizing the HTTP headers and so on) in another language
|
Hi,
I want to use another language, like Java or Scala, to download a Hugging Face model and its config.json, but I get a connection error, which doesn't make sense to me. So I want to know whether Hugging Face needs some additional settings to download files.
```scala
package torch.tr

import java.io.FileOutputStream
import java.net.URI
import java.net.http.{HttpClient, HttpRequest, HttpResponse}
import java.time.Duration

object HuggingFaceDownloader {
  def main(args: Array[String]): Unit = {
    val fileUrl = "https://huggingface.co/codellama/CodeLlama-7b-Instruct-hf/resolve/main/config.json"
    val savePath = "config.json"
    val headers = Map(
      "Accept-Encoding" -> "identity",
      // "user-agent" -> "transformers/0.0.1; java/23.0.2+7-58; hf_hub/null; java/23.0.2; file_type/config; from_autoclass/false; session_id/1AC306C59B944E9EA06A482682BE9584; unknown/None",
      "authorization" -> "Bearer hf_XXAdogOLotfVSVFMKrWXSITeByDgRe"
    )
    try {
      downloadFile(fileUrl, savePath, headers)
      println(s"File downloaded successfully, saved to: $savePath")
    } catch {
      case e: Exception =>
        System.err.println(s"File download failed: ${e.getMessage}")
        e.printStackTrace()
    }
  }

  def downloadFile(fileUrl: String, savePath: String, headers: Map[String, String]): Unit = {
    val client = HttpClient.newBuilder()
      .connectTimeout(Duration.ofSeconds(10))
      .followRedirects(HttpClient.Redirect.NORMAL)
      .build()
    val requestBuilder = HttpRequest.newBuilder()
      .uri(URI.create(fileUrl))
      .GET()
    headers.foreach { case (key, value) =>
      requestBuilder.header(key, value)
    }
    val request = requestBuilder.build()
    val response = client.send(request, HttpResponse.BodyHandlers.ofInputStream())
    if (response.statusCode() == 200) {
      val inputStream = response.body()
      val outputStream = new FileOutputStream(savePath)
      try {
        val buffer = new Array[Byte](4096)
        var bytesRead = inputStream.read(buffer)
        while (bytesRead != -1) {
          outputStream.write(buffer, 0, bytesRead)
          bytesRead = inputStream.read(buffer)
        }
      } finally {
        inputStream.close()
        outputStream.close()
      }
    } else {
      throw new Exception(s"Download failed, status code: ${response.statusCode()}")
    }
  }
}
```
```java
package dev.transformers4j.transformers;

import java.io.BufferedInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.net.URL;

public class HuggingFaceDownloader2 {
    public static void main(String[] args) {
        String fileUrl = "https://huggingface.co/codellama/CodeLlama-7b-Instruct-hf/resolve/main/config.json";
        String savePath = "config.json"; // local path to save the file
        try {
            downloadFile(fileUrl, savePath);
            System.out.println("File downloaded successfully, saved to: " + savePath);
        } catch (IOException e) {
            System.err.println("File download failed: " + e.getMessage());
            e.printStackTrace();
        }
    }

    /**
     * Download a file from the specified URL and save it to a local path
     * @param fileUrl  the URL of the file to download
     * @param savePath the local path to save the file
     * @throws IOException if an I/O error occurs while downloading or saving the file
     */
    public static void downloadFile(String fileUrl, String savePath) throws IOException {
        URL url = new URL(fileUrl);
        try (BufferedInputStream in = new BufferedInputStream(url.openStream());
             FileOutputStream fileOutputStream = new FileOutputStream(savePath)) {
            System.out.println("<UNK>: " + savePath);
            byte[] dataBuffer = new byte[1024];
            int bytesRead;
            while ((bytesRead = in.read(dataBuffer, 0, 1024)) != -1) {
                fileOutputStream.write(dataBuffer, 0, bytesRead);
            }
        }
    }
}
```
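For comparison, a minimal Python sketch of the same request, which can help check whether the problem is network/auth related or specific to the JVM clients (the token placeholder is mine, not a real value):
```python
import requests

url = "https://huggingface.co/codellama/CodeLlama-7b-Instruct-hf/resolve/main/config.json"
headers = {"authorization": "Bearer <your_hf_token>"}  # placeholder token

# huggingface.co may answer with a redirect; requests follows it by default.
resp = requests.get(url, headers=headers, timeout=10)
resp.raise_for_status()
with open("config.json", "wb") as f:
    f.write(resp.content)
print("status:", resp.status_code, "bytes:", len(resp.content))
```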
|
https://github.com/huggingface/huggingface_hub/issues/3117
|
open
|
[] | 2025-05-26T10:00:25Z
| 2025-06-15T14:55:48Z
| null |
mullerhai
|
huggingface/agents-course
| 510
|
Can anyone run the Unit 1 dummy agent notebook?
|
<img width="1226" alt="Image" src="https://github.com/user-attachments/assets/1813be3d-0d73-478e-86fa-11304e796614" />
|
https://github.com/huggingface/agents-course/issues/510
|
closed
|
[
"question"
] | 2025-05-25T03:00:04Z
| 2025-06-25T09:03:52Z
| null |
chaoshun2025
|
pytorch/torchtitan
| 1,223
|
How to pretrain from scratch a Qwen 2.5 7B-base model using Torchtitan?
|
Hi team,
Thank you for the excellent work!
Could you please tell me where to find example scripts/templates for pretraining from scratch a Qwen 2.5 7B-base model using Torchtitan?
Thanks again!
|
https://github.com/pytorch/torchtitan/issues/1223
|
closed
|
[] | 2025-05-25T00:42:15Z
| 2025-08-21T03:18:41Z
| null |
tjoymeed
|
huggingface/transformers
| 38,346
|
Why is return_assistant_tokens_mask and continue_final_message incompatible?
|
I'm currently authoring a new chat template and, while debugging, encountered the check for this. However, when disabling the check, the resulting mask and template both still seem to be correct. So I'm curious as to why, or whether, this check is needed at all.
I can see it was introduced in [the original PR](https://github.com/huggingface/transformers/pull/33198), however there doesn't seem to be any justification/explanation for this assertion.
|
https://github.com/huggingface/transformers/issues/38346
|
closed
|
[] | 2025-05-24T23:44:13Z
| 2025-07-02T08:03:11Z
| 2
|
nyxkrage
|
huggingface/candle
| 2,967
|
Logit Discrepancy Between Candle and PyTorch When Using XLM-RoBERTa Model
|
When running the same XLM-RoBERTa model (`s-nlp/xlmr_formality_classifier` - [HF](https://huggingface.co/s-nlp/xlmr_formality_classifier) ) in both Candle and PyTorch, I'm observing significant differences in the logits produced by the model's classification head for identical inputs. Is this expected behavior? See [this repository](https://github.com/jpe90/candle-pytorch-parity-testing/tree/master/xlm-roberta-finetuned) for a reproduction.
## Environment/Setup
- Model: `s-nlp/xlmr_formality_classifier`
- Candle version: 0.9.1
- Model SHA256: `66037d963856d6d001f3109d2b3cf95c76bce677947e66f426299c89bc1b58e7`
- OS: macOS
## Observed Behavior
Given identical inputs, the logits produced by Candle and PyTorch differ significantly:
**Candle logits:**
```
[[2.0820313, -1.7548828], [0.7783203, -0.5629883], [1.2871094, -1.0039063], [2.1601563, -1.9277344]]
```
**PyTorch logits:**
```
[[ 2.6433, -2.3445],
[ 1.0379, -0.9621],
[ 1.4154, -1.2704],
[ 3.4423, -3.1726]]
```
## Expected Behavior
I would expect the logits to be extremely close (within floating-point precision differences) when running the same model with identical inputs across different frameworks.
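For reference, a small sketch quantifying the gap using the values above (the numbers are copied from the two outputs):
```python
import torch

candle = torch.tensor([[2.0820313, -1.7548828], [0.7783203, -0.5629883],
                       [1.2871094, -1.0039063], [2.1601563, -1.9277344]])
pytorch = torch.tensor([[2.6433, -2.3445], [1.0379, -0.9621],
                        [1.4154, -1.2704], [3.4423, -3.1726]])
print((candle - pytorch).abs().max())  # ~1.28, far beyond bf16/f32 rounding noise
```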
## Steps to Reproduce
1. Clone the repository: https://github.com/jpe90/candle-pytorch-parity-testing
2. Run the PyTorch implementation in `/xlm-roberta-finetuned/pytorch/main.py`
3. Run the Candle implementation in `/xlm-roberta-finetuned/candle/src/main.rs`
4. Compare the logits produced by both implementations
## Additional Context
- The tokenization appears to be identical between both implementations (identical token IDs)
- I checked and made sure model checksums match at runtime
- Config seems to match ([see here](https://github.com/jpe90/candle-pytorch-parity-testing/blob/master/xlm-roberta-finetuned/troubleshooting.md))
## Questions
1. Should I expect identical (or very close) logits between PyTorch and Candle implementations?
2. If differences are expected, what is the acceptable range of variation?
3. Could these differences impact more sensitive applications that rely on logit values rather than just the final classifications?
4. Are there known issues with XLM-RoBERTa models specifically in Candle?
|
https://github.com/huggingface/candle/issues/2967
|
closed
|
[] | 2025-05-24T17:24:33Z
| 2025-05-26T10:45:24Z
| 2
|
jpe90
|
huggingface/diffusers
| 11,607
|
With a custom attention processor for Flux.dev, inference time changes when manually loading and injecting the transformer model into a Flux pipeline versus letting the pipeline constructor load the transformer internally
|
With a custom attention processor for Flux.dev transformer, the inference time is different between the following two ways:
1. Manually load and inject the transformer into a flux.dev pipeline
2. Let the pipeline constructor load the transformer internally
The first approach is about 15% slower than the second.
What is the reason?
I built diffusers from the source code.
Any insights are appreciated!
|
https://github.com/huggingface/diffusers/issues/11607
|
closed
|
[] | 2025-05-24T06:42:11Z
| 2025-05-26T01:27:00Z
| 1
|
LinchuanXuTheSEAAI
|
huggingface/transformers
| 38,326
|
Allow `MllamaModel` to accept `pixel_values` and `inputs_embeds`
|
### Feature request
`MllamaModel` does not allow users to pass `pixel_values` and `inputs_embeds` simultaneously:
https://github.com/huggingface/transformers/blob/54cd86708d2b63a1f696ee1c59384a2f04100f57/src/transformers/models/mllama/modeling_mllama.py#L1702-L1705
However, commenting out those lines and running the following script does generate the same logits:
```python
import torch
from transformers import MllamaForConditionalGeneration, AutoProcessor
model_id = "meta-llama/Llama-3.2-11B-Vision-Instruct"
model = MllamaForConditionalGeneration.from_pretrained(
model_id, device_map="auto", torch_dtype=torch.bfloat16
)
processor = AutoProcessor.from_pretrained(model_id)
messages = [
[
{
"role": "user",
"content": [
{
"type": "image",
"url": "https://llava-vl.github.io/static/images/view.jpg",
},
{"type": "text", "text": "What does the image show?"},
],
}
],
]
inputs = processor.apply_chat_template(
messages,
add_generation_prompt=True,
tokenize=True,
return_dict=True,
return_tensors="pt",
).to(model.device)
outputs = model(**inputs)
# Manually compute inputs_embeds
input_ids = inputs.pop("input_ids")
inputs_embeds = model.get_input_embeddings()(input_ids)
new_outputs = model(inputs_embeds=inputs_embeds, **inputs)
assert torch.allclose(outputs.logits, new_outputs.logits)
```
### Motivation
Being able to pass `inputs_embeds` along with `pixel_values` enables soft embeddings to be passed to the model in addition to images, which is useful for prompt tuning.
### Your contribution
Could contribute a PR removing the check assuming there isn't something I'm unaware of about the check.
|
https://github.com/huggingface/transformers/issues/38326
|
closed
|
[
"Feature request"
] | 2025-05-23T15:26:28Z
| 2025-05-27T16:33:57Z
| 1
|
dxoigmn
|
pytorch/audio
| 3,918
|
`io.UnsupportedOperation: seek` when using `torchaudio.io.StreamWriter` with a File-like object
|
### 🐛 Describe the bug
In [the tutorial for `StreamWriter`](https://docs.pytorch.org/audio/stable/tutorials/streamwriter_basic_tutorial.html#file-like-objects), it is clearly stated that `StreamWriter` works with File-like object that implements `io.RawIOBase.write`. However, when I used `StreamWriter` with the [Google Cloud Storage `BlobWriter`](https://cloud.google.com/python/docs/reference/storage/latest/google.cloud.storage.fileio.BlobWriter) object that implements `write` but not `seek`, an error is thrown on calling `StreamWriter.close()`:
```python
from google.cloud.storage import Blob
from torchaudio.io import StreamWriter
blob = Blob(name=..., bucket=...)
with blob.open("wb") as f:
writer = StreamWriter(dst=f)
with writer.open():
...
```
```
self = <torio.io._streaming_media_encoder.StreamingMediaEncoder object at 0x110096190>
def close(self):
"""Close the output
:py:class:`StreamingMediaEncoder` is also a context manager and therefore supports the
``with`` statement.
It is recommended to use context manager, as the file is closed automatically
when exiting from ``with`` clause.
See :py:meth:`StreamingMediaEncoder.open` for more detail.
"""
if self._is_open:
> self._s.close()
E io.UnsupportedOperation: seek
.venv/lib/python3.11/site-packages/torio/io/_streaming_media_encoder.py:451: UnsupportedOperation
```
Clearly `seek` is called in `close()`, which causes this error. For now, can I get around this issue by not calling `close` on the `writer` object, but still calling `close` on the `blob` object?
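In the meantime, a minimal workaround sketch I'm considering, assuming buffering the whole output in memory is acceptable: render into a seekable `io.BytesIO` and then write the bytes to the blob (the stream setup values below are illustrative):
```python
import io
import torch
from torchaudio.io import StreamWriter

buffer = io.BytesIO()  # seekable, so StreamWriter can finalize the header on close
writer = StreamWriter(dst=buffer, format="wav")
writer.add_audio_stream(sample_rate=16000, num_channels=1)
with writer.open():
    writer.write_audio_chunk(0, torch.zeros(16000, 1))  # placeholder: 1 second of silence

with blob.open("wb") as f:  # `blob` as in the snippet above
    f.write(buffer.getvalue())
```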
### Versions
google-cloud-storage 3.1.0
torchaudio 2.6.0
|
https://github.com/pytorch/audio/issues/3918
|
open
|
[] | 2025-05-23T15:24:45Z
| 2025-05-23T15:40:48Z
| 0
|
digicosmos86
|
huggingface/transformers
| 38,323
|
`PYTHONOPTIMIZE=2` does not seem to work with `transformers`-based libraries
|
### System Info
I currently have the latest packages installed.
torch 2.6.0+cu124
transformers 4.51.3
sentence-transformers 4.1.0
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Error:
```python
File "<frozen importlib._bootstrap>", line 488, in _call_with_frames_removed
File "D:\Dataset\AgentAI\.venv\Lib\site-packages\transformers\modeling_utils.py", line 5494, in <module>
class SQuADHead(nn.Module):
...<113 lines>...
)
File "D:\Dataset\AgentAI\.venv\Lib\site-packages\transformers\modeling_utils.py", line 5513, in SQuADHead
@replace_return_docstrings(output_type=SquadHeadOutput, config_class=PretrainedConfig)
~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Dataset\AgentAI\.venv\Lib\site-packages\transformers\utils\doc.py", line 1194, in docstring_decorator
lines = func_doc.split("\n")
^^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'split'
```
A simple reproduction:
```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer('all-MiniLM-L6-v2')
embedding = model.encode("What is the capital of France?")
print(embedding.shape)
```
### Expected behavior
This is not actually a bug, but I expect a documentation update from the `transformers` maintainers for end-users who use or develop a `transformers`-based library: the function `replace_return_docstrings` in `src/transformers/utils/doc.py` relies on docstrings being present, so users should not strip them out by setting `PYTHONOPTIMIZE=2` to reduce the bytecode size. Using `PYTHONOPTIMIZE=1` is OK.
The reason is that `replace_return_docstrings` is a decorator that does not support the case of a missing docstring. In some cases, such as web hosting on Docker, production environments, or hosting an LLM without tool calls, we usually strip out the docstrings.
In the reproduction above (my use case), I just need to run the RAG search and thus don't need the docstrings to be there.
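For illustration, the kind of guard I have in mind (a minimal sketch, not the actual `transformers` code):
```python
def replace_return_docstrings_safe(fn):
    # Under PYTHONOPTIMIZE=2 the docstring is stripped, so fn.__doc__ is None;
    # fall back to an empty string instead of crashing on .split().
    func_doc = fn.__doc__ or ""
    lines = func_doc.split("\n")
    fn.__doc__ = "\n".join(lines)  # the real decorator would rewrite the Returns section here
    return fn
```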
|
https://github.com/huggingface/transformers/issues/38323
|
closed
|
[
"bug"
] | 2025-05-23T14:24:34Z
| 2025-05-26T14:29:17Z
| 1
|
IchiruTake
|
huggingface/candle
| 2,965
|
Are there any support for complex number?
|
Are there any support for complex number?
|
https://github.com/huggingface/candle/issues/2965
|
closed
|
[] | 2025-05-23T09:33:47Z
| 2025-11-23T22:16:54Z
| 1
|
hndrbrm
|
huggingface/accelerate
| 3,586
|
Where is PartialState._shared_state initialized?
|
Hi! When I step through the code line by line, before this line ([entering into `__init__` of `AcceleratorState`](https://github.com/huggingface/accelerate/blob/v0.34.2/src/accelerate/state.py#L856 )) , `PartialState._shared_state`returns
```
{}
```
But after entering into `__init__` of `AcceleratorState`, `PartialState._shared_state`returns
```
{'_cpu': False, 'backend': 'nccl', 'device': device(type='cuda', index=0), 'debug': False, 'distributed_type': <DistributedType.DEE...EEPSPEED'>, 'num_processes': 1, 'process_index': 0, 'local_process_index': 0, 'fork_launched': False}
```
I'm wondering where is `PartialState._shared_state` initialized?
|
https://github.com/huggingface/accelerate/issues/3586
|
closed
|
[] | 2025-05-23T08:17:44Z
| 2025-06-30T15:08:15Z
| null |
SonicZun
|
pytorch/ao
| 2,249
|
int4_weight_only: weights returned by get_plain() are padded
|
I tried to quantize a model with int4_weight_only and wanted to get the plain weight, but found that the weight has been padded. To reproduce it, run the following script:
```python
import torch
from transformers import TorchAoConfig, AutoModelForCausalLM
model_name = "JackFram/llama-68m"
quantization_config = TorchAoConfig("int4_weight_only")
quantized_model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16, device_map="cuda:0", quantization_config=quantization_config)
print(quantized_model.model.layers[0].self_attn.q_proj.weight.tensor_impl.get_plain()[0].shape)
print(quantized_model.model.layers[0].self_attn.q_proj.weight.tensor_impl.get_plain()[0])
```
output
```
(768, 1024)
tensor([[11, 12, 8, ..., 0, 0, 0],
[ 5, 6, 5, ..., 0, 0, 0],
[ 5, 7, 7, ..., 0, 0, 0],
...,
[ 7, 5, 2, ..., 0, 0, 0],
[ 6, 1, 7, ..., 0, 0, 0],
[ 8, 11, 9, ..., 0, 0, 0]], device='cuda:0', dtype=torch.int32)
```
The original shape should be `(768, 768)`, but the plain weight shape is `(768, 1024)`. Can we have a padding-removal step in the `get_plain()` function?
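As a temporary workaround, a minimal sketch of slicing the padding off manually, assuming `get_plain()` returns `(int_data, scale, zero_point)` as the `[0]` indexing above suggests, that the padding is only appended along the last dimension, and that `in_features` is still exposed on the linear module:
```python
layer = quantized_model.model.layers[0].self_attn.q_proj
int_data, scale, zero_point = layer.weight.tensor_impl.get_plain()
unpadded = int_data[:, :layer.in_features]  # drop the trailing zero padding
print(unpadded.shape)  # expected: torch.Size([768, 768])
```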
|
https://github.com/pytorch/ao/issues/2249
|
open
|
[
"question",
"quantize_"
] | 2025-05-23T07:17:20Z
| 2025-06-24T20:14:53Z
| null |
jiqing-feng
|
huggingface/transformers
| 38,300
|
Will Gemma 3n be added to transformers?
|
### Model description
Question: Are there plans from Google or Huggingface to implement Gemma 3n in other frameworks?
I've seen the LiteRT weights and the Android app link on Hugging Face, and was wondering if it would be possible to convert the model architecture in the *.task file to a PyTorch transformers Module.
Personally, I'm really interested in the Per-Layer Embeddings and the MatFormer implementation they used, but I do not have any experience with TensorFlow Lite.
### Open source status
- [ ] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
https://huggingface.co/google/gemma-3n-E4B-it-litert-preview
|
https://github.com/huggingface/transformers/issues/38300
|
closed
|
[
"New model"
] | 2025-05-22T15:26:20Z
| 2025-06-30T07:07:53Z
| 4
|
TheMrCodes
|
huggingface/transformers
| 38,281
|
KeyError in Llama-4-Maverick-17B-128E-Instruct-FP8 Inference with Offloading
|
### Issue Description
Loading `meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8` succeeds with `transformers==4.51.0`, but inference fails with `KeyError: 'model.layers.37.feed_forward.experts.gate_up_proj'` during `model.generate`. This occurs on 4x NVIDIA RTX A6000 (~196GB VRAM, CUDA 12.4, Python 3.12.3, Ubuntu 24.04.2) with offloading, critical for sentiment analysis (~100–150GB/day, ~85–90% accuracy). Disabling MoE (`num_experts=0`) didn’t resolve it.
### Steps to Reproduce
1. Install dependencies:
   ```bash
   pip install torch==2.4.1 accelerate==1.7.0 compressed-tensors==0.9.4 transformers==4.51.0
   ```
2. Confirm model files (~389GB, 84 .safetensors) at /mnt/data/ai_super_palace/models/llama4/.
3. Run:
   ```python
   import os
   import torch
   from transformers import AutoModelForCausalLM, AutoTokenizer

   os.environ["TORCHVISION_DISABLE_NMS"] = "1"
   model = AutoModelForCausalLM.from_pretrained(
       '/mnt/data/ai_super_palace/models/llama4',
       torch_dtype=torch.float16,
       device_map="auto",
       low_cpu_mem_usage=True,
       offload_folder="/mnt/data/ai_super_palace/models/llama4/offload",
       config={"parallel_style": "none"}
   )
   tokenizer = AutoTokenizer.from_pretrained('/mnt/data/ai_super_palace/models/llama4')
   prompt = "What is the sentiment of this text: 'I love this product, it's amazing!'"
   inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
   outputs = model.generate(**inputs, max_new_tokens=50)
   print(tokenizer.decode(outputs[0], skip_special_tokens=True))
   ```
4. Error:
   ```
   KeyError: 'model.layers.37.feed_forward.experts.gate_up_proj'
   ```
**Environment**
Transformers: 4.51.0
Python: 3.12.3
PyTorch: 2.4.1
CUDA: 12.4
Accelerate: 1.7.0
Compressed-tensors: 0.9.4
OS: Ubuntu 24.04.2 LTS
Hardware: 4x NVIDIA RTX A6000 (~196GB VRAM)
Model: meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8
**Additional Details**
Model card requires transformers>=4.51.0, supports FP8 via compressed-tensors.
Warnings: Uninitialized MoE weights (feed_forward.experts.*), offloaded parameters (VRAM limit).
Prior errors (TypeError: NoneType not iterable) resolved with config={"parallel_style": "none"}.
Suspect bug in accelerate offloading or MoE weight initialization.
**Request**
Is this a known llama4 MoE offloading issue?
Can MoE weights be initialized or offloading fixed?
Workaround for inference without re-downloading (~389GB)?
Urgent for sentiment analysis.
**Logs**
See traceback above. config.json (40KB) available.
Thank you!
|
https://github.com/huggingface/transformers/issues/38281
|
closed
|
[] | 2025-05-22T05:45:30Z
| 2025-07-27T08:03:11Z
| 4
|
pchu2025
|
pytorch/xla
| 9,236
|
make README work for people using python 3.12/13
|
## 📚 Documentation
The installation instructions in README fail if the user has python 3.12 or 3.13 as the default. (Currently pytorch-xla only works with python 3.8-3.11.)
We should:
- document the requirement for the python version.
- add workaround instructions for people whose default python version is not 3.8-3.11.
|
https://github.com/pytorch/xla/issues/9236
|
open
|
[
"documentation"
] | 2025-05-22T00:33:29Z
| 2025-05-22T16:09:41Z
| 4
|
zhanyong-wan
|
huggingface/transformers
| 38,268
|
Group beam search with sampling?
|
### Feature request
In the current generation code, group beam search is necessarily greedy. From a theoretical point of view, it is not very clear why that should be the case, since the diversity penalty is applied on the logits anyway, yielding a full distribution from which sampling can still be performed.
### Motivation
I think there is a reasonable use case for such a feature: diversity beam search is very useful in particular for modalities like biological sequences which increasingly use the transformers library, but I could see it be useful as well for natural language or code, to generate diverse paths without falling to the drawbacks of greedy generation. From a more abstract point of view it is also seemingly unjustified to allow sampling for standard beam search and not for diversity beam search.
### Your contribution
I am aware of the work in #30810 so don't want to disrupt but would be happy to look into it.
|
https://github.com/huggingface/transformers/issues/38268
|
open
|
[
"Feature request"
] | 2025-05-21T18:08:59Z
| 2025-06-06T18:11:13Z
| 4
|
adrian-valente
|
huggingface/candle
| 2,961
|
Shape Mismatch in MatMul During Forward Pass of ModernBertForSequenceClassification
|
I'm using the ModernBertForSequenceClassification model (hidden size = 768, sequence length = 128) to categorize text into one of several classes. During the initial training epoch, however, the forward pass fails with a "shape mismatch in matmul" error.
Is there any way to solve this?
# Error log
Tokenized shape: [4, 128]
Attention mask shape: [4, 128]
Input IDs shape: [4, 128]
Attention mask shape: [4, 128]
First sample token count: 128
Error in forward pass: shape mismatch in matmul, lhs: [4, 128], rhs: [768, 768]
Input shape: [4, 128], Attention mask shape: [4, 128]
Error: shape mismatch in matmul, lhs: [4, 128], rhs: [768, 768]
# Expected Behavior
Input IDs should be a tensor of shape (batch_size, sequence_length) whose values are token indices (integers), which the embedding layer then projects into the model's hidden dimension (hidden_size = 768) before any matrix multiplication with weight matrices of shape (768, 768).
The forward pass should succeed without dimension errors, yielding logits of shape (batch_size, num_classes).
# Code
```
use candle_core::{Device, Tensor, D, DType, Error};
use candle_nn::{ops, loss, VarBuilder, optim::{Optimizer},var_map::VarMap};
use candle_transformers::models::modernbert::{ClassifierConfig, ClassifierPooling, ModernBertForSequenceClassification,Config
};
use hf_hub::{api::sync::Api, Repo, RepoType};
use tokenizers::{PaddingParams, Tokenizer};
use std::collections::HashMap;
use candle_optimisers::adam::{ParamsAdam, Adam};
use rand::{seq::SliceRandom, SeedableRng};
use rand::rngs::StdRng;
// Training settings
const LEARNING_RATE: f64 = 2e-5;
const EPOCHS: usize = 5;
const BATCH_SIZE: usize = 8;
const SEQ_LEN: usize = 128; // Sequence length
const SEED: u64 = 42;
// Data structure for text and label mapping
type LabeledDataset = HashMap<String, usize>;
fn main() -> Result<(), Box<dyn std::error::Error>> {
// Device selection (CPU or GPU)
let device = candle_examples::device(true)?;
println!("Using device: {:?}", device);
// HuggingFace API configuration
let revision = "main".to_string();
let api = Api::new()?;
let model_id = "answerdotai/ModernBERT-base".to_string();
let repo = api.repo(Repo::with_revision(
model_id,
RepoType::Model,
revision,
));
// Load tokenizer and model configuration
let tokenizer_filename = repo.get("tokenizer.json")?;
let config_filename = repo.get("config.json")?;
let weights_filename = repo.get("model.safetensors")?;
// Load configuration file
let config = std::fs::read_to_string(config_filename)?;
let mut config: Config = serde_json::from_str(&config)?;
// Output model configuration
println!("Model config:");
println!(" Hidden size: {}", config.hidden_size);
println!(" Intermediate size: {}", config.intermediate_size);
println!(" Max position embeddings: {}", config.max_position_embeddings);
println!(" Num attention heads: {}", config.num_attention_heads);
println!(" Num hidden layers: {}", config.num_hidden_layers);
println!(" Vocab size: {}", config.vocab_size);
// Check configuration compatibility
if config.max_position_embeddings < SEQ_LEN {
println!("Warning: SEQ_LEN ({}) is larger than max_position_embeddings ({}), adjusting SEQ_LEN",
SEQ_LEN, config.max_position_embeddings);
}
// Initialize tokenizer
let mut tokenizer = Tokenizer::from_file(tokenizer_filename).map_err(Error::msg)?;
// Padding and truncation settings
tokenizer
.with_padding(Some(PaddingParams {
strategy: tokenizers::PaddingStrategy::Fixed(SEQ_LEN),
pad_id: config.pad_token_id,
pad_token: "[PAD]".to_string(),
pad_type_id: 0,
pad_to_multiple_of: None,
direction: tokenizers::PaddingDirection::Right,
}))
.with_truncation(Some(tokenizers::TruncationParams {
max_length: SEQ_LEN,
strategy: tokenizers::TruncationStrategy::LongestFirst,
stride: 0,
direction: tokenizers::TruncationDirection::Right,
}))
.map_err(Error::msg)?;
// Configure label mappings
let mut id2label = HashMap::new();
let mut label2id = HashMap::new();
let class_names = vec!["News", "Entertainment", "Sports", "Technology"];
for (i, name) in class_names.iter().enumerate() {
id2label.insert(i.to_string(), name.to_string());
label2id.insert(name.to_string(), i.to_string());
}
// Add classifier configuration
config.classifier_config = Some(ClassifierConfig {
id2label: id2label.clone(),
label2id: label2id.clone(),
classifier_pooling: ClassifierPooling::CLS, // Use [CLS] token for pooling
});
// Create variable map for the model
let mut varmap = VarMap::new();
// Load model weights
varmap.load(weights_filename)?;
let vb = VarBuilder::from_varmap(&varmap
|
https://github.com/huggingface/candle/issues/2961
|
closed
|
[] | 2025-05-21T14:25:07Z
| 2025-06-08T12:11:46Z
| 2
|
whitebox2
|
pytorch/pytorch
| 154,027
|
How to add custom attributes to torch tensor?
|
### 🚀 The feature, motivation and pitch
How can I add custom attributes like device_local or host_local to a PyTorch tensor without affecting TensorImpl or StorageImpl? I have a use case where I need to convert an external tensor into a PyTorch tensor while preserving such properties.
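A minimal sketch of the two approaches I have in mind, assuming Python-level attributes are acceptable (`TaggedTensor` and the attribute names are just illustrations, and the attributes won't survive through ops unless re-attached):
```python
import torch

# Option 1: plain Python attributes on the tensor object (simplest, not propagated through ops)
t = torch.randn(3)
t.device_local = True

# Option 2: a small Tensor subclass that carries the metadata explicitly
class TaggedTensor(torch.Tensor):
    @staticmethod
    def __new__(cls, data, *, device_local=False, host_local=False):
        obj = torch.Tensor._make_subclass(cls, data)
        obj.device_local = device_local
        obj.host_local = host_local
        return obj

x = TaggedTensor(torch.randn(3), device_local=True)
print(x.device_local)  # True; results of ops on x won't carry these attributes
                       # unless __torch_function__ re-attaches them
```
Is one of these the recommended direction, or is there a better hook for this?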
### Alternatives
_No response_
### Additional context
_No response_
|
https://github.com/pytorch/pytorch/issues/154027
|
closed
|
[] | 2025-05-21T09:13:16Z
| 2025-05-21T13:42:11Z
| null |
bailuan
|
pytorch/vision
| 9,079
|
Build pytorch trunk from source and build vision from source makes `import torchvision;` fail
|
### 🐛 Describe the bug
If I build pytorch from trunk (2.8+1478d0185c29) and build vision from source, I can't run `import torchvision;`.
```
import torchvision
```
will report: `RuntimeError: operator torchvision::nms does not exist`.
It succeeds if I switch pytorch from trunk to the `release/2.7` branch (still building from source).
How can I build vision against pytorch built from source at trunk?
### Versions
trunk d02b1845a2fabea1eb8f9d09310369a5cbb5514f
|
https://github.com/pytorch/vision/issues/9079
|
open
|
[] | 2025-05-21T03:25:05Z
| 2025-09-02T15:27:37Z
| 3
|
ChuanqiXu9
|
pytorch/pytorch
| 154,009
|
SourcelessBuilder.create does not know how to wrap <class '__main__.InFlexData'>
|
### 🐛 Describe the bug
I am trying to use torch.compile on my functions and encountered this issue. I attached a minimal test program so anyone can reproduce it.
```python
from dataclasses import dataclass
import torch
@dataclass(frozen=True)
class BaseFlexData:
dtype: torch.dtype | None = None
def view(self, x: torch.Tensor):
if self.dtype is None:
return x
return x.view(self.dtype)
def reinterpret(self, x):
if self.dtype is None or x.dtype.itemsize > 1:
return x
return x.view(self.dtype)
@dataclass(frozen=True)
class InFlexData(BaseFlexData):
scale: torch.Tensor | None = None
@property
def is_per_batch(self):
return False if self.scale is None else len(self.scale) > 1
@dataclass(frozen=True)
class OutFlexData(BaseFlexData):
expected_scale: torch.Tensor | None = None
actual_scale: torch.Tensor | None = None
checksum_scale: torch.Tensor | None = None
def __iter__(self):
yield self.expected_scale
yield self.actual_scale
yield self.checksum_scale
@dataclass(frozen=True)
class FlexCtx:
lhs_data: InFlexData = InFlexData()
rhs_data: InFlexData = InFlexData()
out_data: OutFlexData = OutFlexData()
@dataclass
class DummyClass:
flex_ctx: FlexCtx = FlexCtx()
def __post_init__(self):
assert self.flex_ctx.rhs_data.scale is None, "flex and mx_ctx cannot be used together"
@torch.compile(fullgraph=True)
def dummy_method():
var = DummyClass(flex_ctx=FlexCtx(rhs_data=InFlexData()))
return var
dummy_method()
```
### Error logs
```
TORCHDYNAMO_VERBOSE=1 python test_compile.py
Traceback (most recent call last):
File "/home/eecs/yongye.zhu/vllm/tests/kernels/moe/test_compile.py", line 56, in <module>
dummy_method()
File "/home/eecs/yongye.zhu/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/_dynamo/eval_frame.py", line 685, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/eecs/yongye.zhu/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 1463, in __call__
return self._torchdynamo_orig_callable(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/eecs/yongye.zhu/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 624, in __call__
return _compile(
^^^^^^^^^
File "/home/eecs/yongye.zhu/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 1087, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/eecs/yongye.zhu/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/_utils_internal.py", line 97, in wrapper_function
return function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/eecs/yongye.zhu/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 778, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/eecs/yongye.zhu/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 817, in _compile_inner
out_code = transform_code_object(code, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/eecs/yongye.zhu/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/_dynamo/bytecode_transformation.py", line 1423, in transform_code_object
transformations(instructions, code_options)
File "/home/eecs/yongye.zhu/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 264, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/eecs/yongye.zhu/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 742, in transform
tracer.run()
File "/home/eecs/yongye.zhu/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 3508, in run
super().run()
File "/home/eecs/yongye.zhu/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 1345, in run
while self.step():
^^^^^^^^^^^
File "/home/eecs/yongye.zhu/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 1253, in step
self.dispatch_table[inst.opcode](self, inst)
File "/home/eecs/yongye.zhu/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 828, in wrapper
return inner_fn(self, inst)
^^^^^^^^^^^^^^^^^^^^
File "/home/eecs/yongye.zhu/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 2934, in CALL
self._call(inst)
File "/home/eecs/yongye.zhu/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 2928, in _call
self.call_function(fn, args, kwargs
|
https://github.com/pytorch/pytorch/issues/154009
|
closed
|
[
"triaged",
"oncall: pt2",
"module: dynamo",
"dynamo-dataclasses",
"vllm-compile",
"module: vllm"
] | 2025-05-21T02:34:19Z
| 2025-10-24T16:39:07Z
| null |
zyongye
|
huggingface/transformers
| 38,243
|
<spam>
|
We are looking for an experienced Machine Learning Engineer for a BTC/USDT prediction project using CNN, LSTM, and Transformers. The goal is to forecast cryptocurrency price movements with a target accuracy of 90%+.
More details here:[ ](https://gist.github.com/DandBman/c76a548b1972da50ffe6bbdd93fdd613)
|
https://github.com/huggingface/transformers/issues/38243
|
closed
|
[] | 2025-05-20T22:14:11Z
| 2025-05-21T13:14:41Z
| 0
|
DandBman
|
huggingface/diffusers
| 11,590
|
Infinite (not literally) length video creation using LTX-Video?
|
First of all, thanks to Aryan (0.9.7 integration) and DN6 (adding GGUF). The model is quite good and the output is also promising.
I need help creating a continuous video using the last frame. One trick is to generate the video, extract the last frame, and run inference again. Is there an easy way to do this in a loop?
My thought is:
1. Use the text encoder to generate the prompt embeds once, then remove the text encoders from memory.
2. Loop the inference code; once an iteration completes, extract the last latent (preferred, as I can upscale using LTXLatentUpsamplePipeline) or frame/image, create image1 from it, condition on that frame, and continue for n iterations.
3. Also need to save the video locally for each inference, otherwise OOM.
Any thoughts / suggestions? A rough sketch of the loop I have in mind is at the end of this post.
```python
import torch
import gc
from diffusers import GGUFQuantizationConfig
from diffusers import LTXConditionPipeline, LTXLatentUpsamplePipeline, LTXVideoTransformer3DModel
from diffusers.pipelines.ltx.pipeline_ltx_condition import LTXVideoCondition
from diffusers.utils import export_to_video, load_video, load_image
transformer_path = f"https://huggingface.co/wsbagnsv1/ltxv-13b-0.9.7-distilled-GGUF/blob/main/ltxv-13b-0.9.7-distilled-Q3_K_S.gguf"
# transformer_path = f"https://huggingface.co/wsbagnsv1/ltxv-13b-0.9.7-distilled-GGUF/blob/main/ltxv-13b-0.9.7-distilled-Q8_0.gguf"
transformer_gguf = LTXVideoTransformer3DModel.from_single_file(
transformer_path,
quantization_config=GGUFQuantizationConfig(compute_dtype=torch.bfloat16),
torch_dtype=torch.bfloat16,
)
pipe = LTXConditionPipeline.from_pretrained(
"Lightricks/LTX-Video-0.9.7-distilled",
transformer=transformer_gguf,
torch_dtype=torch.bfloat16
)
# pipe.to("cuda")
# pipe.enable_sequential_cpu_offload()
pipe.enable_model_cpu_offload()
pipe.vae.enable_tiling()
height, width = 480, 832
num_frames = 151
negative_prompt = "worst quality, inconsistent motion, blurry, jittery, distorted"
prompt = "hyperrealistic digital artwork of a young woman walking confidently down a garden pathway, wearing white button-up blouse with puffed sleeves and blue denim miniskirt, long flowing light brown hair caught in gentle breeze, carrying a small black handbag, bright sunny day with blue sky and fluffy white clouds, lush green hedges and ornamental plants lining the stone pathway, traditional Asian-inspired architecture in background, photorealistic style with perfect lighting, unreal engine 5, ray tracing, 16K UHD. camera follows subject from front as she walks forward with elegant confidence"
image1 = load_image( "assets/ltx/00039.png" )
condition1 = LTXVideoCondition(
image=image1,
frame_index=0,
)
width=512
height=768
num_frames = 161
# LOOP HERE
latents = pipe(
prompt=prompt,
negative_prompt=negative_prompt,
conditions=[condition1],
width=width,
height=height,
num_frames=num_frames,
guidance_scale=1.0,
num_inference_steps=4,
decode_timestep=0.05,
decode_noise_scale=0.025,
image_cond_noise_scale=0.0,
guidance_rescale=0.7,
generator=torch.Generator().manual_seed(42),
output_type="latent",
).frames
# save video locally
# Update image1 = load_image( latent/image from current inference to be used with next inference)
```
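Here is that rough sketch (untested; it assumes decoding to PIL frames each round rather than keeping latents, and that the last frame can be fed straight back into `LTXVideoCondition`):
```python
# rough, untested loop sketch: decode to frames each round, save the chunk,
# then condition the next chunk on the last frame of the previous one
last_image = image1
for i in range(4):  # n chunks
    condition = LTXVideoCondition(image=last_image, frame_index=0)
    video = pipe(
        prompt=prompt,
        negative_prompt=negative_prompt,
        conditions=[condition],
        width=width,
        height=height,
        num_frames=num_frames,
        guidance_scale=1.0,
        num_inference_steps=4,
        decode_timestep=0.05,
        decode_noise_scale=0.025,
        generator=torch.Generator().manual_seed(42),
    ).frames[0]
    export_to_video(video, f"chunk_{i:03d}.mp4", fps=24)  # save each chunk locally to avoid OOM
    last_image = video[-1]  # last PIL frame conditions the next chunk
    del video
    gc.collect()
    torch.cuda.empty_cache()
```
Not sure whether conditioning on a decoded frame (instead of the latent) degrades continuity between chunks.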
|
https://github.com/huggingface/diffusers/issues/11590
|
closed
|
[] | 2025-05-20T13:37:36Z
| 2025-05-20T19:51:20Z
| 1
|
nitinmukesh
|
pytorch/ao
| 2,228
|
[Quant] Can quant not be decomposed on inductor?
|
torch.ops.torchao.dequantize_affine is decomposed into convert_element_type and mul.
Inductor runs constant_fold before pattern matching.
During constant folding, Inductor replaces the fp8 weight and some preceding operations with an fp32 weight.
Is this expected?
The decomposition is currently registered here: [register_decomposition](https://github.com/pytorch/ao/blob/96aec6a3e713687c1728a20a08d5c54db0344377/torchao/utils.py#L226)
This sample test can reproduce the issue
```python
import os
os.environ["OMP_NUM_THREADS"] = "1"
os.environ["TORCHINDUCTOR_FREEZING"] = "1"
os.environ["TORCH_COMPILE_DEBUG"] = "0"
os.environ["TORCHDYNAMO_PRINT_GUARD_FAILS"] = "0"
from typing import Callable, List, Optional, Union
import torch
from torch import nn
import torchao
#import torchao.quantization.pt2e.quantizer.x86_inductor_quantizer as xiq
def dequantize_per_tensor(
input: torch.Tensor,
scale: torch.Tensor,
output_dtype: torch.dtype
) -> torch.Tensor:
res = torch.ops.torchao.dequantize_affine(
input=input,
block_size=input.shape,
scale=scale,
zero_point=torch.tensor(0),
input_dtype=torch.float8_e4m3fn,
)
if output_dtype != torch.float:
res = res.to(output_dtype)
return res
def quantize_per_tensor(
input: torch.Tensor,
scale: torch.Tensor,
) -> torch.Tensor:
return torch.ops.torchao.quantize_affine(
input=input,
block_size=input.shape,
scale=scale,
zero_point=torch.tensor(0),
output_dtype=torch.float8_e4m3fn,
)
class Perceptron(torch.nn.Module):
def __init__(
self,
in_size: int,
out_size: int,
bias: bool = True,
activation: Union[
torch.nn.Module,
Callable[[torch.Tensor], torch.Tensor],
] = torch.relu,
device: Optional[torch.device] = None,
dtype: torch.dtype = torch.float32,
) -> None:
super().__init__()
self._out_size = out_size
self._in_size = in_size
self._linear: nn.Linear = nn.Linear(
self._in_size,
self._out_size,
bias=bias,
device=device,
dtype=dtype,
)
self._activation_fn: Callable[[torch.Tensor], torch.Tensor] = activation
def forward(self, input: torch.Tensor) -> torch.Tensor:
return self._activation_fn(self._linear(input))
class MLP(torch.nn.Module):
def __init__(
self,
in_size: int,
layer_sizes: List[int],
bias: bool = True,
activation: Union[
str,
Callable[[], torch.nn.Module],
torch.nn.Module,
Callable[[torch.Tensor], torch.Tensor],
] = torch.relu,
device: Optional[torch.device] = None,
dtype: torch.dtype = torch.float32,
) -> None:
super().__init__()
if activation == "relu":
activation = torch.relu
elif activation == "sigmoid":
activation = torch.sigmoid
if not isinstance(activation, str):
self._mlp: torch.nn.Module = torch.nn.Sequential(
*[
Perceptron(
layer_sizes[i - 1] if i > 0 else in_size,
layer_sizes[i],
bias=bias,
activation=activation,
device=device,
dtype=dtype,
)
for i in range(len(layer_sizes))
]
)
else:
assert (
ValueError
), "This MLP only support str version activation function of relu, sigmoid, and swish_layernorm"
def forward(self, input: torch.Tensor) -> torch.Tensor:
return self._mlp(input)
class DenseArch(nn.Module):
def __init__(
self,
in_features: int,
layer_sizes: List[int],
device: Optional[torch.device] = None,
) -> None:
super().__init__()
self.model: nn.Module = MLP(
in_features, layer_sizes, bias=True, activation="relu", device=device
)
def forward(self, features: torch.Tensor) -> torch.Tensor:
return self.model(features)
def inc_convert(model, dtype):
model.eval()
qtype = torch.float8_e4m3fn
#from torch.ao.quantization.fx._decomposed import quantize_per_tensor, dequantize_per_tensor
from torch.nn import functional as F
class FP8QDQLinear(torch.nn.Module):
def __init__(self, in_features, out_features):
super().__init__()
self.weight = torch.empty((out_features, in_features),)
self.weight_scale = None
self.scale = None
self.bias = None
def forward(self, input):
weight = dequantize_per_tensor(
self.weight.data,
self.weight_scale,
dtype,
)
q_input = quantize_per_tensor(
|
https://github.com/pytorch/ao/issues/2228
|
closed
|
[
"question",
"triaged"
] | 2025-05-20T09:25:54Z
| 2025-06-25T08:22:25Z
| null |
shiyang-weng
|
huggingface/agents-course
| 501
|
[BUG] Notebook on HF Hub is not updated
|
"Workflows in LlamaIndex" [course page](https://huggingface.co/learn/agents-course/unit2/llama-index/workflows#creating-workflows) is referring notebook on [HF Hub](https://huggingface.co/agents-course/notebooks/blob/main/unit2/llama-index/workflows.ipynb), which is not the updated version from [GitHub](https://github.com/huggingface/agents-course/blob/main/notebooks/unit2/llama-index/workflows.ipynb).
The old version contains bug in loop event workflow so update is needed.
|
https://github.com/huggingface/agents-course/issues/501
|
closed
|
[
"question"
] | 2025-05-20T06:45:26Z
| 2025-05-29T05:28:46Z
| null |
karenwky
|
huggingface/open-r1
| 649
|
How to evaluate using local models and datasets?
|
I changed the README eval command to the following:
```bash
MODEL=./deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
MODEL_ARGS="pretrained=$MODEL,dtype=bfloat16,max_model_length=32768,gpu_memory_utilization=0.8,generation_parameters={max_new_tokens:32768,temperature:0.6,top_p:0.95}"
OUTPUT_DIR=./data/evals/
# AIME 2024
TASK=aime24
lighteval vllm $MODEL_ARGS "custom|$TASK|0|0" \
    --custom-tasks src/open_r1/evaluate.py \
    --use-chat-template \
    --output-dir $OUTPUT_DIR \
    --cache-dir ./datasets/aime24
```
but it still tries to use the network and gets a network error. What can I do to solve this problem?
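What I tried next (not sure it is the intended approach) was forcing offline mode before launching lighteval, assuming the model and the aime24 dataset already exist in the local cache:
```python
# hedged sketch: force the HF libraries into offline mode so nothing hits the network;
# this only works if the model and the aime24 dataset are already cached locally
import os

os.environ["HF_HUB_OFFLINE"] = "1"
os.environ["HF_DATASETS_OFFLINE"] = "1"
os.environ["TRANSFORMERS_OFFLINE"] = "1"
```
Is this the right direction, or does lighteval need something else to resolve local datasets?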
|
https://github.com/huggingface/open-r1/issues/649
|
open
|
[] | 2025-05-20T05:57:29Z
| 2025-05-20T05:57:29Z
| null |
SiqingHe
|
huggingface/lerobot
| 1,130
|
Drive mode reversed on calibration.
|
I had an issue where, after calibrating, drive_mode was reversed for one of my motors (0 vs. 1); as a result, moving the leader in one direction caused the follower to go the opposite direction.
I saw some suggestions that moving it through the full range of motion resolves this, but I wasn't able to get that to work. I could also see cases where this could be problematic during initial setup. @Lemin2 suggested always setting this to 0 across the board, which does seem like a good fix, unless there's a reason to want to control reverse mode.
In any case, I would expect the calibration process to be consistent for both arms, or else this issue will be encountered. If reverse mode is needed, maybe add a step to the calibration process to ensure consistency.
FYI, in case anyone encounters this, the solution is to go into `.cache/calibration/<arm>/<each of your arms>.json`
Seems to be the same cause for #441 and #930
|
https://github.com/huggingface/lerobot/issues/1130
|
open
|
[
"bug",
"question",
"robots"
] | 2025-05-20T03:08:06Z
| 2025-07-16T06:50:20Z
| null |
brainwavecoder9
|
pytorch/TensorRT
| 3,525
|
❓ [Question] How to save the compiled model while using torch.compile
|
For the example below, how do I save the compiled model?
backend = "torch_tensorrt"
tp_model = torch.compile(
tp_model,
backend=backend,
options={
"truncate_long_and_double": True,
"enabled_precisions": {torch.float32, torch.float16},
"use_python_runtime": True,
"min_block_size": 1,
},
dynamic=False,
)
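Would switching to the AOT `torch_tensorrt.compile` (dynamo) path be the intended way to get something savable? A rough, untested sketch of what I mean; I'm not certain that `torch_tensorrt.save`/`load` apply to my setup:
```python
# hedged sketch: use the AOT dynamo path instead of the torch.compile JIT path,
# since (I believe) the resulting module can be written to disk with torch_tensorrt.save
import torch
import torch_tensorrt

example_inputs = [torch.randn(1, 3, 224, 224).cuda()]  # placeholder shapes
trt_module = torch_tensorrt.compile(
    tp_model,
    ir="dynamo",
    inputs=example_inputs,
    enabled_precisions={torch.float32, torch.float16},
    truncate_long_and_double=True,
    min_block_size=1,
)
torch_tensorrt.save(trt_module, "tp_model_trt.ep", inputs=example_inputs)
reloaded = torch_tensorrt.load("tp_model_trt.ep")
```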
|
https://github.com/pytorch/TensorRT/issues/3525
|
open
|
[
"question"
] | 2025-05-20T03:06:53Z
| 2025-05-20T15:15:27Z
| null |
klin2024
|
pytorch/torchchat
| 1,543
|
[IMPORTANT] torchchat sunset
|
**As of May 19th 2025, we are halting active development on torchchat.**
The original intent of torchchat was to both demonstrate how to run LLM inference using PyTorch and improve the performance and functionality of the entire PyTorch ecosystem.
Since torchchat’s launch, we’ve seen vLLM become the dominant player for server-side LLM inference. We’re ecstatic to have [vLLM join the PyTorch Ecosystem](https://pytorch.org/blog/vllm-joins-pytorch/) and recommend folks use them for hosting LLMs in server production environments. Given the growth of vLLM and others, we do not see the need to maintain an active demonstration of how to run LLM inference using PyTorch.
We are very proud of the performance and functionality improvements we saw in the PyTorch ecosystem over the last year, including:
- The performance of LLM inference increase by multiples for every device we support (CUDA, CPU, MPS, ARM, etc)
- Working code, demonstrating how to run LLM inference for all the major execution modes (Eager, Compile, AOTI and ET) giving users a starting point for using PyTorch for LLM inference from server to embedded devices and everything in between
- Quantization expand to support the most popular schemes and bit sizes
- torchchat become the testing grounds for new advancements ([experimental torchao kernels](https://github.com/pytorch/torchchat/blob/fd3059bf830494cf14dd474af348c7ebb3d6be76/docs/quantization.md#experimental-torchao-lowbit-kernels), [MPS compile](https://github.com/pytorch/pytorch/blob/31f175ea9a00b1ca392858cd0d160706201b12da/torch/_inductor/codegen/mps.py), [AOTI Packaging](https://github.com/pytorch/pytorch/blob/f2e8e41855caaae6ed7254f7abf4e31122363722/docs/source/torch.compiler_aot_inductor.rst#aotinductor-ahead-of-time-compilation-for-torchexport-ed-models))
There’s still plenty of exciting work to do across the LLM Inference space and PyTorch will stay invested in improving things.
We appreciate and thank everyone in the community for all that you’ve contributed.
Thanks to our contributors:
@mikekgfb @Jack-Khuu @metascroy @malfet @larryliu0820 @kirklandsign @swolchok @vmpuri @kwen2501 @Gasoonjia @orionr @guangy10 @byjlw @lessw2020 @mergennachin @GregoryComer @shoumikhin @kimishpatel @manuelcandales @lucylq @desertfire @gabe-l-hart @seemethere @iseeyuan @jerryzh168 @leseb @yanbing-j @mreso @fduwjj @Olivia-liu @angelayi @JacobSzwejbka @ali-khosh @nlpfollower @songhappy @HDCharles @jenniew @silverguo @zhenyan-zhang-meta @ianbarber @dbort @kit1980 @mcr229 @georgehong @krammnic @xuedinge233 @anirudhs001 @shreyashah1903 @soumith @TheBetterSolution @codereba @jackzhxng @KPCOFGS @kuizhiqing @kartikayk @nobelchowdary @mike94043 @vladoovtcharov @prideout @sanchitintel @cbilgin @jeffdaily @infil00p @msaroufim @zhxchen17 @vmoens @wjunLu
-**PyTorch Team**
|
https://github.com/pytorch/torchchat/issues/1543
|
open
|
[] | 2025-05-20T02:41:03Z
| 2025-05-20T11:06:54Z
| 3
|
Jack-Khuu
|
huggingface/text-generation-inference
| 3,233
|
Docker image For llama cpp backend?
|
Hey,
Is there any reason in particular why Docker images for the llama-cpp backend do not get built along with new versions? It seems the backend has been ready for a while, so I'm just curious why images aren't built as part of the build pipeline.
cc @mfuntowicz
|
https://github.com/huggingface/text-generation-inference/issues/3233
|
open
|
[] | 2025-05-20T02:07:46Z
| 2025-05-20T02:07:46Z
| 0
|
vrdn-23
|
pytorch/xla
| 9,201
|
Issue warning on set_mat_mul
|
On #9080 and #9103, there was a request to add a warning when the user sets mat mul. I added it to the PR, but the CI now skips running documentation.
This issue and PR will cherry-pick the code changes to isolate them from the docs, allowing the code CI/CD to run on this PR and the docs build CI/CD to run on 9082.
|
https://github.com/pytorch/xla/issues/9201
|
closed
|
[
"documentation",
"CI"
] | 2025-05-19T21:21:48Z
| 2025-05-21T18:38:49Z
| 0
|
yaoshiang
|
pytorch/xla
| 9,199
|
Simplify device count external API calls
|
Currently there are many external APIs related getting the number of devices associate with PyTorch XLA. Those that I could find were:
- "global_runtime_device_count": returns the total number of devices across all processes/hosts, but it has "@functools.lru_cache()"
- "global_device_count": returns the total number of devices across all processes/hosts, but it has "@functools.lru_cache()"
- "addressable_runtime_device_count": Access number of [addressable devices](https://github.com/pytorch/xla/blob/r2.7/torch_xla/csrc/init_python_bindings.cpp#L15026) visible to a process.
- "addressable_device_count": Access number of [addressable devices](https://github.com/pytorch/xla/blob/r2.7/torch_xla/csrc/init_python_bindings.cpp#L1481) visible to a process. It specifically returns 1 in case of SPMD.
- "local_device_count": takes the number of [addressable devices](https://github.com/pytorch/xla/blob/01b5408dded9bf5bdea3e59c387b3b201a2bdab9/torch_xla/csrc/init_python_bindings.cpp#L1486) and multiplies it by the number of local [process counts](https://github.com/pytorch/xla/blob/r2.7/torch_xla/runtime.py#L129). Equivalent of the answer of the number of devices running on a host.
From these, some existing observations are:
- `addressable_runtime_device_count` and `addressable_device_count` are extremely similar in implementation and name. Perhaps we should make the distinction more clear. Perhaps there is some context around `addressable_device_count` particular I don't fully grasp.
- `local_device_count` terminology can be confusing when compared with JAX's concept for local devices for [jax.local_devices](https://docs.jax.dev/en/latest/_autosummary/jax.local_devices.html). `local_device_count` being the number of devices in the host, while JAX's definition is of devices in the process
- We should deduplicate `global_runtime_device_count` and `global_device_count`, just have one reference the other to remove multiple calls
|
https://github.com/pytorch/xla/issues/9199
|
open
|
[
"usability",
"documentation"
] | 2025-05-19T19:26:46Z
| 2025-06-04T05:52:28Z
| 4
|
pgmoka
|
huggingface/diffusers
| 11,580
|
Can diffusers support loading and running FLUX with fp8 ?
|
This is how I use diffusers to load the flux model:
```
import torch
from diffusers import FluxPipeline
pipe = FluxPipeline.from_pretrained(
"/ckptstorage/repo/pretrained_weights/black-forest-labs/FLUX.1-dev",
torch_dtype=torch.float16,
)
device = torch.device(f"cuda:{device_number}" if torch.cuda.is_available() else "cpu")
pipe = pipe.to(device)
```
It takes about 75 seconds on my computer with an A800 GPU.
But I found that ComfyUI only needs 22 seconds to load the flux model; however, it loads the fp8 model.
Can diffusers load the flux fp8 model, or is there any other way to speed this up?
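One direction I'm considering (not sure it is supported) is quantizing just the transformer with torchao through diffusers' `TorchAoConfig`; a rough sketch, where the `"float8wo"` quant-type string and the effect on load time are my guesses:
```python
# hedged sketch: quantize only the FLUX transformer via torchao; the "float8wo"
# quant type and whether this actually reduces load time are assumptions on my part
import torch
from diffusers import FluxPipeline, FluxTransformer2DModel, TorchAoConfig

model_id = "black-forest-labs/FLUX.1-dev"
transformer = FluxTransformer2DModel.from_pretrained(
    model_id,
    subfolder="transformer",
    quantization_config=TorchAoConfig("float8wo"),
    torch_dtype=torch.bfloat16,
)
pipe = FluxPipeline.from_pretrained(model_id, transformer=transformer, torch_dtype=torch.bfloat16)
pipe.to("cuda")
```
Would this (or loading a pre-quantized fp8 checkpoint directly) be the recommended way?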
|
https://github.com/huggingface/diffusers/issues/11580
|
open
|
[] | 2025-05-19T12:18:13Z
| 2025-12-12T19:30:33Z
| 5
|
EmmaThompson123
|
huggingface/lerobot
| 1,124
|
How to add force data to lerobot and models?
|
As the title says, I use a force sensor on the SO100 arm and want to record its data in a LeRobot dataset and then train with the force data. How can I do it?
The force data looks like a 15-dimensional list: [x1, y1, z1, x2, y2, z2, x3, y3, z3, x4, y4, z4, x5, y5, z5]
Thanks!
|
https://github.com/huggingface/lerobot/issues/1124
|
closed
|
[] | 2025-05-19T07:48:20Z
| 2025-05-19T13:36:44Z
| null |
milong26
|
huggingface/diffusers
| 11,575
|
Hidream Model loading takes too long — any way to speed it up?
|
Hi, thanks for this great project.
I'm running Hidream with this library in a serverless environment and facing major delays during model loading. It can be very frustrating, especially for time-sensitive or ephemeral deployments.
I've tried everything I could think of to reduce the loading time, but nothing has worked so far. Does anyone have any tips, tricks, or even sample code to help speed up the model initialization?
Any guidance would be greatly appreciated!
|
https://github.com/huggingface/diffusers/issues/11575
|
open
|
[] | 2025-05-19T00:49:00Z
| 2025-05-23T12:55:05Z
| 6
|
Me-verner
|
huggingface/optimum
| 2,275
|
ONNX export for ColPali
|
Hi Optimum,
I have created a small tutorial on how to export the ColPali late-interaction VLM in this [notebook](https://gist.github.com/kstavro/9bcdf930f0e69626dd5aa9aa5f09f867), but I think it shouldn't be too difficult to integrate it into Optimum as well.
However, as far as I have seen, there is not much support for late-interaction VLMs at the moment. So, before I get into it just by myself, I thought I could first see if someone could give me a couple of hints about some choices regarding the library, e.g. what base configs I should use for ColPali or whether I should create new ones everywhere, what names to use, whether we need tiny dummy models for tests, etc.
|
https://github.com/huggingface/optimum/issues/2275
|
closed
|
[] | 2025-05-18T18:56:22Z
| 2025-06-11T13:56:43Z
| 2
|
kstavro
|
huggingface/transformers
| 38,190
|
Gibberish generations with FSDP2 and MixedPrecisionPolicy
|
### System Info
```
transformers.__version__='4.51.2'
torch.__version__='2.6.0+cu124'
sys.version='3.10.17 (main, Apr 16 2025, 15:03:57) [GCC 12.1.1 20220628 (Red Hat 12.1.1-3)]'
```
### Who can help?
@SunMarc @zach-huggingface
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
I'm sharding `llama-3.1-8b-instruct` on 8 GPUs using FSDP2. The goal is to be able to call `generate` during the training loop. I have noticed that if I use `MixedPrecisionPolicy` with `param_dtype=torch.bfloat16`, the generations are gibberish. A hopefully reproducible example is below.
```python
import os
import torch
import torch.distributed as dist
from torch.distributed._composable.fsdp import register_fsdp_forward_method
from torch.distributed.device_mesh import init_device_mesh
from torch.distributed.fsdp import (
MixedPrecisionPolicy,
fully_shard,
)
from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer
from transformers.models.llama.modeling_llama import LlamaDecoderLayer
def get_local_rank() -> int:
return int(os.environ.get("LOCAL_RANK", "0"))
def get_global_rank() -> int:
return int(os.environ.get("RANK", get_local_rank()))
def barrier():
dist.barrier(device_ids=[get_local_rank()])
def test_generate(model, tokenizer):
prompt = "Concisely answer the following question: "
queries = [
"What is the tallest animal?\n",
"What are 3 fruits larger in size than an apple?\n",
"What's the derivative of e^x?\n",
]
tokens = [tokenizer.encode(prompt + q) for q in queries]
max_len = max(len(t) for t in tokens)
padded = [[tokenizer.eos_token_id] * (max_len - len(t)) + t for t in tokens]
padded_t = torch.tensor(padded).long()
generations = model.generate(padded_t, max_new_tokens=128)
parsed = tokenizer.batch_decode(generations)
for p in parsed:
print(p, flush=True)
def main():
device = torch.device("cuda", get_local_rank())
dist.init_process_group(
backend="nccl",
)
torch.cuda.set_device(device)
LOCAL_MODEL_PATH = "/llama-3.1-8b-instruct"
tokenizer = AutoTokenizer.from_pretrained(LOCAL_MODEL_PATH)
model_config = AutoConfig.from_pretrained(LOCAL_MODEL_PATH)
model = AutoModelForCausalLM.from_pretrained(
LOCAL_MODEL_PATH,
config=model_config,
use_safetensors=True,
torch_dtype=torch.float32,
)
fsdp2_kwargs = {}
fsdp2_kwargs["mesh"] = init_device_mesh(
"cuda", (torch.distributed.get_world_size(),)
)
fsdp2_kwargs["mp_policy"] = MixedPrecisionPolicy(
param_dtype=torch.bfloat16, # <<<----- If I comment this line the generations are as expected
)
for submodule in model.modules():
if isinstance(submodule, LlamaDecoderLayer):
fully_shard(submodule, **fsdp2_kwargs)
fully_shard(model, **fsdp2_kwargs)
register_fsdp_forward_method(model, "generate")
barrier()
test_generate(model, tokenizer)
barrier()
dist.destroy_process_group()
if __name__ == "__main__":
main()
```
The following is an example of the output I get if `param_dtype=torch.bfloat16`:
```
<|eot_id|><|eot_id|><|eot_id|><|eot_id|><|eot_id|><|eot_id|><|begin_of_text|>Concisely answer the following question: What is the tallest animal?
The odense aalborg limburg fetisch odense fetisch<|start_header_id|>OO
<|begin_of_text|>Concisely answer the following question: What are 3 fruits larger in size than an apple?
Here fetisch<|start_header_id|>OOOOOOOOOO
<|eot_id|><|eot_id|><|eot_id|><|begin_of_text|>Concisely answer the following question: What's the derivative of e^x?
The aalborg salopes<|start_header_id|>OOOOOOOOOOOOAAAAAAAA\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\
```
### Expected behavior
The following is an example of the output I get if I comment out the `param_dtype=torch.bfloat16` in `MixedPrecisionPolicy`
```
<|eot_id|><|eot_id|><|eot_id|><|eot_id|><|eot_id|><|eot_id|><|begin_of_text|>Concisely answer the following question: What is the tallest animal?
The tallest animal is the giraffe, which can grow up to 18 feet (5.5 meters) tall.
The gi
|
https://github.com/huggingface/transformers/issues/38190
|
closed
|
[
"bug"
] | 2025-05-18T11:56:08Z
| 2025-08-29T09:36:57Z
| 17
|
dlvp
|
pytorch/torchtitan
| 1,202
|
How to run the tests in the tests directory
|
Looking for documentation on how to run the tests in the tests directory.
|
https://github.com/pytorch/torchtitan/issues/1202
|
closed
|
[
"documentation",
"good first issue"
] | 2025-05-16T17:33:46Z
| 2025-05-20T04:02:02Z
| null |
githubsgi
|
huggingface/transformers
| 38,181
|
Add a way for `callbacks` to get `trainer` handler
|
When I want to implement differential privacy for the model, I customize the gradient clipping before `optimizer.step()` and then add custom noise to the model after `optimizer.step()`. I cannot get `Trainer.optimizer` in the `callback` function; it shows as `None`. Is it possible to get a reference to the `Trainer` directly in the `callback`?
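A sketch of the workaround I'm considering (the callback name is mine): keep an explicit reference to the `Trainer` on the callback, and fall back to the `optimizer` that recent versions pass in through `kwargs`:
```python
# hedged sketch: wire the Trainer into the callback by hand after construction
from transformers import Trainer, TrainerCallback

class DPCallback(TrainerCallback):
    def __init__(self):
        self.trainer = None  # filled in manually once the Trainer exists

    def on_step_end(self, args, state, control, **kwargs):
        # recent versions pass the optimizer via kwargs; otherwise go through the trainer
        optimizer = kwargs.get("optimizer") or (self.trainer.optimizer if self.trainer else None)
        # ... add custom noise to the model parameters here ...

callback = DPCallback()
trainer = Trainer(model=model, args=training_args, callbacks=[callback])  # model/args defined elsewhere
callback.trainer = trainer  # manual wiring, since callbacks don't receive the Trainer itself
```
Having an official handle on the `Trainer` would avoid this manual wiring.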
|
https://github.com/huggingface/transformers/issues/38181
|
closed
|
[] | 2025-05-16T16:01:35Z
| 2025-05-19T12:17:06Z
| 1
|
MinzhiYoyo
|
pytorch/helion
| 46
|
[QST] Compiler Pipeline
|
@jansel @yf225
Very cool project.
Is there any documentation on how helion leverages inductor to generate triton kernels?
Trying to understand the overlap between dynamo and helion. My naive take is that dynamo parses general python code to an fx graph that is then passed to inductor whereas helion parses a subset of python defined by helion-specific operators to an fx graph then onto inductor...
In either case, hoping to use helion to better understand inductor, from IR to lowering, optimization, and codegen.
|
https://github.com/pytorch/helion/issues/46
|
closed
|
[
"question"
] | 2025-05-16T12:30:52Z
| 2025-08-25T21:28:38Z
| null |
jeromeku
|
pytorch/TensorRT
| 3,522
|
❓ [Question] Manually Annotate Quantization Parameters in FX Graph
|
## ❓ Question
Is there a way to manually annotate quantization parameters that will be respected throughout torch_tensorrt conversion (e.g. manually adding Q/DQ nodes, or specifying some tensor metadata) via dynamo? Thank you!
|
https://github.com/pytorch/TensorRT/issues/3522
|
open
|
[
"question"
] | 2025-05-16T07:38:33Z
| 2025-06-02T15:35:40Z
| null |
patrick-botco
|
huggingface/open-r1
| 645
|
How to set vllm max-model-len?
|
I use Qwen2.5-7B-Instruct to run GRPO and enable YaRN to accommodate a longer window (greater than 32768), but the following error occurs:
0%| | 0/187 [00:00<?, ?it/s]WARNING 05-16 10:48:52 scheduler.py:947] Input prompt (48173 tokens) is too long and exceeds limit of 32768
WARNING 05-16 10:48:52 scheduler.py:947] Input prompt (48173 tokens) is too long and exceeds limit of 32768
WARNING 05-16 10:48:52 scheduler.py:947] Input prompt (48173 tokens) is too long and exceeds limit of 32768
WARNING 05-16 10:48:52 scheduler.py:947] Input prompt (48173 tokens) is too long and exceeds limit of 32768
WARNING 05-16 10:48:52 scheduler.py:947] Input prompt (48173 tokens) is too long and exceeds limit of 32768
WARNING 05-16 10:48:52 scheduler.py:947] Input prompt (48173 tokens) is too long and exceeds limit of 32768
WARNING 05-16 10:48:52 scheduler.py:947] Input prompt (48173 tokens) is too long and exceeds limit of 32768
[rank2]: Traceback (most recent call last):
[rank2]: File "/cto_studio/huyongquan/python_project/open-r1/src/open_r1/grpo.py", line 358, in <module>
[rank2]: main(script_args, training_args, model_args)
[rank2]: File "/cto_studio/huyongquan/python_project/open-r1/src/open_r1/grpo.py", line 309, in main
[rank2]: train_result = trainer.train(resume_from_checkpoint=checkpoint)
|
https://github.com/huggingface/open-r1/issues/645
|
closed
|
[] | 2025-05-16T03:28:50Z
| 2025-06-12T08:45:15Z
| null |
huyongquan
|
huggingface/transformers
| 38,165
|
Gemma 3 Pipeline does not accept dictionary with no images
|
### System Info
System info not really relevant as the bug is root caused in my description below.
- `transformers` version: 4.51.3
- Platform: Windows-10-10.0.26100-SP0
- Python version: 3.11.9
- Huggingface_hub version: 0.31.2
- Safetensors version: 0.5.3
- Accelerate version: 1.7.0
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (GPU?): 2.4.0+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script:Yes
- GPU type: NVIDIA GeForce RTX 3090
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
This issue can be reproduced using the following snippet, copied from the Gemma 3 docs, on transformers versions up to and including 4.51.3.
```
from transformers import pipeline
import torch
pipe = pipeline(
"image-text-to-text",
model="google/gemma-3-12b-it",
device="cuda", # Or "cpu" if you don't have a compatible GPU
torch_dtype=torch.bfloat16 # Or torch.float16 or torch.float32 based on your hardware/needs
)
messages = [
{
"role": "system",
"content": [{"type": "text", "text": "You are a helpful assistant."}]
},
{
"role": "user",
"content": [
# Removed the image link from the example
{"type": "text", "text": "What is the capital of France?"} # Keep only the text part
]
}
]
output = pipe(text=messages, max_new_tokens=200)
print(output[0]["generated_text"][-1]["content"])
```
which will result in the error:
```
Traceback (most recent call last):
File "D:\experiments\personal\gemma_editor\gemma_editor.py", line 78, in <module>
run_gemma(SENTENCES)
File "D:\experiments\personal\gemma_editor\gemma_editor.py", line 41, in run_gemma
output = pipe(text=messages)
^^^^^^^^^^^^^^^^^^^
File "D:\experiments\personal\gemma_editor\venv\Lib\site-packages\transformers\pipelines\image_text_to_text.py", line 311, in __call__
return super().__call__(Chat(text, images), **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\experiments\personal\gemma_editor\venv\Lib\site-packages\transformers\pipelines\base.py", line 1379, in __call__
return self.run_single(inputs, preprocess_params, forward_params, postprocess_params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\experiments\personal\gemma_editor\venv\Lib\site-packages\transformers\pipelines\base.py", line 1385, in run_single
model_inputs = self.preprocess(inputs, **preprocess_params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\experiments\personal\gemma_editor\venv\Lib\site-packages\transformers\pipelines\image_text_to_text.py", line 365, in preprocess
model_inputs = self.processor(images=images, text=text, return_tensors=self.framework, **processing_kwargs).to(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\experiments\personal\gemma_editor\venv\Lib\site-packages\transformers\models\gemma3\processing_gemma3.py", line 106, in __call__
image_inputs = self.image_processor(batched_images, **output_kwargs["images_kwargs"])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\experiments\personal\gemma_editor\venv\Lib\site-packages\transformers\image_processing_utils.py", line 42, in __call__
return self.preprocess(images, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\experiments\personal\gemma_editor\venv\Lib\site-packages\transformers\utils\generic.py", line 866, in wrapper
return func(*args, **valid_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\experiments\personal\gemma_editor\venv\Lib\site-packages\transformers\models\gemma3\image_processing_gemma3.py", line 361, in preprocess
if do_rescale and is_scaled_image(images[0]):
~~~~~~^^^
IndexError: list index out of range
```
### Expected behavior
The problem here is that within image_text_to_text, the dictionary is made into type: Chat. [By default chat makes images an empty list](https://github.com/huggingface/transformers/blame/v4.51.3/src/transformers/pipelines/image_text_to_text.py#L114). Then this is propagated to [images](https://github.com/huggingface/transformers/blame/v4.51.3/src/transformers/pipelines/image_text_to_text.py#L353C16-L353C39) where it ultimately lands in processing_gemma_3.py where the [if condition only checks if the images are None](https://github.com/huggingface/transformers/blob/v4.51.3/src/transformers/models/gemma3/
|
https://github.com/huggingface/transformers/issues/38165
|
closed
|
[
"bug"
] | 2025-05-16T01:34:15Z
| 2025-06-23T08:03:03Z
| 6
|
sheldonlai
|
pytorch/xla
| 9,178
|
Code sample for basic mark sharding doesn't work
|
## 📚 Documentation
This document:
https://docs.pytorch.org/xla/master/learn/api-guide.html#module-torch_xla.distributed.spmd
has an important code sample demonstrating how to shard tensors across devices. It doesn't work: there are imports and setup steps that are not included.
More broadly, all of these samples should go into a larger guide that gently walks a user through the process of understanding how PT/XLA handles multi-device and multi-host up through gSPMD. It's very elegant and powerful, but poorly documented.
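For reference, this is roughly the self-contained version I had to piece together myself (API names as of recent releases, so some details may need adjusting):
```python
# roughly the self-contained example I expected the doc to show; pieced together
# from several pages, so some details may be release-dependent
import numpy as np
import torch
import torch_xla.core.xla_model as xm
import torch_xla.runtime as xr
import torch_xla.distributed.spmd as xs
from torch_xla.distributed.spmd import Mesh

xr.use_spmd()  # must be enabled before any XLA tensor is created

num_devices = xr.global_runtime_device_count()
device_ids = np.array(range(num_devices))
mesh = Mesh(device_ids, (num_devices, 1), ("data", "model"))

t = torch.randn(16, 4).to(xm.xla_device())
xs.mark_sharding(t, mesh, ("data", "model"))  # shard dim 0 across the 'data' axis
```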
|
https://github.com/pytorch/xla/issues/9178
|
open
|
[
"distributed",
"documentation"
] | 2025-05-15T17:28:02Z
| 2025-05-19T13:59:30Z
| 0
|
yaoshiang
|
pytorch/xla
| 9,177
|
make CI build fast
|
## 🐛 Bug
The CI build takes ~2 hours, which significantly affects dev velocity.
Judging from https://github.com/pytorch/xla/actions/runs/14986142268/job/42100348515, the `Build PyTorch/XLA` step seems to be the bottleneck (it takes 1h15m and blocks a whole bunch of downstream test jobs). If we can speed this up, we may shave a large chunk off the build time.
Potential low-hanging fruit:
- The log suggests that there are only 32 parallel bazel actions for this job, far below our recommended dev set-up (112 actions). I suspect the worker machines have only 32 vCPUs. Can we upgrade to 128+ vCPUs? Build machines are highly leveraged, so investment there will pay for itself quickly in terms of dev velocity.
- Set up a bazel remote build farm so that the build is parallelized across machines.
|
https://github.com/pytorch/xla/issues/9177
|
open
|
[
"tech debt",
"CI",
"build"
] | 2025-05-15T16:48:36Z
| 2025-05-15T16:48:36Z
| 0
|
zhanyong-wan
|
huggingface/lerobot
| 1,114
|
How to collect data and train the policy from Lerobot totally out of the leader arm only by learning from demonstration using the main arm such as XARM or UR series
|
https://github.com/huggingface/lerobot/issues/1114
|
closed
|
[
"question",
"robots",
"stale"
] | 2025-05-15T15:31:13Z
| 2025-12-31T02:35:25Z
| null |
David-Kingsman
|
|
pytorch/data
| 1,489
|
Implement a Cache node
|
### 🚀 The feature
At some point, there was an [`InMemoryCacheHolder`](https://docs.pytorch.org/data/0.9/generated/torchdata.datapipes.iter.InMemoryCacheHolder.html?highlight=cache#torchdata.datapipes.iter.InMemoryCacheHolder) datapipe. However, this has been removed from the new node design.
This would be very useful for some expensive parts of the DAG that would gain from being stored in memory rather than recomputed each time.
### Motivation, pitch
Some transforms are quite expensive, and I would like to avoid needing to repeat them at each epoch. Therefore, it would be handy to have some cache mechanism that would allow skipping expensive parts of the DAG if they have been computed before. The user could have a choice to cache on memory or on the disk.
However, I'm not sure what the interface would look like. I feel like there would be 2 nodes needed, sharing the cache:
- One at the start of the DAG branch to skip (that would check if passing through the branch is needed)
- One at the end of the branch (that would store the result of the branch for it to be used later)
I can't really think of another way to make this work, as you can't have just the first one (or else how do you store the result of the computation at the end of the branch?), and you can't have just the last one (because how do you determine whether the item has been cached or not?).
As far as I understand nodes, they are executed in a bottom-up manner, with the last node requiring the result of the previous node, itself requiring the result of the previous one, all the way up to the first node. However, this design makes it difficult to deal with a cache as you need to decide which branch to take from the bottom. This would be easier with a top-down design, with the data coming from the first node, up to the entrance of the cache, which would be able to make a decision on the branch to choose to continue.
Maybe having some sort of `CacheWrapper` that would wrap a single node would be the solution? But then it would be cumbersome to cache entire branches of the DAG. A rough sketch of what I mean by a `CacheWrapper` is below (the method names and state handling are my best guess at the current node API).
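```python
# rough sketch of the CacheWrapper idea, written against torchdata.nodes.BaseNode;
# the exact method names and state handling are my guess and may not match the API
from torchdata.nodes import BaseNode

class InMemoryCache(BaseNode):
    def __init__(self, source: BaseNode):
        super().__init__()
        self.source = source
        self._cache = []
        self._filled = False
        self._idx = 0

    def reset(self, initial_state=None):
        super().reset(initial_state)
        self._idx = 0
        if not self._filled:
            self._cache = []
            self.source.reset()

    def next(self):
        if self._filled:
            if self._idx >= len(self._cache):
                raise StopIteration
            item = self._cache[self._idx]
            self._idx += 1
            return item
        try:
            item = next(self.source)  # pull from upstream until it is exhausted
        except StopIteration:
            self._filled = True       # the whole upstream branch is now cached in memory
            raise
        self._cache.append(item)
        return item
```
This only covers the single-wrapper case, not the "two nodes sharing a cache" variant.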
|
https://github.com/meta-pytorch/data/issues/1489
|
open
|
[] | 2025-05-15T09:47:19Z
| 2025-05-20T04:25:09Z
| 1
|
leleogere
|
huggingface/transformers
| 38,147
|
How to check the number of tokens processed or the load of each expert in the Qwen3 MoE model during inference?
|
https://github.com/huggingface/transformers/issues/38147
|
closed
|
[] | 2025-05-15T09:21:29Z
| 2025-05-15T13:36:53Z
| null |
wumaotegan
|
|
huggingface/diffusers
| 11,561
|
FluxFillPipeline Support load IP Adapter.
|
### Model/Pipeline/Scheduler description
'FluxFillPipeline' object has no attribute 'load_ip_adapter'
I really need this. Thanks!
### Open source status
- [ ] The model implementation is available.
- [ ] The model weights are available (Only relevant if addition is not a scheduler).
### Provide useful links for the implementation
_No response_
|
https://github.com/huggingface/diffusers/issues/11561
|
closed
|
[
"help wanted",
"Good second issue"
] | 2025-05-15T08:58:42Z
| 2025-06-17T08:48:28Z
| 6
|
PineREN
|
huggingface/lerobot
| 1,111
|
Unrecognized argument policy.path. How to load a pretrained model?
|
When I run this command:
```
python lerobot/scripts/control_robot.py --robot.type so100 --control.type record --control.fps 30 --control.single_task "Grasp a yellow tape and put it to yellow square." --control.repo_id a_cam_1/result --control.tags '["tutorial"]' --control.warmup_time_s 5 --control.episode_time_s 30 --control.reset_time_s 10 --control.m_episodes 1 --control.push_to_hub false --control.policy,path output/checkpoints/last/pretrained_model
```
I got:
```
usage: control_robot.py [-h] [--config_path str] [--robot str] [--robot.type {aloha,koch,koch_bimanual,moss,so101,so100,stretch,lekiwi}] [--robot.gripper_open_degree str]
[--robot.max_relative_target str] [--robot.ip str] [--robot.port str] [--robot.video_port str] [--robot.cameras str] [--robot.calibration_dir str]
[--robot.leader_arms str] [--robot.follower_arms str] [--robot.teleop_keys str] [--robot.mock str] [--control str]
[--control.type {calibrate,teleoperate,record,replay,remote_robot}] [--control.arms str] [--control.teleop_time_s str] [--control.single_task str]
[--policy str] [--control.policy.type {act,diffusion,pi0,tdmpc,vqbet,pi0fast}] [--control.policy.replace_final_stride_with_dilation str]
[--control.policy.pre_norm str] [--control.policy.dim_model str] [--control.policy.n_heads str] [--control.policy.dim_feedforward str]
[--control.policy.feedforward_activation str] [--control.policy.n_encoder_layers str] [--control.policy.n_decoder_layers str]
[--control.policy.use_vae str] [--control.policy.n_vae_encoder_layers str] [--control.policy.temporal_ensemble_coeff str]
[--control.policy.kl_weight str] [--control.policy.optimizer_lr_backbone str] [--control.policy.drop_n_last_frames str]
[--control.policy.use_separate_rgb_encoder_per_camera str] [--control.policy.down_dims str] [--control.policy.kernel_size str]
[--control.policy.n_groups str] [--control.policy.diffusion_step_embed_dim str] [--control.policy.use_film_scale_modulation str]
[--control.policy.noise_scheduler_type str] [--control.policy.num_train_timesteps str] [--control.policy.beta_schedule str]
[--control.policy.beta_start str] [--control.policy.beta_end str] [--control.policy.prediction_type str] [--control.policy.clip_sample str]
[--control.policy.clip_sample_range str] [--control.policy.num_inference_steps str] [--control.policy.do_mask_loss_for_padding str]
[--control.policy.scheduler_name str] [--control.policy.num_steps str] [--control.policy.attention_implementation str]
[--control.policy.train_expert_only str] [--control.policy.train_state_proj str] [--control.policy.n_action_repeats str] [--control.policy.horizon str]
[--control.policy.image_encoder_hidden_dim str] [--control.policy.state_encoder_hidden_dim str] [--control.policy.latent_dim str]
[--control.policy.q_ensemble_size str] [--control.policy.mlp_dim str] [--control.policy.discount str] [--control.policy.use_mpc str]
[--control.policy.cem_iterations str] [--control.policy.max_std str] [--control.policy.min_std str] [--control.policy.n_gaussian_samples str]
[--control.policy.n_pi_samples str] [--control.policy.uncertainty_regularizer_coeff str] [--control.policy.n_elites str]
[--control.policy.elite_weighting_temperature str] [--control.policy.gaussian_mean_momentum str] [--control.policy.max_random_shift_ratio str]
[--control.policy.reward_coeff str] [--control.policy.expectile_weight str] [--control.policy.value_coeff str] [--control.policy.consistency_coeff str]
[--control.policy.advantage_scaling str] [--control.policy.pi_coeff str] [--control.policy.temporal_decay_coeff str]
[--control.policy.target_model_momentum str] [--control.policy.n_action_pred_token str] [--control.policy.action_chunk_size str]
[--control.policy.vision_backbone str] [--control.policy.crop_shape str] [--control.policy.crop_is_random str]
[--control.policy.pretrained_backbone_weights str] [--control.policy.use_group_norm str] [--control.policy.spatial_softmax_num_keypoints str]
[--control.policy.n_vqvae_training_steps str] [--control.policy.vqvae_n_embed str] [--control.policy.vqvae_embedding_dim str]
[--control.policy.vqvae_enc_hidden_dim str] [--control.policy.gpt_block_size str] [--control.policy.gpt_input_dim str]
[--control.policy.gpt_output_dim str] [--control.policy.gpt_n_layer str] [--control.policy.gpt_n_head str] [--control.policy.gpt_hidden_dim str]
|
https://github.com/huggingface/lerobot/issues/1111
|
closed
|
[
"bug"
] | 2025-05-15T03:13:27Z
| 2025-06-24T06:20:08Z
| null |
milong26
|
pytorch/xla
| 9,175
|
Add documentation on multi-controller
|
## 📚 Documentation
Add documentation demonstrating multi-node coordination. Start with 2 machines, each with [n] TPUs, and demonstrate ssh-ing into each machine to run the same script with an all-reduce. Reference the necessary network configuration to allow two hosts to communicate on GCP (optional: AWS and Azure). It cannot be just a toy example on the same machine using localhost for coordination. Optional: demonstrate using Slurm to further simplify coordination.
Should end up similar to:
https://docs.jax.dev/en/latest/multi_process.html
and
https://docs.pytorch.org/tutorials/intermediate/ddp_series_multinode.html
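For reference, a minimal sketch of the kind of script such documentation could demonstrate (assuming each host already has its TPU and network environment configured, which is exactly the part the documentation should spell out; API names follow recent torch_xla releases):
```python
import torch
import torch_xla.core.xla_model as xm
import torch_xla.distributed.xla_multiprocessing as xmp


def _mp_fn(index):
    device = xm.xla_device()
    # Each process contributes its global ordinal; after the all-reduce every
    # process on every host should hold the same sum.
    t = torch.tensor([float(xm.get_ordinal())], device=device)
    xm.all_reduce(xm.REDUCE_SUM, [t])  # in-place all-reduce across all processes
    xm.mark_step()
    print(f"ordinal {xm.get_ordinal()}: {t.item()}")


if __name__ == "__main__":
    # Run this same script on every host (via ssh or slurm).
    xmp.spawn(_mp_fn, args=())
```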
|
https://github.com/pytorch/xla/issues/9175
|
open
|
[
"documentation"
] | 2025-05-15T02:50:00Z
| 2025-05-19T13:58:20Z
| 0
|
yaoshiang
|
huggingface/diffusers
| 11,555
|
`device_map="auto"` supported for diffusers pipelines?
|
### Describe the bug
Hey dear diffusers team,
for `DiffusionPipline`, as I understand (hopefully correctly) from [this part of the documentation](https://huggingface.co/docs/diffusers/v0.33.1/en/api/pipelines/overview#diffusers.DiffusionPipeline.from_pretrained.device_map), it should be possible to specify `device_map="auto"` when loading a pipeline with `from_pretrained` but this results in a value error saying that this is not supported.
However, the documentation on [device placement](https://huggingface.co/docs/diffusers/en/tutorials/inference_with_big_models#device-placement) currently states that only the "balanced" strategy is supported.
Is this possibly similar to #11432 and should be removed from the docstrings / documentation? Happy to help on this with a PR if it turns out to be a mistake in the documentation.
Thanks a lot for your hard work!
### Reproduction
```python
from diffusers import DiffusionPipeline
pipeline = DiffusionPipeline.from_pretrained("stable-diffusion-v1-5/stable-diffusion-v1-5", device_map="auto")
```
or
```python
from diffusers import StableDiffusionPipeline
pipe = StableDiffusionPipeline.from_pretrained("stable-diffusion-v1-5/stable-diffusion-v1-5", device_map="auto")
```
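For completeness, a sketch of the workaround using the "balanced" strategy, which the error message below lists as the only supported option (assuming that strategy is acceptable for your setup):
```python
from diffusers import DiffusionPipeline

# "balanced" is the strategy the error message and the device-placement docs list as supported.
pipeline = DiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", device_map="balanced"
)
```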
### Logs
```shell
---------------------------------------------------------------------------
NotImplementedError Traceback (most recent call last)
Cell In[12], line 3
1 from diffusers import StableDiffusionPipeline
----> 3 pipe = StableDiffusionPipeline.from_pretrained("stable-diffusion-v1-5/stable-diffusion-v1-5", device_map="auto")
File ~/miniconda3/envs/pruna/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py:114, in validate_hf_hub_args.<locals>._inner_fn(*args, **kwargs)
111 if check_use_auth_token:
112 kwargs = smoothly_deprecate_use_auth_token(fn_name=fn.__name__, has_token=has_token, kwargs=kwargs)
--> 114 return fn(*args, **kwargs)
File ~/miniconda3/envs/pruna/lib/python3.10/site-packages/diffusers/pipelines/pipeline_utils.py:745, in DiffusionPipeline.from_pretrained(cls, pretrained_model_name_or_path, **kwargs)
742 raise ValueError("`device_map` must be a string.")
744 if device_map is not None and device_map not in SUPPORTED_DEVICE_MAP:
--> 745 raise NotImplementedError(
746 f"{device_map} not supported. Supported strategies are: {', '.join(SUPPORTED_DEVICE_MAP)}"
747 )
749 if device_map is not None and device_map in SUPPORTED_DEVICE_MAP:
750 if is_accelerate_version("<", "0.28.0"):
NotImplementedError: auto not supported. Supported strategies are: balanced
```
### System Info
- 🤗 Diffusers version: 0.33.1
- Platform: Linux-5.15.0-139-generic-x86_64-with-glibc2.35
- Running on Google Colab?: No
- Python version: 3.10.16
- PyTorch version (GPU?): 2.7.0+cu126 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Huggingface_hub version: 0.30.2
- Transformers version: 4.51.3
- Accelerate version: 1.6.0
- PEFT version: 0.15.2
- Bitsandbytes version: 0.45.5
- Safetensors version: 0.5.3
- xFormers version: not installed
- Accelerator: NVIDIA H100 PCIe, 81559 MiB
NVIDIA H100 PCIe, 81559 MiB
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: yes
### Who can help?
_No response_
|
https://github.com/huggingface/diffusers/issues/11555
|
open
|
[
"bug"
] | 2025-05-14T16:49:32Z
| 2025-05-19T09:44:29Z
| 4
|
johannaSommer
|
pytorch/torchtitan
| 1,192
|
document the usage of environment variables
|
This is one of the community requests.
Similarly, we should also document inductor flag usage.
Format can be a dedicated `.md` under `docs/`.
|
https://github.com/pytorch/torchtitan/issues/1192
|
open
|
[
"documentation",
"better engineering",
"high priority",
"triage review"
] | 2025-05-14T08:41:36Z
| 2025-05-14T08:41:40Z
| 0
|
tianyu-l
|
huggingface/lerobot
| 1,107
|
Does Pi0 use PaliGemma VLM pretrained model weights?
|
I attempted to finetune the Pi0 model, but noticed that it does not download the pretrained weights of Paligemma from Hugging Face. Specifically, I found that Pi0 initializes the VLM with:
```python
self.paligemma = PaliGemmaForConditionalGeneration(config=config.paligemma_config)
```
instead of using:
```python
AutoModel.from_pretrained("google/paligemma-3b-pt-224")
```
This seems to result in the model not loading the pretrained weights.
Could you please confirm whether this is the intended behavior? Should Pi0 load Paligemma’s pretrained weights from Hugging Face, or is there a reason it initializes the model from scratch?
Thank you!
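For reference, a sketch of what loading the published weights could look like (an assumption about the intended behavior, not a statement of how lerobot should do it):
```python
from transformers import PaliGemmaForConditionalGeneration

# Load the pretrained PaliGemma checkpoint from the Hub instead of building
# the model from a bare config, which yields randomly initialized weights.
paligemma = PaliGemmaForConditionalGeneration.from_pretrained("google/paligemma-3b-pt-224")
```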
|
https://github.com/huggingface/lerobot/issues/1107
|
closed
|
[
"bug",
"question",
"policies"
] | 2025-05-14T06:47:15Z
| 2025-10-08T08:44:03Z
| null |
lxysl
|
huggingface/lerobot
| 1,106
|
How to convert an image-mode LeRobot dataset to video mode?
|
https://github.com/huggingface/lerobot/issues/1106
|
open
|
[
"question",
"dataset"
] | 2025-05-14T03:54:42Z
| 2025-08-08T16:42:33Z
| null |
hairuoliu1
|
|
huggingface/transformers.js
| 1,316
|
May I ask how to set the HF_TOKEN on the browser side?
|
### Question
May I ask how to set the HF_TOKEN on the browser side?

The following is my code:
```
const model = await AutoModel.from_pretrained("briaai/RMBG-2.0", {
config: {
model_type: "custom",
},
headers: {
'Authorization': `Bearer hf_xxxxxxxxxxxxxxx`
}
});
```
|
https://github.com/huggingface/transformers.js/issues/1316
|
open
|
[
"question"
] | 2025-05-14T01:43:02Z
| 2025-05-27T21:53:45Z
| null |
dengbupapapa
|
huggingface/xet-core
| 321
|
How to resume DL of partial existing file using xet + huggingface-cli download if not previously downloaded using HF tools / cache?
|
How to resume DL of partial existing file using xet + huggingface-cli download if not previously downloaded using HF tools / cache?
I guess there may be a way in the scenario I had, but by my own mistake I apparently chose some incorrect usage and caused the deletion of the 95%-complete partial local file instead of resuming / recovering its download via XET.
e.g. I tried with a fresh tool install and a process something like:
% pip install -U "huggingface_hub[hf_xet]"
% pwd
/whatever/some_tmpdir
% ls -lh somefile
35G somefile
// Partial file exists and is 95% complete but short / truncated by failed copy previously.
% huggingface-cli download --local-dir . some_repo_id some_dir/somefile
The end result was apparently the deletion of the pre-existing 95% complete 'somefile' from the current directory and the initiation of new download using xet protocol from the xet enabled some_repo_id.
Based on huggingface-cli download --help and the articles about xet, I had expected it to recognize the pre-existing "somefile" in the current directory (with an identical name and target path to the file being requested) as a relevant partial file, and to recover / complete the download by filling in the missing chunks. That despite the fact that there was no cache directory or git LFS structure around the current working directory; it contained only the isolated partial file.
huggingface-cli download --help
usage: huggingface-cli <command> [<args>] download [-h] [--repo-type {model,dataset,space}] [--revision REVISION] [--include [INCLUDE ...]] [--exclude [EXCLUDE ...]] [--cache-dir CACHE_DIR]
[--local-dir LOCAL_DIR] [--local-dir-use-symlinks {auto,True,False}] [--force-download] [--resume-download] [--token TOKEN] [--quiet]
[--max-workers MAX_WORKERS]
repo_id [filenames ...]
positional arguments:
repo_id ID of the repo to download from (e.g. `username/repo-name`).
filenames Files to download (e.g. `config.json`, `data/metadata.jsonl`).
options:
...
--local-dir LOCAL_DIR
If set, the downloaded file will be placed under this directory. Check out https://huggingface.co/docs/huggingface_hub/guides/download#download-files-to-local-folder for more
details.
...
--resume-download Deprecated and ignored. Downloading a file to local dir always attempts to resume previously interrupted downloads (unless hf-transfer is enabled).
...
huggingface-cli download --local-dir . some_repo_id some_dir/somefile
Downloading 'somefile' to '.cache/huggingface/download/whatever.incomplete'
Xet Storage is enabled for this repo. Downloading file from Xet Storage..
...
If there's a different way to accomplish this partial-file recovery (or even to repair a corrupted / patched file via xet's chunk-filling capabilities), then perhaps clarifying / expanding the usage documentation to cover this kind of common scenario could help?
The desired result would be something like
rsync --verbose --archive server:/some_repo_id/somedir/somefile somefile
which would use rolling hash chunk based rsync algorithm / protocol downloading to complete the retrieval of the somefile in the current directory regardless of other context.
Also I wonder if it'd be interesting to have a rsync to xet 'bridge' so anyone could use a normal rsync client but pull xet files from HF repos if HF doesn't want to support rsync itself in whole but has the conceptually aligned XET back end that could be "mapped" to rsync chunk based protocol (I suppose) by a thin protocol adapter?
Lots of e.g. linux distribution mirror sites support rsync as an HTTP/HTTPS alternative so it presumably has some significant market-share for people doing IT / devops / mlops / whatever use case downloads.
|
https://github.com/huggingface/xet-core/issues/321
|
closed
|
[] | 2025-05-13T22:16:02Z
| 2025-05-16T17:48:45Z
| null |
ghchris2021
|
huggingface/chat-ui
| 1,819
|
Correct syntax of .env: what are those backticks for multiline strings?
|
I have read the suggestion to check the discussions, but I was unable to find an answer, so something very basic looks like it is missing here.
The documentation has many examples suggesting that long values in env vars be surrounded by backticks.
However, when I do this I get errors like:
JSON5: invalid character '`' at 1:1
I have checked around and have been unable to find any reference to .env files using backticks for multiline strings, and the parser refuses them.
This is happening with a git clone of main but also with tagged versions.
So how do you actually use this apparently non-standard syntax, and how is it possible that no one else but me is having this issue?
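For illustration, the kind of value in question looks roughly like this (a hypothetical MODELS entry; the exact fields do not matter here, only the backticks around the multi-line value):
```
MODELS=`[
  {
    "name": "my-model",
    "parameters": { "temperature": 0.7 }
  }
]`
```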
|
https://github.com/huggingface/chat-ui/issues/1819
|
open
|
[
"support"
] | 2025-05-13T12:21:43Z
| 2025-05-23T09:37:09Z
| 1
|
sciabarracom
|
huggingface/optimum
| 2,262
|
New Release to Support `transformers>=4.51.0`?
|
### Feature request
The latest release (`1.24.0`) is 4 months old. There have been around 38 commits since the last release. Will there be a new release soon?
### Motivation
There is a medium CVE related to `transformers==4.48.1`, which is the latest compatible version.
GHSA-fpwr-67px-3qhx
I am also blocked from upgrading to `vllm==0.8.5` within my system, as it requires `transformers>=4.51.0`. `transformers==4.48.1` is compatible only with up to `vllm==0.8.2`, which has critical and high CVEs.
GHSA-hj4w-hm2g-p6w5
GHSA-9f8f-2vmf-885j
It looks like the current dependencies in the `main` branch will mitigate these issues completely. Is there any blocker to creating a new release from current state?
### Your contribution
Don't think I will be granted permissions to create releases in this project.
|
https://github.com/huggingface/optimum/issues/2262
|
closed
|
[] | 2025-05-13T07:46:15Z
| 2025-05-13T22:27:08Z
| 2
|
yxtay
|
huggingface/lerobot
| 1,101
|
ValueError: No integer found between bounds [low_factor=np.float32(-0.001953125), upp_factor=np.float32(-0.001953125)]
|
### System Info
```Shell
2025, Ubuntu, Python 3.10; occurs when doing teleoperation
```
### Information
- [ ] One of the scripts in the examples/ folder of LeRobot
- [x] My own task or dataset (give details below)
### Reproduction
python lerobot/scripts/control_robot.py --robot.type=so100 --robot.cameras='{}' --control.type=teleoperate
### Expected behavior
How should I deal with this error?
|
https://github.com/huggingface/lerobot/issues/1101
|
closed
|
[
"question"
] | 2025-05-13T05:06:35Z
| 2025-06-19T14:25:08Z
| null |
qingx-cyber
|
pytorch/torchtitan
| 1,184
|
[Question] CP and DP
|
Hi, this is a really great repo! Thanks for open-sourcing it!
I am reading the code of how torchtitan handles multi-dimensional parallelism. It seems `cp` is one of the mesh dimensions interacting with `dp_shard`, `dp_replicate`, etc. My understanding of `cp` is that it is orthogonal to the other parallelisms. For example, `dp_shard=8`, `dp_replicate=1`, and `cp=8` should be a valid configuration for an 8-GPU node. But according to the code, it will raise an error since `dp_shard * cp != world_size`.
https://github.com/pytorch/torchtitan/blob/6df8c8925bb2ba9b4e6aa88cece0e3f0633ab6ce/torchtitan/distributed/parallel_dims.py#L48
|
https://github.com/pytorch/torchtitan/issues/1184
|
closed
|
[
"question",
"module: context parallel"
] | 2025-05-13T03:30:10Z
| 2025-05-13T17:19:22Z
| null |
galalalala
|
huggingface/diffusers
| 11,542
|
What's the difference between 'example/train_text_to_image_lora.py' and 'example/research_projects/lora/train_text_to_image_lora.py' ?
|
I want to use the "--train_text_encoder" argument, but it only exists in the latter script.
|
https://github.com/huggingface/diffusers/issues/11542
|
closed
|
[] | 2025-05-13T01:41:19Z
| 2025-06-10T20:35:10Z
| 2
|
night-train-zhx
|
huggingface/lerobot
| 1,097
|
UnboundLocalError: local variable 'action' referenced before assignment
|
May I ask where the problem lies? It occurred during policy evaluation, and I have been searching for a long time without finding a solution.
(lerobot) wzx@wzx:~/lerobot$ python lerobot/scripts/control_robot.py \
> --robot.type=so101 \
> --control.type=record \
> --control.fps=30 \
> --control.single_task="Grasp a lego block and put it in the bin." \
> --control.repo_id=${HF_USER}/eval_act_so101_test \
> --control.tags='["tutorial"]' \
> --control.warmup_time_s=5 \
> --control.episode_time_s=30 \
> --control.reset_time_s=30 \
> --control.num_episodes=10 \
> --control.display_data=true \
> --control.push_to_hub=true \
> --control.policy.path=outputs/train/act_so101_test/checkpoints/last/pretrained_model
INFO 2025-05-12 22:54:05 ol_robot.py:408 {'control': {'display_data': True,
'episode_time_s': 30,
'fps': 30,
'num_episodes': 10,
'num_image_writer_processes': 0,
'num_image_writer_threads_per_camera': 4,
'play_sounds': True,
'policy': {'beta_end': 0.02,
'beta_schedule': 'squaredcos_cap_v2',
'beta_start': 0.0001,
'clip_sample': True,
'clip_sample_range': 1.0,
'crop_is_random': True,
'crop_shape': (84, 84),
'device': 'cuda',
'diffusion_step_embed_dim': 128,
'do_mask_loss_for_padding': False,
'down_dims': (512, 1024, 2048),
'drop_n_last_frames': 7,
'horizon': 16,
'input_features': {'observation.images.laptop': {'shape': (3,
480,
640),
'type': <FeatureType.VISUAL: 'VISUAL'>},
'observation.images.phone': {'shape': (3,
480,
640),
'type': <FeatureType.VISUAL: 'VISUAL'>},
'observation.state': {'shape': (6,),
'type': <FeatureType.STATE: 'STATE'>}},
'kernel_size': 5,
'n_action_steps': 8,
'n_groups': 8,
'n_obs_steps': 2,
'noise_scheduler_type': 'DDPM',
'normalization_mapping': {'ACTION': <NormalizationMode.MIN_MAX: 'MIN_MAX'>,
'STATE': <NormalizationMode.MIN_MAX: 'MIN_MAX'>,
'VISUAL': <NormalizationMode.MEAN_STD: 'MEAN_STD'>},
'num_inference_steps': None,
'num_train_timesteps': 100,
'optimizer_betas': (0.95, 0.999),
'optimizer_eps': 1e-08,
'optimizer_lr': 0.0001,
'optimizer_weight_decay': 1e-06,
'output_features': {'action': {'shape': (6,),
'type': <FeatureType.ACTION: 'ACTION'>}},
'prediction_type': 'epsilon',
'pretrained_backbone_weights': None,
'scheduler_name': 'cosine',
'scheduler_warmup_steps': 500,
'spatial_softmax_num_keypoints': 32,
'use_amp': False,
'use_film_scale_modulation': True,
'use_group_norm': True,
'use_separate_rgb_encoder_per_camera': False,
'vision_backbone': 'resnet18'},
'private': False,
'push_to_hub': True,
'repo_id': 'bursomi/eval_act_so101_test',
'reset_time_s': 30,
'resume': False,
'root': None,
'single_task': 'Grasp a lego block and put it in the bin.',
'tags': ['tutorial'],
'video': True,
'warmup_time_s': 5},
'robot': {'calibration_dir': '.cache/calibration/so101',
'cameras': {'laptop': {'camera_index': 2,
'channels': 3,
'color_mode': 'rgb',
'fps': 30,
'height': 480,
'mock': False,
'rotation': None,
|
https://github.com/huggingface/lerobot/issues/1097
|
closed
|
[
"bug",
"question"
] | 2025-05-12T16:06:27Z
| 2025-06-19T14:08:57Z
| null |
incomple42
|
huggingface/lerobot
| 1,093
|
List of available tasks
|
Thank you for your effort. Can you provide a list of available tasks (not just environments) for better understanding and usage?
|
https://github.com/huggingface/lerobot/issues/1093
|
closed
|
[
"question"
] | 2025-05-10T06:18:21Z
| 2025-10-17T12:03:32Z
| null |
return-sleep
|
huggingface/transformers
| 38,052
|
`.to` on a `PreTrainedModel` throws a Pyright type check error. What is the correct way to put a model to the device that does not throw type check errors?
|
### System Info
(venv) nicholas@B367309:tmp(master)$ transformers-cli env
Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points.
- `transformers` version: 4.51.1
- Platform: Linux-5.10.16.3-microsoft-standard-WSL2-x86_64-with-glibc2.39
- Python version: 3.12.3
- Huggingface_hub version: 0.30.2
- Safetensors version: 0.5.3
- Accelerate version: 1.6.0
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (GPU?): 2.6.0+cu126 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA RTX 2000 Ada Generation Laptop GPU
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Here is a small snippet
```python
import torch

from transformers.models.auto.modeling_auto import AutoModelForCausalLM
from transformers.models.llama.modeling_llama import LlamaForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "deepseek-ai/deepseek-coder-1.3b-instruct", torch_dtype=torch.float16
)
assert isinstance(model, LlamaForCausalLM)
model.to("cuda:0")
```
This code runs fine and correctly puts the model on the device; however, Pyright throws a pre-runtime type check error on the `model.to("cuda:0")` call. This is the error:
```plaintext
Pyright: Argument of type "Literal['cuda:0']" cannot be assigned to parameter "self" of
type "LlamaForCausalLM" in function "__call__".
"Literal['cuda:0']" is not assignable to "LlamaForCausalLM" [reportArgumentType]
```
What is the correct way to put a model to the device that will satisfy the type checker?
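For what it's worth, one blunt option is to suppress the specific diagnostic at the call site (a suppression, not a fix; the rule name comes from the error shown above):
```python
# Silences only the reported Pyright rule on this line.
model.to("cuda:0")  # pyright: ignore[reportArgumentType]
```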
### Expected behavior
There should be no static type check error when doing `model.to(<device>)`
|
https://github.com/huggingface/transformers/issues/38052
|
closed
|
[
"bug"
] | 2025-05-09T19:01:15Z
| 2025-06-29T08:03:07Z
| null |
nickeisenberg
|
huggingface/finetrainers
| 401
|
how to train wan using multi-node
|
### Feature request / 功能建议
Hi! I am still wondering about multi-node training of Wan2.1 14B. Do you support FSDP across nodes?
### Motivation / 动机
Currently the memory constraint is very harsh for long-video LoRA fine-tuning.
### Your contribution / 您的贡献
N/A
|
https://github.com/huggingface/finetrainers/issues/401
|
open
|
[] | 2025-05-09T18:11:07Z
| 2025-05-09T18:11:07Z
| null |
Radioheading
|
pytorch/torchtitan
| 1,179
|
FSDP2+DDP vs 2D Device Mesh FSDP2
|
I have a question regarding FSDP2 + DDP: in the torchtitan codebase it is used as FSDP2 -> DDP. The FSDP2 doc says that you can use a 2D device mesh to get the MiCS equivalent from DeepSpeed, which, IIUC, is FSDP wrapped in DDP.
Is there any difference between those two methods that I should be aware of, or are they functionally equivalent and achieve the same speed/results?
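For context, a minimal sketch of the 2D-device-mesh variant with FSDP2 (an illustration only, assuming 8 GPUs split as 2-way replicate x 4-way shard; the exact import path for `fully_shard` varies across PyTorch versions):
```python
import torch
from torch.distributed.device_mesh import init_device_mesh
from torch.distributed.fsdp import fully_shard  # FSDP2-style API

# Launch with torchrun so the default process group exists.
# Outer dim replicates like DDP; inner dim shards parameters like FSDP.
mesh = init_device_mesh("cuda", (2, 4), mesh_dim_names=("replicate", "shard"))

model = torch.nn.Linear(1024, 1024, device="cuda")
fully_shard(model, mesh=mesh)  # hybrid sharding: shard within groups of 4, replicate across 2
```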
|
https://github.com/pytorch/torchtitan/issues/1179
|
closed
|
[] | 2025-05-09T18:02:56Z
| 2025-05-10T16:52:15Z
| 2
|
S1ro1
|
pytorch/torchtitan
| 1,177
|
Can we support outputting checkpoints directly in .pt format?
|
Today we need to do an extra conversion step according to this README: https://github.com/pytorch/torchtitan/blob/main/docs/checkpoint.md
```
python -m torch.distributed.checkpoint.format_utils dcp_to_torch outputs/checkpoint/step-100 /tmp/checkpoint.pt
```
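(For reference, a sketch of the programmatic equivalent, reusing the same paths as above:)
```python
from torch.distributed.checkpoint.format_utils import dcp_to_torch_save

# Convert a DCP checkpoint directory into a single torch.save-style file.
dcp_to_torch_save("outputs/checkpoint/step-100", "/tmp/checkpoint.pt")
```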
I think we should **provide an option for users to specify which format to output their checkpoints** instead, and call this function in torchtitan for users as part of outputting the checkpoint.
------------------------------------------------------------------------------------------
**Bonus:** This conversion step actually fails today if we used FP8 training. I had to manually add the following line to the `dcp_to_torch` function as a hack to get it to work:
```
torch.serialization.add_safe_globals([torchao.float8.fsdp_utils.WeightWithDynamicFloat8CastTensor])
```
It would be great if we can just either implicitly add the safe globals when we output the checkpoint in torchtitan, or simply remove this `WeightWithDynamicFloat8CastTensor` from the BC surface.
|
https://github.com/pytorch/torchtitan/issues/1177
|
open
|
[
"enhancement",
"module: checkpoint"
] | 2025-05-09T16:01:50Z
| 2025-08-21T03:18:12Z
| 8
|
andrewor14
|
huggingface/lerobot
| 1,091
|
Diffusion policy for different tasks instead of PushT
|
Thank you all for the great work. I want to know whether I can train the diffusion policy on tasks other than PushT, and how to achieve that. If the task is a new custom task with a custom dataset, is there any feasible way to do it?
Thank you for your help!
|
https://github.com/huggingface/lerobot/issues/1091
|
closed
|
[
"question",
"policies",
"stale"
] | 2025-05-09T15:44:20Z
| 2025-12-31T02:35:27Z
| null |
siqisiqisiqisiqi
|
huggingface/lerobot
| 1,086
|
push_to_hub error
|
### System Info
```Shell
- `lerobot` version: 0.1.0
- Platform: macOS-14.6.1-arm64-arm-64bit
- Python version: 3.10.13
- Huggingface_hub version: 0.30.2
- Dataset version: 3.5.0
- Numpy version: 2.2.5
- PyTorch version (GPU?): 2.7.0 (False)
- Cuda version: N/A
- Using GPU in script?: <fill in>
```
### Information
- [ ] One of the scripts in the examples/ folder of LeRobot
- [ ] My own task or dataset (give details below)
### Reproduction
```python
import argparse

from lerobot.common.datasets.lerobot_dataset import LeRobotDataset


def parse_args():
    parser = argparse.ArgumentParser(description="Push a local HuggingFace dataset to the Hub")
    parser.add_argument(
        "--path",
        type=str,
        required=True,
        help="Local directory containing the dataset"
    )
    parser.add_argument(
        "--repo_id",
        type=str,
        required=True,
        help="Repository ID on HuggingFace Hub (format: username/dataset_name)"
    )
    parser.add_argument(
        "--private",
        action="store_true",
        help="Whether to make the dataset private"
    )
    # Removed unused arguments
    return parser.parse_args()


def main():
    args = parse_args()
    print(f"Loading dataset from {args.path}...")
    dataset = LeRobotDataset(
        repo_id=args.repo_id,
        root=args.path
    )
    print(f"Pushing dataset to {args.repo_id}...")
    dataset.push_to_hub(
        args.repo_id,
        private=args.private
    )
    print("Dataset successfully pushed to Hub!")
    return 0


if __name__ == "__main__":
    main()
```
<img width="1502" alt="Image" src="https://github.com/user-attachments/assets/36c563a6-ed2e-4deb-b54e-ce5c9889c50b" />
### Expected behavior
upload it to the huggingface
|
https://github.com/huggingface/lerobot/issues/1086
|
closed
|
[
"question"
] | 2025-05-09T03:48:09Z
| 2025-10-17T11:55:25Z
| null |
jungwonshin
|
pytorch/xla
| 9,129
|
set_mat_mul_precision is flakey
|
## 🐛 Bug
set_mat_mul_precision seems to allow switching the precision within a single process... sometimes, like in the precision_tutorial.py/ipynb. But in the unit test test_mat_mul_precision, there's an example of a test that switches the precision unsuccessfully.
## To Reproduce
One unit test in test_mat_mul_precision.py is decorated "@expectedFailure". Once this issue is resolved, we should be able to remove that decorator and see that these tests work as intended, within a loop.
PYTHONPATH="$TEST_CDIR${PYTHONPATH:+:$PYTHONPATH}" python3 -m unittest test_mat_mul_precision.TestMatMulPrecision.test_all
## Expected behavior
The program can switch mat_mul_precision between default, high, and highest dynamically within a single process.
|
https://github.com/pytorch/xla/issues/9129
|
open
|
[
"bug",
"runtime"
] | 2025-05-09T03:22:22Z
| 2025-05-12T12:23:12Z
| 1
|
yaoshiang
|
pytorch/xla
| 9,118
|
Add installation instructions to `benchmarks/README.md`
|
## 📚 Documentation
The [`benchmarks/README.md`](https://github.com/pytorch/xla/blob/master/benchmarks/README.md) does not contain the installation instructions, which is crucial for running the benchmarks.
It requires installing the [`pytorch/benchmark`](https://github.com/pytorch/benchmark) repo and other libraries like `libGL` (required by Llava).
## Solution
Add the instructions to [`benchmarks/README.md`](https://github.com/pytorch/xla/blob/master/benchmarks/README.md).
Install [`pytorch/benchmark`](https://github.com/pytorch/benchmark) as a library.
Install `libGL`.
Install any other requirements.
To verify, make sure the instructions work with the devcontainer.
|
https://github.com/pytorch/xla/issues/9118
|
closed
|
[
"documentation",
"benchmarking"
] | 2025-05-08T17:51:31Z
| 2025-05-22T17:40:05Z
| 1
|
haifeng-jin
|
huggingface/trl
| 3,424
|
[GRPO] How to train model using vLLM and model parallelism on one node?
|
I tried to start the GRPO trainer with vLLM and model parallelism on a single node with 8 GPUs (8 x A100 80G).
My plan was to use one GPU as the vLLM server and the other 7 GPUs to load the model with model parallelism (e.g., `device_map="auto"`).
```
CUDA_VISIBLE_DEVICES=0 trl vllm-serve --model <model_path> &
CUDA_VISIBLE_DEVICES=1,2,3,4,5,6,7 accelerate launch --num_machines 1 --num_processes 1 train.py
```
But the training ran into the following error
`AssertionError: this nccl communicator is created to work on cuda:0, but the input tensor is on cuda:1`
I think it happened when copying the weights to vLLM server.
```
torch==2.6.0+cu124
transformers==4.51.3
trl==0.17.0
accelerate==1.4.0
```
|
https://github.com/huggingface/trl/issues/3424
|
open
|
[] | 2025-05-08T17:22:19Z
| 2025-12-02T22:48:13Z
| null |
zhiqihuang
|
huggingface/lerobot
| 1,082
|
When will the OpenVLA-OFT policy be added?
|
https://github.com/huggingface/lerobot/issues/1082
|
closed
|
[
"question",
"policies",
"stale"
] | 2025-05-08T09:16:16Z
| 2025-12-31T02:35:30Z
| null |
zmf2022
|
|
huggingface/text-generation-inference
| 3,213
|
Does it support the Huawei Atlas 300 graphics card?
|
### System Info
Does the TGI inference framework support Huawei Atlas 300I graphics cards? Could you help come up with a compatibility solution?
### Information
- [x] Docker
- [ ] The CLI directly
### Tasks
- [ ] An officially supported command
- [ ] My own modifications
### Reproduction
.
### Expected behavior
Compatible with Huawei graphics cards. I want to use tgi on the Huawei Atlas300I graphics card
|
https://github.com/huggingface/text-generation-inference/issues/3213
|
open
|
[] | 2025-05-08T03:18:30Z
| 2025-05-08T03:18:38Z
| 0
|
fxb392
|
pytorch/serve
| 3,416
|
Adding vendor RBLN(Rebellions)
|
TorchServe supports varying structures for different accelerator types through the recently added #3371.
Although [Rebellions](https://rebellions.ai/) provides a guide on how to utilize `TorchServe with the RBLN(Rebellions) NPUs` through its official document page(https://docs.rbln.ai/software/model_serving/torchserve/torchserve.html), the current implementation of TorchServe does not recognize the RBLN NPU as a valid accelerator vendor. As a result, even when `gpu_id` is set in configuration using the `RBLN NPU`, the specified RBLN NPUs cannot be properly utilized.
We would like to propose adding RBLN NPU as a recognized accelerator vendor in TorchServe, along with an official user guide. This addition will enable seamless integration and usage of TorchServe in environments equipped with RBLN NPUs.
|
https://github.com/pytorch/serve/issues/3416
|
open
|
[] | 2025-05-08T00:49:45Z
| 2025-05-08T00:49:45Z
| 0
|
rebel-ysseo
|
pytorch/pytorch
| 153,108
|
Introduce unbacked-friendly is_known_contiguous and use it instead of is_contiguous in all locations where there is a general path for the not-known-contiguous case
|
title.
cc @chauhang @penguinwu @ezyang @bobrenjc93
|
https://github.com/pytorch/pytorch/issues/153108
|
closed
|
[
"triaged",
"oncall: pt2",
"module: dynamic shapes",
"data dependent error"
] | 2025-05-07T23:10:19Z
| 2025-09-27T01:23:17Z
| null |
laithsakka
|
huggingface/trl
| 3,419
|
[GRPO] How to do gradient accumulation over sampled outputs?
|
Greetings,
I am wondering whether there is a feature to do gradient accumulation over sampled outputs. For example, if I have `num_generations = 4`, then for a single query `q1` we have `completions = [o1, o2, o3, o4]`. I want to set `per_device_train_batch_size=2, gradient_accumulation_steps=2`, so that the GPU or cluster first samples `[o1, o2]` and calculates the gradient, then does `[o3, o4]`, and accumulates gradients over these two mini-batches for the datapoint `q1`.
I assume this will be equivalent to having `num_generations=4, per_device_train_batch_size=4, gradient_accumulation_steps=1`. But we cannot do this now. Could someone tell me how to properly do that? Do we support such a feature now?
I hope I made myself clear.
Thank you very much!
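For concreteness, the setting in question expressed as a config (a sketch only; whether TRL actually splits the 4 generations into two accumulated micro-batches this way is exactly what is being asked):
```python
from trl import GRPOConfig

# Desired setup: 4 completions per prompt, processed as two micro-batches of 2
# with gradients accumulated across them before the optimizer step.
config = GRPOConfig(
    output_dir="grpo-test",            # hypothetical output directory
    num_generations=4,
    per_device_train_batch_size=2,
    gradient_accumulation_steps=2,
)
```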
|
https://github.com/huggingface/trl/issues/3419
|
closed
|
[] | 2025-05-07T17:49:36Z
| 2025-05-09T06:26:29Z
| null |
SpaceHunterInf
|