| repo | number | title | body | url | state | labels | created_at | updated_at | comments | user |
|---|---|---|---|---|---|---|---|---|---|---|
huggingface/alignment-handbook
| 215
|
Use alignment-handbook on Apple Silicon
|
Hi, is it possible to install and use this tool on Apple Silicon? I am aware that certain dependencies, such as Flash Attention, do not work on Apple Silicon. Has anyone tried and successfully installed this tool without those dependencies?
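For what it's worth, here is a minimal smoke-test sketch of the kind of setup that avoids Flash Attention altogether (the model name is a placeholder and MPS support is assumed, so treat this as illustrative rather than a verified install path):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumes a recent PyTorch build with Metal (MPS) support on Apple Silicon.
device = "mps" if torch.backends.mps.is_available() else "cpu"

model_name = "Qwen/Qwen2.5-0.5B-Instruct"  # placeholder model, not from the handbook recipes
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    attn_implementation="eager",  # sidesteps the flash-attn dependency entirely
    torch_dtype=torch.float16,
).to(device)

inputs = tokenizer("Hello", return_tensors="pt").to(device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=8)[0]))
```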
|
https://github.com/huggingface/alignment-handbook/issues/215
|
closed
|
[] | 2025-04-11T01:28:02Z
| 2025-04-27T01:09:55Z
| 0
|
minhquoc0712
|
huggingface/lerobot
| 968
|
How can I do robot simulation without a physical robot, and how should I learn?
|
Without a physical robot, how can I work with a simulated robot? How should I learn robot simulation? Are there any good recommendations?
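A minimal sketch of one way to start in simulation only, assuming one of the simulation extras that LeRobot ships (the `gym-pusht` package and the environment id below are assumptions, not taken from this issue):
```python
import gymnasium as gym
import gym_pusht  # noqa: F401  # assumed: installed via the gym-pusht simulation extra

# Environment id is an assumption based on the gym-pusht package.
env = gym.make("gym_pusht/PushT-v0")
obs, info = env.reset()
for _ in range(10):
    obs, reward, terminated, truncated, info = env.step(env.action_space.sample())
    if terminated or truncated:
        obs, info = env.reset()
env.close()
```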
|
https://github.com/huggingface/lerobot/issues/968
|
closed
|
[
"question",
"simulation"
] | 2025-04-10T18:10:47Z
| 2025-10-08T12:54:19Z
| null |
harryhu0301
|
huggingface/diffusers
| 11,285
|
value errors in convert to/from diffusers from original stable diffusion
|
### Describe the bug
There's a hard-coded value of 77 tokens somewhere, when it should be using the dimensions of what is actually in the model.
I have a diffusers-layout SD1.5 model, with LongCLIP.
https://huggingface.co/opendiffusionai/xllsd-alpha0
I can pull it locally, then convert to single file format, with
```shell
python convert_diffusers_to_original_stable_diffusion.py \
    --use_safetensors \
    --model_path $SRCM \
    --checkpoint_path $DESTM
```
But then if I try to convert it back, I get size errors because the text encoder is not of size 77.
I should point out that the model WORKS PROPERLY for diffusion when loaded in diffusers format, so I don't have some funky broken model here.
### Reproduction
```python
from transformers import CLIPTextModel, CLIPTokenizer
from diffusers import StableDiffusionPipeline, AutoencoderKL
import torch

pipe = StableDiffusionPipeline.from_single_file(
    "XLLsd-phase0.safetensors",
    torch_dtype=torch.float32,
    use_safetensors=True)
outname = "XLLsd_recreate"
pipe.save_pretrained(outname, safe_serialization=False)
```
### Logs
```shell
venv/lib/python3.12/site-packages/diffusers/models/model_loading_utils.py", line 230, in load_model_dict_into_meta
raise ValueError(
ValueError: Cannot load because text_model.embeddings.position_embedding.weight expected shape torch.Size([77, 768]), but got torch.Size([248, 768]). If you want to instead overwrite randomly initialized weights, please make sure to pass both `low_cpu_mem_usage=False` and `ignore_mismatched_sizes=True`. For more information, see also: https://github.com/huggingface/diffusers/issues/1619#issuecomment-1345604389 as an example.
```
### System Info
- 🤗 Diffusers version: 0.32.2
- Platform: Linux-6.8.0-55-generic-x86_64-with-glibc2.39
- Running on Google Colab?: No
- Python version: 3.12.3
- PyTorch version (GPU?): 2.6.0+cu124 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Huggingface_hub version: 0.29.3
- Transformers version: 4.50.0
- Accelerate version: 1.5.2
- PEFT version: not installed
- Bitsandbytes version: 0.45.2
- Safetensors version: 0.5.3
- xFormers version: not installed
- Accelerator: NVIDIA GeForce RTX 4090, 24564 MiB
### Who can help?
_No response_
|
https://github.com/huggingface/diffusers/issues/11285
|
open
|
[
"bug"
] | 2025-04-10T17:16:42Z
| 2025-05-12T15:03:03Z
| 2
|
ppbrown
|
huggingface/diffusers
| 11,272
|
what is the difference between from diffusion import *** and from diffusers import ***?
|
I have installed diffusers and it runs OK; however, the code fails with "No module named 'diffusion'"
when it reaches `from diffusion import ***`.
What is the difference between `from diffusion import ***` and `from diffusers import ***`?
Do I need to install them both, and what is the difference between diffusion and diffusers?
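For context, a minimal check showing that the Hugging Face package is only importable under the name `diffusers` (it does not provide a `diffusion` module):
```python
# Verify the installed package: the library is imported as `diffusers`, not `diffusion`.
import diffusers
from diffusers import StableDiffusionPipeline

print(diffusers.__version__)
```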
|
https://github.com/huggingface/diffusers/issues/11272
|
closed
|
[] | 2025-04-10T05:11:56Z
| 2025-04-30T02:11:51Z
| null |
micklexqg
|
huggingface/inference-benchmarker
| 11
|
How to set the OPENAI_API_KEY?
|
There is no api_key parameter for inference-benchmarker. How do I set the OPENAI_API_KEY?
Thanks~
The code is here:
https://github.com/huggingface/inference-benchmarker/blob/d91a0162bdfe318fe95b9a9bbb53b1bdc39194a9/src/requests.rs#L145C1-L153C36
```bash
root@P8757303A244:/opt/inference-benchmarker# inference-benchmarker -h
Usage: inference-benchmarker [OPTIONS] --tokenizer-name <TOKENIZER_NAME>
Options:
-t, --tokenizer-name <TOKENIZER_NAME>
The name of the tokenizer to use [env: TOKENIZER_NAME=]
--model-name <MODEL_NAME>
The name of the model to use. If not provided, the same name as the tokenizer will be used [env: MODEL_NAME=]
-m, --max-vus <MAX_VUS>
The maximum number of virtual users to use [env: MAX_VUS=] [default: 128]
-d, --duration <DURATION>
The duration of each benchmark step [env: DURATION=] [default: 120s]
-r, --rates <RATES>
A list of rates of requests to send per second (only valid for the ConstantArrivalRate benchmark) [env: RATES=]
--num-rates <NUM_RATES>
The number of rates to sweep through (only valid for the "sweep" benchmark) The rates will be linearly spaced up to the detected maximum rate [env: NUM_RATES=] [default: 10]
--profile <PROFILE>
A benchmark profile to use [env: PROFILE=]
-b, --benchmark-kind <BENCHMARK_KIND>
The kind of benchmark to run (throughput, sweep, optimum) [env: BENCHMARK_KIND=] [default: sweep]
-w, --warmup <WARMUP>
The duration of the prewarm step ran before the benchmark to warm up the backend (JIT, caches, etc.) [env: WARMUP=] [default: 30s]
-u, --url <URL>
The URL of the backend to benchmark. Must be compatible with OpenAI Message API [env: URL=] [default: http://localhost:8000]
-n, --no-console
Disable console UI [env: NO_CONSOLE=]
--prompt-options <PROMPT_OPTIONS>
Constraints for prompt length. No value means use the input prompt as defined in input dataset. We sample the number of tokens to generate from a normal distribution. Specified as a comma-separated list of key=value pairs. * num_tokens: target number of prompt tokens * min_tokens: minimum number of prompt tokens * max_tokens: maximum number of prompt tokens * variance: variance in the number of prompt tokens [env: PROMPT_OPTIONS=]
--decode-options <DECODE_OPTIONS>
Constraints for the generated text. We sample the number of tokens to generate from a normal distribution. Specified as a comma-separated list of key=value pairs. * num_tokens: target number of generated tokens * min_tokens: minimum number of generated tokens * max_tokens: maximum number of generated tokens * variance: variance in the number of generated tokens [env: DECODE_OPTIONS=]
--dataset <DATASET>
Hugging Face dataset to use for prompt generation [env: DATASET=] [default: hlarcher/inference-benchmarker]
--dataset-file <DATASET_FILE>
File to use in the Dataset [env: DATASET_FILE=] [default: share_gpt_filtered_small.json]
--extra-meta <EXTRA_META>
Extra metadata to include in the benchmark results file, comma-separated key-value pairs. It can be, for example, used to include information about the configuration of the benched server. Example: --extra-meta "key1=value1,key2=value2" [env: EXTRA_META=]
--run-id <RUN_ID>
[env: RUN_ID=]
-h, --help
Print help (see more with '--help')
-V, --version
Print version
```
|
https://github.com/huggingface/inference-benchmarker/issues/11
|
closed
|
[] | 2025-04-10T04:36:11Z
| 2025-04-25T13:13:18Z
| null |
handsome-chips
|
huggingface/transformers
| 37,408
|
How to solve the error of converting Qwen onnx_model to tensorRT_model?
|
### **1. The transformers' Qwen ONNX model has been exported successfully.**
### **2. Converting the ONNX model to a TensorRT model with trtexec failed.**
**error info**
```
[04/10/2025-11:04:52] [E] Error[3]: IExecutionContext::setInputShape: Error Code 3: API Usage Error (Parameter check failed, condition: engineDims.d[i] == dims.d[i]. Static dimension mismatch while setting input shape for key_cache.1. Set dimensions are [7,8,32,128]. Expected dimensions are [7,8,1,128].)
[04/10/2025-11:04:52] [E] The engine was built with static shapes for input tensor key_cache.1 but the provided shapes do not match the static shapes!
[04/10/2025-11:04:52] [E] Inference set up failed
```
### **Since Qwen in Transformers uses the DynamicCache class to handle the KV cache, the error should be attributed to DynamicCache.**
### **ONNX model check OK**
```
The model is well-formed and valid!
=======================Model1 inputs:
x_s [1, 'seq_len', 1024]
attn_mask [1, 'seq_len', 'seq_len']
key_cache.1 [7, 8, 'seq_len', 128]
value_cache.1 [7, 8, 'seq_len', 128]
=======================Model1 outputs:
y_pred [1, 'seq_len', 1024]
key_cache [7, 8, 'seq_len', 128]
value_cache [7, 8, 'seq_len', 128]
```
**Export forward**
```
def injected_forward(
self,
xs: torch.Tensor,
att_mask: torch.Tensor = torch.ones((0, 0, 0), dtype=torch.bool),
key_cache: torch.Tensor = torch.zeros((0, 0, 0, 0), dtype=torch.float32),
value_cache: torch.Tensor = torch.zeros((0, 0, 0, 0), dtype=torch.float32)
) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]:
att_mask = ~att_mask.unsqueeze(1) * torch.finfo(xs.dtype).min
past_key_values = DynamicCache(self.config.num_hidden_layers)
for i in torch.arange(self.config.num_hidden_layers):
past_key_values.key_cache[i] = key_cache[i].unsqueeze(0)
past_key_values.value_cache[i] = value_cache[i].unsqueeze(0)
past_seen_tokens = past_key_values.get_seq_length()
cache_position = torch.arange(past_seen_tokens, past_seen_tokens + xs.shape[1], device=xs.device)
position_ids = cache_position.unsqueeze(0)
hidden_states = xs
for decoder_layer in self.layers[: self.config.num_hidden_layers]:
layer_outputs = decoder_layer(
hidden_states,
attention_mask=att_mask,
position_ids=position_ids,
past_key_value=past_key_values,
output_attentions=False,
use_cache=True,
cache_position=cache_position,
)
hidden_states = layer_outputs[0]
xs = self.norm(hidden_states)
new_key_cache = torch.cat(past_key_values.key_cache, dim=0)
new_value_cache = torch.cat(past_key_values.value_cache, dim=0)
return xs, new_key_cache, new_value_cache
```
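For reference, a hedged sketch (not the exact commands used here) of building the engine through the TensorRT Python API with an optimization profile, so the `seq_len` axes of `key_cache.1`/`value_cache.1` stay dynamic instead of being frozen to static shapes; the min/opt/max shapes below are illustrative assumptions:
```python
import tensorrt as trt

def build_engine(onnx_path: str, engine_path: str) -> None:
    logger = trt.Logger(trt.Logger.WARNING)
    builder = trt.Builder(logger)
    # Explicit-batch network (the flag is a no-op default on newer TensorRT versions).
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
    )
    parser = trt.OnnxParser(network, logger)
    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            raise RuntimeError(parser.get_error(0))

    config = builder.create_builder_config()
    profile = builder.create_optimization_profile()
    # Shapes mirror the ONNX check output above; min/opt/max values are assumptions.
    profile.set_shape("x_s", (1, 1, 1024), (1, 32, 1024), (1, 512, 1024))
    profile.set_shape("attn_mask", (1, 1, 1), (1, 32, 32), (1, 512, 512))
    profile.set_shape("key_cache.1", (7, 8, 1, 128), (7, 8, 32, 128), (7, 8, 512, 128))
    profile.set_shape("value_cache.1", (7, 8, 1, 128), (7, 8, 32, 128), (7, 8, 512, 128))
    config.add_optimization_profile(profile)

    engine = builder.build_serialized_network(network, config)
    with open(engine_path, "wb") as f:
        f.write(engine)
```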
|
https://github.com/huggingface/transformers/issues/37408
|
closed
|
[] | 2025-04-10T04:08:47Z
| 2025-06-28T08:03:06Z
| null |
dearwind153
|
pytorch/pytorch
| 150,967
|
[MPS] `where`: silent incorrectness when cond is not contiguous
|
### 🐛 Describe the bug
```python
device = "mps"
diff = torch.tensor([[True, True], [True, True]], dtype=torch.bool)
diff = diff.T
target = torch.tensor([[0, 0], [0, 1]])
rcpu = torch.where(diff, target, 0)
diffmps = diff.to(device)
targetmps = target.to(device)
rmps = torch.where(diffmps, targetmps, 0)
print(rcpu)
print(rmps)
```
```
tensor([[0, 0],
[0, 1]])
tensor([[0, 0],
[0, 0]], device='mps:0')
```
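A possible workaround sketch until this is fixed, assuming the trigger really is the non-contiguous (transposed) condition tensor: make the condition contiguous before moving it to MPS.
```python
import torch

device = "mps"
diff = torch.tensor([[True, True], [True, True]], dtype=torch.bool).T
target = torch.tensor([[0, 0], [0, 1]])

# Assumed workaround: materialize the transposed view so the MPS kernel
# receives a contiguous condition tensor.
rmps = torch.where(diff.contiguous().to(device), target.to(device), 0)
print(rmps)
```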
### Versions
Nightly
```
PyTorch version: 2.8.0a0+git00c921c
Is debug build: True
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 13.7.1 (arm64)
GCC version: Could not collect
Clang version: 18.1.5
CMake version: version 4.0.0
Libc version: N/A
Python version: 3.9.13 | packaged by conda-forge | (main, May 27 2022, 17:00:33) [Clang 13.0.1 ] (64-bit runtime)
Python platform: macOS-13.7.1-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: False
CPU:
Apple M1 Max
```
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen
|
https://github.com/pytorch/pytorch/issues/150967
|
closed
|
[
"triaged",
"module: correctness (silent)",
"module: mps"
] | 2025-04-09T23:13:38Z
| 2025-04-13T20:44:52Z
| null |
qqaatw
|
pytorch/torchtitan
| 1,081
|
Torch.compile and TP during multiresolution Training
|
Is it correct to assume that we should only enable torch.compile in single-resolution training, or when we have the same sequence lengths, to avoid recompiles and slowdowns?
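For context, a minimal sketch of the usual knobs for varying sequence lengths — compiling with `dynamic=True` and/or marking the sequence dimension dynamic — offered as an assumption about how recompiles could be limited, not as what torchtitan itself does:
```python
import torch
import torch.nn as nn

model = nn.Linear(64, 64)
compiled = torch.compile(model, dynamic=True)  # allow symbolic shapes instead of one graph per shape

for seq_len in (128, 256, 512):
    x = torch.randn(2, seq_len, 64)
    torch._dynamo.mark_dynamic(x, 1)  # mark only the sequence dimension as dynamic
    _ = compiled(x)
```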
|
https://github.com/pytorch/torchtitan/issues/1081
|
open
|
[
"question",
"module: torch.compile"
] | 2025-04-09T18:08:41Z
| 2025-04-10T15:05:57Z
| null |
nighting0le01
|
huggingface/lerobot
| 964
|
RuntimeError: Could not load libtorchcodec during lerobot/scripts/train.py script
|
### System Info
```Shell
- `lerobot` version: 0.1.0
- Platform: Linux-6.8.0-57-generic-x86_64-with-glibc2.35
- Python version: 3.10.13
- Huggingface_hub version: 0.29.3
- Dataset version: 3.4.1
- Numpy version: 1.26.4
- PyTorch version (GPU?): 2.5.1+cu124 (True)
- Cuda version: 12040
Additionally:
ffmpeg version : 7.1.1
TorchCodec version : 0.2.1
```
### Information
- [ ] One of the scripts in the examples/ folder of LeRobot
- [ ] My own task or dataset (give details below)
### Reproduction
Install LeRobot from the main documentation as follows:
conda create -n lerobot python=3.10 -y
conda activate lerobot
git clone https://github.com/huggingface/lerobot.git ~/lerobot
pip install --no-binary=av -e
pip install torchvision==0.20.1
conda install -c conda-forge 'ffmpeg>=7.0' -y
After collecting a dataset, run `lerobot/scripts/train.py` script
### Expected behavior
Hello all!
I am getting started with the lerobot so100 arm and have had a few issues.
The first was the same as the issue in #883 when running the `control_robot.py` script, which I solved (or bypassed) by following [remi cadene's response](https://github.com/huggingface/lerobot/issues/679#issuecomment-2737292192) to do `pip install torchvision==0.20.1` and also `conda install -c conda-forge 'ffmpeg>=7.0' -y` after doing `pip install --no-binary=av -e `. This allowed me to successfully run the `control_robot.py` script. However, I then tried to collect a dataset and run a training with the `lerobot/scripts/train.py` script and encountered the following issue:
```
from torchcodec.decoders._core.video_decoder_ops import (
File "/home/moonshot/miniconda3/envs/lerobot/lib/python3.10/site-packages/torchcodec/decoders/_core/video_decoder_ops.py", line 59, in <module>
load_torchcodec_extension()
File "/home/moonshot/miniconda3/envs/lerobot/lib/python3.10/site-packages/torchcodec/decoders/_core/video_decoder_ops.py", line 44, in load_torchcodec_extension
raise RuntimeError(
RuntimeError: Could not load libtorchcodec. Likely causes:
1. FFmpeg is not properly installed in your environment. We support
versions 4, 5, 6 and 7.
2. The PyTorch version (2.5.1+cu124) is not compatible with
this version of TorchCodec. Refer to the version compatibility
table:
https://github.com/pytorch/torchcodec?tab=readme-ov-file#installing-torchcodec.
3. Another runtime dependency; see exceptions below.
The following exceptions were raised as we tried to load libtorchcodec:
[start of libtorchcodec loading traceback]
/home/moonshot/miniconda3/envs/lerobot/lib/python3.10/site-packages/torchcodec/libtorchcodec7.so: undefined symbol: _ZNK3c1011StorageImpl27throw_data_ptr_access_errorEv
libavutil.so.58: cannot open shared object file: No such file or directory
libavutil.so.57: cannot open shared object file: No such file or directory
/home/moonshot/miniconda3/envs/lerobot/lib/python3.10/site-packages/torchcodec/libtorchcodec4.so: undefined symbol: _ZNK3c1011StorageImpl27throw_data_ptr_access_errorEv
[end of libtorchcodec loading traceback].
```
It seems that I have some issues with the `torchcodec` and `ffmpeg` versions not being compatible. Checking their versions gives me:
```
ffmpeg version 7.1.1 Copyright (c) 2000-2025 the FFmpeg developers
built with gcc 13.3.0 (conda-forge gcc 13.3.0-2)
configuration: --prefix=/home/moonshot/miniconda3/envs/lerobot --cc=/home/conda/feedstock_root/build_artifacts/ffmpeg_1741820412024/_build_env/bin/x86_64-conda-linux-gnu-cc --cxx=/home/conda/feedstock_root/build_artifacts/ffmpeg_1741820412024/_build_env/bin/x86_64-conda-linux-gnu-c++ --nm=/home/conda/feedstock_root/build_artifacts/ffmpeg_1741820412024/_build_env/bin/x86_64-conda-linux-gnu-nm --ar=/home/conda/feedstock_root/build_artifacts/ffmpeg_1741820412024/_build_env/bin/x86_64-conda-linux-gnu-ar --disable-doc --enable-openssl --enable-demuxer=dash --enable-hardcoded-tables --enable-libfreetype --enable-libharfbuzz --enable-libfontconfig --enable-libopenh264 --enable-libdav1d --disable-gnutls --enable-libmp3lame --enable-libvpx --enable-libass --enable-pthreads --enable-alsa --enable-libpulse --enable-vaapi --enable-libopenvino --enable-gpl --enable-libx264 --enable-libx265 --enable-libaom --enable-libsvtav1 --enable-libxml2 --enable-pic --enable-shared --disable-static --enable-version3 --enable-zlib --enable-libvorbis --enable-libopus --enable-librsvg --enable-ffplay --pkg-config=/home/conda/feedstock_root/build_artifacts/ffmpeg_1741820412024/_build_env/bin/pkg-config
libavutil 59. 39.100 / 59. 39.100
libavcodec 61. 19.101 / 61. 19.101
libavformat 61. 7.100 / 61. 7.100
libavdevice 61. 3.100 / 61. 3.100
libavfilter 10. 4.100 / 10. 4.100
libswscale 8. 3.100 / 8. 3.100
libswresample 5. 3.100 / 5. 3.100
libpostproc 58. 3.100 / 58. 3.100
```
And `TorchCodec` version 0.2.1.
Could anyone suggest the right v
|
https://github.com/huggingface/lerobot/issues/964
|
closed
|
[
"question"
] | 2025-04-09T14:25:38Z
| 2025-04-15T13:32:24Z
| null |
shrutichakraborty
|
huggingface/transformers
| 37,390
|
how to reduce original model's tokenizer vocabulary
|
### Feature request
I am working on model distillation. I am currently using the nllb-distilled-600M model, but the parameters of this model are still too large, and the vocabulary supports more than 100 languages. My use case is single-language translation, such as English to Hebrew. Therefore, I need to remove the redundant vocabulary of the original model and only keep the English and Hebrew vocabulary. I noticed that transformers does not use the sentencepiece.bpe.model file, and I don't want to retrain a tokenizer, because a retrained tokenizer would be inconsistent with the original tokenizer's output, which would prevent the subsequent model weight migration and model distillation from being carried out. Therefore, my idea is to quickly replace the tokenizer.json and tokenizer_config.json files in the original model, and then migrate the model weights at the model level to get a pruned model. What I am doing now is to load the original model's tokenizer, tokenize the corpus I prepared, count the tokens that actually occur, derive a reduced vocabulary, and change the corresponding JSON files. Is there any better strategy to quickly replace the tokenizer vocabulary?

### Motivation
Quickly modify the model vocabulary for better application.
### Your contribution
```python
import json

import tqdm

# Assumes en_corpus, he_corpus and teacher_tokenizer are defined earlier in the script.
def modify_tokenizer():
    selected_ids = set()
    for sentences in tqdm.tqdm(range(100, len(en_corpus), 100)):
        enc = teacher_tokenizer(en_corpus[sentences-100:sentences],
                                add_special_tokens=False,
                                return_attention_mask=False,
                                return_token_type_ids=False)
        for ids in enc['input_ids']:
            selected_ids.update(ids)
    print('all english tokens nums is ', len(selected_ids))
    for sentences in tqdm.tqdm(range(100, len(he_corpus), 100)):
        enc = teacher_tokenizer(he_corpus[sentences-100:sentences],
                                add_special_tokens=False,
                                return_attention_mask=False,
                                return_token_type_ids=False)
        for ids in enc['input_ids']:
            selected_ids.update(ids)
    print('all english+Hebrew tokens nums is ', len(selected_ids))
    for tok in teacher_tokenizer.all_special_tokens:
        # print('special_token ', tok)
        selected_ids.add(teacher_tokenizer.convert_tokens_to_ids(tok))
    print('all english+Hebrew_special tokens nums is ', len(selected_ids))
    # Reverse-look-up the corresponding tokens from the original vocab
    orig_vocab = teacher_tokenizer.get_vocab()
    new_tokens = []
    for tok, idx in sorted(orig_vocab.items(), key=lambda kv: kv[1]):
        if idx in selected_ids:
            new_tokens.append(tok)
    # Write out the new vocab.json (Hugging Face format)
    new_vocab = {tok: i for i, tok in enumerate(new_tokens)}
    # Modify the original tokenizer and tokenizer_config
    teacher_tokenizer_path = '/workspace/nllb-200-distilled-600M/tokenizer.json'
    teacher_tokenizer_config_path = '/workspace/nllb-200-distilled-600M/tokenizer_config.json'
    student_tokenizer_path = '/workspace/distilled_model_test/tokenizer.json'
    student_tokenizer_config_path = '/workspace/distilled_model_test/tokenizer_config.json'

    def _read_json(path):
        with open(path, "r", encoding="utf-8") as f:
            data = json.load(f)
        return data

    def _write_json(path, data):
        with open(path, "w", encoding="utf-8") as f:
            json.dump(data, f, ensure_ascii=False, indent=2)

    # Change tokenizer
    student_tokenizer_data = _read_json(teacher_tokenizer_path)
    student_tokenizer_data['model']['vocab'] = new_vocab
    for single_added_token in student_tokenizer_data['added_tokens']:
        single_added_token['id'] = new_vocab[single_added_token['content']]
    new_merges = []
    # Change merges
    for merge_pair in student_tokenizer_data['model']['merges']:
        _temp_merge = merge_pair[0] + merge_pair[1]
        if _temp_merge in new_vocab.keys():
            new_merges.append(merge_pair)
    student_tokenizer_data['model']['merges'] = new_merges
    _write_json(student_tokenizer_path, student_tokenizer_data)
    # Change tokenizer_config
```
|
https://github.com/huggingface/transformers/issues/37390
|
open
|
[
"Feature request"
] | 2025-04-09T10:45:56Z
| 2025-04-09T10:53:07Z
| null |
masterwang22327
|
huggingface/datasets
| 7,506
|
HfHubHTTPError: 429 Client Error: Too Many Requests for URL when trying to access Fineweb-10BT on 4A100 GPUs using SLURM
|
### Describe the bug
I am trying to run some fine-tunings on 4 A100 GPUs under SLURM using the axolotl training framework, which in turn uses Hugging Face's Trainer and Accelerate, on [Fineweb-10BT](https://huggingface.co/datasets/HuggingFaceFW/fineweb), but I end up running into a 429 Client Error: Too Many Requests for URL error when I call next(dataloader_iter). Funnily enough, I can run a test fine-tuning (for just 200 training steps) on 1 A100 GPU under SLURM. Is there any rate limiter set for querying the dataset? I could run the fine-tuning with the same settings (4 A100 GPUs in SLURM) last month.
### Steps to reproduce the bug
You would need a server installed with SLURM
1. Create conda environment
1.1 conda create -n example_env -c conda-forge gxx=11 python=3.10
1.2 conda activate example_env
1.3 pip install torch==2.5.1 torchvision==0.20.1 torchaudio==2.5.1 --index-url https://download.pytorch.org/whl/cu124
1.4 conda install nvidia/label/cuda-12.4.0::cuda-toolkit
1.5 Download flash_attn-2.7.4.post1+cu12torch2.5cxx11abiFALSE-cp310-cp310-linux_x86_64.whl
1.6 pip3 install packaging
1.7 pip3 install ninja
1.8 pip3 install mlflow
1.9 Clone https://github.com/calvintanama/axolotl.git
1.10 `cd` to `axolotl`
1.11 pip3 install -e '.[deepspeed]'
2. Run the training
2.1. Create a folder called `config_run` in axolotl directory
2.2. Copy `config/phi3_pruned_extra_pretrain_22_29_bottleneck_residual_8_a100_4.yaml` to `config_run`
2.3. Change yaml file in the `config_run` accordingly
2.4. Change directory and conda environment name in `jobs/train_phi3_22_29_bottleneck_residual_8_a100_4_temp.sh`
2.5. `jobs/train_phi3_22_29_bottleneck_residual_8_a100_4_temp.sh`
### Expected behavior
This should not cause any error, but I got:
```
File "/home/hk-project-test-p0023745/cd7437/miniconda3/envs/llmpruning_train_temp/lib/python3.10/site-packages/accelerate/data_loader.py", line 552, in __iter__
[rank3]: current_batch = next(dataloader_iter)
[rank3]: File "/home/hk-project-test-p0023745/cd7437/miniconda3/envs/llmpruning_train_temp/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 701, in __next__
[rank3]: data = self._next_data()
[rank3]: File "/home/hk-project-test-p0023745/cd7437/miniconda3/envs/llmpruning_train_temp/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 757, in _next_data
[rank3]: data = self._dataset_fetcher.fetch(index) # may raise StopIteration
[rank3]: File "/home/hk-project-test-p0023745/cd7437/miniconda3/envs/llmpruning_train_temp/lib/python3.10/site-packages/torch/utils/data/_utils/fetch.py", line 33, in fetch
[rank3]: data.append(next(self.dataset_iter))
[rank3]: File "/home/hk-project-test-p0023745/cd7437/miniconda3/envs/llmpruning_train_temp/lib/python3.10/site-packages/accelerate/data_loader.py", line 338, in __iter__
[rank3]: for element in self.dataset:
[rank3]: File "/home/hk-project-test-p0023745/cd7437/miniconda3/envs/llmpruning_train_temp/lib/python3.10/site-packages/datasets/iterable_dataset.py", line 2266, in __iter__
[rank3]: for key, example in ex_iterable:
[rank3]: File "/home/hk-project-test-p0023745/cd7437/miniconda3/envs/llmpruning_train_temp/lib/python3.10/site-packages/datasets/iterable_dataset.py", line 1866, in __iter__
[rank3]: for key, example in self.ex_iterable:
[rank3]: File "/home/hk-project-test-p0023745/cd7437/miniconda3/envs/llmpruning_train_temp/lib/python3.10/site-packages/datasets/iterable_dataset.py", line 1084, in __iter__
[rank3]: yield from self._iter()
[rank3]: File "/home/hk-project-test-p0023745/cd7437/miniconda3/envs/llmpruning_train_temp/lib/python3.10/site-packages/datasets/iterable_dataset.py", line 1263, in _iter
[rank3]: for key, transformed_example in outputs:
[rank3]: File "/home/hk-project-test-p0023745/cd7437/miniconda3/envs/llmpruning_train_temp/lib/python3.10/site-packages/datasets/iterable_dataset.py", line 1258, in <genexpr>
[rank3]: outputs = (
[rank3]: File "/home/hk-project-test-p0023745/cd7437/miniconda3/envs/llmpruning_train_temp/lib/python3.10/site-packages/datasets/iterable_dataset.py", line 1244, in iter_outputs
[rank3]: for i, key_example in inputs_iterator:
[rank3]: File "/home/hk-project-test-p0023745/cd7437/miniconda3/envs/llmpruning_train_temp/lib/python3.10/site-packages/datasets/iterable_dataset.py", line 1106, in iter_batched_inputs
[rank3]: for key, example in iterator:
[rank3]: File "/home/hk-project-test-p0023745/cd7437/miniconda3/envs/llmpruning_train_temp/lib/python3.10/site-packages/datasets/iterable_dataset.py", line 1866, in __iter__
[rank3]: for key, example in self.ex_iterable:
[rank3]: File "/home/hk-project-test-p0023745/cd7437/miniconda3/envs/llmpruning_train_temp/lib/python3.10/site-packages/datasets/iterable_dataset.py", line 1535, in __iter__
[rank3]: for x in self.ex_iterable:
[rank3]: File "/home/hk-project-test-p0023745/cd7437/miniconda3/envs/llmpruning_train_temp/lib/python3.10/site-packages/datase
```
|
https://github.com/huggingface/datasets/issues/7506
|
open
|
[] | 2025-04-09T06:32:04Z
| 2025-06-29T06:04:59Z
| 2
|
calvintanama
|
huggingface/lerobot
| 960
|
pi0-finetune-performance
|
I have been fine-tuning the provided pi0-base model on my dataset using LeRobot. After training for 100,000 steps, I found that the model performs well on tasks that appeared in my dataset, but its performance on unseen tasks is very poor. It seems to lack the generalization ability of a VLA model. Is this phenomenon normal? Are there any strategies to improve this situation?
|
https://github.com/huggingface/lerobot/issues/960
|
closed
|
[
"question",
"policies"
] | 2025-04-09T01:21:12Z
| 2025-10-08T08:43:22Z
| null |
yanghb1
|
pytorch/pytorch
| 150,891
|
[ONNX] How to export Llama4
|
### 🐛 Describe the bug
I am trying to do an onnx export for the Llama 4 Scout model but it fails saying:
`RuntimeError: Only tuples, lists and Variables are supported as JIT inputs/outputs. Dictionaries and strings are also accepted, but their usage is not recommended. Here, received an input of unsupported type: DynamicCache`
The error traceback:
```
Traceback (most recent call last):
File "/proj/work/sdey/examples/llama4/llama4_scout.py", line 80, in <module>
torch.onnx.export(
File "/proj/work/sdey/venv/lib/python3.10/site-packages/torch/onnx/__init__.py", line 375, in export
export(
File "/proj/work/sdey/venv/lib/python3.10/site-packages/torch/onnx/utils.py", line 502, in export
_export(
File "/proj/work/sdey/venv/lib/python3.10/site-packages/torch/onnx/utils.py", line 1564, in _export
graph, params_dict, torch_out = _model_to_graph(
File "/proj/work/sdey/venv/lib/python3.10/site-packages/torch/onnx/utils.py", line 1113, in _model_to_graph
graph, params, torch_out, module = _create_jit_graph(model, args)
File "/proj/work/sdey/venv/lib/python3.10/site-packages/torch/onnx/utils.py", line 997, in _create_jit_graph
graph, torch_out = _trace_and_get_graph_from_model(model, args)
File "/proj/work/sdey//venv/lib/python3.10/site-packages/torch/onnx/utils.py", line 904, in _trace_and_get_graph_from_model
trace_graph, torch_out, inputs_states = torch.jit._get_trace_graph(
File "/proj/work/sdey/venv/lib/python3.10/site-packages/torch/jit/_trace.py", line 1500, in _get_trace_graph
outs = ONNXTracedModule(
File "/proj/work/sdey/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/proj/work/sdey/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
File "/proj/work/sdey/venv/lib/python3.10/site-packages/torch/jit/_trace.py", line 139, in forward
graph, out = torch._C._create_graph_by_tracing(
File "/proj/work/sdey/venv/lib/python3.10/site-packages/torch/jit/_trace.py", line 133, in wrapper
out_vars, _ = _flatten(outs)
RuntimeError: Only tuples, lists and Variables are supported as JIT inputs/outputs. Dictionaries and strings are also accepted, but their usage is not recommended. Here, received an input of unsupported type: DynamicCache
```
This occurs for higher versions of `transformers` (> 4.44.2).
Code to reproduce:
```
import torch
from transformers import AutoProcessor,AutoModelForImageTextToText, pipeline
processor = AutoProcessor.from_pretrained("meta-llama/Llama-4-Scout-17B-16E")
model = AutoModelForImageTextToText.from_pretrained("meta-llama/Llama-4-Scout-17B-16E",torch_dtype=torch.bfloat16)
url1 = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/0052a70beed5bf71b92610a43a52df6d286cd5f3/diffusers/rabbit.jpg"
url2 = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/datasets/cat_style_layout.png"
messages = [
{
"role": "user",
"content": [
{"type": "image", "url": url1},
{"type": "image", "url": url2},
{"type": "text", "text": "Can you describe how these two images are similar, and how they differ?"},
]
},
]
inputs = processor.apply_chat_template(
messages,
add_generation_prompt=True,
tokenize=True,
return_dict=True,
return_tensors="pt",
)
torch.onnx.export(
model,
(inputs["input_ids"], inputs["pixel_values"], inputs["attention_mask"]),
"llama4_scout.onnx",
do_constant_folding=False,
training= torch.onnx.TrainingMode.EVAL,
export_params=False)
```
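One direction that may sidestep the JIT tracer's `DynamicCache` limitation is the dynamo-based exporter; this is a hedged sketch (untested for Llama 4, and it assumes a PyTorch version that ships `torch.onnx.dynamo_export`):
```python
import torch

def export_with_dynamo(model, inputs, path="llama4_scout.onnx"):
    # Hedged sketch: the dynamo-based exporter does not go through torch.jit tracing,
    # so it may handle cache objects that the legacy exporter rejects.
    model.eval()
    onnx_program = torch.onnx.dynamo_export(
        model,
        input_ids=inputs["input_ids"],
        pixel_values=inputs["pixel_values"],
        attention_mask=inputs["attention_mask"],
    )
    onnx_program.save(path)
```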
### Versions
PyTorch version: 2.5.1
Is debug build: False
CUDA used to build PyTorch: Could not collect
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (aarch64)
GCC version: (GCC) 13.3.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.29.6
Libc version: glibc-2.35
Python version: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.5.0-1019-nvidia-64k-aarch64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: 11.5.119
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: GPU 0: NVIDIA GH200 480GB
Nvidia driver version: 550.90.07
cuDNN version: Probably one of the following:
/usr/lib/aarch64-linux-gnu/libcudnn.so.9.3.0
/usr/lib/aarch64-linux-gnu/libcudnn_adv.so.9.3.0
/usr/lib/aarch64-linux-gnu/libcudnn_cnn.so.9.3.0
/usr/lib/aarch64-linux-gnu/libcudnn_engines_precompiled.so.9.3.0
/usr/lib/aarch64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.3.0
/usr/lib/aarch64-linux-gnu/libcudnn_graph.so.9.3.0
/usr/lib/aarch64-linux-gnu/libcudnn_heuristic.so.9.3.0
/usr/lib/aarch64-linux-gnu/libcudnn_ops.so.9.3.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: aarch64
CPU op-mode(s): 64-bit
|
https://github.com/pytorch/pytorch/issues/150891
|
closed
|
[
"module: onnx",
"triaged"
] | 2025-04-09T00:11:49Z
| 2025-11-13T10:13:08Z
| null |
srijanie03
|
huggingface/lerobot
| 956
|
pi0 multi-gpu train
|
If I have multiple 4090s, how should I modify the training to train pi0?
With only one 4090 it just errors out.

|
https://github.com/huggingface/lerobot/issues/956
|
closed
|
[
"question"
] | 2025-04-08T13:06:27Z
| 2025-11-20T03:07:56Z
| null |
ximiluuuu
|
huggingface/transformers
| 37,364
|
How to find a specific func doc when using transformers doc?
|
### Feature request
Better UX for doc
### Motivation
The search and UI layout make it so hard to find a func doc, especially when there are so many func doc in one webpage and your just can not find what you want by web page search.
### Your contribution
no, right now
|
https://github.com/huggingface/transformers/issues/37364
|
open
|
[
"Feature request"
] | 2025-04-08T10:48:04Z
| 2025-09-15T19:16:35Z
| null |
habaohaba
|
huggingface/open-r1
| 586
|
what is next for this project?
|
https://github.com/huggingface/open-r1/issues/586
|
open
|
[] | 2025-04-07T21:29:54Z
| 2025-04-07T21:29:54Z
| null |
Mnaik2
|
|
pytorch/xla
| 8,948
|
Torch-XLA not compatible with static python
|
## ❓ Questions and Help
I am trying to use Torch-XLA v2.3.0 but it fails with:
```
line 7, in <module>
import _XLAC
ImportError: libpython3.10.so.1.0: cannot open shared object file: No such file or directory
```
I noticed this message [here](https://github.com/pytorch/xla/blob/9e23ca853331aa229dcdba2473d20ca5af2d620d/docs/source/contribute/bazel.md?plain=1#L71):
```
Bazel brings in [pybind11](https://github.com/pybind/pybind11) embeded
python and links against it to provide `libpython` to the plugin using
this mechanism. Python headers are also sourced from there instead of
depending on the system version. These are satisfied from the
`"@pybind11//:pybind11_embed"`, which sets up compiler options for
linking with `libpython` transitively.
```
which suggests XLA is pulling in a two year old version of pybind11_bazel, which gets its python binary/library/headers/paths by inspecting the copy of the interpreter installed on the operating system. During this probing pybind11_bazel explicitly asks the python interpreter to give it the linker flags it would need to embed the interpreter in its code, leading to that dependency. This renders it unusable with static python.
Is there a way to make this work/could you provide a different build of Torch-XLA which is compatible with static python?
|
https://github.com/pytorch/xla/issues/8948
|
open
|
[
"question"
] | 2025-04-07T18:25:43Z
| 2025-04-23T14:32:47Z
| null |
drewjenks01
|
huggingface/lerobot
| 949
|
Optional deps in using LeRobot as an optional package
|
Hi, we are working on enabling LeRobot dataset generation in [IsaacLab](https://github.com/isaac-sim/IsaacLab), such that developers could create data with IsaacLab data generation workflow and use it in their robot learning models.
The asks are,
1. Is there any scheduled release, such that downstream devs could have stable codebase to integrate LeRobot into their applications?
2. Can we make some dependencies optional with respect to the core code if training/eval is not expected? For example, we only need the LeRobot dataset-related functions; the Gymnasium dependency is only needed if you want to use the environment in eval mode during training or deployment.
I hope those could expand the user base further for LeRobot dataset generation and for training/eval with broader model families.
|
https://github.com/huggingface/lerobot/issues/949
|
closed
|
[
"question",
"dataset",
"simulation",
"stale"
] | 2025-04-07T16:55:48Z
| 2025-10-21T02:29:27Z
| null |
xyao-nv
|
huggingface/datasets
| 7,502
|
`load_dataset` of size 40GB creates a cache of >720GB
|
Hi there,
I am trying to load a dataset from the Hugging Face Hub and split it into train and validation splits. Somehow, when I try to do it with `load_dataset`, it exhausts my disk quota. So, I tried manually downloading the parquet files from the hub and loading them as follows:
```python
ds = DatasetDict(
{
"train": load_dataset(
"parquet",
data_dir=f"{local_dir}/{tok}",
cache_dir=cache_dir,
num_proc=min(12, os.cpu_count()), # type: ignore
split=ReadInstruction("train", from_=0, to=NUM_TRAIN, unit="abs"), # type: ignore
),
"validation": load_dataset(
"parquet",
data_dir=f"{local_dir}/{tok}",
cache_dir=cache_dir,
num_proc=min(12, os.cpu_count()), # type: ignore
split=ReadInstruction("train", from_=NUM_TRAIN, unit="abs"), # type: ignore
)
}
)
```
which still strangely creates 720GB of cache. In addition, if I remove the raw parquet file folder (`f"{local_dir}/{tok}"` in this example), I am not able to load anything. So, I am left wondering what this cache is doing. Am I missing something? Is there a solution to this problem?
Thanks a lot in advance for your help!
A related issue: https://github.com/huggingface/transformers/issues/10204#issue-809007443.
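In case it helps frame the question, a hedged sketch of the streaming route, which as far as I understand avoids materializing the Arrow cache entirely (the paths and `NUM_TRAIN` below are placeholders mirroring the snippet above):
```python
from datasets import load_dataset

local_dir, tok, NUM_TRAIN = "/path/to/parquet", "tokenizer_name", 1_000_000  # placeholders

# Streaming returns an IterableDataset backed directly by the parquet files,
# so no on-disk Arrow cache is written.
streamed = load_dataset("parquet", data_dir=f"{local_dir}/{tok}", streaming=True)["train"]
train_stream = streamed.take(NUM_TRAIN)
val_stream = streamed.skip(NUM_TRAIN)
```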
---
Python: 3.11.11
datasets: 3.5.0
|
https://github.com/huggingface/datasets/issues/7502
|
closed
|
[] | 2025-04-07T16:52:34Z
| 2025-04-15T15:22:12Z
| 2
|
pietrolesci
|
huggingface/trl
| 3,254
|
How to get completion_length?
|
I noticed that during GRPO training, `completion_length` is recorded. However, I found that it’s not simply obtained by `len(completion)`. How is this calculated—by tokens? Is it possible for me to access the `completion_length` for each sample?
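For illustration only (this is not TRL's actual implementation), a sketch of counting completion length in tokens with the tokenizer; the model name is a placeholder:
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct")  # placeholder model

completions = ["The answer is 42.", "Let me think step by step..."]
completion_lengths = [
    len(tokenizer(c, add_special_tokens=False)["input_ids"]) for c in completions
]
print(completion_lengths)  # token counts per sample, not len(completion) in characters
```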
|
https://github.com/huggingface/trl/issues/3254
|
open
|
[
"❓ question",
"🏋 GRPO"
] | 2025-04-07T15:02:04Z
| 2025-04-11T03:10:20Z
| null |
Tuziking
|
huggingface/diffusers
| 11,220
|
Unconditional image generation documentation page not working as expected
|
### Describe the bug
When consulting the documentation for [unconditional image generation](https://huggingface.co/docs/diffusers/using-diffusers/unconditional_image_generation), the last embedded page seems to contain an error that blocks it from being shown (see image below). This is @stevhliu's model stored in [this](https://huggingface.co/spaces/stevhliu/unconditional-image-generation) huggingface space. This space is also down in HuggingFace.
<img width="1511" alt="Image" src="https://github.com/user-attachments/assets/4b33be09-97b1-4f76-bd23-27c905616ee8" />
### Reproduction
- Go to https://huggingface.co/docs/diffusers/using-diffusers/unconditional_image_generation or https://huggingface.co/spaces/stevhliu/unconditional-image-generation, you will see that the unconditional image generation part is not loading
### Logs
```shell
```
### System Info
Not relevant as it is documentation, not system related
### Who can help?
@stevhliu
|
https://github.com/huggingface/diffusers/issues/11220
|
closed
|
[
"bug"
] | 2025-04-07T10:32:45Z
| 2025-04-08T08:47:18Z
| 2
|
alvaro-mazcu
|
huggingface/transformers.js
| 1,275
|
How to use @xenova/transformers in a musl-based environment?
|
### Question
Hi,
I encountered the following error when using @xenova/transformers:
```bash
Error: Error loading shared library ld-linux-x86-64.so.2: No such file or directory (needed by /app/node_modules/onnxruntime-node/bin/napi-v3/linux/x64//libonnxruntime.so.1.14.0)
```
After investigating the issue, I found that it was caused by using the Node Alpine Docker image.
(https://github.com/huggingface/transformers.js/issues/555)
(https://github.com/huggingface/transformers.js/issues/376)
Since Alpine Linux uses musl as its standard C library, and @xenova/transformers depends on onnxruntime-node (which is built against glibc), this incompatibility appears to be the root cause.
I confirmed this by switching to the node:slim image (which uses glibc), and the error was resolved.
However, I would really like to use @xenova/transformers in a musl-based environment (e.g., Alpine).
Is there currently any way to run it on Alpine using musl?
If not, are there any plans to support musl or an alternative backend (e.g., onnxruntime-web with WASM) in Node.js?
Thanks in advance!
|
https://github.com/huggingface/transformers.js/issues/1275
|
closed
|
[
"question"
] | 2025-04-07T06:34:51Z
| 2025-10-07T21:23:36Z
| null |
ezcolin2
|
huggingface/open-r1
| 583
|
num_iterations in GRPOConfig does NOT DO what it is supposed to DO
|
Hi @qgallouedec and @lewtun
Thanks again for the amazing work ! I got the chance to try the v0.16.0 trl release in open-r1.
I was excited about num_iterations, which was supposed to make the training 6 times faster. One simply needs something like:
`training_args = GRPOConfig(..., num_iterations=4)`
But I did not see this happening. Using this simple recipe, it takes 58 steps and about 3 hours and 30 minutes to train the model on 8 A100 GPUs with `num_iterations=1`. But increasing it to `num_iterations=4` linearly increases the number of steps to 232 and increases the training time to 4 hours and 20 minutes under the same exact setup.
Am I missing something here? Are we not supposed to re-use the generated data across multiple steps? Then why has the training time increased?
|
https://github.com/huggingface/open-r1/issues/583
|
closed
|
[] | 2025-04-06T15:57:43Z
| 2025-04-12T06:00:21Z
| null |
ahatamiz
|
pytorch/pytorch
| 150,741
|
how to install pytorch with cuda 12.2 and py3.12
|
### 🐛 Describe the bug
I want to know how to install PyTorch with CUDA 12.2.
### Versions
I used the following command, and many issues occurred:
conda install pytorch torchvision torchaudio pytorch-cuda=12.1 -c pytorch -c nvidia
|
https://github.com/pytorch/pytorch/issues/150741
|
closed
|
[] | 2025-04-06T14:33:59Z
| 2025-04-07T14:34:48Z
| null |
goactiongo
|
huggingface/agents-course
| 412
|
[QUESTION] - Dummy Agent Library
|
_---
Do you see the issue?
The answer was hallucinated by the model. We need to stop to actually execute the function! Let’s now stop on “Observation” so that we don’t hallucinate the actual function response.
---_
Can someone explain how the system is hallucinating in this example? I am kind of stuck on this.
|
https://github.com/huggingface/agents-course/issues/412
|
open
|
[
"question"
] | 2025-04-06T09:44:14Z
| 2025-04-06T09:44:14Z
| null |
NewTonDBA
|
huggingface/lerobot
| 940
|
Possible mismatch in observations.state metadata in Libero datasets on Hugging Face
|
Hello,
I believe there might be a mistake in the Libero datasets hosted on huggingface/datasets.
Specifically, the issue is with the `observations.state` column. According to `meta/info.json`, the structure is described as:
```
"observation.state": {
"dtype": "float32",
"shape": [
8
],
"names": {
"motors": [
"x",
"y",
"z",
"rx",
"ry",
"rz",
"rw",
"gripper"
]
}
}
```
However, when I check the values in the `observations.state` column, the last two values appear to be negatives of each other. It seems those two values are `robot0_gripper_qpos` from the environment observations. When I compare them with the observations from the environment, the first three values in the column are `robot0_eef_pos`, and the second three seem to be `robot0_eef_quat` (rx, ry, rz, rw) converted to an axis-angle representation.
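For reference, a minimal sketch of the comparison described above, assuming SciPy is available and that the quaternion order is (x, y, z, w) as the names in meta/info.json suggest; the numbers are made-up examples:
```python
import numpy as np
from scipy.spatial.transform import Rotation as R

eef_quat = np.array([0.0, 0.0, 0.7071, 0.7071])   # example robot0_eef_quat as (rx, ry, rz, rw)
rotvec = R.from_quat(eef_quat).as_rotvec()         # axis-angle, what entries 3:6 appear to hold

gripper_qpos = np.array([0.02, -0.02])             # example robot0_gripper_qpos: negatives of each other
print(rotvec, gripper_qpos)
```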
Could you please clarify or confirm whether this is an intended design or a labeling error?
Thanks for your work on LeRobot datasets!
|
https://github.com/huggingface/lerobot/issues/940
|
closed
|
[
"question",
"dataset",
"stale"
] | 2025-04-06T04:18:55Z
| 2025-10-19T02:32:09Z
| null |
ozgraslan
|
pytorch/data
| 1,471
|
torchdata or torchdata-contrib?
|
My team has been implementing quite a few utilities. Some are close to core features, while others are more advanced add-ons. For example, their class names and features look like:
```python
class RoundRobinNode(BaseNode[T]):
"""A node that cycles through multiple datasets in a round-robin way.
```
```python
class FileListNode(BaseNode[Dict]):
"""Node that lists files from any supported filesystem (local, S3) matching specified patterns.
Uses fsspec to provide universal file access capabilities for both local and remote files.
Features:
- Lists files from supported filesystems (local, S3)
- Supports glob patterns for file matching
- Maintains state for checkpointing and resumption
```
```python
class FileReaderNode(BaseNode[Dict]):
"""Universal node that reads file contents from any supported filesystem.
Uses smart_open to support local files, S3, HTTP, and more file systems.
```
```python
class TextStreamDecodeNode(BaseNode[Dict]):
"""Node that streams text files line by line from any source.
This node combines functionality of file reading and line-by-line processing,
supporting both local and remote (S3, HTTP, etc.) files via smart_open.
Features:
- Streams files line-by-line (memory efficient)
- Supports local files, S3, HTTP, and more
- Handles compressed files (.gz, .bz2) transparently
- Maintains state for checkpointing and resumption
- Preserves metadata from source nodes
```
```python
class HuggingFaceDatasetStreamNode(BaseNode[dict]):
"""
Node that streams examples from a HuggingFace dataset.
Output format:
{
"data": {...}, # Original dataset item
"metadata": {
"dataset_name": "squad",
"split": "train",
"index": 42
}
}
Input: None (configured with dataset name and split at initialization)
Output: Dict containing example data and metadata
```
```python
class JsonlStreamNode(TextStreamDecodeNode):
"""Node that streams JSONL files and parses each line as JSON.
This node extends TextStreamDecodeNode to add JSON parsing for each line.
It maintains the same state management and streaming capabilities while adding
JSONL-specific processing.
```
and some more.
Conservatively, I'd say these could be part of, say, `torchdata-contrib`, but I'd like to hear from the maintainers. Where would you suggest drawing the line? Any other suggestions would be great, too.
|
https://github.com/meta-pytorch/data/issues/1471
|
open
|
[] | 2025-04-06T01:11:33Z
| 2025-05-12T21:38:37Z
| 2
|
keunwoochoi
|
pytorch/torchtitan
| 1,058
|
Issue of using fully_shard (FSDP2) for Huggingface model: Cannot copy out of meta tensor; no data!
|
Dear community,
Thanks for introducing FSDP2 to PyTorch. I am running into an issue using fully_shard with a Hugging Face model. I just want to know if you have any insights into it.
The code is inherited from [#743 ](https://github.com/pytorch/torchtitan/issues/743)
```
import os
import torch
from torch.distributed import init_process_group, destroy_process_group
from torch.distributed._composable.fsdp import fully_shard
from transformers import AutoConfig, AutoModelForCausalLM
from transformers.models.gpt_neox.modeling_gpt_neox import GPTNeoXLayer
from accelerate import init_empty_weights, load_checkpoint_and_dispatch
def get_num_params(model: torch.nn.Module, exclude_embedding: bool = False) -> int:
num_params = sum(p.numel() for p in model.parameters())
if exclude_embedding:
num_params -= model.tok_embeddings.weight.numel()
return num_params
def setup(local_rank, world_size):
device = torch.device(f"cuda:{local_rank}")
torch.cuda.set_device(device)
init_process_group("nccl", rank=local_rank, world_size=world_size)
def load():
local_rank = int(os.environ["LOCAL_RANK"])
world_size = int(os.environ["WORLD_SIZE"])
setup(local_rank, world_size)
model_name = "EleutherAI/pythia-2.8b"
config = AutoConfig.from_pretrained(model_name)
with init_empty_weights():
model = AutoModelForCausalLM.from_config(config)
for module in model.modules():
if isinstance(module, GPTNeoXLayer):
fully_shard(module)
model = fully_shard(model, reshard_after_forward=True)
model.to_empty(device='cuda')
if __name__ == "__main__":
load()
```
The error is below:
```
[rank0]: Traceback (most recent call last):
[rank0]: File "/workspace/NCCL/report_issue.py", line 41, in <module>
[rank0]: load()
[rank0]: File "/workspace/NCCL/report_issue.py", line 34, in load
[rank0]: fully_shard(module)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/distributed/_composable/contract.py", line 107, in wrapper
[rank0]: updated = func(module, *args, **kwargs)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/distributed/_composable/fsdp/fully_shard.py", line 114, in fully_shard
[rank0]: _move_states_to_device(params, buffers, device, mesh_info)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/distributed/_composable/fsdp/_fsdp_init.py", line 143, in _move_states_to_device
[rank0]: tensor.data = tensor.to(device)
[rank0]: NotImplementedError: Cannot copy out of meta tensor; no data!
```
Python command:
`torchrun --nnodes=1 --nproc_per_node=8 reproduce.py`
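For reference, a hedged workaround sketch, under the assumption that per-rank memory allows materializing the module before sharding; it simply gives `fully_shard` real (uninitialized) storage instead of meta tensors, and real weights would still need to be loaded afterwards:
```python
import torch
from torch.distributed._composable.fsdp import fully_shard
from transformers import AutoConfig, AutoModelForCausalLM
from transformers.models.gpt_neox.modeling_gpt_neox import GPTNeoXLayer
from accelerate import init_empty_weights

def build_sharded_model(model_name: str = "EleutherAI/pythia-2.8b"):
    # Assumes init_process_group / set_device were already called, as in the script above.
    config = AutoConfig.from_pretrained(model_name)
    with init_empty_weights():
        model = AutoModelForCausalLM.from_config(config)
    # Assumed workaround: leave the meta device *before* fully_shard, so
    # _move_states_to_device never has to copy out of meta tensors.
    model.to_empty(device="cuda")
    for module in model.modules():
        if isinstance(module, GPTNeoXLayer):
            fully_shard(module)
    fully_shard(model, reshard_after_forward=True)
    return model  # parameters are uninitialized; load a checkpoint next
```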
|
https://github.com/pytorch/torchtitan/issues/1058
|
closed
|
[
"question",
"module: checkpoint",
"module: distributed_state_dict"
] | 2025-04-05T01:48:49Z
| 2025-04-15T23:08:01Z
| null |
mingdianliu
|
pytorch/xla
| 8,940
|
User built torch-xla wheel fails on import
|
## ❓ Questions and Help
After following the build instructions in CONTRIBUTING.md, and then running `python setup.py bdist_wheel` inside of `pytorch/xla`, a wheel is generated for `torch-xla`
After installing that wheel in the environment of a different project this error appears upon import:
```
Traceback (most recent call last):
...
File "my_project/venv/lib/python3.10/site-packages/torch_xla/__init__.py", line 20, in <module>
import _XLAC
ImportError: my_project/venv/lib/python3.10/site-packages/_XLAC.cpython-310-x86_64-linux-gnu.so: undefined symbol: _ZN5torch4lazy13MetricFnValueB5cxx11Ed
```
All help is greatly appreciated.
|
https://github.com/pytorch/xla/issues/8940
|
closed
|
[
"question",
"build"
] | 2025-04-04T20:14:21Z
| 2025-04-09T14:38:59Z
| null |
LPanosTT
|
huggingface/diffusers
| 11,208
|
MultiControlNetModel is not supported for SD3ControlNetInpaintingPipeline
|
### Describe the bug
When using `StableDiffusion3ControlNetInpaintingPipeline` with `SD3MultiControlNetModel`, I receive an error:
`NotImplementedError: MultiControlNetModel is not supported for SD3ControlNetInpaintingPipeline.`
### Reproduction
Example reproduction code:
```python
import os
import torch
from diffusers.utils import load_image
from diffusers.pipelines import StableDiffusion3ControlNetInpaintingPipeline
from diffusers.models import SD3ControlNetModel, SD3MultiControlNetModel
from diffusers import BitsAndBytesConfig, SD3Transformer2DModel
from transformers import T5EncoderModel
# Load images
image = load_image(
"https://huggingface.co/alimama-creative/SD3-Controlnet-Inpainting/resolve/main/images/dog.png"
)
mask = load_image(
"https://huggingface.co/alimama-creative/SD3-Controlnet-Inpainting/resolve/main/images/dog_mask.png"
)
# Initialize ControlNet models
controlnetA = SD3ControlNetModel.from_pretrained("InstantX/SD3-Controlnet-Pose")
controlnetB = SD3ControlNetModel.from_pretrained("alimama-creative/SD3-Controlnet-Inpainting", use_safetensors=True, extra_conditioning_channels=1)
controlnet = SD3MultiControlNetModel([controlnetA, controlnetB])
# Load transformer and text encoder
nf4_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type="nf4", bnb_4bit_compute_dtype=torch.bfloat16)
model_id = "stabilityai/stable-diffusion-3.5-large-turbo"
model_nf4 = SD3Transformer2DModel.from_pretrained(model_id, subfolder="transformer", quantization_config=nf4_config, torch_dtype=torch.bfloat16)
t5_nf4 = T5EncoderModel.from_pretrained("diffusers/t5-nf4", torch_dtype=torch.bfloat16)
# Initialize pipeline
pipe = StableDiffusion3ControlNetInpaintingPipeline.from_pretrained(
"stabilityai/stable-diffusion-3.5-large-turbo",
token=os.getenv("HF_TOKEN"),
controlnet=controlnet,
transformer=model_nf4,
text_encoder_3=t5_nf4,
torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()
# This fails with NotImplementedError
result_image = pipe(
prompt="a cute dog with a hat",
negative_prompt="low quality, bad anatomy",
control_image=[image, image],
num_inference_steps=30,
guidance_scale=7.5,
controlnet_conditioning_scale=[1.0, 1.0],
output_type="pil",
).images[0]
```
### Logs
```shell
Error
NotImplementedError: MultiControlNetModel is not supported for SD3ControlNetInpaintingPipeline.
Error occurs in `diffusers/pipelines/controlnet_sd3/pipeline_stable_diffusion_3_controlnet_inpainting.py` at line 1026. *Full error code*:
---------------------------------------------------------------------------
NotImplementedError Traceback (most recent call last)
Cell In[1], line 41
38 pipe.enable_model_cpu_offload()
40 # This fails with NotImplementedError
---> 41 result_image = pipe(
42 prompt="a cute dog with a hat",
43 negative_prompt="low quality, bad anatomy",
44 control_image=[image, image],
45 num_inference_steps=30,
46 guidance_scale=7.5,
47 controlnet_conditioning_scale=[1.0, 1.0],
48 output_type="pil",
49 ).images[0]
File ~/miniconda3/envs/bnb310/lib/python3.10/site-packages/torch/utils/_contextlib.py:115, in context_decorator.<locals>.decorate_context(*args, **kwargs)
112 @functools.wraps(func)
113 def decorate_context(*args, **kwargs):
114 with ctx_factory():
--> 115 return func(*args, **kwargs)
File ~/miniconda3/envs/bnb310/lib/python3.10/site-packages/diffusers/pipelines/controlnet_sd3/pipeline_stable_diffusion_3_controlnet_inpainting.py:1026, in StableDiffusion3ControlNetInpaintingPipeline.__call__(self, prompt, prompt_2, prompt_3, height, width, num_inference_steps, sigmas, guidance_scale, control_guidance_start, control_guidance_end, control_image, control_mask, controlnet_conditioning_scale, controlnet_pooled_projections, negative_prompt, negative_prompt_2, negative_prompt_3, num_images_per_prompt, generator, latents, prompt_embeds, negative_prompt_embeds, pooled_prompt_embeds, negative_pooled_prompt_embeds, output_type, return_dict, joint_attention_kwargs, clip_skip, callback_on_step_end, callback_on_step_end_tensor_inputs, max_sequence_length)
1023 width = latent_width * self.vae_scale_factor
1025 elif isinstance(self.controlnet, SD3MultiControlNetModel):
-> 1026 raise NotImplementedError("MultiControlNetModel is not supported for SD3ControlNetInpaintingPipeline.")
1027 else:
1028 assert False
NotImplementedError: MultiControlNetModel is not supported for SD3ControlNetInpaintingPipeline.
Expected Behavior
I expect `StableDiffusion3ControlNetInpaintingPipeline` to support `SD3MultiControlNetModel`
```
### System Info
Versions
Python version: 3.10.16 (main, Dec 11 2024, 16:24:50) [GCC 11.2.0]
PyTorch version: 2.2.0+cu118
CUDA version: 11.8
Diffusers version: 0.32.2
Transformers version: 4.50.3
Accelerate version: 1.7.0.dev0
### Who can help?
@yiyixuxu @sayakpaul
|
https://github.com/huggingface/diffusers/issues/11208
|
open
|
[
"bug",
"help wanted",
"Good Example PR",
"contributions-welcome"
] | 2025-04-04T12:39:10Z
| 2025-05-11T15:03:00Z
| 5
|
DanilaAniva
|
pytorch/torchtitan
| 1,055
|
Is the current configuration system over-engineered?
|
It seems that a training job in TorchTitan is currently defined using a combination of TOML and Python.
When users launch a training job, they are expected to provide a TOML file that specifies the model.name:
https://github.com/pytorch/torchtitan/blob/351e9fb40fe345dd8a7fb3403881328b7cc0b21b/torchtitan/models/llama/train_configs/debug_model.toml#L23-L24
At the same time, the referenced model.name must already be registered via a TrainSpec object:
https://github.com/pytorch/torchtitan/blob/351e9fb40fe345dd8a7fb3403881328b7cc0b21b/torchtitan/models/llama/__init__.py#L63-L65
My first question is: Why not move fields of `JobConfig` (serialized to the TOML file) into `TrainSpec`? That would eliminate the need for a separate `JobConfig` class and simplify the interface.
Moreover, the registration mechanism itself may not be necessary. In AXLearn, another LLM training framework, users can launch a training job like this (simplified for conceptual clarity):
```shell
axlearn.train --experiment-config-model=text.gpt --experiment-config-name=llama3b
```
Then the trainer simply loads the config dynamically:
```python
em = importlib.import_module("experiments." + "text.gpt")
trainer_config: Trainer.Config = em.named_trainer_config("llama3b")
```
Please be aware that all configuration information (corresponding to JobConfig and TrainSpec) is returned by the invocation of the function `experiments.text.gpt.named_trainer_config("llama3b")`.
This approach eliminates the need for explicit registration logic such as:
https://github.com/pytorch/torchtitan/blob/351e9fb40fe345dd8a7fb3403881328b7cc0b21b/torchtitan/config_manager.py#L770-L771
and
https://github.com/pytorch/torchtitan/blob/351e9fb40fe345dd8a7fb3403881328b7cc0b21b/torchtitan/config_manager.py#L784-L785
because the configuration modules can be imported dynamically from `experiments/text/gpt/*.py`.
|
https://github.com/pytorch/torchtitan/issues/1055
|
open
|
[
"question"
] | 2025-04-03T22:46:39Z
| 2025-04-04T01:21:27Z
| null |
wangkuiyi
|
pytorch/torchtitan
| 1,054
|
Clarify PP split point documentation.
|
### Bug description
The current documentation is as follows.
```
self.parser.add_argument(
"--parallelism.pipeline_parallel_split_points",
type=string_list,
nargs="+",
default=[],
help="""
Specify comma-separated names of modules to use as the beginning of a split point.
e.g. "layers.0,layers.2" will cause the model to be split into 3 stages,
the first containing all the layers up to layers.0,
the second containing layers.0 and up to layers.2,
the third containing layers.2 and all the remaining layers.
Note: fully-automated splitting may be enabled in the future,
but currently the split points must be specified manually.""",
)
```
The above description seems to indicate that layers.0 is present in both the first and second stages, and layers.2 is present in both the second and third stages. Can someone please clarify the inclusivity of the split points?
### Versions
head of master
|
https://github.com/pytorch/torchtitan/issues/1054
|
closed
|
[
"question"
] | 2025-04-03T22:36:08Z
| 2025-08-21T03:09:16Z
| null |
githubsgi
|
huggingface/sentence-transformers
| 3,308
|
How to load locally saved transformer models into sentence transformer?
|
I’ve made some modifications to the NVEMBEDV2 model architecture and saved the updated version locally using `model.save_pretrained()`. However, when I try to wrap the saved model in a SentenceTransformer, I encounter a `KeyError: 'NVEmbedConfig'`.
I checked the documentation, and while loading pretrained models seems straightforward, I’m unsure how to handle models with a custom configuration and type. Is there a guide on how to properly load and integrate a locally modified transformer model into SentenceTransformer?
I'm attaching a simple notebook for reproducibility and also the error. Thanks!
[issue.ipynb.txt](https://github.com/user-attachments/files/19589812/issue.ipynb.txt)
[requirements.txt](https://github.com/user-attachments/files/19589811/requirements.txt)
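In case it's useful while waiting for an answer, here is a minimal, hedged sketch of one possible workaround: registering the custom config/model classes with the `Auto*` factories before wrapping the checkpoint. `NVEmbedConfig`/`NVEmbedModel`, the `my_nvembed` module, the `"nvembed"` model_type string, and the local path are all assumed stand-ins for the actual modified classes.
```python
# Hedged sketch: make the custom classes discoverable before SentenceTransformer
# loads the checkpoint. Names and paths below are placeholders.
from transformers import AutoConfig, AutoModel
from sentence_transformers import SentenceTransformer

from my_nvembed import NVEmbedConfig, NVEmbedModel  # hypothetical local module

AutoConfig.register("nvembed", NVEmbedConfig)   # must match model_type in config.json
AutoModel.register(NVEmbedConfig, NVEmbedModel)

# trust_remote_code=True may also be needed if the checkpoint ships its own modeling code
model = SentenceTransformer("./my_modified_nvembed", trust_remote_code=True)
embeddings = model.encode(["hello world"])
```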
|
https://github.com/huggingface/sentence-transformers/issues/3308
|
open
|
[] | 2025-04-03T15:11:20Z
| 2025-04-08T15:48:26Z
| null |
samehkhattab
|
pytorch/serve
| 3,409
|
Why Use TorchScript Format Models?
|
When customizing handler.py, we can load any format of model in the initialize function without needing to package the model into a .mar file. Why do the tutorials recommend converting the model to TorchScript format and packaging it together with handler.py into a .mar file?
|
https://github.com/pytorch/serve/issues/3409
|
open
|
[] | 2025-04-03T09:00:11Z
| 2025-04-03T09:00:11Z
| 0
|
CongSuxu
|
huggingface/datasets
| 7,497
|
How to convert videos to images?
|
### Feature request
Does someone know how to return the images from videos?
### Motivation
I am trying to use openpi (https://github.com/Physical-Intelligence/openpi) to finetune on my LeRobot dataset (v2.0 and v2.1). I find that although both are labeled v2.0, they are different. It seems like LeRobot v2.0 has two variants: in one, the data files include the image info directly, and in the other, the data and videos are stored separately.
Does someone know how to return the images from videos?
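For anyone with the same question, below is a minimal, hedged sketch of decoding frames from one of the dataset's mp4 files with PyAV; the file path is a placeholder for a real LeRobot video file.
```python
# Hedged sketch: decode all frames of a LeRobot-style mp4 into PIL images with PyAV.
import av
from PIL import Image

def video_to_images(path: str) -> list[Image.Image]:
    frames = []
    with av.open(path) as container:
        for frame in container.decode(video=0):   # first video stream
            frames.append(frame.to_image())       # av.VideoFrame -> PIL.Image
    return frames

# placeholder path following the v2.x layout (videos/chunk-xxx/<camera key>/episode_xxxxxx.mp4)
images = video_to_images("videos/chunk-000/observation.images.top/episode_000000.mp4")
print(len(images), images[0].size)
```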
|
https://github.com/huggingface/datasets/issues/7497
|
open
|
[
"enhancement"
] | 2025-04-03T07:08:39Z
| 2025-04-15T12:35:15Z
| null |
Loki-Lu
|
pytorch/torchtitan
| 1,044
|
How are the TP, CP, and PP marked in PyTorch profiler traces?
|
How are TP, CP, and PP labelled in PyTorch profiler traces? FSDP appears to be clearly marked.
|
https://github.com/pytorch/torchtitan/issues/1044
|
open
|
[] | 2025-04-02T22:27:30Z
| 2025-04-03T18:04:41Z
| 1
|
githubsgi
|
huggingface/blog
| 2,781
|
How to submit revised version of Arxiv paper (v2) to Daily Papers
|
I would like to submit a revised version (v2) of our arXiv paper to Daily Papers, but the original submission (v1) was uploaded too long ago, so it's not eligible through the regular submission form.
However, this v2 version was recently accepted to CVPR 2025, and it is a completely different paper compared to v1, both in content and contributions. It is based on a completely new idea and contains significant updates and improvements over the original version.
Is there any way we can submit this revised version (v2) to Daily Papers?
|
https://github.com/huggingface/blog/issues/2781
|
closed
|
[] | 2025-04-02T09:20:30Z
| 2025-11-03T15:22:36Z
| null |
eveningglow
|
pytorch/pytorch
| 150,523
|
[Question] How to load extremely large model checkpoint for FSDP wrapped model?
|
Hello,
We tried to train DeepSeek v3 model with the parallelism of `FSDP+Expert Parallel`. It works well with random initialized weights. But if we want do SFT or RLHF, we need to load the 670B model weights from https://huggingface.co/deepseek-ai/DeepSeek-V3-0324/tree/main
So, does PyTorch has ways to load extremely large model weight checkpoint for FSDP wrapped model?
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @zhaojuanmao @mrshenli @rohan-varma @chauhang @mori360 @kwen2501 @c-p-i-o
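Not part of the original question, but one hedged sketch of a possible approach: stream the HF safetensors shards file by file and let `set_model_state_dict` with `broadcast_from_rank0` scatter them into the FSDP-wrapped model, so the full 670B state dict is never materialized at once. Whether repeated partial loads compose cleanly may depend on the PyTorch version (roughly 2.4+ is assumed for `broadcast_from_rank0`), and the checkpoint key names are assumed to already match the model.
```python
# Hedged sketch: rank 0 reads one safetensors shard at a time; other ranks receive broadcasts.
import glob
import torch.distributed as dist
from safetensors.torch import load_file
from torch.distributed.checkpoint.state_dict import StateDictOptions, set_model_state_dict

def load_sharded_hf_checkpoint(fsdp_model, ckpt_dir: str):
    opts = StateDictOptions(full_state_dict=True, broadcast_from_rank0=True, strict=False)
    for path in sorted(glob.glob(f"{ckpt_dir}/*.safetensors")):
        shard = load_file(path) if dist.get_rank() == 0 else {}  # only rank 0 touches disk
        set_model_state_dict(fsdp_model, shard, options=opts)
        del shard  # free host memory before the next shard
```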
|
https://github.com/pytorch/pytorch/issues/150523
|
closed
|
[
"oncall: distributed",
"triaged",
"module: fsdp"
] | 2025-04-02T08:05:12Z
| 2025-05-08T16:28:40Z
| null |
zigzagcai
|
huggingface/lerobot
| 927
|
How to train a model for VLN?
|
### System Info
```Shell
To control four legs dogs.
```
### Information
- [ ] One of the scripts in the examples/ folder of LeRobot
- [ ] My own task or dataset (give details below)
### Reproduction
rt
### Expected behavior
tret
|
https://github.com/huggingface/lerobot/issues/927
|
closed
|
[
"question"
] | 2025-04-01T13:26:20Z
| 2025-04-01T15:50:04Z
| null |
lucasjinreal
|
huggingface/agents-course
| 391
|
[QUESTION] UNIT-3 not yet published ?
|
<img width="1440" alt="Image" src="https://github.com/user-attachments/assets/aa8ed881-f998-4c63-805f-8af936d630c5" />
|
https://github.com/huggingface/agents-course/issues/391
|
closed
|
[
"question"
] | 2025-04-01T11:24:07Z
| 2025-04-30T04:50:26Z
| null |
ynareshkalyan21
|
huggingface/hub-docs
| 1,664
|
Page: "how to be registered as a provider"?
|
https://github.com/huggingface/hub-docs/issues/1664
|
closed
|
[] | 2025-04-01T10:55:01Z
| 2025-04-03T13:03:26Z
| null |
hanouticelina
|
|
huggingface/lerobot
| 926
|
[Question] Deploy leRobot for a delta kinematic
|
Bonjour everyone,
I'm currently working on the development of an **open source delta robot** via ROS.
I'm wondering if any of you have a clue to help me integrate leRobot ACT algorithm to the custom kinematic of my delta.
At the moment the inverse kinematics is managed by a Marlin CNC firmware (on an Arduino Mega), so we communicate via G-code, but we are considering moving to micro-ROS to get direct angular control of the stepper motors and better ROS integration
|
https://github.com/huggingface/lerobot/issues/926
|
closed
|
[
"question"
] | 2025-04-01T09:46:29Z
| 2025-04-28T10:57:31Z
| null |
man0n0n0
|
huggingface/optimum
| 2,220
|
optimum-cli diffusion policy model issue
|
### System Info
```shell
Hi,
Trying to export a diffusion policy model to onnx format. From the error message and printed list of model types, it looks like “diffusion” model cannot be exported to onnx.
Is there a way to get around this?
optimum-cli export onnx --model lerobot/diffusion_pusht --task reinforcement-learning /onnx/
Traceback (most recent call last):
File "/optimum-cli", line 8, in
sys.exit(main())
File "/python3.10/site-packages/optimum/commands/optimum_cli.py", line 208, in main
service.run()
File "/python3.10/site-packages/optimum/commands/export/onnx.py", line 265, in run
main_export(
File "/python3.10/site-packages/optimum/exporters/onnx/main.py", line 272, in main_export
config = AutoConfig.from_pretrained(
File "/python3.10/site-packages/transformers/models/auto/configuration_auto.py", line 1008, in from_pretrained
raise ValueError(
ValueError: Unrecognized model in lerobot/diffusion_pusht. Should have a model_type key in its config.json, or contain one of the following strings in its name:
Model type from config.json:
"type": "diffusion"
Supported Models:
albert, align, altclip, audio-spectrogram-transformer, autoformer, bark, bart, beit, bert, bert-generation, big_bird, bigbird_pegasus, biogpt, bit, blenderbot, blenderbot-small, blip, blip-2, bloom, bridgetower, bros, camembert, canine, chameleon, chinese_clip, chinese_clip_vision_model, clap, clip, clip_vision_model, clipseg, clvp, code_llama, codegen, cohere, conditional_detr, convbert, convnext, convnextv2, cpmant, ctrl, cvt, data2vec-audio, data2vec-text, data2vec-vision, dbrx, deberta, deberta-v2, decision_transformer, deformable_detr, deit, depth_anything, deta, detr, dinat, dinov2, distilbert, donut-swin, dpr, dpt, efficientformer, efficientnet, electra, encodec, encoder-decoder, ernie, ernie_m, esm, falcon, fastspeech2_conformer, flaubert, flava, fnet, focalnet, fsmt, funnel, fuyu, gemma, gemma2, git, glpn, gpt-sw3, gpt2, gpt_bigcode, gpt_neo, gpt_neox, gpt_neox_japanese, gptj, gptsan-japanese, graphormer, grounding-dino, groupvit, hiera, hubert, ibert, idefics, idefics2, imagegpt, informer, instructblip, instructblipvideo, jamba, jetmoe, jukebox, kosmos-2, layoutlm, layoutlmv2, layoutlmv3, led, levit, lilt, llama, llava, llava-next-video, llava_next, longformer, longt5, luke, lxmert, m2m_100, mamba, mamba2, marian, markuplm, mask2former, maskformer, maskformer-swin, mbart, mctct, mega, megatron-bert, mgp-str, mistral, mixtral, mobilebert, mobilenet_v1, mobilenet_v2, mobilevit, mobilevitv2, mpnet, mpt, mra, mt5, musicgen, musicgen_melody, mvp, nat, nemotron, nezha, nllb-moe, nougat, nystromformer, olmo, oneformer, open-llama, openai-gpt, opt, owlv2, owlvit, paligemma, patchtsmixer, patchtst, pegasus, pegasus_x, perceiver, persimmon, phi, phi3, pix2struct, plbart, poolformer, pop2piano, prophetnet, pvt, pvt_v2, qdqbert, qwen2, qwen2_moe, rag, realm, recurrent_gemma, reformer, regnet, rembert, resnet, retribert, roberta, roberta-prelayernorm, roc_bert, roformer, rt_detr, rt_detr_resnet, rwkv, sam, seamless_m4t, seamless_m4t_v2, segformer, seggpt, sew, sew-d, siglip, siglip_vision_model, speech-encoder-decoder, speech_to_text, speech_to_text_2, speecht5, splinter, squeezebert, stablelm, starcoder2, superpoint, swiftformer, swin, swin2sr, swinv2, switch_transformers, t5, table-transformer, tapas, time_series_transformer, timesformer, timm_backbone, trajectory_transformer, transfo-xl, trocr, tvlt, tvp, udop, umt5, unispeech, unispeech-sat, univnet, upernet, van, video_llava, videomae, vilt, vipllava, vision-encoder-decoder, vision-text-dual-encoder, visual_bert, vit, vit_hybrid, vit_mae, vit_msn, vitdet, vitmatte, vits, vivit, wav2vec2, wav2vec2-bert, wav2vec2-conformer, wavlm, whisper, xclip, xglm, xlm, xlm-prophetnet, xlm-roberta, xlm-roberta-xl, xlnet, xmod, yolos, yoso, zoedepth
Thanks
To reproduce
Download model from HF
Use optimum-cli to export the model
Platform
Linux
OS Version
Ubuntu 22.04.4 LTS
ONNX Runtime Installation
Released Package
ONNX Runtime Version or Commit ID
1.21.0
ONNX Runtime API
Python
Architecture
ARM64
Execution Provider
CUDA
Execution Provider Library Version
12.4
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction (minimal, reproducible, runnable)
To reproduce
Download model from HF
Use optimum-cli to export the model
### Expected behavior
onnx export to succeed
|
https://github.com/huggingface/optimum/issues/2220
|
closed
|
[
"bug"
] | 2025-04-01T04:59:53Z
| 2025-06-11T13:57:20Z
| 1
|
kraza8
|
pytorch/torchtitan
| 1,035
|
Profiling only a select group of ranks
|
Is it possible to profile only a select group of ranks? It becomes hard to handle the large number of files when there are many ranks. I understand that there could be imbalances when only a few ranks are profiled. I also don't know if there is a way to profile but not dump the profile output file.
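As a workaround in the meantime, here is a minimal hedged sketch of gating the profiler on rank so that only a chosen subset of ranks writes trace files; the rank set, trace directory, and `train_step` are placeholders.
```python
# Hedged sketch: only ranks in PROFILE_RANKS produce a trace; others get a no-op context.
import contextlib
import torch
import torch.distributed as dist

PROFILE_RANKS = {0, 8}  # e.g. one rank per node

def maybe_profile(trace_dir: str = "./profile_trace"):
    if dist.get_rank() in PROFILE_RANKS:
        return torch.profiler.profile(
            activities=[torch.profiler.ProfilerActivity.CPU,
                        torch.profiler.ProfilerActivity.CUDA],
            on_trace_ready=torch.profiler.tensorboard_trace_handler(trace_dir),
        )
    return contextlib.nullcontext()

# inside an initialized distributed training loop:
with maybe_profile():
    train_step()  # placeholder for the actual training step
```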
|
https://github.com/pytorch/torchtitan/issues/1035
|
open
|
[] | 2025-03-31T20:02:30Z
| 2025-08-21T03:10:16Z
| 3
|
githubsgi
|
huggingface/lerobot
| 923
|
Cannot install Lerobot
|
I am getting an error when the installation is building the av wheel. It is not passing this part of the installation
|
https://github.com/huggingface/lerobot/issues/923
|
closed
|
[
"documentation",
"question",
"dependencies"
] | 2025-03-31T18:26:16Z
| 2025-07-03T01:32:17Z
| null |
Prasit7
|
pytorch/xla
| 8,906
|
Profiler and `use_spmd()` order.
|
## 📚 Documentation
In #8057, [@zjjott mentioned](https://github.com/pytorch/xla/issues/8057#issuecomment-2408428441) that `xp.start_server(...)` should be used after `use_spmd()`. I didn't find it written anywhere in the documentation. So, is this actually true? If so, we should write this down somewhere.
cc @miladm @tengyifei @bhavya01
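For reference, a minimal sketch of the ordering being asked about (whether this ordering is actually required is exactly what should be written down):
```python
# Sketch of the claimed ordering: enable SPMD first, then start the profiler server.
import torch_xla.runtime as xr
import torch_xla.debug.profiler as xp

xr.use_spmd()                    # reportedly must come first
server = xp.start_server(9012)   # then start the profiler server on some port
```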
|
https://github.com/pytorch/xla/issues/8906
|
open
|
[
"distributed",
"documentation"
] | 2025-03-31T15:34:03Z
| 2025-03-31T21:29:39Z
| 6
|
ysiraichi
|
pytorch/torchtitan
| 1,034
|
Context parallel on Turing GPUs?
|
As the title suggests, is torchtitan CP supported on Turing GPU?
I got the error `RuntimeError: No available kernel. Aborting execution.` using the default `run_train.sh` script with CP changed to 2.
I know Turing GPUs don't have flash attention support yet, but I read the torchtitan CP blog post [here](https://discuss.pytorch.org/t/distributed-w-torchtitan-breaking-barriers-training-long-context-llms-with-1m-sequence-length-in-pytorch-using-context-parallel/215082), and it seems like the memory-efficient attention backend would work with CP?
If this is the case, could you share how to enable this backend in torchtitan? I tried to wrap this [line](https://github.com/pytorch/torchtitan/blob/main/torchtitan/models/llama/model.py#L258) with `with sdpa_kernel(SDPBackend.EFFICIENT_ATTENTION):`, but the error persists.
Thanks
|
https://github.com/pytorch/torchtitan/issues/1034
|
open
|
[
"question",
"module: context parallel"
] | 2025-03-31T09:36:47Z
| 2025-08-21T03:11:02Z
| null |
dingqingy
|
huggingface/open-r1
| 564
|
How to evaluate pass@16 for aime 2024 benchmark?
|
https://github.com/huggingface/open-r1/issues/564
|
open
|
[] | 2025-03-31T09:27:02Z
| 2025-03-31T09:27:02Z
| null |
Cppowboy
|
|
huggingface/diffusers
| 11,176
|
How to use attention_mask and encoder_attention_mask or apply prompts to specific areas in the image?
|
Hi, I'm aware of the attention_mask and encoder_attention_mask that exist in the forward function of the UNet2DConditionModel yet there are no examples on how to use this
I would appreciate some help on that, thank you in advance
@patrickvonplaten @Birch-san
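For reference, a minimal hedged sketch of passing `encoder_attention_mask` directly to the UNet's forward; the checkpoint path, tensor shapes, and the masked token range are illustrative assumptions for an SD1.5-style UNet, not an official recipe.
```python
# Hedged sketch: mask out part of the text conditioning via encoder_attention_mask.
import torch
from diffusers import UNet2DConditionModel

unet = UNet2DConditionModel.from_pretrained(
    "path/to/sd15", subfolder="unet", torch_dtype=torch.float16  # placeholder checkpoint
).to("cuda")

batch, seq_len, dim = 1, 77, 768
sample = torch.randn(batch, 4, 64, 64, dtype=torch.float16, device="cuda")
encoder_hidden_states = torch.randn(batch, seq_len, dim, dtype=torch.float16, device="cuda")

# True = attend to this text token, False = ignore it (here: drop tokens 20..76)
encoder_attention_mask = torch.ones(batch, seq_len, dtype=torch.bool, device="cuda")
encoder_attention_mask[:, 20:] = False

out = unet(
    sample,
    timestep=10,
    encoder_hidden_states=encoder_hidden_states,
    encoder_attention_mask=encoder_attention_mask,
).sample
```
For applying different prompts to specific image regions, a custom attention processor is typically needed; these masks alone only hide text tokens or latent positions.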
|
https://github.com/huggingface/diffusers/issues/11176
|
open
|
[
"stale"
] | 2025-03-30T16:56:40Z
| 2025-04-30T15:03:34Z
| null |
alexblattner
|
pytorch/tutorials
| 3,308
|
💡 [REQUEST] - <title>Pruning tutorial: clarify how to achieve comparable performance to non-pruned?
|
### 🚀 Describe the improvement or the new tutorial
In the pruning tutorial https://pytorch.org/tutorials/intermediate/pruning_tutorial.html,
the method of pruning that is implemented appears to be completely random. "In this example, we will prune at random 30% of the connections..."
But isn't the goal of pruning to produce a smaller network with nearly the same capabilities as the original?
I don't see anything in the tutorial about checking the performance of the new network, or how to intelligently prune the network in order to achieve the goal of pruning. The tutorial takes a randomly-initialized network, randomly prunes it, and then...
...it just suddenly ends...?
Is the idea that we're supposed to just keep iteratively trying random pruning until something finally works ok? That sounds unbearably undirected and inefficient. Did I miss something crucial while reading the tutorial?
**Requesting:** Clarification on how to achieve the "goal" of pruning: intelligently pruning the network to achieve comparable capabilities.
Just telling me I can define my own pruning function isn't enough, because...it's a tutorial, I don't know what such a function should entail.
### Existing tutorials on this topic
https://pytorch.org/tutorials/intermediate/pruning_tutorial.html
### Additional context
"In this example, we will prune at random 30% of the connections "
Why/how will that help achieve the goal of pruning? Won't it just randomly turn off parts of the network with no regard to its effect on performance? (This application seems more like Dropout than actual pruning.)
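For context, a hedged sketch of the usual prune-then-fine-tune recipe the tutorial stops short of; `model`, `train_one_epoch`, and `evaluate` are placeholders for the reader's own training code, and L1 magnitude pruning is just one common criterion.
```python
# Hedged sketch: prune the smallest-magnitude weights, fine-tune, check accuracy, repeat.
import torch.nn as nn
import torch.nn.utils.prune as prune

def iterative_prune(model, train_one_epoch, evaluate, rounds=3, amount=0.2, tol=0.01):
    baseline = evaluate(model)
    for r in range(rounds):
        for module in model.modules():
            if isinstance(module, (nn.Linear, nn.Conv2d)):
                prune.l1_unstructured(module, name="weight", amount=amount)  # magnitude-based, not random
        train_one_epoch(model)            # fine-tune with the pruning masks applied
        acc = evaluate(model)
        print(f"round {r}: accuracy {acc:.4f} (baseline {baseline:.4f})")
        if baseline - acc > tol:          # stop once the accuracy drop is unacceptable
            break
    for module in model.modules():        # fold masks into weights permanently
        if isinstance(module, (nn.Linear, nn.Conv2d)) and prune.is_pruned(module):
            prune.remove(module, "weight")
    return model
```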
|
https://github.com/pytorch/tutorials/issues/3308
|
open
|
[] | 2025-03-30T15:49:41Z
| 2025-03-30T15:49:41Z
| null |
drscotthawley
|
huggingface/lerobot
| 920
|
[Question] How to convert dataset locally
|
I've noticed that `convert_dataset_v20_to_v21.py` converts LeRobot datasets from v2.0 to v2.1 that have already been pushed to the hub. But is there a script that does the same for a local dataset?
|
https://github.com/huggingface/lerobot/issues/920
|
closed
|
[
"question",
"dataset",
"stale"
] | 2025-03-30T13:32:50Z
| 2025-10-13T02:30:26Z
| null |
Frozenkiddo
|
huggingface/lerobot
| 919
|
[Question] Why does "action" exist?
|
I am a beginner and I am very confused about this. What I understand is that during my entire operation, I sampled at fixed time intervals, like a signal being sampled. I only have observations, so what does action mean? Many datasets in the project have a column titled `action`. Moreover, according to the project's description, `action` means the goal of the movement. However, this goal never seems to match the results in the observation; it looks like the robot never moves to its target. I am completely confused.
|
https://github.com/huggingface/lerobot/issues/919
|
closed
|
[
"question"
] | 2025-03-30T10:45:57Z
| 2025-03-31T07:50:19Z
| null |
ipc-robot
|
huggingface/trl
| 3,179
|
How to resume from the last checkpoint?
|
I want to continue training from the last checkpoint. How should I do it? I set resume_from_checkpoint=True in the GRPOConfig, but based on the output, it seems to start training from the first step. Do I also need to change the model to the checkpoint path?
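For reference, a hedged sketch of how resumption is usually requested with a `transformers.Trainer`-derived trainer such as `GRPOTrainer`: through the `train()` call rather than the config. The model name, dataset, and reward function below are placeholders.
```python
# Hedged sketch: resume via trainer.train(); keep the original base model path unchanged.
from trl import GRPOTrainer, GRPOConfig

trainer = GRPOTrainer(
    model="my-base-model",               # placeholder: the original base model, not a checkpoint dir
    reward_funcs=reward_fn,              # placeholder reward function
    args=GRPOConfig(output_dir="outputs"),
    train_dataset=train_dataset,         # placeholder dataset
)

trainer.train(resume_from_checkpoint=True)                        # latest checkpoint under output_dir
# trainer.train(resume_from_checkpoint="outputs/checkpoint-500")  # or an explicit checkpoint
```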
|
https://github.com/huggingface/trl/issues/3179
|
closed
|
[
"❓ question",
"🏋 GRPO"
] | 2025-03-30T02:30:47Z
| 2025-03-30T04:35:58Z
| null |
Tuziking
|
huggingface/diffusers
| 11,168
|
Sage Attention for diffuser library
|
**Is your feature request related to a problem?** No
**Describe the solution you'd like.**
A clear and concise description of what you want to happen.
Incorporate a way to add sage attention to the diffusers library: Flux pipeline, Wan pipeline, etc.
**Describe alternatives you've considered.**
None
**Additional context.**
When I incorporated sage attention in the flux pipeline (text to image) I achieved a 16% speed advantage vs no sage attention.
My environment was the same save for including / excluding sage attention in my 4 image benchmark creation.
How to incorporate sage attention? We must consider that this only applies to the Transformer. With this in mind I did the following to the FluxPipeline. Obviously there must be a way to do this via a variable of sorts so that we may/may not run it:
Need some kind of indicator to decide whether to include or not! This must be done before the denoising step in the model pipeline.
```python
import torch.nn.functional as F

sage_function = False
try:
    from sageattention import sageattn
    self.transformer.scaled_dot_product_attention = F.scaled_dot_product_attention = sageattn
    sage_function = True
except ImportError:
    pass

# 6. Denoising loop
with self.progress_bar(total=num_inference_steps) as progress_bar:
    for i, t in enumerate(timesteps):
        if self.interrupt:
            continue
```
After the denoising step we must remove sage attention else we get a VAE error due to Sage Attn wanting only torch.float16 or torch.bfloat16 dtypes which the VAE doesn't want:
```python
if output_type == "latent":
    image = latents
else:
    if sage_function:
        self.transformer.scaled_dot_product_attention = F.scaled_dot_product_attention = torch._C._nn.scaled_dot_product_attention
```
Hopefully this helps.
|
https://github.com/huggingface/diffusers/issues/11168
|
open
|
[
"wip"
] | 2025-03-28T20:39:30Z
| 2025-06-23T05:59:27Z
| 12
|
ukaprch
|
huggingface/agents-course
| 381
|
[QUESTION]LLM or Agent?
|
In the tutorial, a lot of the content leads to a confusing conflation of the LLM and the Agent.
```
The Stop and Parse Approach
One key method for implementing actions is the stop and parse approach. This method ensures that the agent’s output is structured and predictable:
Generation in a Structured Format:
The agent outputs its intended action in a clear, predetermined format (JSON or code).
Halting Further Generation:
Once the action is complete, the agent stops generating additional tokens. This prevents extra or erroneous output.
Parsing the Output:
An external parser reads the formatted action, determines which Tool to call, and extracts the required parameters.
For example, an agent needing to check the weather might output:
```
Is it the agent that can output this, or does the author mean the LLM?
|
https://github.com/huggingface/agents-course/issues/381
|
closed
|
[
"question"
] | 2025-03-28T15:36:45Z
| 2025-04-30T04:50:54Z
| null |
joshhu
|
huggingface/lerobot
| 912
|
[Question]When will MultiLeRobotDataset available?
|
Hello, the MultiLeRobotDataset is very useful for training on large amounts of data; without it, training complex tasks would be difficult. However, I noticed that after the Simplify configs (#550) commit on January 31st, MultiLeRobotDataset has been marked as unavailable (`raise NotImplementedError("The MultiLeRobotDataset isn't supported for now.")`). Could you please let me know approximately when this functionality will be restored, or why it has been made unavailable?
|
https://github.com/huggingface/lerobot/issues/912
|
closed
|
[
"question",
"dataset",
"stale"
] | 2025-03-28T09:16:06Z
| 2025-10-22T02:30:53Z
| null |
Vacuame
|
huggingface/agents-course
| 380
|
[QUESTION] Question on using HuggingFace space
|
First, the **best way to get a response fast is to ask the community** in our Discord server: https://www.hf.co/join/discord
However, if you prefer you can ask here, please **be specific**.
I am on the AI Agents course now.
I am having trouble using Hugging Face Spaces.
I am studying this course at my company, so I have to open the firewall.
So I opened these ports (80, 443, 8080) referring to the following guide:
(https://huggingface.co/docs/hub/en/spaces-overview)
But my Edge window cannot display anything.
Is there anything I'm missing?
Thank you for opening this course.

|
https://github.com/huggingface/agents-course/issues/380
|
closed
|
[
"question"
] | 2025-03-28T08:28:23Z
| 2025-04-30T04:47:14Z
| null |
kjh0303
|
pytorch/xla
| 8,900
|
Reset Peak Memory Usage
|
## 🚀 Feature
Provides a method to reset peak used memory size to current memory being used or 0.
## Motivation
PyTorch/XLA offers the function xm.get_memory_info which gives you details about memory usage, including bytes_used and peak_bytes_used.
When you run several computational graphs one after another, like A, B, and C, and if graph A uses more memory than graph B, it becomes tricky to accurately determine the memory footprint of B. It would be really useful to have a way to reset the peak memory usage after A has finished running.
A practical example of this is in vLLM. The process of loading a model (let's call this step A) often consumes more memory than the actual size of the model's weights due to how the XLA compiler works. Then, when you want to profile the memory usage during the model's execution (step B), the peak_bytes_used will reflect the higher memory usage from the loading phase. This makes the memory profiling for the execution phase less meaningful if you can't reset the peak memory measurement after the model has been loaded.
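A hypothetical usage sketch of the proposed API follows; `reset_peak_memory` is a made-up name for the requested method, and only `get_memory_info` exists today.
```python
# Hypothetical sketch of the feature being proposed (reset_peak_memory does not exist yet).
import torch_xla.core.xla_model as xm

device = xm.xla_device()
run_graph_a()                      # placeholder: e.g. the model-loading step
xm.reset_peak_memory(device)       # proposed: peak_bytes_used := bytes_used (or 0)
run_graph_b()                      # placeholder: the step actually being profiled
info = xm.get_memory_info(device)
print(info["peak_bytes_used"])     # would now reflect only graph B
```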
|
https://github.com/pytorch/xla/issues/8900
|
open
|
[
"enhancement"
] | 2025-03-28T05:29:44Z
| 2025-03-28T05:29:44Z
| 0
|
yaochengji
|
pytorch/torchtitan
| 1,027
|
Linear layer weights are in float32 ?
|
### Bug description
I am seeing Linear layer weights in float32 (wq.weight.dtype is torch.float32) even after setting the following:
mixed_precision_param = "bfloat16"
mixed_precision_reduce = "bfloat16"
Is that expected, or have I hit a bug?
### Versions
1. Yes.
2. See the description section.
3. It is easy to check by adding logger lines; see below.
```
# Non-PP forward / backward
with self.train_context(optional_context_parallel_ctx):
assert len(model_parts) == 1
logger.info(f"Linear wq.weight.dtype {model_parts[0].layers['0'].attention.wq.weight.dtype}")
last_layer = str(len(model_parts[0].layers) -1 )
logger.info(f"Linear wq.weight.dtype {model_parts[0].layers[last_layer].attention.wq.weight.dtype}")
pred = model_parts[0](inputs)
loss = self.train_spec.loss_fn(pred, labels)
# pred.shape=(bs, seq_len, vocab_size)
# need to free to before bwd to avoid peaking memory
del pred
loss.backward()
```
|
https://github.com/pytorch/torchtitan/issues/1027
|
closed
|
[
"question"
] | 2025-03-28T01:35:22Z
| 2025-05-08T21:15:49Z
| null |
githubsgi
|
pytorch/torchtitan
| 1,026
|
Any plan to add Llama 1B and/or 3B models ?
|
Wondering if there is any plan to add the 1B and/or 3B models to the TorchTitan set of example models? It is probably fairly straightforward to do, if I am not missing anything: another TOML file and additions in a few places. The optimizer and lr_scheduler sections may require some trial and error.
|
https://github.com/pytorch/torchtitan/issues/1026
|
open
|
[] | 2025-03-28T01:21:59Z
| 2025-04-01T18:29:00Z
| 4
|
githubsgi
|
pytorch/xla
| 8,899
|
The Stable Diffusion notebook is broken.
|
## 📚 Documentation
The README points to a [Stable Diffusion notebook](https://github.com/pytorch/xla/blob/master/contrib/kaggle/pytorch-xla-2-0-on-kaggle.ipynb) to help a user get started. However, this notebook cannot be run successfully:
1. The `import torch_xla` step results in an error:
```
ModuleNotFoundError Traceback (most recent call last)
/tmp/ipykernel_27/3499457412.py in <module>
----> 1 import torch_xla
2 torch_xla.__version__
ModuleNotFoundError: No module named 'torch_xla'
```
This can be fixed by
```
!pip install torch~=2.6.0 'torch_xla[tpu]~=2.6.0' \
-f https://storage.googleapis.com/libtpu-releases/index.html \
-f https://storage.googleapis.com/libtpu-wheels/index.html
```
2. Later, the `image = pipeline(prompt, callback=lambda *args: xm.mark_step(), generator=generator).images[0]` step failed with
```
/usr/local/lib/python3.11/dist-packages/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py:894: FutureWarning: `callback` is deprecated and will be removed in version 1.0.0. Passing `callback` as an input argument to `__call__` is deprecated, consider using `callback_on_step_end`
deprecate(
2%
1/50 [01:16<1:02:27, 76.48s/it]
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
[<ipython-input-10-049c86b52afd>](https://localhost:8080/#) in <cell line: 0>()
2 # xm.mark_step compiles and executes the graph after each iteration.
3 # The first few steps will be much slower than the rest.
----> 4 image = pipeline(prompt, callback=lambda *args: xm.mark_step(), generator=generator).images[0]
5 image
1 frames
[/usr/local/lib/python3.11/dist-packages/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py](https://localhost:8080/#) in __call__(self, prompt, height, width, num_inference_steps, timesteps, sigmas, guidance_scale, negative_prompt, num_images_per_prompt, eta, generator, latents, prompt_embeds, negative_prompt_embeds, ip_adapter_image, ip_adapter_image_embeds, output_type, return_dict, cross_attention_kwargs, guidance_rescale, clip_skip, callback_on_step_end, callback_on_step_end_tensor_inputs, **kwargs)
1068 if i == len(timesteps) - 1 or ((i + 1) > num_warmup_steps and (i + 1) % self.scheduler.order == 0):
1069 progress_bar.update()
-> 1070 if callback is not None and i % callback_steps == 0:
1071 step_idx = i // getattr(self.scheduler, "order", 1)
1072 callback(step_idx, t, latents)
TypeError: unsupported operand type(s) for %: 'int' and 'NoneType'
```
|
https://github.com/pytorch/xla/issues/8899
|
open
|
[
"bug",
"documentation"
] | 2025-03-27T22:38:23Z
| 2025-11-13T00:44:20Z
| 0
|
zhanyong-wan
|
pytorch/vision
| 9,008
|
Torchvision bounding boxes do not match the images, because the bboxes are from the pre-cropped, pre-resized version.
|
### 🐛 Describe the bug
CelebA bounding boxes were calculated on the so called "in-the-wild" images, prior to cropping and resizing. But torchvision.datasets returns the version that is cropped to 178x218. So for example, on the ninth image, the bbox is outside the image size.
CODE TO REPRO
```
from torchvision import datasets
celeba = datasets.CelebA(root="./celeba", target_type="bbox", download=True, split="train")
print(celeba[8])
```
(<PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=178x218>,
tensor([600, 274, 343, 475]))
### Versions
collect_env.py crashed on me but here's the version:
```$ uv pip show torchvision
Using Python 3.12.8 environment at: XXX
Name: torchvision
Version: 0.21.0
Location: XXX
Requires: numpy, pillow, torch
Required-by:
```
|
https://github.com/pytorch/vision/issues/9008
|
open
|
[] | 2025-03-27T17:34:15Z
| 2026-01-05T16:15:54Z
| 6
|
yaoshiang
|
huggingface/Math-Verify
| 47
|
Question: How to configure `verify` for strict multi-part answer checking?
|
Hi Math-Verify Team,
I'm currently using `math-verify` for evaluating LLM outputs, specifically for questions that might require multiple answers (e.g., "Find all X...").
I've observed that the `verify` function in `grader.py`, which seems to use logic similar to `any(product(gold, target))`, can return `True` even if the prediction only contains a subset of the required answers.
**Example Observation:**
In my setup:
* Ground Truth: `"1331 and 1728"` (appears to parse into something like `[1331, 1728]`)
* Prediction: `"1728"` (parses to `[1728]`)
* Result: `verify` returns `True`.
While this makes sense if checking for *any* overlap, it seems too lenient for "find all" type questions where an exact match of all required elements is needed. This can lead to inflated scores or misleading reward signals in my use case.
**Question:**
Is there an existing configuration option or a recommended way within `math-verify` (perhaps via specific `ExtractionConfig` settings or ground truth formatting) to enforce a stricter check? Specifically, I'd like to verify if the *set* of predicted answers exactly matches the *set* of ground truth answers (considering mathematical equivalence).
Or is the current behavior the intended default, and handling stricter set-based validation would require custom logic outside `verify` or modifications to the library?
Any clarification or guidance on the best practice for achieving strict multi-part answer verification with `math-verify` would be greatly appreciated!
Thanks!
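In case it helps, here is a hedged sketch of a stricter wrapper built outside the library that requires a one-to-one match between parsed gold and predicted answers. It assumes `parse()` yields one element per required answer (as the example above suggests); in practice `parse()` may also return multiple equivalent representations of a single answer, which would need de-duplication first.
```python
# Hedged sketch: treat parsed gold/prediction lists as multisets and demand a perfect matching.
from math_verify import parse, verify

def verify_all(gold_text: str, pred_text: str) -> bool:
    gold, pred = parse(gold_text), parse(pred_text)
    if len(gold) != len(pred):
        return False
    remaining = list(pred)
    for g in gold:
        match = next((p for p in remaining if verify([g], [p])), None)
        if match is None:
            return False
        remaining.remove(match)           # each prediction element may be used once
    return True

print(verify_all("1331 and 1728", "1728"))            # False under the stricter rule
print(verify_all("1331 and 1728", "1728 and 1331"))   # True
```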
|
https://github.com/huggingface/Math-Verify/issues/47
|
closed
|
[] | 2025-03-27T16:54:52Z
| 2025-07-01T19:31:51Z
| null |
TweedBeetle
|
huggingface/transformers.js
| 1,259
|
3.2.4 has wrong env check in transformers.web.js
|
### Question
## Background
I have developed a Chrome extension by following the [example](https://github.com/huggingface/transformers.js/tree/main/examples/extension). The example used the package @xenova/transformers.
## Motivation
It seems that multithreading works now. [Issue](https://github.com/huggingface/transformers.js/issues/928) [Issue2](https://github.com/huggingface/transformers.js/issues/882)
## Question
I changed the package from **@xenova/transformers@2.17.2** to **@huggingface/transformers@3.4.1**. It shows an error, **TypeError: sharp__WEBPACK_IMPORTED_MODULE_4__ is not a function**, which had not been shown before. Can anyone help?
## Code (background.js)
```
// import { pipeline, env } from '@xenova/transformers';
// env.localModelPath = './';
// env.allowRemoteModels = false;
// env.backends.onnx.wasm.numThreads = 1;
import { env, pipeline } from '@huggingface/transformers';
env.localModelPath = './';
class ImagePipelineSingleton {
static task = 'image-classification';
static model = '/deepfake/';
static instance = null;
static async getInstance() {
try {
if (this.instance === null) {
this.instance = await pipeline(this.task, this.model);
}
} catch (error) {
console.error("Initialization error:", error);
}
return this.instance;
}
}
...
try{
let model = await ImagePipelineSingleton.getInstance();
let classification = await model(url);
}catch (error) {
console.error("image processing error:", error); //error here
}
...
```
## Folder Structure
- deepfake
- onnx
- model_quantized.onnx
|
https://github.com/huggingface/transformers.js/issues/1259
|
closed
|
[
"question"
] | 2025-03-27T07:35:23Z
| 2025-07-02T04:45:26Z
| null |
sanixa
|
huggingface/datasets
| 7,480
|
HF_DATASETS_CACHE ignored?
|
### Describe the bug
I'm struggling to get things to respect HF_DATASETS_CACHE.
Rationale: I'm on a system that uses NFS for the home directory, so downloading to NFS is expensive, slow, and wastes valuable quota compared to local disk. Despite HF_DATASETS_CACHE being set, downloads seem to rely mostly on HF_HUB_CACHE.
Current version: 3.2.1dev. In the process of testing 3.4.0
### Steps to reproduce the bug
[Currently writing using datasets 3.2.1dev. Will follow up with 3.4.0 results]
dump.py:
```python
from datasets import load_dataset
dataset = load_dataset("HuggingFaceFW/fineweb", name="sample-100BT", split="train")
```
Repro steps
```bash
# ensure no cache
$ mv ~/.cache/huggingface ~/.cache/huggingface.bak
$ export HF_DATASETS_CACHE=/tmp/roller/datasets
$ rm -rf ${HF_DATASETS_CACHE}
$ env | grep HF | grep -v TOKEN
HF_DATASETS_CACHE=/tmp/roller/datasets
$ python dump.py
# (omitted for brevity)
# (while downloading)
$ du -hcs ~/.cache/huggingface/hub
18G hub
18G total
# (after downloading)
$ du -hcs ~/.cache/huggingface/hub
```
It's a shame because datasets supports s3 (which I could really use right now) but hub does not.
### Expected behavior
* ~/.cache/huggingface/hub stays empty
* /tmp/roller/datasets becomes full of stuff
### Environment info
[Currently writing using datasets 3.2.1dev. Will follow up with 3.4.0 results]
|
https://github.com/huggingface/datasets/issues/7480
|
open
|
[] | 2025-03-26T17:19:34Z
| 2025-10-23T15:59:18Z
| 8
|
stephenroller
|
huggingface/transformers.js
| 1,258
|
Tokenizer encode and decode get different token ids and text, missing word_ids
|
### Question
```js
import { AutoTokenizer } from '@huggingface/transformers';
const tokenizer = await AutoTokenizer.from_pretrained('deepseek-ai/DeepSeek-R1')
console.log(tokenizer.encode(" e.g., ♩"))
console.log(tokenizer.decode([105]))
console.log(tokenizer.encode("♩"))
```
```
[ 312, 3588, 1042, 30717, 105 ]
�
[ 21315, 105 ]
```
How do I encode the words, loop over them, and get back a single token per word?
Right now ♩ returns 2 tokens, which is confusing.
So is this a bug or something?
I guess I need word_ids?
|
https://github.com/huggingface/transformers.js/issues/1258
|
closed
|
[
"question"
] | 2025-03-26T10:44:12Z
| 2025-03-31T20:18:45Z
| null |
liho00
|
huggingface/lerobot
| 905
|
Supporting selection of obs and action keys in dataset
|
Hi all, thanks a lot for the framework.
Currently, it seems the LeRobotDataset format requires users to have a fixed state/environment state/images or actions defined in their dataset. However, this means that for multiple similar applications, the user has to record different datasets with different state or action definitions.
Is it possible to select certain keys from the state or actions similar to how we can do in robomimic?
https://github.com/ARISE-Initiative/robomimic/blob/master/robomimic/config/default_templates/bc_transformer.json#L107-L113
|
https://github.com/huggingface/lerobot/issues/905
|
closed
|
[
"question",
"dataset",
"stale"
] | 2025-03-26T08:12:10Z
| 2025-10-10T02:27:27Z
| null |
Mayankm96
|
pytorch/xla
| 8,884
|
BrokenProcessPool: A process in the process pool was terminated abruptly while the future was running or pending. Error
|
Hello,
I'm trying to train my Transformer Encoder-Decoder model on google Colab `v2-8` TPUs. My code like this:
```python
import torch.distributed as dist
import torch_xla.core.xla_model as xm
import torch_xla.distributed.parallel_loader as pl
import torch_xla.distributed.xla_multiprocessing as xmp
import torch_xla.distributed.xla_backend
from torch.nn.parallel import DistributedDataParallel as DDP
import torch_xla as xla
def _mp_fn(rank,world_size):
dist.init_process_group("xla",init_method="xla://")
model = Transformer(TransformerEncoderConfig(),TransformerDecoderConfig())
model.to(xm.xla_device())
ddp_model = DDP(model,gradient_as_bucket_view=True)
optimizer = torch.optim.AdamW(ddp_model.parameters(),lr=0.00001)
criterion = torch.nn.CrossEntropyLoss()
xla_train_loader = pl.MpDeviceLoader(dataloader,xm.xla_device())
for fens,moves in xla_train_loader:
with xla.step():
fens,moves = fens.to(xla.device()),moves.to(xla.device())
inputs = moves[:,:-1]
labels = moves[:,1:]
optimizer.zero_grad()
outputs = ddp_model(fens,moves)
loss = criterion(outputs.permute(0,2,1),labels)
loss.backward()
xm.optimizer_step(optimizer)
xm.mark_step()
if __name__ == "__main__":
xla.launch(_mp_fn)
```
and this code raises the following error: `BrokenProcessPool: A process in the process pool was terminated abruptly while the future was running or pending.`
What is the problem? Is it due to the dataloader behaviour or a wrong model-to-device assignment? Could you help me with this? If I can get it working, I want to train the model on the whole dataset through the Google TPU Research Program, so I'm open to any suggestions for working with TPUs.
|
https://github.com/pytorch/xla/issues/8884
|
open
|
[
"bug",
"needs reproduction"
] | 2025-03-25T23:52:17Z
| 2025-03-26T21:38:26Z
| 2
|
oayk23
|
pytorch/xla
| 8,883
|
[RFC] Use shard_as to improve sharding and avoid OOM
|
# 🚀 Use shard_as to improve sharding and avoid OOM
## Summary
2D sharding propagation is harder than 1D sharding propagation due to
incompatible sharding. This problem is worse in a `scan` / XLA `While` op, and
the <code>[shard_as][shard_as]</code> GSPMD feature seems to help.
## Motivation
This proposal is primarily to improve the sharding propgation of
`torch_xla.experimental.scan`.
When the decoder layer is wrapped in an XLA `While` op through
`torch_xla.experimental.scan`, Llama 3 8B trains a-okay with gbs 16 on a v6e-8
TPU, but we still get a OOM when scaling to Llama 3.1 405B on v6e-256 with 2D
(FSDP + TP) sharding.
By inspecting the memory profiles, we can infer the following:
* The OOM occurs during the `scan` in the backward pass (judging from the
referenced body computation)
* The OOM occurs because the compiler emits a convolution (convolution.171)
whose output shape is [1, 4K, 16K].
* That output tensor is then all-reduced over the FSDP axis (judging from the
replica groups), keeping the shape unchanged.
* The all-reduced tensor gets written to a `[126, 4K, 16K]` stacked output
tensor. This tensor is too large to materialize in a single chip so
compilation fails. Note that 126 is the number of layers in Llama 3.1 405B.
We deduced that the convolution before the all-reduce is computing the gradient
for the weight tensor of the
<code>[o_proj operation in self attention][o_proj]</code>:
```python
# Code snippet of Llama self attention
attn_output = attn_output.transpose(1, 2).contiguous()
attn_output = attn_output.reshape(bsz, q_len, self.hidden_size)
attn_output = self.o_proj(attn_output) # <--- here
return attn_output
```
During the backward pass, we will compute `grad_o_proj` which is a matmul of a
2D sharded input with a 2D sharded `attn_output`. Based on the profile, this
gradient tensor is only 1D sharded: its shape is `[1, 4K, 16K]`, where `16K` is
the size of the embedding dim. We expect it to have the shape of `[1, 4K, 4K]`.
## Breakdown of the problem
When GSPMD propagates 2D sharding annotations over a matmul, and the contraction
dim has matching sharding annotations:
```math
A[M_X, N_Y] \cdot B[N_Y , M_X] = C[M_?, M_?]
```
(using [scaling book sharding notation][scaling-book])
Dimension $N$ is contracted away. The mesh axis $X$ also disappears. Based on my
understanding of the GSPMD paper, the result will only be 1D sharded, barring
influence from any other operations. Therefore $C$ is only 1D sharded. Since $C$
is a gradient tensor and `scan` outputs a stacked array of all gradients for all
126 Llama 3.1 405B layers during the backward pass, this 1D sharding goes on to
"infect" the stacked array with a leading dim size of 126, resulting in an array
of shape `[126, 4K, 16K]`, which is too large to fit in HBM.
## Pitch
I followed the [HLO spec][shard_as] the [JAX implementation][shard_alike] to add
a `shard_as` function to PyTorch/XLA and use it in `scan` during the backward pass.
[PR here](https://github.com/pytorch/xla/pull/8879). `shard_as` will ensure that the inputs have the same sharding after GSPMD sharding propagation. Specifically, instead of scanning over the decoder layer's backward pass during the backward of scan,
we'll scan over a wrapper that adds additional sharding constraints to shard
the gradients the same way as their corresponding inputs:
```python
# This backward pass wrapper calls the original backward pass of a layer, and then use `shard_as` to ensure that
# the carry is sharded the same as grad_carry, and the grad_x (gradient for input) is sharded the same as the
# first element of the stacked input array.
def _backward_shard_alike(carry, x, backward, init, xs):
grad_carry, grad_x = backward(carry, x)
# Propagate sharding between forward inputs and backward outputs.
_, grad_carry = shard_as(init, grad_carry)
_, grad_x = shard_as(tree_map(lambda v: v[0], xs), grad_x)
return grad_carry, grad_x
```
The PR also has a unit test that checks the result of sharding propagation and
fails if we remove the `shard_as` usage from `scan`.
## Alternatives
Rather than using `shard_as`, we could expose a keyword argument on `scan` that
takes in the intended sharding annotation of all the weights during the backward
pass of a layer. Potentially, the user may specify that the gradient for the
`o_proj` weight should be sharded a certain way. There are some drawbacks:
- Since `scan` lowers the combine function using AOTAutograd into a functional
graph, we can't tell the tensors from each other. We don't even know what is
the variable name that corresponds to some specific output of an FX graph
extracted by AOTAutograd.
- SPMD and `scan` are orthogonal concerns and it's a code smell to expose both
APIs in one function.
In contrast, `shard_as` doesn't require telling tensors apart. It just says to
constrain the sharding of the N gradient tensors to be the same as the N input
tensors.
## Additi
|
https://github.com/pytorch/xla/issues/8883
|
closed
|
[
"enhancement",
"distributed"
] | 2025-03-25T22:14:04Z
| 2025-03-29T03:26:36Z
| 0
|
tengyifei
|
huggingface/chat-ui
| 1,772
|
USE_LOCAL_WEBSEARCH No results found for this search query
|
## Bug description
With `USE_LOCAL_WEBSEARCH=true`, Web Search always reports _No results found for this search query_.
## Steps to reproduce
- enable search
- enter and submit question
## Screenshots
<img width="488" alt="Image" src="https://github.com/user-attachments/assets/b948b629-ff67-4edb-9f7c-25ca9d3d1325" />
## Context
I'm running chat-ui-db using podman on an M1 Macbook. I'm using LM Studio as the model provider.
`podman run --rm --mount type=bind,source="$(pwd)/.env.local",target=/app/.env.local -v chat-ui:/data -p 3000:3000 ghcr.io/huggingface/chat-ui-db`
### Logs
```
{"level":50,"time":1742937489975,"pid":18,"hostname":"bbd76a6649ad","msg":"No results found for this search query"}
```
### Specs
- **OS**: macOS 15.3.1 (24D70)
- **Browser**: Firefox 136.0.2 (aarch64)
- **chat-ui commit**: ghcr.io/huggingface/chat-ui-db f679ed220b9b
### Config
_.env.local_
```
HF_TOKEN=hf_...
MODELS=`[
{
"name": "LM Studio",
"endpoints": [{
"type" : "openai",
"baseURL": "http://host.docker.internal:1234/v1"
}],
},
]`
USE_LOCAL_WEBSEARCH=true
WEBSEARCH_JAVASCRIPT=true
```
|
https://github.com/huggingface/chat-ui/issues/1772
|
open
|
[
"bug",
"help wanted",
"websearch"
] | 2025-03-25T21:28:11Z
| 2025-10-22T21:13:54Z
| 6
|
brechtm
|
huggingface/chat-ui
| 1,771
|
Client disconnects before response is received
|
## Bug description
If an answer takes several minutes to complete, the chat-ui client simply disconnects. The disconnection seems to happen at around 1 minute, but I'm not sure.
## Steps to reproduce
Ask your LLM a riddle but change it a little, so it becomes confused and ponders for a while:
"A man and a goat are on one side of a river with a boat. How do they get across?"
Notice that the response is terminated during the thinking/reasoning phase.
The LM Studio logs indicate that the client disconnects, so it terminates the response at that point.
## Screenshots
## Context
### Logs
This request is terminated as 1min in the browser.
```
curl 'https://example.com/conversation/67e1af3d9becaf215b19d526' \
-X 'POST' \
-H 'Content-Type: multipart/form-data; boundary=----WebKitFormBoundarywFDiAu9glkYBEPBf' \
-H 'Accept: */*' \
--data-binary $'------WebKitFormBoundarywFDiAu9glkYBEPBf\r\nContent-Disposition: form-data; name="data"\r\n\r\n{"id":"91f280d4-9852-4453-b941-582eb531e911","is_retry":true,"is_continue":false,"web_search":false,"tools":[]}\r\n------WebKitFormBoundarywFDiAu9glkYBEPBf--\r\n'
```
### Specs
- **OS**: OS X
- **Browser**: Orion
- **chat-ui commit**: chat-ui-db image: `ghcr.io/huggingface/chat-ui-db@sha256:a69b02884d0de64bb60d8011828b0e4be778673cadfc5f783fe6df14fa737504`
### Config
## Notes
How do I configure these timeouts?
|
https://github.com/huggingface/chat-ui/issues/1771
|
open
|
[
"bug"
] | 2025-03-25T19:14:54Z
| 2025-06-14T13:46:28Z
| 3
|
drewwells
|
huggingface/datasets
| 7,477
|
What is the canonical way to compress a Dataset?
|
Given that Arrow is the preferred backend for a Dataset, what is a user supposed to do if they want concurrent reads, concurrent writes AND on-disk compression for a larger dataset?
Parquet would be the obvious answer except that there is no native support for writing sharded, parquet datasets concurrently [[1](https://github.com/huggingface/datasets/issues/7047)].
Am I missing something?
And if so, why is this not the standard/default way that `Dataset`'s work as they do in Xarray, Ray Data, Composer, etc.?
|
https://github.com/huggingface/datasets/issues/7477
|
open
|
[] | 2025-03-25T16:47:51Z
| 2025-04-03T09:13:11Z
| null |
eric-czech
|
huggingface/lerobot
| 901
|
Any tutorial on how to make experiments on the SimXArm enviroment?
|
https://github.com/huggingface/lerobot/issues/901
|
closed
|
[] | 2025-03-25T13:29:59Z
| 2025-03-25T16:42:11Z
| null |
chenkang455
|
|
huggingface/chat-ui
| 1,765
|
`truncate` parameter ignored for OpenAI chat_completions endpoint
|
## Bug description
The `truncate` parameter in the ChatUI configuration is not being applied when using the OpenAI chat_completions endpoint.
## Root Cause
The issue arises because the chat_completions endpoint does not utilize the buildPrompt function where the `truncate` parameter is handled. The logic for truncation is solely within buildPrompt and is therefore bypassed entirely when processing chat_completions requests. This means there's no truncation mechanism applied to the chat history before it's sent to vllm-openai or OpenAI.
#1654
|
https://github.com/huggingface/chat-ui/issues/1765
|
open
|
[
"bug"
] | 2025-03-25T10:13:40Z
| 2025-03-25T10:20:33Z
| 0
|
calycekr
|
huggingface/finetrainers
| 350
|
how to train wan using 8 GPUs
|
I notice that there are only 4-GPU scripts; even though I modified the script for 8-GPU training, it produces some errors.
|
https://github.com/huggingface/finetrainers/issues/350
|
open
|
[] | 2025-03-25T05:02:18Z
| 2025-05-06T14:54:50Z
| null |
tanshuai0219
|
pytorch/xla
| 8,876
|
Missing torch-xla-gpu-plugin
|
A user reported the following issue:
we have been trying to use `torch-xla` nightly builds to get around some of the slowness issues seen in torch-xla 2.5. We found `torch-xla` nightly builds for GPU under `gs://pytorch-xla-releases/wheels/cuda/12.6`, however these don’t contain `torch-xla-gpu-plugin` (this was present for older `torch-xla` versions e.g. `gs://pytorch-xla-releases/wheels/cuda/12.1/torch_xla_cuda_plugin-2.6.0-py3-none-any.whl`). Is there any location that contains the cuda plugin nightly builds for torch-xla 2.8.0?
|
https://github.com/pytorch/xla/issues/8876
|
open
|
[
"xla:gpu"
] | 2025-03-24T18:03:36Z
| 2025-04-02T14:25:22Z
| 11
|
tengyifei
|
pytorch/xla
| 8,874
|
Contribution suggestion?
|
## ❓ Questions and Help
I want to gain a deeper understanding of pytorch/xla by contributing to it. I notice that the majority of the [issues with the "good first issue" tag](https://github.com/pytorch/xla/issues?q=is%3Aissue%20state%3Aopen%20label%3A%22good%20first%20issue%22) are of one kind (i.e. Op info test) and were created a long time ago. I am not sure if they are still relevant. Other than that, I don't know how to find good issues to work on. Can I assume that any issue from the issue list without an assignee is open to all contributors? Or should there be a tag for all the issues that are available for public contribution?
Thanks!
|
https://github.com/pytorch/xla/issues/8874
|
closed
|
[
"question"
] | 2025-03-24T17:03:26Z
| 2025-03-24T18:24:49Z
| null |
iwknow
|
huggingface/diffusers
| 11,147
|
[LTX0.9.5] make LTX0.9.5 works with text-to-video
|
see more context here https://github.com/huggingface/diffusers/issues/11143#issuecomment-2747390564
|
https://github.com/huggingface/diffusers/issues/11147
|
closed
|
[
"help wanted"
] | 2025-03-24T09:56:47Z
| 2025-04-04T14:43:16Z
| 9
|
yiyixuxu
|
huggingface/search-and-learn
| 47
|
How to run this project on CPU?
|
Hello, I'm trying to run this project's code on CPU.
The graphics card I have now is a 4060 Ti, but even with the lightest options (minimum batch size, the 1.5B model, etc.), I couldn't run the project due to memory capacity issues.
So I want to move this project to CPU and see the results, even if it takes some time.
However, even though all settings and code have been checked, the flash-attention backend is selected automatically and I'm having trouble resolving the resulting error.
So I would like to ask whether this project can be run on CPU through vLLM setting changes alone.
|
https://github.com/huggingface/search-and-learn/issues/47
|
open
|
[] | 2025-03-24T01:13:44Z
| 2025-03-24T01:13:44Z
| null |
pss0204
|
pytorch/pytorch
| 149,826
|
How to handle dynamic output size with torch.onnx.export (through dynamo) for Resize
|
### 🐛 Describe the bug
I would like to export with torch.onnx.export (through dynamo) some code that contains a resize operation. The output width and height is dynamic. An example model is as follows:
```
import torch
class Model(torch.nn.Module):
def __init__(self):
super().__init__()
def forward(self, x, size):
y = torch.nn.functional.interpolate(x, size=size.tolist())
return y
model = Model()
x = torch.rand(1, 3, 400, 500)
size = torch.tensor([1024, 1024]).to(torch.int32)
y = model(x, size)
onnx_model = torch.onnx.export(model, (x, size), dynamo=True)
```
The code throws the following error:
```
<class 'RuntimeError'>: /pytorch/build/aten/src/ATen/RegisterCompositeImplicitAutograd.cpp:5615: SymIntArrayRef expected to contain only concrete integers
While executing %upsample_nearest2d : [num_users=1] = call_function[target=torch.ops.aten.upsample_nearest2d.vec](args = (%x, [%_local_scalar_dense, %_local_scalar_dense_1], None), kwargs = {})
Original traceback:
File "/tmp/test.py", line 11, in forward
y = torch.nn.functional.interpolate(x, size=size.tolist())
```
The interpolate function doesn't accept a tensor as the size argument, so I somehow have to convert it to a list. That fails with the error shown. I can hardcode the list to fixed sizes, but then I cannot accept images of different sizes at inference time.
How can I address this issue?
### Error logs
_No response_
### Versions
Collecting environment information...
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.2 LTS (x86_64)
GCC version: (Ubuntu 14.2.0-4ubuntu2~24.04) 14.2.0
Clang version: 19.1.1 (1ubuntu1~24.04.2)
CMake version: version 3.28.3
Libc version: glibc-2.39
Python version: 3.12.3 (main, Feb 4 2025, 14:48:35) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-6.11.0-19-generic-x86_64-with-glibc2.39
Is CUDA available: True
CUDA runtime version: 12.0.140
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA RTX 500 Ada Generation Laptop GPU
Nvidia driver version: 550.120
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 22
On-line CPU(s) list: 0-21
Vendor ID: GenuineIntel
Model name: Intel(R) Core(TM) Ultra 7 155H
CPU family: 6
Model: 170
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 1
Stepping: 4
CPU(s) scaling MHz: 22%
CPU max MHz: 4800.0000
CPU min MHz: 400.0000
BogoMIPS: 5990.40
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb intel_ppin ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq rdpid bus_lock_detect movdiri movdir64b fsrm md_clear serialize arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 544 KiB (14 instances)
L1i cache: 896 KiB (14 instances)
L2 cache: 18 MiB (9 instances)
L3 cache: 24 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-21
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disab
|
https://github.com/pytorch/pytorch/issues/149826
|
closed
|
[
"module: onnx",
"triaged",
"oncall: pt2"
] | 2025-03-23T09:27:37Z
| 2025-04-24T15:17:47Z
| null |
FabianSchuetze
|
pytorch/pytorch
| 149,771
|
How to remove the “internal api” notice?
|
### 📚 The doc issue
What is the option that will remove this notice?
> This page describes an internal API which is not intended to be used outside of the PyTorch codebase and can be modified or removed without notice.
We would like to remove it for https://pytorch.org/docs/stable/onnx_dynamo.html and a few onnx pages.
@svekars
### Suggest a potential alternative/fix
_No response_
cc @svekars @sekyondaMeta @AlannaBurke
|
https://github.com/pytorch/pytorch/issues/149771
|
closed
|
[
"module: docs",
"triaged"
] | 2025-03-21T22:46:30Z
| 2025-03-27T22:02:25Z
| null |
justinchuby
|
huggingface/datasets
| 7,473
|
Webdataset data format problem
|
### Describe the bug
Please see https://huggingface.co/datasets/ejschwartz/idioms/discussions/1
Error code: FileFormatMismatchBetweenSplitsError
All three splits, train, test, and validation, use webdataset. But only the train split has more than one file. How can I force the other two splits to also be interpreted as being the webdataset format? (I don't think there is currently a way, but happy to be told that I am wrong.)
### Steps to reproduce the bug
```
import datasets
datasets.load_dataset("ejschwartz/idioms")
```
### Expected behavior
The dataset loads. Alternatively, there is a YAML syntax for manually specifying the format.
### Environment info
- `datasets` version: 3.2.0
- Platform: Linux-6.8.0-52-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- `huggingface_hub` version: 0.28.1
- PyArrow version: 19.0.0
- Pandas version: 2.2.3
- `fsspec` version: 2024.9.0
|
https://github.com/huggingface/datasets/issues/7473
|
closed
|
[] | 2025-03-21T17:23:52Z
| 2025-03-21T19:19:58Z
| 1
|
edmcman
|
huggingface/datasets
| 7,470
|
Is it possible to shard a single-sharded IterableDataset?
|
I thought https://github.com/huggingface/datasets/pull/7252 might be applicable but looking at it maybe not.
Say we have a process, e.g. a database query, that can return data in a slightly different order each time. So the initial query needs to be run by a single thread (and running it multiple times would incur more cost anyway). But the results are also big enough that we don't want to materialize them entirely, and instead want to stream them with an IterableDataset.
But after we have the results we want to split it up across workers to parallelize processing.
Is something like this possible to do?
Here's a failed attempt. The end result should be that each of the shards has unique data, but unfortunately with this attempt the generator gets run once in each shard and the results end up with duplicates...
```
import random
import datasets
def gen():
print('RUNNING GENERATOR!')
items = list(range(10))
random.shuffle(items)
yield from items
ds = datasets.IterableDataset.from_generator(gen)
print('dataset contents:')
for item in ds:
print(item)
print()
print('dataset contents (2):')
for item in ds:
print(item)
print()
num_shards = 3
def sharded(shard_id):
for i, example in enumerate(ds):
if i % num_shards in shard_id:
yield example
ds1 = datasets.IterableDataset.from_generator(
sharded, gen_kwargs={'shard_id': list(range(num_shards))}
)
for shard in range(num_shards):
print('shard', shard)
for item in ds1.shard(num_shards, shard):
print(item)
```
|
https://github.com/huggingface/datasets/issues/7470
|
closed
|
[] | 2025-03-21T04:33:37Z
| 2025-11-22T07:55:43Z
| 6
|
jonathanasdf
|
huggingface/lerobot
| 884
|
[Question] Support of PointCloud
|
Hi,
I'm currently developing a plugin for lerobot and would like to know if there are any plans to support PointCloud data.
Additionally, I'd like to ask if there is a recommended storage format for handling PointCloud data within the project.
Looking forward to your response.
Thanks
|
https://github.com/huggingface/lerobot/issues/884
|
closed
|
[
"question",
"dataset",
"stale"
] | 2025-03-21T04:29:15Z
| 2025-10-07T02:26:39Z
| null |
yilin404
|
huggingface/inference-benchmarker
| 4
|
Can I use a local model's tokenizer and a local dataset?
|
Hello, may I specify the paths of the locally downloaded model and dataset through the ./inference-benchmarker command, instead of accessing Hugging Face via the network?
|
https://github.com/huggingface/inference-benchmarker/issues/4
|
open
|
[
"question"
] | 2025-03-21T01:55:03Z
| 2025-03-27T18:44:04Z
| null |
handsome-chips
|
pytorch/torchx
| 1,021
|
Suggested way to get timestamp of the job submission?
|
## Description
Hi team, I am looking for a way to get the exact timestamp when the command `torchx run` is being run. Is there a formal way that is scheduler / component agnostic? The timestamp should be accessible from the training app.
## Motivation/Background
The actual use case is to calculate the overhead between job launch and the time when the training container spins up and finishes the first batch.
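One scheduler-agnostic idea, as a rough sketch only (the component name, flags, and environment propagation are assumptions and depend on the scheduler): record the timestamp in a thin wrapper around `torchx run` and hand it to the app.
```python
import os
import subprocess
import time

# Record the submission timestamp just before handing off to `torchx run`.
submit_ts = time.time()

# The component and flags below are illustrative; for remote schedulers the
# timestamp would have to be forwarded as an explicit component argument
# instead of relying on environment inheritance.
subprocess.run(
    ["torchx", "run", "utils.python", "--script", "train.py"],
    env={**os.environ, "TORCHX_SUBMIT_TS": str(submit_ts)},
    check=True,
)
```
Inside the training app, `float(os.environ["TORCHX_SUBMIT_TS"])` (if the variable reaches the container) can then be compared against the wall-clock time of the first completed batch.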
|
https://github.com/meta-pytorch/torchx/issues/1021
|
open
|
[] | 2025-03-20T21:58:47Z
| 2025-03-20T21:59:41Z
| 0
|
HanFa
|
huggingface/video-dataset-scripts
| 20
|
How can a parquet file be converted to the training dataset format for finetrainers?
|
How can a parquet file be converted to the training dataset format expected by finetrainers?
|
https://github.com/huggingface/video-dataset-scripts/issues/20
|
closed
|
[] | 2025-03-20T16:22:39Z
| 2025-04-10T17:46:06Z
| null |
kanghua309
|
pytorch/pytorch
| 149,586
|
UserWarning: Dynamo does not know how to trace the builtin `None.pybind11_object.__new__.`
|
### 🐛 Describe the bug
I'm filing an issue since this is a Python built-in (granted, the error message implies that it is not, since it references PyBind11, but I'm opening an issue anyway since the warning is triggered by returning/using `None` in a compiled function).
### Versions
2.7.0a0+gitebd087e
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames @zou3519 @ydwu4 @xmfan @bdhirsh @Chillee @drisspg @yanboliang @BoyuanFeng
|
https://github.com/pytorch/pytorch/issues/149586
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamo",
"module: higher order operators",
"module: compiled autograd",
"module: pt2-dispatcher",
"module: flex attention"
] | 2025-03-20T00:32:49Z
| 2025-03-21T19:28:30Z
| null |
cora-codes
|
pytorch/xla
| 8,862
|
Replace `xm.mark_step` with `torch_xla.sync()` in examples and tests
|
`torch_xla.sync()` is easier to spell than `xm.mark_step()`. We should at least replace `mark_step` in all public examples.
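For concreteness, the substitution in examples is a one-line change (a sketch; both calls flush the pending lazy graph):
```python
import torch_xla
import torch_xla.core.xla_model as xm

# Old spelling, still common throughout examples and tests:
xm.mark_step()

# Preferred spelling:
torch_xla.sync()
```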
|
https://github.com/pytorch/xla/issues/8862
|
closed
|
[
"enhancement",
"usability",
"documentation"
] | 2025-03-19T22:25:09Z
| 2025-05-16T17:56:25Z
| 1
|
tengyifei
|
pytorch/xla
| 8,861
|
Document the difference between `device=` vs `.to(device)`
|
## 📚 Documentation
There's a subtle difference between `torch.foo(device=xla)` and `torch.foo().to(xla)`, and we should document this in a FAQ section or similar. The first one runs `foo` on the TPU. The second one runs `foo` on the CPU and then moves the resulting buffer to the TPU.
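A short illustration of the two spellings (a sketch; `xm.xla_device()` is assumed to return the TPU-backed XLA device):
```python
import torch
import torch_xla.core.xla_model as xm

device = xm.xla_device()

# Created directly on the XLA device: the op itself runs on the TPU.
a = torch.randn(2, 2, device=device)

# Created on the CPU first; only the resulting buffer is moved to the TPU.
b = torch.randn(2, 2).to(device)
```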
|
https://github.com/pytorch/xla/issues/8861
|
closed
|
[
"enhancement",
"good first issue",
"documentation"
] | 2025-03-19T22:23:19Z
| 2025-06-12T06:07:46Z
| 2
|
tengyifei
|
pytorch/xla
| 8,859
|
Improve `torch_xla.compile` documentation
|
## 📚 Documentation
The best doc I could find that mentions this is https://pytorch.org/xla/release/r2.5/eager_mode.html. However, `torch_xla.compile` is usable separately from PyTorch/XLA eager mode, and we should make it more front-and-center compared to `mark_step`.
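A minimal usage sketch, under the assumption that `torch_xla.compile` simply wraps a callable (the model and inputs are placeholders):
```python
import torch
import torch_xla
import torch_xla.core.xla_model as xm

device = xm.xla_device()
model = torch.nn.Linear(4, 2).to(device)

def step(x):
    return model(x)

# Wrap the step so its traced graph is compiled and reused across calls.
compiled_step = torch_xla.compile(step)
out = compiled_step(torch.randn(8, 4, device=device))
```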
|
https://github.com/pytorch/xla/issues/8859
|
closed
|
[
"enhancement",
"good first issue",
"documentation"
] | 2025-03-19T22:15:04Z
| 2025-05-30T04:11:41Z
| 0
|
tengyifei
|
pytorch/xla
| 8,858
|
Document the difference between tracing time and execution time
|
## 📚 Documentation
If we write a loop like
```
start = time.time()
for step in range(num_steps):
run_model()
xm.mark_step()
end = time.time()
```
Then `end - start` will only measure the tracing time. We'll need to do `torch_xla.sync(wait=True)` to block on device execution to measure the execution time.
We should document this in a "common FAQs / sharp edges" section or similar.
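A self-contained sketch of the execution-time variant (placeholder model and step function; the key point is the final `torch_xla.sync(wait=True)` mentioned above):
```python
import time
import torch
import torch_xla
import torch_xla.core.xla_model as xm

device = xm.xla_device()
model = torch.nn.Linear(4, 2).to(device)
num_steps = 10

def run_model():
    return model(torch.randn(8, 4, device=device))

start = time.time()
for _ in range(num_steps):
    run_model()
    torch_xla.sync()        # same role as xm.mark_step(): dispatch the traced graph
torch_xla.sync(wait=True)   # block until the device has actually finished
end = time.time()           # end - start now includes execution, not just tracing
```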
|
https://github.com/pytorch/xla/issues/8858
|
closed
|
[
"enhancement",
"good first issue",
"documentation"
] | 2025-03-19T22:13:49Z
| 2025-05-30T04:10:37Z
| 4
|
tengyifei
|
pytorch/torchtitan
| 987
|
Is EP (Expert Parallelism) coming ?
|
Currently TorchTitan supports PP, CP, FSDP, and TP parallelisms. Is there a plan to support Expert Parallelism (EP)? Along the same lines, I see some DeepSeek files in the repo. Is there a plan to support DeepSeek training on TorchTitan?
|
https://github.com/pytorch/torchtitan/issues/987
|
closed
|
[
"question"
] | 2025-03-19T21:41:14Z
| 2025-03-24T17:21:20Z
| null |
githubsgi
|
pytorch/torchtitan
| 986
|
Is a PP+FSDP+TP config + toml available for pre-training 405B model ?
|
Would appreciate it if someone could share a toml file for PP+FSDP+TP on the 405B model.
|
https://github.com/pytorch/torchtitan/issues/986
|
closed
|
[] | 2025-03-19T21:35:43Z
| 2025-08-21T03:11:32Z
| 3
|
githubsgi
|
pytorch/vision
| 8,986
|
Speed up JPEG decoding by allowing resize during decode
|
### 🚀 The feature
Torchvision's `read_image` currently decodes JPEG images at full resolution. However, both `libjpeg` and `libjpeg-turbo` support decoding at lower resolutions (1/2, 1/4, 1/8 of the original size).
Introducing a `size_hint` parameter would allow users to specify an approximate target size, with `torchvision` selecting the closest larger available scale factor and downscaling the JPEG image during decoding.
Example Usage:
```python
from torchvision.io.image import decode_image
tensor = decode_image("image.jpeg", size_hint=(224, 224))
```
### Motivation, pitch
- Many ML pipelines process images at fixed sizes (e.g., 224x224 for ImageNet models). Decoding large images only to downscale them later is inefficient.
- This can improve memory usage as we do not need to hold the full-sized image in the memory.
- Pillow provides a similar feature via [`Image.draft`](https://pillow.readthedocs.io/en/stable/reference/Image.html#PIL.Image.Image.draft), allowing for approximate size-based decoding; see the sketch after this list.
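For reference, the Pillow equivalent looks roughly like this (a sketch; the file name and target size are placeholders):
```python
from PIL import Image

with Image.open("image.jpeg") as im:
    # Ask the JPEG decoder for an approximate size; it picks the closest
    # larger 1/1, 1/2, 1/4, or 1/8 scale supported by libjpeg.
    im.draft("RGB", (224, 224))
    im.load()
    print(im.size)  # already reduced at decode time, before any explicit resize
```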
### Alternatives
- Using Pillow for decoding with downscaling, but torchvision’s native decoder is typically faster than decoding using Pillow and then converting to tensor.
- Decode and then resize, but this is inefficient, see benchmark below.
### Additional context
## Benchmark
We implemented a proof-of-concept and ran performance tests on decoding a 1920x1080 image into 960x540.
We compared the following:
- Use existing `decode_jpeg` and resize after.
- Patch `decode_jpeg` to allow `libjpeg` / `libjpeg-turbo` downscaling via the `size_hint` parameters.
Benchmark results (1000 iters):
```
9.91s call .../test_jpeg.py::test_torchvision_image_load_with_resize_960_540
4.00s call .../test_jpeg.py::test_fastjpeg_image_load_with_size_hint_960_540
```
~2.5X speed up.
I'm happy to contribute a patch if people consider this useful.
|
https://github.com/pytorch/vision/issues/8986
|
open
|
[] | 2025-03-19T19:08:46Z
| 2025-04-29T07:32:47Z
| 3
|
gyf304
|
huggingface/trl
| 3,114
|
What is the reason for using only one GPU when integrating with vLLM?
|
At [line](https://github.com/huggingface/trl/blob/main/trl/trainer/grpo_trainer.py#L507) of the code, when using vllm, a single GPU device is specified. In practice, however, it is quite common to run a single vllm instance across multiple GPUs.
1. What is the reason that the code is designed to only select a single GPU?
2. Where does the '**device**' parameter of this LLM interface eventually get passed? When I stepped into this function, I couldn't find where the parameter is handled (this might be a very basic question).
3. When I changed the '**device**' parameter to **tensor_parallel_size** (and also set the world_size and other parameters), an error occurred.
I've noticed that some other PRs have made modifications to the multi-GPU usage of vllm, but not at the interface where [LLM is used](https://github.com/huggingface/trl/blob/main/trl/trainer/grpo_trainer.py#L507). I'm curious about the reasons behind this.
If anyone is willing to answer me, I would be very grateful.
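For context, plain multi-GPU vLLM initialization outside of TRL looks like this (a sketch with a placeholder model; whether this can be dropped into `GRPOTrainer` is exactly the open question here):
```python
from vllm import LLM, SamplingParams

# Standard vLLM usage with tensor parallelism across 2 GPUs; GRPOTrainer
# instead pins the engine to a single device, which is what this issue asks about.
llm = LLM(model="Qwen/Qwen2.5-0.5B-Instruct", tensor_parallel_size=2)
outputs = llm.generate(["Hello"], SamplingParams(max_tokens=16))
```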
|
https://github.com/huggingface/trl/issues/3114
|
closed
|
[
"❓ question",
"🏋 GRPO"
] | 2025-03-19T16:20:03Z
| 2025-04-05T17:01:33Z
| null |
spencergotowork
|
huggingface/smollm
| 67
|
How to fine tune smolvlm on OCR
|
Is there any guide to fine-tune SmolVLM on OCR, like in https://huggingface.co/ds4sd/SmolDocling-256M-preview?
|
https://github.com/huggingface/smollm/issues/67
|
open
|
[
"Image"
] | 2025-03-19T14:17:33Z
| 2025-07-29T13:09:05Z
| null |
abdelkareemkobo
|
huggingface/peft
| 2,436
|
Fine-tuning with Multiple LoRAs
|
Thanks for your valuable work!
I would like to know if it's possible to jointly train two LoRAs while only loading one base model. The overall output depends on the respective outputs of LoRA1 and LoRA2. For example, logits1 is obtained from the base model with LoRA1, and logits2 is obtained from the base model with LoRA2. I have tried the following code
```python
model.add_adapter(lora_1)
model.add_adapter(lora_2)
model.enable_adapters()
model.set_adapter("lora_1")
logits1 = model(input_ids).logits # use model with lora1 to get output
model.set_adapter("lora_2")
logits2 = model(input_ids).logits # use model with lora2 to get output
logits = logits1+logits2
loss=loss_fct(logits, labels)
loss.backward()
```
but it seems there might be some issues:
1. Once set_adapter(lora2) is called, LoRA1 no longer receives gradients;
2. If I modify the source code of set_adapter to make both requires_grad=True, would that be correct?
What I'm confused about is, after I execute set_adapter(lora2), does the model perform computations using the base model with LoRA2 (as I hope), or does it use the base model with both LoRA1 and LoRA2 combined?
I'm looking forward to your help! Thank you!
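For reference, a minimal sketch of registering two named LoRA adapters with PEFT's own API, rather than the transformers `add_adapter` helper used above (the model name and ranks are placeholders):
```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base_model = AutoModelForCausalLM.from_pretrained("gpt2")  # placeholder base model

# Register two independently named LoRA adapters on the same base model.
model = get_peft_model(base_model, LoraConfig(r=8), adapter_name="lora_1")
model.add_adapter("lora_2", LoraConfig(r=8))

# Switch which adapter is active for the next forward pass.
model.set_adapter("lora_1")
```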
|
https://github.com/huggingface/peft/issues/2436
|
closed
|
[] | 2025-03-19T13:49:28Z
| 2025-07-19T05:45:12Z
| 7
|
xymou
|
huggingface/setfit
| 590
|
How do I disable requests to huggingface.co:443 after training?
|
I'm currently evaluating setfit in a proof of concept situation. Unfortunately, I'm working behind a company firewall, where I do not have access to the world wide web, only to company-internal URLs.
That's a bit annoying in terms of downloading models, but I can work around that. More importantly, it seems there are calls to huggingface.co:443 after the training is done, which obviously cannot succeed due to the blocked internet access.
That wouldn't be a big problem if the timeout were 1 minute or so, but it seems to be more like 5-10 minutes, which is a lot of time wasted just waiting for the results.
How can I disable these blocking HTTP requests?
My minimal training pipeline looks somewhat like this (shortened for readability, especially data loading):
```
model = SetFitModel.from_pretrained(
"/local/path/local-bge-small-en-v1.5",
local_files_only=True,
multi_target_strategy="multi-output",
)
train_dataset, test_dataset = a_bunch_of_loading_and_sampling_code_thats_irrelevant_here()
args = TrainingArguments(
batch_size=128,
num_epochs=10,
report_to=None
)
trainer = Trainer(
model=model,
args=args,
train_dataset=train_dataset,
metric="f1",
callbacks=None,
column_mapping={"column": "mapping"},
metric_kwargs={"average": "samples"}
)
trainer.train()
```
After all training steps are done, I get the following console logs:
```
INFO:sentence_transformers.trainer:Saving model checkpoint to checkpoints/checkpoint-258
INFO:sentence_transformers.SentenceTransformer:Save model to checkpoints/checkpoint-258
Request [id]: GET https://huggingface.co/api/models/setfit-test/local-bge-small-en-v1.5 (authenticated: False)
DEBUG:huggingface_hub.utils._http:Request [id]: GET https://huggingface.co/api/models/setfit-test/local-bge-small-en-v1.5 (authenticated: False)
DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): huggingface.co:443
```
Then nothing happens for about 10 minutes, before I get a "Batches: 100% [tqdm progress bar]", which then finishes almost immediately.
Is there any parameter I can set to disable this call to huggingface? "report_to=None" or "callbacks=None" don't seem to do the trick.
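One workaround worth trying, as an assumption on my side rather than a verified fix: put the Hub client into offline mode before importing setfit, so the model-metadata lookup shown in the log fails immediately instead of waiting on the blocked connection.
```python
import os

# Force huggingface_hub into offline mode before any Hub-aware imports.
os.environ["HF_HUB_OFFLINE"] = "1"

from setfit import SetFitModel, Trainer, TrainingArguments
```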
|
https://github.com/huggingface/setfit/issues/590
|
open
|
[] | 2025-03-19T08:42:12Z
| 2025-03-19T18:44:12Z
| null |
AdrianSchneble
|
huggingface/diffusers
| 11,114
|
channel inconsistency in cogvideo Lora training example
|
### Describe the bug
While using the training script at https://github.com/huggingface/diffusers/blob/main/examples/cogvideo/train_cogvideox_image_to_video_lora.py, I made a dataset as described in the README and ran training, but a bug occurred during the forward pass: the transformer expects 16 input channels while `model_input` has 32 channels.
How can I fix it?
### Reproduction
```python
# Sample noise that will be added to the latents
noise = torch.randn_like(video_latents)

# Add noise to the model input according to the noise magnitude at each timestep
# (this is the forward diffusion process)
noisy_video_latents = scheduler.add_noise(video_latents, noise, timesteps)
noisy_model_input = torch.cat([noisy_video_latents, image_latents], dim=2)

# Prepare rotary embeds
image_rotary_emb = (
    prepare_rotary_positional_embeddings(
        height=args.height,
        width=args.width,
        num_frames=num_frames,
        vae_scale_factor_spatial=vae_scale_factor_spatial,
        patch_size=model_config.patch_size,
        attention_head_dim=model_config.attention_head_dim,
        device=accelerator.device,
    )
    if model_config.use_rotary_positional_embeddings
    else None
)

# Predict the noise residual
model_output = transformer(
    hidden_states=noisy_model_input,
    encoder_hidden_states=prompt_embeds,
    timestep=timesteps,
    image_rotary_emb=image_rotary_emb,
    return_dict=False,
)[0]
```
### Logs
```shell
[rank0]: File "train_cogvideox_i_t2v_lora_raw.py", line 1426, in main
[rank0]: model_output = transformer(
[rank0]: ^^^^^^^^^^^^
[rank0]: File "/share/home/u21012/.conda/envs/snvds/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
[rank0]: return self._call_impl(*args, **kwargs)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/share/home/u21012/.conda/envs/snvds/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
[rank0]: return forward_call(*args, **kwargs)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/share/home/u21012/.conda/envs/snvds/lib/python3.11/site-packages/torch/nn/parallel/distributed.py", line 1643, in forward
[rank0]: else self._run_ddp_forward(*inputs, **kwargs)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/share/home/u21012/.conda/envs/snvds/lib/python3.11/site-packages/torch/nn/parallel/distributed.py", line 1459, in _run_ddp_forward
[rank0]: return self.module(*inputs, **kwargs) # type: ignore[index]
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/share/home/u21012/.conda/envs/snvds/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
[rank0]: return self._call_impl(*args, **kwargs)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/share/home/u21012/.conda/envs/snvds/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
[rank0]: return forward_call(*args, **kwargs)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/share/home/u21012/.conda/envs/snvds/lib/python3.11/site-packages/accelerate/utils/operations.py", line 819, in forward
[rank0]: return model_forward(*args, **kwargs)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/share/home/u21012/.conda/envs/snvds/lib/python3.11/site-packages/accelerate/utils/operations.py", line 807, in __call__
[rank0]: return convert_to_fp32(self.model_forward(*args, **kwargs))
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/share/home/u21012/.conda/envs/snvds/lib/python3.11/site-packages/torch/amp/autocast_mode.py", line 44, in decorate_autocast
[rank0]: return func(*args, **kwargs)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/share/home/u21012/.conda/envs/snvds/lib/python3.11/site-packages/diffusers/models/transformers/cogvideox_transformer_3d.py", line 476, in forward
[rank0]: hidden_states = self.patch_embed(encoder_hidden_states, hidden_states)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/share/home/u21012/.conda/envs/snvds/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
[rank0]: return self._call_impl(*args, **kwargs)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/share/home/u21012/.conda/envs/snvds/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
[rank0]: return forward_call(*args, **kwargs)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/share/home/u21012/.conda/envs/snvds/lib/python3.11/site-packages/diffusers/models/embeddings.py", line 715, in forward
[rank0]: image
```
|
https://github.com/huggingface/diffusers/issues/11114
|
open
|
[
"bug",
"stale"
] | 2025-03-19T07:55:00Z
| 2025-04-18T15:02:52Z
| 2
|
MrTom34
|