| repo | number | title | body | url | state | labels | created_at | updated_at | comments | user |
|---|---|---|---|---|---|---|---|---|---|---|
huggingface/diffusers
| 10,796
|
Docs for HunyuanVideo LoRA?
|
### Describe the bug
It seems that LoRA loading for HunyuanVideo has been implemented, so I wonder where I can find the docs for it. Are they missing?
### Reproduction
Search for HunyuanVideo and LoRA
### Logs
```shell
```
### System Info
Not applicable; this concerns the online docs.
### Who can help?
@stevhliu @sayakpaul
|
https://github.com/huggingface/diffusers/issues/10796
|
closed
|
[
"bug",
"stale"
] | 2025-02-15T04:31:34Z
| 2025-06-10T20:52:28Z
| 9
|
tin2tin
|
huggingface/open-r1
| 328
|
How to set generation sampling parameters?
|
I need to use the DeepSeek reference settings of temperature=0.6 and top_p=0.95.
Greedy sampling does poorly on AIME:
## r1-1.5B
- AIME24: 23.33%
I tried to follow the lighteval docs and ran into issues using a model config:
```
model: # Model specific parameters
  base_params:
    model_args: "pretrained=Qwen/Qwen2.5-7B-Instruct,dtype=bfloat16,max_model_length=768,gpu_memory_utilisation=0.7" # Model args that you would pass in the command line
  generation: # Generation specific parameters
    temperature: 1.0
    stop_tokens: null
    truncate_prompt: false
```
run with:
```
TASK=aime24 lighteval vllm \
"config.yaml" \
"custom|$TASK|0|0" \
--custom-tasks tasks.py \
--use-chat-template \
--output-dir ./results/
```
hitting:
```
TypeError: expected str, bytes or os.PathLike object, not dict
```
[ref](https://github.com/huggingface/lighteval/issues/563)
|
https://github.com/huggingface/open-r1/issues/328
|
open
|
[] | 2025-02-14T21:42:28Z
| 2025-02-20T03:28:53Z
| null |
rawsh
|
pytorch/xla
| 8,710
|
Expand troubleshoot instructions
|
## 📚 Documentation
Expand the troubleshooting instructions in https://github.com/pytorch/xla/blob/6f423d0bb284190cf1b12d8a943a334e57b4df28/docs/source/learn/troubleshoot.md to include common errors and new debugging strategies.
|
https://github.com/pytorch/xla/issues/8710
|
open
|
[
"documentation"
] | 2025-02-14T18:54:01Z
| 2025-02-14T18:54:22Z
| 0
|
pgmoka
|
pytorch/xla
| 8,709
|
Add more info to TPU_TOPOLOGY errors
|
## 📚 Documentation
Currently, if a VM is created with an OS that does not support training on the TPU, we get a TPU_TOPOLOGY OS error. We should add documentation to make these errors and their solutions clearer.
|
https://github.com/pytorch/xla/issues/8709
|
open
|
[
"documentation"
] | 2025-02-14T18:49:40Z
| 2025-02-14T18:49:40Z
| 0
|
pgmoka
|
pytorch/vision
| 8,905
|
Can the `_make_divisible` function be explained better?
|
### 📚 The doc issue
I'm referring to the following function: https://github.com/pytorch/vision/blob/main/torchvision/models/_utils.py#L76 I've no doubt that it is correct, but why does it sometimes round down the input and why is the threshold set to 90%? Is the formula from a well-known paper?
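For context, here is a sketch of the rounding rule in question, paraphrased from the MobileNet-style `_make_divisible` helper (the exact torchvision code may differ slightly); the 90% guard is the part being asked about:
```python
def _make_divisible(v, divisor, min_value=None):
    # Round v to the nearest multiple of divisor, but never below min_value.
    if min_value is None:
        min_value = divisor
    new_v = max(min_value, int(v + divisor / 2) // divisor * divisor)
    # If rounding removed more than 10% of the original value, bump up one step,
    # so the result never drops below 90% of v.
    if new_v < 0.9 * v:
        new_v += divisor
    return new_v
```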
### Suggest a potential alternative/fix
_No response_
|
https://github.com/pytorch/vision/issues/8905
|
closed
|
[] | 2025-02-14T17:16:42Z
| 2025-02-14T17:37:01Z
| 1
|
bjourne
|
huggingface/trl
| 2,864
|
How to train GRPO on 2 GPUs, one for training, one for vLLM
|
### Reproduction
When I use `Qwen2.5-3B-instruct` to train GRPO, the vLLM device always hits OOM when loading weights. I used two GPUs with 32GB of memory, one device for training and another for vLLM. I don't know why a 3B model uses so much memory on `device 1`.

arguments settings:
```yaml
per_device_train_batch_size: 8
gradient_accumulation_steps: 8
num_generations: 8
use_vllm: true
vllm_gpu_memory_utilization: 0.8
use_peft: true
lora_r: 64
lora_alpha: 64
load_in_4bit: true
use_bnb_nested_quant: true
attn_implementation: flash_attention_2
bf16: true
...
```
Start command:
```shell
export CUDA_VISIBLE_DEVICES=0,1
accelerate launch --num_processes 1 train_Datawhale-R1.py --config Datawhale-R1.yaml
```
### System Info
- Platform: Linux-5.15.0-78-generic-x86_64-with-glibc2.35
- Python version: 3.10.8
- PyTorch version: 2.5.1
- CUDA device(s): NVIDIA vGPU-32GB, NVIDIA vGPU-32GB
- Transformers version: 4.48.3
- Accelerate version: 1.3.0
- Accelerate config: not found
- Datasets version: 3.1.0
- HF Hub version: 0.27.0
- TRL version: 0.16.0.dev0+ffcb9f4
- bitsandbytes version: 0.45.2
- DeepSpeed version: 0.16.3
- Diffusers version: 0.32.2
- Liger-Kernel version: not installed
- LLM-Blender version: not installed
- OpenAI version: 1.59.7
- PEFT version: 0.14.0
### Checklist
- [x] I have checked that my issue isn't already filed (see [open issues](https://github.com/huggingface/trl/issues?q=is%3Aissue))
- [x] I have included my system information
- [x] Any code provided is minimal, complete, and reproducible ([more on MREs](https://docs.github.com/en/get-started/writing-on-github/working-with-advanced-formatting/creating-and-highlighting-code-blocks))
- [x] Any code provided is properly formatted in code blocks, (no screenshot, [more on code blocks](https://docs.github.com/en/get-started/writing-on-github/working-with-advanced-formatting/creating-and-highlighting-code-blocks))
- [x] Any traceback provided is complete
|
https://github.com/huggingface/trl/issues/2864
|
open
|
[
"⚡ PEFT",
"⏳ needs more info",
"⚡accelerate",
"🏋 GRPO"
] | 2025-02-14T15:00:58Z
| 2025-03-12T12:00:10Z
| null |
AIR-hl
|
huggingface/peft
| 2,377
|
Contributing new model merging method to PEFT
|
### Feature request
Hi all,
I noticed that several model merging methods, such as TIES and DARE, have been implemented in this library, as mentioned [here](https://github.com/huggingface/peft/blob/main/docs/source/developer_guides/model_merging.md).
I was wondering if there is a way for me to contribute a recently accepted model merging method to this repo.
I would really appreciate any guidance or suggestions on how to proceed.
Thanks in advance!
### Motivation
Enhance the diversity of model merging supported in this library.
### Your contribution
I can submit a PR.
|
https://github.com/huggingface/peft/issues/2377
|
closed
|
[] | 2025-02-14T12:17:46Z
| 2025-03-24T15:04:11Z
| 2
|
SpeeeedLee
|
pytorch/serve
| 3,391
|
How can a user specify an envelope?
|
### 📚 The doc issue
The `service_envelope` parameter has disappeared from the documentation:
https://pytorch.org/serve/configuration.html#other-properties
The KServe documentation states that this parameter is deprecated:
https://kserve.github.io/website/0.11/modelserving/v1beta1/torchserve/#create-model-storage-with-a-model-archive-file-and-config
and that `enable_envvars_config=true` should now be used instead.
The question is: how can the user now set the envelope type (`json/kserve/kservev2`), and where in the code is it defined?
Where is this shown in the documentation?
### Suggest a potential alternative/fix
_No response_
|
https://github.com/pytorch/serve/issues/3391
|
open
|
[] | 2025-02-14T07:24:11Z
| 2025-02-14T07:24:11Z
| 0
|
yurkoff-mv
|
pytorch/pytorch
| 147,187
|
[torch.export] How to export a model with kv cache
|
### 🐛 Describe the bug
In an attention layer, the KV cache needs a variable number, "start_pos", from outside.
(may related to https://github.com/pytorch/pytorch/issues/146990)
Here is a simplified model for reproducing the issue:
```python
import torch
from torch import nn


class Cache(nn.Module):
    def __init__(self, head_dim):
        super().__init__()
        max_token = 128
        self.register_buffer("cache_k", torch.zeros(
            (1, max_token, head_dim,)), persistent=False)

    def forward(
        self,
        x: torch.Tensor,
        start_pos: torch.Tensor
    ):
        _, seqlen, _ = x.size()
        end_pos = start_pos + seqlen
        self.cache_k[:, start_pos:end_pos, :] = x
        return self.cache_k[:, :end_pos, :]


if __name__ == "__main__":
    from torch.export import Dim

    with torch.no_grad():
        # Prepare the input
        start_pos = torch.scalar_tensor(8, dtype=torch.int32)
        seqlen = 8
        hidden_size = 32
        h = torch.randn(1, seqlen, hidden_size)
        # Prepare the model
        model = Cache(hidden_size)
        dynamic_shapes = {"x": {1: Dim.DYNAMIC}, "start_pos": None}
        torch.export.export(model, args=(h, start_pos), dynamic_shapes=dynamic_shapes)
```
```Error message
Exception has occurred: Unsupported (note: full exception trace is shown but execution is paused at: _run_module_as_main)
Dynamic slicing on data-dependent value is not supported
from user code:
File "/home/tim/nvpu_uno/nnc/tests/test_cache.py", line 18, in forward
self.cache_k[:, start_pos:end_pos, :] = x
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
File "/home/tim/miniconda3/envs/torch2.5/lib/python3.10/site-packages/torch/_dynamo/exc.py", line 317, in unimplemented
raise Unsupported(msg, case_name=case_name)
File "/home/tim/miniconda3/envs/torch2.5/lib/python3.10/site-packages/torch/_dynamo/variables/lists.py", line 923, in __init__
unimplemented("Dynamic slicing on data-dependent value is not supported")
File "/home/tim/miniconda3/envs/torch2.5/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1873, in BUILD_SLICE
self.push(SliceVariable(items))
File "/home/tim/miniconda3/envs/torch2.5/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 962, in step
self.dispatch_table[inst.opcode](self, inst)
File "/home/tim/miniconda3/envs/torch2.5/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1052, in run
while self.step():
File "/home/tim/miniconda3/envs/torch2.5/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2868, in run
super().run()
File "/home/tim/miniconda3/envs/torch2.5/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 662, in transform
tracer.run()
File "/home/tim/miniconda3/envs/torch2.5/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 231, in _fn
return fn(*args, **kwargs)
File "/home/tim/miniconda3/envs/torch2.5/lib/python3.10/site-packages/torch/_dynamo/bytecode_transformation.py", line 1361, in transform_code_object
transformations(instructions, code_options)
File "/home/tim/miniconda3/envs/torch2.5/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 750, in _compile_inner
out_code = transform_code_object(code, transform)
File "/home/tim/miniconda3/envs/torch2.5/lib/python3.10/site-packages/torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
File "/home/tim/miniconda3/envs/torch2.5/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 715, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/home/tim/miniconda3/envs/torch2.5/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 986, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/home/tim/miniconda3/envs/torch2.5/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 547, in __call__
return _compile(
File "/home/tim/miniconda3/envs/torch2.5/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 1380, in __call__
return self._torchdynamo_orig_callable(
File "/home/tim/miniconda3/envs/torch2.5/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
File "/home/tim/miniconda3/envs/torch2.5/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/tim/miniconda3/envs/torch2.5/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 574, in _fn
return fn(*args, **kwargs)
File "/home/tim/miniconda3/envs/torch2.5/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
File "/home/tim/miniconda3/envs/torch2.5/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1739
|
https://github.com/pytorch/pytorch/issues/147187
|
open
|
[
"oncall: pt2",
"oncall: export"
] | 2025-02-14T06:15:41Z
| 2025-02-18T19:20:39Z
| null |
exeex
|
huggingface/optimum
| 2,189
|
PEFT to ONNX conversion
|
### System Info
Hello!
I have a fine-tuned LLM model from Hugging Face saved in PEFT format, and it's about 2.1 GB. When we convert it to ONNX, its size nearly doubles to about 4.1 GB. What causes this significant increase in model size after converting from PEFT to ONNX? Is there a bug in this conversion? (Here is the code that does the conversion. Worth mentioning: loading it in any of the commented-out formats kills the accuracy.) Thanks
```python
model = ORTModelForCausalLM.from_pretrained(
    peft_path,
    provider='OpenVINOExecutionProvider',
    provider_options={'device_type': 'GPU_FP16'},
    # use_cache=False,
    # use_io_binding=False
    export=True,
    # load_in_4bit=True,
    # load_in_8bit=True
    # torch_dtype=torch.bfloat16,
    # device_map=device,
    # from_transformers=True
)
tokenizer = AutoTokenizer.from_pretrained(peft_path)
model.save_pretrained(onnex_path)
tokenizer.save_pretrained(onnex_path)
```
### Who can help?
_No response_
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction (minimal, reproducible, runnable)
```python
model = ORTModelForCausalLM.from_pretrained(
    peft_path,
    provider='OpenVINOExecutionProvider',
    provider_options={'device_type': 'GPU_FP16'},
    # use_cache=False,
    # use_io_binding=False
    export=True,
    # load_in_4bit=True,
    # load_in_8bit=True
    # torch_dtype=torch.bfloat16,
    # device_map=device,
    # from_transformers=True
)
tokenizer = AutoTokenizer.from_pretrained(peft_path)
model.save_pretrained(onnex_path)
tokenizer.save_pretrained(onnex_path)
```
### Expected behavior
I need the ONNX model to stay roughly the same size without losing accuracy.
|
https://github.com/huggingface/optimum/issues/2189
|
open
|
[
"bug"
] | 2025-02-13T18:21:05Z
| 2025-03-10T13:58:28Z
| 2
|
morteza89
|
pytorch/data
| 1,442
|
what dataloader to use for torchdata.nodes nodes?
|
Hi, thanks for reviving torchdata. I was able to move to `0.10.1` for lots of my existing datapipes, and it seems to work pretty nicely.
Question: am I supposed to use `torchdata.nodes.Loader` or `torchdata.stateful_dataloader.StatefulDataLoader` for my data nodes, or just `torch.utils.data.DataLoader`? I'm getting a bit confused after reading the docs and code. Currently `Loader` works for my iterable data nodes, but with some caveats (no multiprocessing).
|
https://github.com/meta-pytorch/data/issues/1442
|
closed
|
[] | 2025-02-13T17:32:53Z
| 2025-10-24T04:07:52Z
| 16
|
keunwoochoi
|
pytorch/pytorch
| 147,076
|
How to check grads in each step of model?
|
Hi there,
I've implemented a PyTorch version of [Retrieval-based-Voice-Conversion (RVC for short)](https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI) [here](https://github.com/ElinLiu0/RVCTorch/blob/master/POC_Torch.ipynb).
The question is, when I want to export my implementation pipeline to ONNX using the code below:
```python
with torch.inference_mode(), torch.cuda.amp.autocast(enabled=False):
    torch.onnx.export(
        pipeline,
        (audio.cuda(),),
        "pipeline.onnx",
        input_names=["input"],
        output_names=["output"],
        opset_version=14
    )
```
It raises the error below:
```python
RuntimeError: Cannot insert a Tensor that requires grad as a constant. Consider making it a parameter or input, or detaching the gradient
Tensor:
0.6670
[ torch.cuda.HalfTensor{1} ]
```
The error is typically raised from an `nn.BatchNorm2d` module called in [rmvpe.py](https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI/blob/main/infer/lib/rmvpe.py) at line 244.
So how can I fix this error, given that this implementation will ultimately be deployed in C# or on a model serving platform like NVIDIA Triton?
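A hedged workaround sketch (not verified against this pipeline): the error usually disappears once no tensor captured as a constant still requires grad, for example by freezing the parameters and exporting under `torch.no_grad()`:
```python
# Sketch only: freeze the pipeline so no captured tensor requires grad,
# then export under no_grad. Variable names follow the snippet above.
pipeline = pipeline.eval()
for p in pipeline.parameters():
    p.requires_grad_(False)

with torch.no_grad():
    torch.onnx.export(
        pipeline,
        (audio.cuda(),),
        "pipeline.onnx",
        input_names=["input"],
        output_names=["output"],
        opset_version=14,
    )
```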
|
https://github.com/pytorch/pytorch/issues/147076
|
closed
|
[
"module: onnx",
"triaged"
] | 2025-02-13T09:01:49Z
| 2025-02-20T07:56:31Z
| null |
ElinLiu0
|
huggingface/agents-course
| 113
|
Show how to use Inference Providers for inference
|
Can be helpful for students to explore different models easily.
|
https://github.com/huggingface/agents-course/issues/113
|
open
|
[] | 2025-02-13T07:46:01Z
| 2025-02-13T08:04:58Z
| null |
pcuenca
|
pytorch/torchtitan
| 840
|
profiling
|
A few questions:
1. Is it based on kineto or something else?
2. I am only seeing CPU activities (e.g. Python). Do I have to do anything special to see GPU activities?
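For reference, a minimal standalone sketch of the PyTorch profiler (which is kineto-based) showing that CUDA activities have to be requested explicitly; how torchtitan exposes this in its profiling config may differ:
```python
import torch
from torch.profiler import profile, ProfilerActivity

# GPU kernels only appear in the trace when CUDA activity is requested.
with profile(activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA]) as prof:
    x = torch.randn(1024, 1024, device="cuda")
    y = x @ x
    torch.cuda.synchronize()

print(prof.key_averages().table(sort_by="cuda_time_total", row_limit=10))
```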
|
https://github.com/pytorch/torchtitan/issues/840
|
closed
|
[
"question"
] | 2025-02-13T01:27:00Z
| 2025-02-20T19:55:53Z
| null |
githubsgi
|
pytorch/pytorch
| 146,990
|
How to export a model using topk with a variable number of neighbours?
|
### 🐛 Describe the bug
The export fails with the following error, though it may not be the only one; it is the first one raised.
``torch._dynamo.exc.UserError: Could not guard on data-dependent expression u7 >= 0 (unhinted: u7 >= 0). (Size-like symbols: none)``
```python
import contextlib
import io
import logging
import warnings
from typing import Any, Dict, List, Optional

import numpy as np
import sklearn
import torch


def flatnonzero(x):
    "Similar to :func:`numpy.flatnonzero`"
    return torch.nonzero(torch.reshape(x, (-1,)), as_tuple=True)[0]


def _get_weights(dist, weights):
    """Get the weights from an array of distances and a parameter ``weights``.

    Assume weights have already been validated.

    Parameters
    ----------
    dist : ndarray
        The input distances.
    weights : {'uniform', 'distance'}, callable or None
        The kind of weighting used.

    Returns
    -------
    weights_arr : array of the same shape as ``dist``
        If ``weights == 'uniform'``, then returns None.
    """
    if weights in (None, "uniform"):
        return None
    if weights == "distance":
        # if user attempts to classify a point that was zero distance from one
        # or more training points, those training points are weighted as 1.0
        # and the other points as 0.0
        dist = 1.0 / dist
        inf_mask = torch.isinf(dist)
        inf_row = torch.any(inf_mask, axis=1)
        dist[inf_row] = inf_mask[inf_row]
        return dist
    if callable(weights):
        return weights(dist)


class NanEuclidean(torch.nn.Module):
    """Implements :func:`sklearn.metrics.nan_euclidean`."""

    def __init__(self, squared=False, copy=True):
        super().__init__()
        self.squared = squared
        self.copy = copy

    def forward(self, X, Y):
        X = X.clone()
        Y = Y.to(X.dtype).clone()
        missing_X = torch.isnan(X)
        missing_Y = torch.isnan(Y)
        # set missing values to zero
        X[missing_X] = 0
        Y[missing_Y] = 0
        # Adjust distances for missing values
        XX = X * X
        YY = Y * Y
        distances = -2 * X @ Y.T + XX.sum(1, keepdim=True) + YY.sum(1, keepdim=True).T
        distances -= XX @ missing_Y.to(X.dtype).T
        distances -= missing_X.to(X.dtype) @ YY.T
        distances = torch.clip(distances, 0, None)
        present_X = 1 - missing_X.to(X.dtype)
        present_Y = ~missing_Y
        present_count = present_X @ present_Y.to(X.dtype).T
        distances[present_count == 0] = torch.nan
        # avoid divide by zero
        present_count = torch.maximum(
            torch.tensor([1], dtype=present_count.dtype), present_count
        )
        distances /= present_count
        distances *= X.shape[1]
        if not self.squared:
            distances = distances.sqrt()
        return distances


# %%
# Validation
# ++++++++++

model = NanEuclidean()
X = torch.randn((5, 2))
Y = torch.randn((5, 2))
for i in range(5):
    X[i, i % 2] = torch.nan
for i in range(4):
    Y[i + 1, i % 2] = torch.nan
d1 = sklearn.metrics.nan_euclidean_distances(X.numpy(), Y.numpy())
d2 = model(X, Y)
# print(f"discrepancies: {max_diff(d1, d2)}")

# %%
# torch implementation of KNNImputer
# ==================================
#
# See :class:`sklearn.impute.KNNImputer`.
# The code is split into several :class:`torch.nn.Module`
# and refactored to avoid control flow.


def _get_mask(X, value_to_mask):
    return torch.isnan(X)


class SubTopKIndices(torch.nn.Module):
    def forward(self, x, k):
        # torch does not like nans
        xn = torch.nan_to_num(x, nan=1.0e10)
        return torch.topk(xn, k, dim=1, largest=False, sorted=True).indices


class SubWeightMatrix(torch.nn.Module):
    def __init__(self, weights):
        super().__init__()
        self.weights = weights

    def forward(self, donors_dist):
        weight_matrix = _get_weights(donors_dist, self.weights)
        if weight_matrix is not None:
            weight_matrix = weight_matrix.clone()
            weight_matrix[torch.isnan(weight_matrix)] = 0.0
        else:
            weight_matrix = torch.ones_like(donors_dist)
            weight_matrix[torch.isnan(donors_dist)] = 0.0
        return weight_matrix


class SubDonorsIdx(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self._topk = SubTopKIndices()

    def forward(self, dist_pot_donors, n_neighbors):
        donors_idx = self._topk(dist_pot_donors, n_neighbors)
        donors_dist = dist_pot_donors[torch.arange(donors_idx.shape[0])[:, None], donors_idx]
        return donors_idx, donors_dist


class MakeNewWeights(torch.nn.Module):
    def forward(self, donors_mask, donors, weight_matrix):
        return donors_mask.to(donors.dtype) * weight_matrix.to(donors.dtype)


class CalcImpute(torch.nn.Module):
    """Implements :meth:`sklearn.impute.KNNImputer._calc_impute`."""

    def __init__(self, weights):
        super().__init__()
        self._weights = SubWeightMatrix(weights)
|
https://github.com/pytorch/pytorch/issues/146990
|
closed
|
[
"triaged",
"oncall: pt2",
"oncall: export"
] | 2025-02-12T16:02:20Z
| 2025-02-26T17:45:40Z
| null |
xadupre
|
pytorch/pytorch
| 146,977
|
How to install Torch version that supports RTX 5090 on Windows? - CUDA kernel errors might be asynchronously reported at some other API call
|
I have purchased an RTX 5090 just to test AI apps.
Currently I am getting this error in every app.
I need torch for a Python 3.10 venv on Windows.
I am OK with installing a nightly version, etc.; I just need the install command, please.
```
Traceback (most recent call last):
File "E:\trellis_v5\TRELLIS\app.py", line 401, in <module>
pipeline = TrellisImageTo3DPipeline.from_pretrained("JeffreyXiang/TRELLIS-image-large")
File "E:\trellis_v5\TRELLIS\trellis\pipelines\trellis_image_to_3d.py", line 56, in from_pretrained
pipeline = super(TrellisImageTo3DPipeline, TrellisImageTo3DPipeline).from_pretrained(path)
File "E:\trellis_v5\TRELLIS\trellis\pipelines\base.py", line 39, in from_pretrained
_models = {
File "E:\trellis_v5\TRELLIS\trellis\pipelines\base.py", line 40, in <dictcomp>
k: models.from_pretrained(f"{path}/{v}")
File "E:\trellis_v5\TRELLIS\trellis\models\__init__.py", line 59, in from_pretrained
model = __getattr__(config['name'])(**config['args'], **kwargs)
File "E:\trellis_v5\TRELLIS\trellis\models\structured_latent_vae\decoder_mesh.py", line 105, in __init__
self.mesh_extractor = SparseFeatures2Mesh(res=self.resolution*4, use_color=self.rep_config.get('use_color', False))
File "E:\trellis_v5\TRELLIS\trellis\representations\mesh\cube2mesh.py", line 68, in __init__
verts, cube = construct_dense_grid(self.res, self.device)
File "E:\trellis_v5\TRELLIS\trellis\representations\mesh\utils_cube.py", line 11, in construct_dense_grid
vertsid = torch.arange(res_v ** 3, device=device)
RuntimeError: CUDA error: no kernel image is available for execution on the device
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
```
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @malfet @seemethere @peterjc123 @mszhanyi @skyline75489 @nbcsm @iremyux @Blackhex @ptrblck @eqy
|
https://github.com/pytorch/pytorch/issues/146977
|
closed
|
[
"high priority",
"needs reproduction",
"module: build",
"module: windows",
"module: cuda",
"triaged"
] | 2025-02-12T12:43:57Z
| 2025-03-01T09:47:47Z
| null |
FurkanGozukara
|
pytorch/xla
| 8,702
|
Links missing in CONTRIBUTING.md for Additional steps for GPU.
|
## 📚 Documentation
I was visiting the CONTRIBUTING.md doc and trying to build a GPU version, but in the "Additional steps for GPU" section, the referenced guide link is missing.

|
https://github.com/pytorch/xla/issues/8702
|
open
|
[
"bug",
"documentation"
] | 2025-02-12T07:50:41Z
| 2025-02-17T13:40:39Z
| 3
|
yinrun
|
huggingface/lerobot
| 718
|
Hand-Eye Calibration for LeRobot
|
Hello,
I am starting a project where I plan to use LeRobot for pick-and-place tasks utilizing classical robotics and vision techniques. I am wondering if anyone has experience with performing hand-eye calibration for this robot.
My major concern is that the high-mounted camera is usually parallel to the arm, which may make it difficult for the camera to see the Aruco marker. Does anyone have any suggestions or insights on how to approach this?
Thank you!
|
https://github.com/huggingface/lerobot/issues/718
|
closed
|
[
"question",
"stale"
] | 2025-02-12T05:44:09Z
| 2025-12-21T02:59:43Z
| null |
Akumar201
|
huggingface/optimum-neuron
| 782
|
Docs on how to compile a pre-trained transformer
|
Hello,
I am experimenting with Transformers and trying to run them on AWS Inferentia.
I checked the official [docs](https://huggingface.co/docs/optimum-neuron/index) but I could not find a clear answer to my current problem.
I currently have a customized model based on the [ALBERT transformer](https://huggingface.co/docs/transformers/en/model_doc/albert) that I fine-tuned and for which I exported the weights.
```python
from transformers import AlbertConfig, AlbertModel
import torch
config_dict = {
    "vocab_size": 178,
    "hidden_size": 768,
    "num_attention_heads": 12,
    "intermediate_size": 2048,
    "max_position_embeddings": 512,
    "num_hidden_layers": 12,
    "dropout": 0.1,
}
albert_config = AlbertConfig(**config_dict)
model = AlbertModel(albert_config)
weights = torch.load("path/to/weights.pt")
model.load_state_dict(weights)
```
My question is, how do I go from the model above to compiling it for AWS Inferentia using the `optimum-neuron` python library programmatically? I could not find documented examples or snippets for this use-case.
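A hedged sketch of one possible path, assuming the model is first saved as a regular Transformers checkpoint and that `NeuronModelForFeatureExtraction` with `export=True` behaves as described in the optimum-neuron docs (the class name and shape kwargs below are assumptions and may need adjusting for your version):
```python
# Sketch only, not verified end-to-end on Inferentia.
from optimum.neuron import NeuronModelForFeatureExtraction

model.save_pretrained("albert-finetuned")  # regular Transformers checkpoint

neuron_model = NeuronModelForFeatureExtraction.from_pretrained(
    "albert-finetuned",
    export=True,          # compile to a Neuron artifact while loading
    batch_size=1,         # Inferentia compilation needs static input shapes
    sequence_length=128,
)
neuron_model.save_pretrained("albert-neuron")  # reusable compiled model
```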
|
https://github.com/huggingface/optimum-neuron/issues/782
|
closed
|
[
"Stale"
] | 2025-02-11T23:36:13Z
| 2025-03-20T08:05:40Z
| null |
efemaer
|
huggingface/diffusers
| 10,772
|
Sana Controlnet Support
|
**Is your feature request related to a problem? Please describe.**
The first controlnet for Sana has appeared, so the feature is to add the sana controlnet to the diffusers pipeline https://github.com/NVlabs/Sana/blob/main/asset/docs/sana_controlnet.md
**Describe the solution you'd like.**
Be able to use the sana controlnet
**Describe alternatives you've considered.**
Using the sana repo
|
https://github.com/huggingface/diffusers/issues/10772
|
closed
|
[
"help wanted",
"Good second issue",
"contributions-welcome",
"roadmap"
] | 2025-02-11T22:39:10Z
| 2025-04-13T13:49:40Z
| 5
|
jloveric
|
huggingface/smolagents
| 610
|
Is this normal? I'm getting this a lot
|
Hey, is this normal?

Also, is `out: None` OK as well?
|
https://github.com/huggingface/smolagents/issues/610
|
closed
|
[
"question"
] | 2025-02-11T22:05:27Z
| 2025-03-19T07:12:32Z
| null |
Mhdaw
|
pytorch/ao
| 1,701
|
Model size after quantization
|
Why does the size relationship between the models seem unreasonable after I apply these three quantization methods to the same model?
```Python
from torchao.quantization import quantize_, int8_weight_only
quantize_(new_model, int8_weight_only())
# from torchao.quantization import quantize_, int8_dynamic_activation_int8_weight
# quantize_(new_model, int8_dynamic_activation_int8_weight())
# from torchao.quantization import int8_dynamic_activation_int4_weight
# quantize_(new_model, int8_dynamic_activation_int4_weight())
```
the result:
```Shell
20786584 Feb 5 13:46 a8w4SWaT.pte
20373272 Feb 5 13:45 a8w8SWaT.pte
29685120 Oct 5 13:12 pytorch_checkpoint.pth
20262664 Feb 5 13:44 w8onlySWaT.pte
```
Theoretically, the model quantized with A8W4 should be the smallest, but the actual results show otherwise.
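As a hedged debugging sketch (not an explanation of the packing itself): it may help to compare where the bytes go in each quantized state dict before exporting to `.pte`, since the file size also includes non-weight overhead; note the accounting below may not reflect packed storage exactly for quantized tensor subclasses.
```python
import torch

def report_size(model, name):
    # Sum raw tensor storage in the state dict (approximate for subclasses).
    total = 0
    for value in model.state_dict().values():
        if isinstance(value, torch.Tensor):
            total += value.element_size() * value.nelement()
    print(f"{name}: {total / 1e6:.2f} MB across {len(model.state_dict())} tensors")

report_size(new_model, "after quantize_")
```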
|
https://github.com/pytorch/ao/issues/1701
|
open
|
[
"question",
"quantize_"
] | 2025-02-11T19:32:29Z
| 2025-02-12T08:54:01Z
| null |
TaylorYangX
|
huggingface/agents-course
| 77
|
[QUESTION] Why am I able to select multiple options in Quick Quiz?
|
In quick quizzes, since there is a single correct answer, shouldn't it only be possible to choose one option, instead of being able to select all options at once to see the correct answer?
|
https://github.com/huggingface/agents-course/issues/77
|
closed
|
[
"question"
] | 2025-02-11T17:35:31Z
| 2025-02-13T07:20:59Z
| null |
Devrajsinh-Gohil
|
pytorch/ao
| 1,699
|
[DOC] Questions on Integrating a New CPU Operator into TorchAO?
|
I'm working on integrating a **CPU operator** into TorchAO and have a few questions regarding the process:
### How can I add a New **_CPU Operator_** in 'torchao/csrc':
* What is the recommended approach for adding a new CPU operator in the 'csrc' directory?
* Are there any specific guidelines or templates I should follow to ensure compatibility with the existing codebase?
### How can I Remove or Disable current CUDA Operators:
* How can I remove or disable all existing CUDA operators in the codebase?
* Are there any configuration flags or build options that can be used to exclude CUDA-related code during compilation?
### How can I Move Experimental MPS and CPU Code to TorchAO:
* I noticed that there is experimental code for MPS and CPU in the repository (`torchao/experimental/kernels`). What is the process for moving this code into the main TorchAO module?
* Are there any specific considerations or steps I should follow to ensure a smooth transition?
Thank you for your help!
|
https://github.com/pytorch/ao/issues/1699
|
open
|
[
"question",
"cpu"
] | 2025-02-11T12:03:02Z
| 2025-02-13T01:53:33Z
| null |
Zijie-Tian
|
pytorch/pytorch
| 146,889
|
How to customize a torch.Tensor() method to access the underlying data structure of a PyTorch tensor.
|
### 🐛 Describe the bug
1. How to customize a torch.Tensor() method and call PyTorch's THPVariable_pynew function to obtain the underlying data structure of the original Tensor.

`tensor = torch.Tensor(3, 4).to("new_one")` -> `initModule()` -> `Module.cpp` -> runs in:
https://github.com/pytorch/pytorch/blob/32f585d9346e316e554c8d9bf7548af9f62141fc/torch/csrc/autograd/python_variable.cpp#L1891
2. This is my project: https://github.com/xiangxinhello/torch_new_tensor. My project is based on modifications of https://github.com/pytorch/pytorch/tree/v2.5.0/test/cpp_extensions/open_registration_extension, but I was unable to modify it successfully.
3. I want to obtain the underlying data structure information of a PyTorch tensor through a custom torch.Tensor method.
### Versions
PyTorch version: 2.5.0a0+gita8d6afb
Is debug build: True
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.31.2
Libc version: glibc-2.35
Python version: 3.10.16 (main, Dec 11 2024, 16:24:50) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-125-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3090
Nvidia driver version: 550.120
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 96
On-line CPU(s) list: 0-95
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 6248R CPU @ 3.00GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 2
Stepping: 7
CPU max MHz: 4000.0000
CPU min MHz: 1200.0000
BogoMIPS: 6000.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 1.5 MiB (48 instances)
L1i cache: 1.5 MiB (48 instances)
L2 cache: 48 MiB (48 instances)
L3 cache: 71.5 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-23,48-71
NUMA node1 CPU(s): 24-47,72-95
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] numpy==2.2.1
[pip3] optree==0.13.1
[pip3] torch==2.5.0a0+gita8d6afb
[conda] magma-cuda121 2.6.1 1 pytorch
[conda] mkl-include
|
https://github.com/pytorch/pytorch/issues/146889
|
open
|
[
"triaged",
"tensor subclass"
] | 2025-02-11T07:18:54Z
| 2025-04-14T17:40:25Z
| null |
xiangxinhello
|
pytorch/torchtitan
| 831
|
converging.md
|
On the [page](https://github.com/pytorch/torchtitan/blob/main/docs/converging.md), can someone please clarify the following:
1. How many GPUs (data-parallel degree) and what type of GPU were used for the [chart](https://github.com/pytorch/torchtitan/blob/main/docs/converging.md#test-results)?
2. What does "FSDP 8" mean: 8 GPUs, or FP8?
3.
|
https://github.com/pytorch/torchtitan/issues/831
|
closed
|
[
"question"
] | 2025-02-11T04:15:19Z
| 2025-03-17T19:13:39Z
| null |
githubsgi
|
huggingface/agents-course
| 66
|
[QUESTION] About the **Thought: Internal Reasoning and the Re-Act Approach** section of UNIT 1
|
I am a bit confused about the ReAct prompting example at the end of the **Thought: Internal Reasoning and the Re-Act Approach** section in Unit 1. The figure label describes it as an example of **ReAct**, but the image itself mentions "Zero-shot CoT." Could you please take a look at this section and clarify? I would really appreciate your help!
|
https://github.com/huggingface/agents-course/issues/66
|
closed
|
[
"question"
] | 2025-02-11T03:54:26Z
| 2025-02-13T07:30:13Z
| null |
saidul-islam98
|
huggingface/datasets
| 7,390
|
Re-add py.typed
|
### Feature request
The motivation for removing py.typed no longer seems to apply. Would a solution like [this one](https://github.com/huggingface/huggingface_hub/pull/2752) work here?
### Motivation
MyPy support is broken. As more type checkers come out, such as RedKnot, these may also be broken. It would be good to be PEP 561 compliant as long as it's not too onerous.
### Your contribution
I can re-add py.typed, but I don't know how to make sure all of the `__all__` files are provided (although you may not need to with modern PyRight).
|
https://github.com/huggingface/datasets/issues/7390
|
open
|
[
"enhancement"
] | 2025-02-10T22:12:52Z
| 2025-08-10T00:51:17Z
| 1
|
NeilGirdhar
|
pytorch/torchtitan
| 828
|
Any optimized suggestions for fast save ema/model/optim and resume training from them all.
|
By using dcp.async_save, we can save the model and optimizer asynchronously, preventing them from blocking the training process. However, if I also want to save the EMA (Exponential Moving Average) model, the typical approach would be to create another async_save call for the EMA. According to the documentation, it's "recommended to limit checkpoints to one asynchronous request at a time to avoid additional memory pressure per request". Therefore, either the EMA or the model/optimizer must be saved synchronously, which can potentially block the main training process. If the model is large, saving the EMA first can incur significant overhead.
Could you share any best practices for optimizing this save function to facilitate resuming training smoothly?
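Not an authoritative answer, but a minimal sketch of the direction the question implies, assuming the EMA state can live in the same state dict so only one asynchronous request is ever in flight (API per `torch.distributed.checkpoint`; the surrounding objects are assumptions):
```python
import torch.distributed.checkpoint as dcp

# Sketch: stage model, optimizer, and EMA together in one async_save call,
# so there is a single asynchronous checkpoint request at a time.
state_dict = {
    "model": model.state_dict(),          # `model`, `optimizer`, `ema_model`,
    "optimizer": optimizer.state_dict(),  # and `step` are assumed to exist
    "ema": ema_model.state_dict(),        # in the training loop
}
future = dcp.async_save(state_dict, checkpoint_id=f"checkpoint-step-{step}")
# ... keep training; wait on `future` before issuing the next save.
```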
|
https://github.com/pytorch/torchtitan/issues/828
|
closed
|
[
"question",
"module: distributed_state_dict"
] | 2025-02-10T10:39:16Z
| 2025-02-13T07:39:35Z
| null |
tangjiasheng
|
huggingface/lerobot
| 707
|
is there option to run on parallel gpu
|
I have 2 GPUs (4090) and I wonder if there is an option to run on them in parallel while fine-tuning the model.
I found this parameter here:

but I don't actually understand what you mean by mp.
So if there is an option for parallel GPUs, please tell us about it.
|
https://github.com/huggingface/lerobot/issues/707
|
closed
|
[
"question"
] | 2025-02-10T09:34:13Z
| 2025-05-14T20:51:43Z
| null |
AbdElrahmanMostafaRifaat1432
|
huggingface/lerobot
| 706
|
adapt_to_pi_aloha parameter
|
I am fine-tuning pi0 on a static ALOHA dataset and I found the following parameter: `adapt_to_pi_aloha: false`
in /lerobot/common/policies/pi0/configuration_pi0.py.
But when I set it to true, the first loss increased from 0.17 to 4.7.
Should I set it to true or not, given that I want the predicted actions to be in ALOHA space?
|
https://github.com/huggingface/lerobot/issues/706
|
open
|
[
"question",
"configuration"
] | 2025-02-10T09:24:45Z
| 2025-07-24T08:15:35Z
| null |
AbdElrahmanMostafaRifaat1432
|
huggingface/chat-ui
| 1,708
|
Generation failed occur
|
When I ask the model a question, I get a generation error.

The base model being used is Llama 3 1B.
Below is my .env.local configuration:

|
https://github.com/huggingface/chat-ui/issues/1708
|
open
|
[
"support"
] | 2025-02-10T08:12:56Z
| 2025-02-12T07:48:47Z
| 5
|
mondayjowa
|
huggingface/open-r1
| 260
|
How to use tensor_parallel_size for vllm in GRPO?
|
GRPO uses vLLM to load the reference model for data sampling. The limitation is that tensor parallelism is not supported.
What if the reference model is larger than one GPU can hold, for example a 72B model on H800s with 40 GB each?
Is there any setting where we can set the tensor_parallel_size for the vLLM params?
```python
if self.accelerator.is_main_process:
    vllm_device = self.args.vllm_device
    if vllm_device == "auto":
        vllm_device = f"cuda:{self.accelerator.num_processes}"  # take the next GPU idx
    # Check that the requested device is available
    if vllm_device.split(":")[0] == "cuda" and int(vllm_device.split(":")[1]) >= torch.cuda.device_count():
        raise ValueError(
            f"The requested device for vllm ({vllm_device}) is not available. You are likely using vLLM "
            "without restricting the number of GPUs for training. Set the `--num_processes` argument to a "
            "value lower than the number of GPUs available on your machine—typically, reducing it by one "
            f"is sufficient. In your case: `--num_processes {torch.cuda.device_count() - 1}`."
        )
    # Check that the requested device is not also used for training
    if vllm_device in {f"cuda:{idx}" for idx in range(self.accelerator.num_processes)}:
        warnings.warn(
            f"The requested device {vllm_device} is also used for training. This may lead to unexpected "
            "behavior. It is recommended to use a dedicated device for vLLM."
        )
    # vLLM is not compatible with accelerate. So we need to patch it to make sure we can (1) place the vLLM
    # model on the desired device (world_size_patch) and (2) avoid a test that is not designed for our
    # setting (profiling_patch).
    world_size_patch = patch("torch.distributed.get_world_size", return_value=1)
    profiling_patch = patch(
        "vllm.worker.worker.Worker._assert_memory_footprint_increased_during_profiling", return_value=None
    )
    with world_size_patch, profiling_patch:
        self.llm = LLM(
            model=model.name_or_path,
            device=vllm_device,
            gpu_memory_utilization=self.args.vllm_gpu_memory_utilization,
            dtype=self.args.vllm_dtype,
            # Automatic Prefix Caching caches the KV cache of existing queries, so that a new query can
            # directly reuse the KV cache if it shares the same prefix with one of the existing queries.
            # This is particularly useful here because we generate completions from the same prompts.
            enable_prefix_caching=True,
            max_model_len=self.args.vllm_max_model_len,
        )
    self.sampling_params = SamplingParams(
        temperature=args.temperature,
        max_tokens=self.max_completion_length,
    )
```
|
https://github.com/huggingface/open-r1/issues/260
|
open
|
[] | 2025-02-10T07:17:07Z
| 2025-02-20T12:21:15Z
| null |
bannima
|
huggingface/trl
| 2,814
|
How to use tensor_parallel_size for vllm reference in GRPO?
|
GRPO uses vLLM to load the reference model for data sampling. The limitation is that tensor parallelism is not supported.
What if the reference model is larger than one GPU can hold, for example a 72B model on H800s with 40 GB each?
Is there any setting where we can set the tensor_parallel_size for the vLLM params?
```python
if self.accelerator.is_main_process:
    vllm_device = self.args.vllm_device
    if vllm_device == "auto":
        vllm_device = f"cuda:{self.accelerator.num_processes}"  # take the next GPU idx
    # Check that the requested device is available
    if vllm_device.split(":")[0] == "cuda" and int(vllm_device.split(":")[1]) >= torch.cuda.device_count():
        raise ValueError(
            f"The requested device for vllm ({vllm_device}) is not available. You are likely using vLLM "
            "without restricting the number of GPUs for training. Set the `--num_processes` argument to a "
            "value lower than the number of GPUs available on your machine—typically, reducing it by one "
            f"is sufficient. In your case: `--num_processes {torch.cuda.device_count() - 1}`."
        )
    # Check that the requested device is not also used for training
    if vllm_device in {f"cuda:{idx}" for idx in range(self.accelerator.num_processes)}:
        warnings.warn(
            f"The requested device {vllm_device} is also used for training. This may lead to unexpected "
            "behavior. It is recommended to use a dedicated device for vLLM."
        )
    # vLLM is not compatible with accelerate. So we need to patch it to make sure we can (1) place the vLLM
    # model on the desired device (world_size_patch) and (2) avoid a test that is not designed for our
    # setting (profiling_patch).
    world_size_patch = patch("torch.distributed.get_world_size", return_value=1)
    profiling_patch = patch(
        "vllm.worker.worker.Worker._assert_memory_footprint_increased_during_profiling", return_value=None
    )
    with world_size_patch, profiling_patch:
        self.llm = LLM(
            model=model.name_or_path,
            device=vllm_device,
            gpu_memory_utilization=self.args.vllm_gpu_memory_utilization,
            dtype=self.args.vllm_dtype,
            # Automatic Prefix Caching caches the KV cache of existing queries, so that a new query can
            # directly reuse the KV cache if it shares the same prefix with one of the existing queries.
            # This is particularly useful here because we generate completions from the same prompts.
            enable_prefix_caching=True,
            max_model_len=self.args.vllm_max_model_len,
        )
    self.sampling_params = SamplingParams(
        temperature=args.temperature,
        max_tokens=self.max_completion_length,
    )
```
|
https://github.com/huggingface/trl/issues/2814
|
open
|
[
"⚡accelerate",
"🏋 GRPO"
] | 2025-02-10T07:09:47Z
| 2025-03-04T11:40:13Z
| null |
bannima
|
huggingface/diffusers
| 10,755
|
Difference in Output When Using PIL.Image vs numpy.array for Image and Mask Input.
|
Hi.
I get different results when providing the image and mask as input using PIL.Image versus numpy.array. Why does this happen?
Is there an issue with my normalization method?
| pillow | array |
|---|---|
|  |  |
#### pillow code
```python
image = Image.open(image_path).convert("RGB")
mask = Image.open(mask_path).convert("L")
output_image = pipeline(
    image=image,
    mask_image=mask,
    generator=torch.Generator(device=self.device).manual_seed(0),
).images[0]
```
#### array code
```python
image = Image.open(image_path).convert("RGB")
mask = Image.open(mask_path).convert("L")
image_array = np.array(image) / 255.0
mask_array = np.array(mask) / 255.0
output_image = pipeline(
    image=image_array,
    mask_image=mask_array,
    generator=torch.Generator(device=self.device).manual_seed(0),
).images[0]
```
|
https://github.com/huggingface/diffusers/issues/10755
|
open
|
[
"stale"
] | 2025-02-10T05:24:27Z
| 2025-03-12T15:03:12Z
| 2
|
purple-k
|
huggingface/datasets
| 7,387
|
Dynamic adjusting dataloader sampling weight
|
Hi,
Thanks for your wonderful work! I'm wondering whether there is a way to dynamically adjust the sampling weight of each sample in the dataset during training? Looking forward to your reply, thanks again.
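Not a `datasets`-specific answer, but a minimal sketch of the usual plain-PyTorch approach, where per-sample weights are recomputed between epochs and a fresh sampler is built (`train_dataset`, `num_epochs`, and `update_weights` are assumed placeholders):
```python
import torch
from torch.utils.data import DataLoader, WeightedRandomSampler

weights = torch.ones(len(train_dataset))  # initial per-sample sampling weights

for epoch in range(num_epochs):
    # Rebuilding the sampler each epoch is what makes the weights "dynamic".
    sampler = WeightedRandomSampler(weights, num_samples=len(weights), replacement=True)
    loader = DataLoader(train_dataset, batch_size=32, sampler=sampler)
    for batch in loader:
        ...  # training step
    weights = update_weights(weights)  # hypothetical: derive new weights from per-sample losses
```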
|
https://github.com/huggingface/datasets/issues/7387
|
open
|
[] | 2025-02-10T03:18:47Z
| 2025-03-07T14:06:54Z
| 3
|
whc688
|
pytorch/audio
| 3,879
|
How to use filtfilt() function?
|
I'm trying to move from scipy to torchaudio.
Here is my code below:
```python
import torch
from torchaudio.functional.filtering import filtfilt
from scipy import signal
bh, ah = signal.butter(N=5, Wn=48, btype="high", fs=16000)
audio = sample_input
print(f"Audio contains nan: {torch.isnan(torch.from_numpy(audio).float().to(torch.float64)).any()}")
print(f"Audio contains inf: {torch.isinf(torch.from_numpy(audio).float().to(torch.float64)).any()}")
print(f"Audio min: {torch.from_numpy(audio).float().to(torch.float64).min()}")
print(f"Audio max: {torch.from_numpy(audio).float().to(torch.float64).max()}")
print(f"Audio mean: {torch.from_numpy(audio).float().to(torch.float64).mean()}")
print(f"Audio shape: {torch.from_numpy(audio).float().to(torch.float64).shape}")
print(f"bh contains nan: {torch.isnan(torch.from_numpy(bh).float().to(torch.float64)).any()}")
print(f"bh contains inf: {torch.isinf(torch.from_numpy(bh).float().to(torch.float64)).any()}")
print(f"bh min: {torch.from_numpy(bh).float().to(torch.float64).min()}")
print(f"bh max: {torch.from_numpy(bh).float().to(torch.float64).max()}")
print(f"bh mean: {torch.from_numpy(bh).float().to(torch.float64).mean()}")
print(f"bh shape: {torch.from_numpy(bh).float().to(torch.float64).shape}")
print(f"ah contains nan: {torch.isnan(torch.from_numpy(ah).float().to(torch.float64)).any()}")
print(f"ah contains inf: {torch.isinf(torch.from_numpy(ah).float().to(torch.float64)).any()}")
print(f"ah min: {torch.from_numpy(ah).float().to(torch.float64).min()}")
print(f"ah max: {torch.from_numpy(ah).float().to(torch.float64).max()}")
print(f"ah mean: {torch.from_numpy(ah).float().to(torch.float64).mean()}")
print(f"ah shape: {torch.from_numpy(ah).float().to(torch.float64).shape}")
audio = filtfilt(
    waveform=torch.from_numpy(audio).float().to(torch.float64),
    a_coeffs=torch.from_numpy(ah).float().to(torch.float64),
    b_coeffs=torch.from_numpy(bh).float().to(torch.float64)
)
print(f"Audio after filtfilt : {audio}")
```
But actual output is that:
```python
Audio contains nan: False
Audio contains inf: False
Audio min: -0.858154296875
Audio max: 0.8670654296875
Audio mean: 0.00011500650977929034
Audio shape: torch.Size([1149120])
bh contains nan: False
bh contains inf: False
bh min: -9.699606895446777
bh max: 9.699606895446777
bh mean: 0.0
bh shape: torch.Size([6])
ah contains nan: False
ah contains inf: False
ah min: -9.639544486999512
ah max: 9.757863998413086
ah mean: 1.3907750447591147e-07
ah shape: torch.Size([6])
Audio after filtfilt : tensor([nan, nan, nan, ..., nan, nan, nan], dtype=torch.float64)
```
Am I using this function in the wrong way? lol 😂
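A hedged guess at the cause rather than a confirmed answer: the intermediate `.float()` cast rounds the float64 Butterworth coefficients to float32, and high-order IIR filters can be sensitive enough to that precision loss for `filtfilt` to blow up to NaN. A sketch that keeps everything in float64:
```python
# Sketch only: skip the float32 round-trip on the signal and coefficients.
audio_t = torch.from_numpy(audio).to(torch.float64)
a = torch.from_numpy(ah).to(torch.float64)
b = torch.from_numpy(bh).to(torch.float64)
filtered = filtfilt(waveform=audio_t, a_coeffs=a, b_coeffs=b)
```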
|
https://github.com/pytorch/audio/issues/3879
|
closed
|
[] | 2025-02-10T02:56:31Z
| 2025-02-10T08:55:03Z
| null |
ElinLiu0
|
huggingface/trl
| 2,813
|
What is the minimum GPU requirement in gigabytes for TRL intensive training?
|
https://github.com/huggingface/trl/issues/2813
|
open
|
[] | 2025-02-10T02:52:07Z
| 2025-02-11T08:41:56Z
| null |
lonngxiang
|
|
huggingface/transformers.js
| 1,188
|
It seems like the Xenova/swin2SR-classical-sr-x2-64 model only works with an image URL? How to implement partial output with it?
|
### Question
I am having fun with the React demo and the Xenova/swin2SR-classical-sr-x2-64 model.
https://huggingface.co/Xenova/swin2SR-classical-sr-x2-64
I tried to give an object URL to the upscaler function but it doesn't work, so I wonder if it only accepts image URLs.
I also want to know how to do partial output like the translation React demo.
I tried to convert the output data to base64 for rendering but it doesn't work.


Does it output raw PNG data only?
|
https://github.com/huggingface/transformers.js/issues/1188
|
open
|
[
"question"
] | 2025-02-10T02:18:32Z
| 2025-02-16T00:50:36Z
| null |
codenoobforreal
|
huggingface/transformers.js
| 1,186
|
Which undocumented transformersJS Generator parameters are supported? crapCheck ran fine.
|
### Question
Sorry to bug you again, Josh @xenova. I was trying a set of generator parameters and things were working fine without errors, so I tried the parameter "crapCheck" and it also ran without errors; now I am worried whether any of them actually take effect. From the docs it seems that these are supported:
Supported Parameters (Confirmed in Docs):
- max_new_tokens: ✅ Yes (Controls the number of new tokens to generate)
- do_sample: ✅ Yes (Enables sampling)
- top_p: ✅ Yes (Nucleus sampling)
- temperature: ✅ Yes (Controls randomness)
- top_k: ✅ Yes (Top-k filtering)
- num_return_sequences: ✅ Yes (Number of sequences to return)
Demo code [here](https://hpssjellis.github.io/my-examples-of-transformersJS/public/deepseek-r1-webgpu/deepseek-r1-webgpu-00.html) but without all the below parameters, just some of them.
Any suggestions on what may work and what to ignore?
```
const output = await generator(messages, {
  max_new_tokens: myMaxT, // 512
  do_sample: myDo_sample, // true
  top_p: myTop_p, // 0.9
  temperature: myTemperature, // 0.7
  top_k: myTop_k, // testing if it does top_k 50
  num_return_sequences: 1, // 1
  streamer, // calls the function TextStreamer
  min_length: myMin_length, // Ensures at least 20 tokens are generated
  repetition_penalty: myRepetition_penalty, // 1.2
  length_penalty: myLength_penalty, // 1.5
  early_stopping: myEarly_stopping, // end testing true false
  chain_of_thought: myChain_of_thought, // true
  stopping_criteria: stoppingCriteria, // Use stopping criteria for clean stopping
  crapCheck: 65, // fairly sure this is not supported
});
```
|
https://github.com/huggingface/transformers.js/issues/1186
|
open
|
[
"question"
] | 2025-02-09T05:35:57Z
| 2025-02-09T05:35:57Z
| null |
hpssjellis
|
pytorch/torchtitan
| 827
|
How to design TP plan for `nn.GLU`
|
Hi guys, I'm encountering a challenge in designing TP plans for gated MLP, i.e., [nn.GLU](https://pytorch.org/docs/stable/generated/torch.nn.GLU.html#torch.nn.GLU) with packed weights `w12 = [w1, w2]`, followed by a down proj `w3`
The plan for separated `w1` and `w2` is quite straightforward
```
layer_tp_plan = {
    # by default ColwiseParallel input layouts is replicated
    # and RowwiseParallel output layouts is replicated
    "feed_forward.w1": ColwiseParallel(),
    "feed_forward.w2": RowwiseParallel(),
    "feed_forward.w3": ColwiseParallel(),
}
```
However, I'm unsure how to approach this when using packed weights (`w12 = [w1, w2]`) to leverage the fused GLU for better performance.
Could anyone provide some guidance on how to design an effective TP plan for this scenario?
Thank you
@tianyu-l
|
https://github.com/pytorch/torchtitan/issues/827
|
closed
|
[
"question"
] | 2025-02-08T23:24:47Z
| 2025-02-12T19:43:22Z
| null |
yzhangcs
|
huggingface/lighteval
| 545
|
couldn't find it in the cached files and it looks like Elron/bleurt-tiny-512, how to set the model path?
|
How to set the eval model path?
## Eval
When I use the script to eval a model with MATH-500:
```shell
NUM_GPUS=8 # Set to 8 for 32B and 70B models
MODEL=Deepseek_R1_distill/Qwen2.5-32B-Open-R1-Distill/
MODEL_ARGS="pretrained=$MODEL,dtype=bfloat16,max_model_length=32768,gpu_memory_utilisation=0.8,tensor_parallel_size=$NUM_GPUS"
OUTPUT_DIR=data/evals/Qwen2.5-32B-Open-R1-Distill
lighteval vllm $MODEL_ARGS "custom|math_500|0|0" \
    --custom-tasks src/open_r1/evaluate.py \
    --use-chat-template \
    --output-dir $OUTPUT_DIR
```
## Error
Error: We couldn't connect to 'https://huggingface.co' to load this file, couldn't find it in the cached files and it looks like Elron/bleurt-tiny-512 is not the path to a directory containing a file named
config.json.
Where to set the eval model path in the script?
|
https://github.com/huggingface/lighteval/issues/545
|
closed
|
[] | 2025-02-08T07:26:28Z
| 2025-05-15T15:27:30Z
| null |
bannima
|
huggingface/open-r1
| 240
|
How to do knowledge distillation training
|
In the DeepSeek-R1 technical report, there are small distilled models at the end: DeepSeek-R1 acts as the teacher model, and Qwen and Llama act as student models that are SFT'd on distilled data. However, it seems that the process of knowledge distillation is not involved here (in open-r1), that is, the process where the R1 teacher model corrects the student model's outputs; it is simply SFT on distilled data.
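To make the distinction concrete, here is a minimal sketch of classic logit-level knowledge distillation; this is not claimed to be what the report or open-r1 implements, it only illustrates "the teacher correcting the student's output distribution" as opposed to plain SFT on distilled data:
```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # The student is pushed toward the teacher's softened token distribution.
    s = F.log_softmax(student_logits / temperature, dim=-1)
    t = F.softmax(teacher_logits / temperature, dim=-1)
    return F.kl_div(s, t, reduction="batchmean") * temperature**2
```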
|
https://github.com/huggingface/open-r1/issues/240
|
open
|
[] | 2025-02-08T06:50:20Z
| 2025-02-27T08:16:02Z
| null |
RyanOvO
|
huggingface/transformers.js-examples
| 42
|
How to stop the transformerJS webGPU models when they chat for too long.
|
@xenova Hi Josh.
I am making several very capable Transformers.js single-page applications and I really like what they are doing. My demo index page is [here](https://hpssjellis.github.io/my-examples-of-transformersJS/public/index.html), but I can't seem to stop any of my examples if they are taking too long and then make another request. I have tried several methods with the streamer, a stopFlag, or an AbortController, but nothing seems to be error-free.
Any suggestions I have included my single page application of deepseekR1 for reference.
(Note: Single page applications are great for beginners and can be easily downloaded and ran locally after the model is cached)
```
<!doctype html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<script type="module">
import { pipeline, TextStreamer } from 'https://cdn.jsdelivr.net/npm/@huggingface/transformers@3.3.2';
// Needed for buttons to call module functions
window.myLoadModel = myLoadModel
window.myAskQuestion = myAskQuestion
let generator
//let myStopFlag = false; // Global stop flag
let streamer = null; // Keep track of streamer instance
let myModel
let abortController; // Add this global variable
abortController = new AbortController(); // Create the controller
let myContent = document.getElementById('myArea01').value
console.log(myContent)
// Create a text generation pipeline
async function myLoadModel() {
myModel = document.getElementById('myModelInput').value
const progressCallback = (progress) => {
// console.log(progress);
const myProg = parseInt(progress.progress);
document.getElementById('progress').textContent = `Loading: ${progress.file} at ${myProg}%`; //(progress * 100).toFixed(2)
};
generator = await pipeline("text-generation", myModel, { dtype: "q4f16", device: "webgpu", progress_callback: progressCallback });
document.getElementById('myLoadButton').disabled = true
document.getElementById('myAskButton').disabled = false
document.getElementById('progress').textContent = `Loading: Done!`;
}
async function myAskQuestion() {
document.getElementById('myTextarea01').value = '';
myContent = document.getElementById('myArea01').value;
const messages = [{ role: "user", content: myContent }];
// myStopFlag = false; // Reset stop flag before starting
// document.getElementById('myStopButton').disabled = false; // Enable stop button
// Clear any existing streamer instance before starting a new one
streamer = new TextStreamer(generator.tokenizer, {
skip_prompt: true,
callback_function: (text) => {
// if (myStopFlag) return; // Stop updating if stop flag is set
if (!window.startTime) {
window.startTime = performance.now();
}
const currentTime = performance.now();
const elapsedTime = (currentTime - window.startTime) / 1000;
document.getElementById('myTextarea01').value += text;
const generatedTokens = document.getElementById('myTextarea01').value.length;
const tokensPerSecond = generatedTokens / elapsedTime;
const progress = parseInt((generatedTokens * 100) / (myMaxT * 10));
document.getElementById('progress').textContent = `Answer progress: ~${progress}%, Tokens per second: ${tokensPerSecond.toFixed(2)}`;
if (progress >= 100) {
window.startTime = null;
}
},
});
const myMaxT = document.getElementById('myMaxTokens').value;
const myDo_sample = document.getElementById('myDo_sample').value;
const myTop_p = document.getElementById('myTop_p').value;
const myTemperature = document.getElementById('myTemperature').value;
const myChain_of_thought = document.getElementById('myChain_of_thought').value;
console.log(` maxT:${myMaxT}, do-sample:${myDo_sample}, top_p:${myTop_p}, temp:${myTemperature}, chain-of-thought:${myChain_of_thought}, `)
try {
const output = await generator(messages, {
max_new_tokens: myMaxT,
do_sample: myDo_sample,
top_p: myTop_p, // 0.9
temperature: myTemperature, // 0.7
streamer,
chain_of_thought: myChain_of_thought,
});
// if (!myStopFlag) {
let fullReply = output[0].generated_text.at(-1).content;
let myReply = fullReply.replace(/<think>/g, "").replace(/<\/think>/g, "\r\n\r\nResponse: ").replace(/```/g, "");
document.getElementById('myTextarea01').value = `Asking: ${myContent}\r\n\r\nAnswer: ${myReply}`;
// }
} catch (error) {
console.error('Error:', error);
}
}
</script>
</head>
<body>
<h1>DeepSeek-R1-webgpu in the browser</h1>
Open the console. shift-ctrl-i <br><br>
Fully javascript activated. If you don't want to completely download
<a href="onnx-community/DeepSeek-R1-Distill-Qwen-1.5B-ONNX">
onnx-community/DeepSeek-R1-Distill-Qwen-1.5B-ONNX </a> then you should probably close this page.<br><br>
It will load from cache if downloaded once.<br><br>
Uses the Web-gpu model or other models: <input id="myModelInput" type=text size=60 value="onnx-communit
|
https://github.com/huggingface/transformers.js-examples/issues/42
|
closed
|
[] | 2025-02-08T04:38:51Z
| 2025-02-08T22:05:23Z
| null |
hpssjellis
|
huggingface/lerobot
| 692
|
How to evaluate policy on real robot and sim environment
|
I am working on evaluating a trained policy on a real robot and in a simulated environment (Isaac Gym). However, I am uncertain about the process and communication mechanisms involved.
My questions are:
- Evaluating on a real robot:
> How do I retrieve real-time observations from the real robot with Lerobot?
- Evaluating in simulation (Isaac Gym):
> Can I directly evaluate my trained policy in Isaac Gym?
|
https://github.com/huggingface/lerobot/issues/692
|
closed
|
[
"question",
"simulation"
] | 2025-02-07T13:40:27Z
| 2025-10-17T11:20:29Z
| null |
ShiyaoExtendQA
|
huggingface/diffusers
| 10,743
|
Support zero-3 for FLUX training
|
### Describe the bug
Due to memory limitations, I am attempting to use Zero-3 for Flux training on 8 GPUs with 32GB each. I encountered a bug similar to the one reported in this issue: https://github.com/huggingface/diffusers/issues/1865. I made modifications based on the solution proposed in this pull request: https://github.com/huggingface/diffusers/pull/3076. However, the same error persists. In my opinion, the fix does not work as expected, at least not entirely. Could you advise on how to modify it further?
The relevant code from https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/train_dreambooth_lora_flux.py#L1157 has been updated as follows:
```
def deepspeed_zero_init_disabled_context_manager():
"""
returns either a context list that includes one that will disable zero.Init or an empty context list
"""
deepspeed_plugin = AcceleratorState().deepspeed_plugin if accelerate.state.is_initialized() else None
print(f"deepspeed_plugin: {deepspeed_plugin}")
if deepspeed_plugin is None:
return []
return [deepspeed_plugin.zero3_init_context_manager(enable=False)]
with ContextManagers(deepspeed_zero_init_disabled_context_manager()):
text_encoder_one, text_encoder_two = load_text_encoders(text_encoder_cls_one, text_encoder_cls_two)
vae = AutoencoderKL.from_pretrained(
args.pretrained_model_name_or_path,
subfolder="vae",
revision=args.revision,
variant=args.variant,
)
```
### Reproduction
deepspeed config:
```json
{
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto",
"gradient_accumulation_steps":"auto",
"zero_optimization": {
"stage": 3,
"offload_optimizer": {"device": "cpu"},
"stage3_gather_16bit_weights_on_model_save": false,
"overlap_comm": false
},
"bf16": {
"enabled": true
},
"fp16": {
"enabled": false
}
}
```
accelerate config:
```
compute_environment: LOCAL_MACHINE
deepspeed_config:
deepspeed_config_file: "config/ds_config.json"
distributed_type: DEEPSPEED
machine_rank: 0
main_training_function: main
num_machines: 1
num_processes: 8
```
training shell:
```
#!/bin/bash
export MODEL_NAME="black-forest-labs/FLUX.1-dev"
export INSTANCE_DIR="dog"
export OUTPUT_DIR="trained-flux"
export DS_SKIP_CUDA_CHECK=1
export ACCELERATE_CONFIG_FILE="config/accelerate_config.yaml"
ACCELERATE_CONFIG_FILE_PATH=${1:-$ACCELERATE_CONFIG_FILE}
FLUXOUTPUT_DIR=flux_lora_output
mkdir -p $FLUXOUTPUT_DIR
accelerate launch --config_file $ACCELERATE_CONFIG_FILE_PATH train_dreambooth_lora_flux.py \
--pretrained_model_name_or_path=$MODEL_NAME \
--instance_data_dir=$INSTANCE_DIR \
--output_dir=$OUTPUT_DIR \
--mixed_precision="bf16" \
--instance_prompt="a photo of sks dog" \
--resolution=1024 \
--train_batch_size=4 \
--guidance_scale=1 \
--gradient_accumulation_steps=1 \
--learning_rate=1e-4 \
--report_to="tensorboard" \
--lr_scheduler="constant" \
--lr_warmup_steps=0 \
--max_train_steps=100 \
--gradient_checkpointing \
--seed="0"
```
### Logs
```shell
RuntimeError: 'weight' must be 2-D
```
### System Info
pytorch: 2.1.0
deepspeed: 0.14.0
accelerate: 1.3.0
diffusers: develop
### Who can help?
_No response_
|
https://github.com/huggingface/diffusers/issues/10743
|
closed
|
[
"bug"
] | 2025-02-07T12:50:44Z
| 2025-10-27T09:33:59Z
| 9
|
xiaoyewww
|
pytorch/pytorch
| 146,682
|
How to get last layer hidden state of transformer model while convert model to onnx format?
|
I am currently working with a model that has been exported to the ONNX format. For my project, I need to extract the last layer hidden states during inference. However, I couldn’t find any documentation or example that explains how to achieve this using an ONNX-exported model.
Does the ONNX format retain the capability to extract the last layer hidden states?
Thanks!
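For reference, here is a minimal sketch of the workaround I am considering (I am not sure it is the intended approach): wrap the model so its `forward` returns the last hidden state, then export that wrapper. The model name and shapes below are just placeholders.
```python
import torch
from transformers import AutoModel, AutoTokenizer

class LastHiddenStateWrapper(torch.nn.Module):
    """Expose only the last layer hidden state so it becomes an ONNX graph output."""
    def __init__(self, model):
        super().__init__()
        self.model = model

    def forward(self, input_ids, attention_mask):
        out = self.model(input_ids=input_ids, attention_mask=attention_mask)
        return out.last_hidden_state  # (batch, seq_len, hidden_size)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased").eval()
wrapper = LastHiddenStateWrapper(model)

enc = tokenizer("hello world", return_tensors="pt")
torch.onnx.export(
    wrapper,
    (enc["input_ids"], enc["attention_mask"]),
    "model_last_hidden.onnx",
    input_names=["input_ids", "attention_mask"],
    output_names=["last_hidden_state"],
    dynamic_axes={
        "input_ids": {0: "batch", 1: "seq"},
        "attention_mask": {0: "batch", 1: "seq"},
        "last_hidden_state": {0: "batch", 1: "seq"},
    },
)
```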
### Alternatives
_No response_
### Additional context
_No response_
|
https://github.com/pytorch/pytorch/issues/146682
|
closed
|
[
"module: onnx",
"triaged"
] | 2025-02-07T08:35:07Z
| 2025-03-03T20:42:20Z
| null |
Jianshu-She
|
huggingface/alignment-handbook
| 210
|
Problem with multi-epoch training
|
Hi, I ran the ORPO code for 1 epoch and there was no issue. But when I tried to run the code for 5 epochs, I got the following error right at the start of the second epoch:
```
RuntimeError: Tensors of the same index must be on the same device and the same dtype except `step` tensors that can be CPU and float32 notwithstanding
```
Any idea of what could be wrong and how to fix it? Thank you!
|
https://github.com/huggingface/alignment-handbook/issues/210
|
open
|
[] | 2025-02-07T04:50:41Z
| 2025-02-07T04:50:41Z
| 0
|
sowmaster
|
pytorch/executorch
| 8,282
|
Advise on how to run the training example on iOS
|
### 🚀 The feature, motivation and pitch
Hello team,
I was wondering if it is possible to run the `train_xor` or a similar training example on an iOS device.
So be able to do
`#import <executorch/extension/training/training_module.h>`
I have followed this guide: https://pytorch.org/executorch/main/apple-runtime and was able to build the xcframework using a local copy of executorch, add it to the Xcode project, and run it on an iOS device.
I guess I need to compile and package the libraries in https://github.com/pytorch/executorch/tree/main/extension/training to the App, but I don't know how to do that, could you give some pointers?
Thanks!
### Alternatives
_No response_
### Additional context
_No response_
### RFC (Optional)
_No response_
cc @shoumikhin @JacobSzwejbka
|
https://github.com/pytorch/executorch/issues/8282
|
closed
|
[
"triaged",
"module: ios",
"module: training"
] | 2025-02-06T18:57:43Z
| 2025-09-02T16:46:06Z
| null |
YuanTingHsieh
|
huggingface/smolagents
| 521
|
authenticated sessions with smolagents (how to be logged in during browser use)
|
**Is your feature request related to a problem? Please describe.**
I would like smolagents to be able to use websites with my login credentials.
**Describe the solution you'd like**
Either a way to give Helium credentials, or a way to use my actual browser, like: https://github.com/browser-use/browser-use/blob/main/examples/browser/real_browser.py
**Is this not possible with the current options?**
I'm fairly certain this is not possible with the current implementation. (If it is, can you make a demo code?)
**Describe alternatives you've considered**
I can use https://github.com/browser-use/browser-use/ instead
**Additional context**
https://github.com/browser-use/browser-use/ does a really good job of providing multiple options for this.
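For context, the closest I have gotten is a sketch like the one below, assuming Helium's `start_chrome` accepts Selenium `ChromeOptions` (the profile path is a placeholder):
```python
from selenium import webdriver
import helium

options = webdriver.ChromeOptions()
# Reuse a Chrome profile that is already logged in to the sites the agent needs.
options.add_argument("--user-data-dir=/home/me/.config/google-chrome")
options.add_argument("--profile-directory=Default")

driver = helium.start_chrome("https://example.com", options=options)
# Helium's high-level API and the returned Selenium driver now share the
# authenticated session, so browser tools built on top of them are logged in.
```
One caveat: Chrome must not already be running with that profile, otherwise the user-data-dir is locked.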
|
https://github.com/huggingface/smolagents/issues/521
|
open
|
[
"enhancement"
] | 2025-02-06T15:51:53Z
| 2025-02-06T15:51:53Z
| null |
rawwerks
|
huggingface/open-r1
| 210
|
How to push own dataset to hub with train and test dataset?
|
How do I push my own dataset to the hub along with the training and test datasets?
```python
train_distiset = pipeline.run(dataset=train_dataset)
test_distiset = pipeline.run(dataset=test_dataset)
```
There is a problem with the code above.
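To clarify what I'm after, this is roughly the result I want: both splits published under one Hub repo. The split accessors below are guesses, since I'm not sure how to turn each Distiset back into a plain `Dataset`:
```python
from datasets import DatasetDict

dataset_dict = DatasetDict({
    # Assumed layout: distiset[<leaf step name>]["train"] holds the generated rows.
    "train": train_distiset["default"]["train"],
    "test": test_distiset["default"]["train"],
})
dataset_dict.push_to_hub("my-username/my-dataset")
```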
|
https://github.com/huggingface/open-r1/issues/210
|
closed
|
[] | 2025-02-06T15:28:15Z
| 2025-02-08T05:59:13Z
| null |
JACKYLUO1991
|
huggingface/peft
| 2,364
|
docs: broken links to boft
|
### System Info
on page: https://huggingface.co/docs/peft/v0.14.0/en/conceptual_guides/oft
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder
- [ ] My own task or dataset (give details below)
### Reproduction
on page: https://huggingface.co/docs/peft/v0.14.0/en/conceptual_guides/oft
Snippet:
Take a look at the following step-by-step guides on how to finetune a model with BOFT:
[Dreambooth finetuning with BOFT](https://huggingface.co/docs/peft/v0.14.0/en/task_guides/boft_dreambooth)
[Controllable generation finetuning with BOFT (ControlNet)](https://huggingface.co/docs/peft/v0.14.0/en/task_guides/boft_controlnet)
### Expected behavior
perhaps the links should lead to
https://github.com/huggingface/peft/blob/main/examples/boft_dreambooth/boft_dreambooth.md
https://github.com/huggingface/peft/blob/main/examples/boft_controlnet/boft_controlnet.md
|
https://github.com/huggingface/peft/issues/2364
|
closed
|
[] | 2025-02-06T14:48:16Z
| 2025-02-07T10:14:44Z
| 1
|
makelinux
|
huggingface/open-r1
| 207
|
DeepSeek RL-Zero: How to clone DeepSeek RL-Zero?
|
How to clone DeepSeek RL-Zero?
|
https://github.com/huggingface/open-r1/issues/207
|
open
|
[] | 2025-02-06T13:45:33Z
| 2025-02-06T13:45:33Z
| null |
win10ogod
|
pytorch/pytorch
| 146,575
|
How to pip3 torch==2.1.0.dev20230822+cu118
|
> I’ve tried installing this specific version multiple times, but the issue keeps occurring.
pip3 install torch==2.1.0.dev20230822+cu118
```
ERROR: Could not find a version that satisfies the requirement torch==2.1.0.dev20230822+cu118 (from versions: 1.13.0, 1.13.1, 2.0.0, 2.0.1, 2.1.0, 2.1.1, 2.1.2, 2.2.0, 2.2.1, 2.2.2, 2.3.0, 2.3.1, 2.4.0, 2.4.1, 2.5.0, 2.5.1, 2.6.0)
ERROR: No matching distribution found for torch==2.1.0.dev20230822+cu118
```
> PLEASE HELP ME WITH A GUIDE TO SOLVE THIS ISSUE <3
### Suggest a potential alternative/fix
_No response_
cc @seemethere @malfet @osalpekar @atalman
|
https://github.com/pytorch/pytorch/issues/146575
|
closed
|
[
"module: binaries",
"triaged"
] | 2025-02-06T06:07:34Z
| 2025-02-06T15:14:25Z
| null |
minhphi1712
|
huggingface/smolagents
| 501
|
How to run open_deep_research?
|
How to run open_deep_research?
|
https://github.com/huggingface/smolagents/issues/501
|
closed
|
[
"bug"
] | 2025-02-05T13:35:52Z
| 2025-03-19T07:28:22Z
| null |
win4r
|
pytorch/ao
| 1,665
|
NF4Tensor and DDP
|
I am trying to use `NF4Tensor` weights in my model and wrap it with `DistributedDataParallel`, but get the following error:
```
[rank0]: model = DistributedDataParallel(
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/path/to/venv/lib/python3.12/site-packages/torch/nn/parallel/distributed.py", line 837, in __init__
[rank0]: _sync_module_states(
[rank0]: File "/path/to/venv/lib/python3.12/site-packages/torch/distributed/utils.py", line 313, in _sync_module_states
[rank0]: _sync_params_and_buffers(process_group, module_states, broadcast_bucket_size, src)
[rank0]: File "/path/to/venv/lib/python3.12/site-packages/torch/distributed/utils.py", line 324, in _sync_params_and_buffers
[rank0]: dist._broadcast_coalesced(
[rank0]: File "/path/to/venv/lib/python3.12/site-packages/torch/_dynamo/eval_frame.py", line 745, in _fn
[rank0]: return fn(*args, **kwargs)
[rank0]: ^^^^^^^^^^^^^^^^^^^
[rank0]: File "/path/to/venv/lib/python3.12/site-packages/torchao/dtypes/nf4tensor.py", line 834, in __torch_dispatch__
[rank0]: raise NotImplementedError(
[rank0]: NotImplementedError: NF4Tensor dispatch: attempting to run aten.cat.default, this is not supported
```
To replicate:
```
from torchao.dtypes.nf4tensor import linear_nf4, to_nf4
from torch.nn.parallel import DistributedDataParallel
from torch import nn
import os
import torch
class NF4(nn.Module):
def __init__(
self,
device = None,
):
super().__init__()
self.linear = nn.Linear(512, 512, bias=False, device=device)
self.linear.weight = nn.Parameter(to_nf4(self.linear.weight))
if __name__ == "__main__":
_local_rank = int(os.getenv("LOCAL_RANK", "0"))
_device = f"cuda:{_local_rank}"
torch.distributed.init_process_group(
backend="nccl",
init_method="env://",
device_id=torch.device(_local_rank),
)
model = NF4(_device)
model = DistributedDataParallel(model)
```
`torchrun --nproc_per_node=2 script.py`
`NotImplementedError: NF4Tensor dispatch: attempting to run c10d.broadcast_.default, this is not supported`
Is there some way around this issue?
|
https://github.com/pytorch/ao/issues/1665
|
closed
|
[
"question"
] | 2025-02-05T12:12:27Z
| 2025-02-18T02:35:05Z
| null |
psinger
|
pytorch/torchtitan
| 821
|
WARNING - When using FSDP, it's recommended to enable config.force_recompute_fp8_weight_in_bwd.
|
Not necessarily an issue, but I see this log quite a lot when I enable Float8. I can open a PR to turn it on, but was wondering if it was intentional. Thanks for the great library!
|
https://github.com/pytorch/torchtitan/issues/821
|
closed
|
[
"question",
"module: fsdp"
] | 2025-02-05T05:04:38Z
| 2025-02-18T18:32:34Z
| null |
c0g
|
huggingface/trl
| 2,768
|
How to log more metrics with wandb when using GRPO trainer and accelerate
|
### Reproduction
```python
def correctness_reward_func(prompts, completions, answer, **kwargs) -> list[float]:
responses = [completion[0]["content"] for completion in completions]
q = prompts[0][-1]["content"]
extracted_responses = [extract_xml_answer(r) for r in responses]
# Get current step from trainer's state
current_step = trainer.state.global_step if hasattr(trainer, "state") else 0
# Initialize logger if not already done
global example_logger
if not hasattr(correctness_reward_func, "example_logger"):
example_logger = LocalExampleLogger()
correctness_reward_func.example_logger = example_logger
# Log each example
for i in range(len(responses)):
example_dict = {
"step": current_step,
"question": q,
"true_answer": answer[i],
"response": responses[i],
"extracted_response": extracted_responses[i],
"correct": extracted_responses[i] == answer[i],
"generation_idx": i, # Which generation attempt this was
}
example_logger.log_example(example_dict)
# Calculate marker counts and correctness for all responses
is_correct = [r == a for r, a in zip(extracted_responses, answer)]
uncertainty_counts = [count_uncertainty_markers(r) for r in responses]
internal_dialogue_counts = [count_internal_dialogue_markers(r) for r in responses]
reflective_counts = [count_reflective_markers(r) for r in responses]
# Separate counts for correct and incorrect responses
correct_indices = [i for i, correct in enumerate(is_correct) if correct]
incorrect_indices = [i for i, correct in enumerate(is_correct) if not correct]
# Log metrics using trainer's accelerator
if hasattr(trainer, "accelerator"):
### NONE OF THE BELOW ARE LOGGED ON WANDB
metrics = {
"correctness/correct_count": len(correct_indices),
"correctness/total_examples": len(responses),
"correctness/accuracy": len(correct_indices) / len(responses),
# Total markers across all responses
"markers/total/uncertainty": sum(uncertainty_counts),
"markers/total/internal_dialogue": sum(internal_dialogue_counts),
"markers/total/reflective": sum(reflective_counts),
# Markers in correct responses
"markers/correct/uncertainty": sum(
uncertainty_counts[i] for i in correct_indices
)
if correct_indices
else 0,
"markers/correct/internal_dialogue": sum(
internal_dialogue_counts[i] for i in correct_indices
)
if correct_indices
else 0,
"markers/correct/reflective": sum(
reflective_counts[i] for i in correct_indices
)
if correct_indices
else 0,
# Markers in incorrect responses
"markers/incorrect/uncertainty": sum(
uncertainty_counts[i] for i in incorrect_indices
)
if incorrect_indices
else 0,
"markers/incorrect/internal_dialogue": sum(
internal_dialogue_counts[i] for i in incorrect_indices
)
if incorrect_indices
else 0,
"markers/incorrect/reflective": sum(
reflective_counts[i] for i in incorrect_indices
)
if incorrect_indices
else 0,
}
trainer.accelerator.log(metrics, step=current_step)
return [2.0 if r == a else 0.0 for r, a in zip(extracted_responses, answer)]
.......
model_name = config["model"]["name"]
output_dir = config["training"]["output_dir"]
run_name = config["training"]["run_name"]
training_args = GRPOConfig(
output_dir=output_dir,
run_name=run_name,
learning_rate=config["training"]["learning_rate"],
adam_beta1=config["training"]["adam_beta1"],
adam_beta2=config["training"]["adam_beta2"],
weight_decay=config["training"]["weight_decay"],
warmup_ratio=config["training"]["warmup_ratio"],
lr_scheduler_type=config["training"]["lr_scheduler_type"],
logging_steps=config["training"]["logging_steps"],
bf16=config["training"]["bf16"],
per_device_train_batch_size=config["training"]["per_device_train_batch_size"],
gradient_accumulation_steps=config["training"]["gradient_accumulation_steps"],
num_generations=config["training"]["num_generations"],
max_prompt_length=config["training"]["max_prompt_length"],
max_completion_length=config["training"]["max_completion_length"],
num_train_epochs=config["training"]["num_train_epochs"],
save_steps=config["training"]["save_steps"],
max_grad_norm=config["training"]["max_grad_norm"],
report_to=["wandb"]
if (not torch.distributed.is_initialized() or torch.distributed.get_rank() == 0)
else [],
log_on_each_node=False, # Only log on main node
use_vllm
|
https://github.com/huggingface/trl/issues/2768
|
open
|
[
"✨ enhancement",
"⚡accelerate",
"🏋 GRPO"
] | 2025-02-05T03:59:10Z
| 2025-02-05T03:59:54Z
| null |
andrewsiah
|
pytorch/ao
| 1,664
|
Tensor subclass methods for `DTensor` and `FSDP2`
|
Is there a protocol / interface that a tensor subclass must implement in order to be used with `DTensor` primitives and for training with `FSDP2`?
I've been walking through `NF4` as an example as it [covers both](https://github.com/search?q=repo%3Apytorch%2Fao+FSDP2+and+NF4&type=pullrequests). However, the methods are scattered across `__torch_function__` and `__torch_dispatch__` (though the unittests make it clear which ops are tested for `FSDP`).
Is there a cleaner / expected format for subclassing a tensor such that
- it can be used with `DTensor` collectives and `FSDP2`, and
- composed with subclass-specific overrides for streamlined use with `torch.compile`?
@msaroufim @awgu @weifengpy @jerryzh168
---
p.s. Fwiw, also looked at the developer-guide tensor subclass example but found the abstractions a bit hard to follow; would personally prefer using torch-native functionalities.
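For concreteness, this is the minimal wrapper-subclass skeleton I have been working from (the "traceable subclass" pattern with `__tensor_flatten__` / `__tensor_unflatten__` plus `__torch_dispatch__`, assuming a recent PyTorch); what is unclear to me is which additional aten and c10d ops `DTensor` and `FSDP2` expect on top of this:
```python
import torch
from torch.utils._pytree import tree_map

class MyTensor(torch.Tensor):
    @staticmethod
    def __new__(cls, inner: torch.Tensor):
        return torch.Tensor._make_wrapper_subclass(
            cls, inner.shape, dtype=inner.dtype, device=inner.device,
            requires_grad=inner.requires_grad,
        )

    def __init__(self, inner: torch.Tensor):
        self.inner = inner  # the plain tensor actually holding the data

    def __tensor_flatten__(self):
        # Required for torch.compile / AOTAutograd to trace through the subclass.
        return ["inner"], None

    @staticmethod
    def __tensor_unflatten__(inner_tensors, meta, outer_size, outer_stride):
        return MyTensor(inner_tensors["inner"])

    @classmethod
    def __torch_dispatch__(cls, func, types, args=(), kwargs=None):
        kwargs = kwargs or {}
        unwrap = lambda t: t.inner if isinstance(t, MyTensor) else t
        out = func(*tree_map(unwrap, args), **tree_map(unwrap, kwargs))
        rewrap = lambda t: MyTensor(t) if isinstance(t, torch.Tensor) else t
        return tree_map(rewrap, out)
```
NF4 appears to special-case ops such as `split`, `copy_`, and `new_zeros` for FSDP2, which is exactly the part I would like to see a documented protocol for.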
|
https://github.com/pytorch/ao/issues/1664
|
open
|
[
"question"
] | 2025-02-05T00:40:54Z
| 2025-02-05T23:33:35Z
| null |
jeromeku
|
pytorch/torchtitan
| 818
|
Is user-defined initializers a must-have for FSDP2?
|
```
with torch.device("meta"):
model = Transformer()
for module in model.modules():
if isinstance(module, TransformerBlock):
fully_shard(module)
fully_shard(model)
for tensor in itertools.chain(model.parameters(), model.buffers()):
assert tensor.device == torch.device("meta")
# Allocate buffers and sharded parameters on GPU
model.to_empty(device="cuda")
# Run user-defined initializers
model.init_weights() # or `model.apply(init_weights)`
```
Could I skip `model.init_weights()` (or `model.apply(init_weights)`) if I just want to use the weights that were already initialized before sharding?
|
https://github.com/pytorch/torchtitan/issues/818
|
closed
|
[
"question"
] | 2025-02-04T22:00:45Z
| 2025-02-05T18:03:29Z
| null |
goldhuang
|
huggingface/open-r1
| 183
|
How to directly input embeddings into the model?
|
My data are token embeddings (i.e., already after tokenization). Is there a way to directly input the embeddings into the DeepSeek open-r1 model?
For example, when I use the BERT model via Hugging Face, I can simply input the embeddings using the "inputs_embeds" parameter:
```
from transformers import BertModel
bert = BertModel.from_pretrained('bert-base-uncased')
outputs = bert(inputs_embeds = ...)
```
Is there a similar way of doing so with the DeepSeek open-r1 model?
Thank you!
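For reference, this is the kind of usage I am hoping for, assuming the distilled models accept `inputs_embeds` like other decoder-only models in transformers (the model name and shapes are placeholders, and the embeddings would of course need to live in the model's own embedding space):
```python
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B", torch_dtype=torch.bfloat16
)

hidden_size = model.config.hidden_size
inputs_embeds = torch.randn(1, 16, hidden_size, dtype=torch.bfloat16)  # (batch, seq, hidden)

outputs = model(inputs_embeds=inputs_embeds)                           # plain forward pass
generated = model.generate(inputs_embeds=inputs_embeds, max_new_tokens=20)
```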
|
https://github.com/huggingface/open-r1/issues/183
|
open
|
[] | 2025-02-04T21:10:13Z
| 2025-02-04T21:10:13Z
| null |
CCCC1800
|
huggingface/open-r1
| 180
|
How to launch GRPO with vLLM on multi-node slurm?
|
How should I write an sbatch script to run GRPO with vLLM on multiple nodes? What should `--num_processes` be set to? Is [GRPOTrainer](https://github.com/huggingface/trl/blob/1f344c9377d87cd348d92b78f27afea8e66563d7/trl/trainer/grpo_trainer.py#L288-L298) compatible with multi-node training?
|
https://github.com/huggingface/open-r1/issues/180
|
open
|
[] | 2025-02-04T16:58:50Z
| 2025-03-14T15:55:18Z
| null |
pbelevich
|
huggingface/lerobot
| 678
|
The inverse kinematic solution code of so-100
|
Is there any inverse kinematics code for the SO-100 that just needs the x, y coordinates on my desk as input and can then move the arm to the target coordinate?
Thanks for any response.
|
https://github.com/huggingface/lerobot/issues/678
|
open
|
[
"question",
"robots"
] | 2025-02-04T03:58:17Z
| 2025-10-15T16:55:01Z
| null |
gxy-1111
|
huggingface/diffusers
| 10,710
|
Is DDUF format supported?
|
I checked this PR, https://github.com/huggingface/diffusers/pull/10037 and it is merged
```
from diffusers import DiffusionPipeline
import torch
pipe = DiffusionPipeline.from_pretrained(
"DDUF/FLUX.1-dev-DDUF", dduf_file="FLUX.1-dev.dduf", torch_dtype=torch.bfloat16
)
image = pipe(
"photo a cat holding a sign that says Diffusers", num_inference_steps=50, guidance_scale=3.5
).images[0]
image.save("cat.png")
```
```
(venv) C:\aiOWN\diffuser_webui>python FLUX_DDUF.py
Fetching 1 files: 100%|█████████████████████████████████████████████████████| 1/1 [00:00<?, ?it/s]
Loading state_dict: 100%|███████████████████████████████████████████| 2/2 [00:32<00:00, 16.05s/it]
Loading pipeline components...: 29%|████████▊ | 2/7 [00:34<01:10, 14.12s/it]You set `add_prefix_space`. The tokenizer needs to be converted from the slow tokenizers
Loading pipeline components...: 57%|█████████████████▋ | 4/7 [00:34<00:26, 8.73s/it]
Traceback (most recent call last):
File "C:\aiOWN\diffuser_webui\FLUX_DDUF.py", line 4, in <module>
pipe = DiffusionPipeline.from_pretrained(
File "C:\aiOWN\diffuser_webui\venv\lib\site-packages\huggingface_hub\utils\_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
File "C:\aiOWN\diffuser_webui\venv\lib\site-packages\diffusers\pipelines\pipeline_utils.py", line 951, in from_pretrained
loaded_sub_model = load_sub_model(
File "C:\aiOWN\diffuser_webui\venv\lib\site-packages\diffusers\pipelines\pipeline_loading_utils.py", line 742, in load_sub_model
loaded_sub_model = load_method(name, **loading_kwargs)
File "C:\aiOWN\diffuser_webui\venv\lib\site-packages\huggingface_hub\utils\_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
File "C:\aiOWN\diffuser_webui\venv\lib\site-packages\diffusers\models\modeling_utils.py", line 931, in from_pretrained
model_file = _merge_sharded_checkpoints(
File "C:\aiOWN\diffuser_webui\venv\lib\site-packages\diffusers\models\model_loading_utils.py", line 365, in _merge_sharded_checkpoints
raise FileNotFoundError(f"Part file {file_name} not found.")
FileNotFoundError: Part file diffusion_pytorch_model-00003-of-00003.safetensors not found.
```
```
- 🤗 Diffusers version: 0.33.0.dev0
- Platform: Windows-10-10.0.26100-SP0
- Running on Google Colab?: No
- Python version: 3.10.11
- PyTorch version (GPU?): 2.5.1+cu124 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Huggingface_hub version: 0.27.1
- Transformers version: 4.48.1
- Accelerate version: 1.4.0.dev0
- PEFT version: 0.14.1.dev0
- Bitsandbytes version: 0.45.1
- Safetensors version: 0.5.2
- xFormers version: not installed
- Accelerator: NVIDIA GeForce RTX 4060 Laptop GPU, 8188 MiB
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
-
```
|
https://github.com/huggingface/diffusers/issues/10710
|
closed
|
[] | 2025-02-03T17:42:37Z
| 2025-02-23T17:56:26Z
| 4
|
nitinmukesh
|
huggingface/trl
| 2,754
|
How to do multi-node training for GRPO with DeepSpeed + vLLM?
|
### Multi-Node Request
I am interested in doing multi-node (4 x 8 GPUs) reinforcement fine-tuning of 8B (or 14B) models using GRPO. However, given that at least 1 GPU needs to be assigned to vLLM, I am not sure how exactly to run a multi-node setup. Would it be possible for you to share a simple set of scripts (config files and main .py file) with which I can test locally?
### Possible to give more GPUs to vLLM?
Also, in the case of multi-node training, would it be better to assign more GPUs to vLLM for faster (distributed) generation? Currently, if I pass “cuda:6,7”, it throws an error saying it expected a single base-10 digit.
|
https://github.com/huggingface/trl/issues/2754
|
closed
|
[
"🚀 deepspeed",
"🏋 GRPO"
] | 2025-02-03T16:03:23Z
| 2025-03-22T12:51:19Z
| null |
nikhilchandak
|
pytorch/ao
| 1,653
|
[Doc] gemlite version
|
What gemlite version is required/supported? Can we specify this in the readme?
|
https://github.com/pytorch/ao/issues/1653
|
closed
|
[
"topic: documentation",
"question"
] | 2025-02-03T14:26:29Z
| 2025-05-02T18:00:20Z
| null |
bhack
|
pytorch/text
| 2,283
|
import torchtext fails
|
## 🐛 Bug
Today I installed torchtext on my Ubuntu Linux machine. When I tried to import torchtext in Python, the import failed.
Details
1. Ubuntu 24.04.1 LTS
2. Python 3.12.3
3. PyTorch Version 2.5.1+cu124 (running fine)
4. During the torchtext install I saw messages suggesting that the version is 0.18, which according to what I read, is the last one to be supported.
5. The error messages I get when I issue the command "import torchtext" are below.
6. QUESTION: Given that torchtext will not be supported any more, is there an alternative API for text processing in PyTorch that will take the role of torchtext?
```
import torchtext
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/drv3/hm3/code/python/torch/lib/python3.12/site-packages/torchtext/__init__.py", line 18, in <module>
from torchtext import _extension # noqa: F401
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/drv3/hm3/code/python/torch/lib/python3.12/site-packages/torchtext/_extension.py", line 64, in <module>
_init_extension()
File "/drv3/hm3/code/python/torch/lib/python3.12/site-packages/torchtext/_extension.py", line 58, in _init_extension
_load_lib("libtorchtext")
File "/drv3/hm3/code/python/torch/lib/python3.12/site-packages/torchtext/_extension.py", line 50, in _load_lib
torch.ops.load_library(path)
File "/drv3/hm3/code/python/torch/lib/python3.12/site-packages/torch/_ops.py", line 1350, in load_library
ctypes.CDLL(path)
File "/usr/lib/python3.12/ctypes/__init__.py", line 379, in __init__
self._handle = _dlopen(self._name, mode)
^^^^^^^^^^^^^^^^^^^^^^^^^
```
|
https://github.com/pytorch/text/issues/2283
|
open
|
[] | 2025-02-03T01:20:48Z
| 2025-02-03T01:20:48Z
| 0
|
JuanVargas
|
huggingface/lerobot
| 673
|
configure_motor.py says it's increasing the max acceleration of feetech motors, but is decreasing it
|
I built my SO ARM 100s before reading the huggingface instructions, so I am trying to retroactively setup the servos properly. I looked into configure_motor.py to see what it was doing so I could configure it manually, and I notice that for Feetech motors it sets Maximum_Acceleration to 254 to " speedup acceleration and deceleration of the motors". I read that value from all of the servos in both arms and the setting I was shipped with is 306, which, I assume, means faster acceleration and deceleration than 254.
|
https://github.com/huggingface/lerobot/issues/673
|
closed
|
[
"question",
"robots"
] | 2025-02-01T18:46:30Z
| 2025-04-07T15:52:20Z
| null |
jbrownkramer
|
huggingface/lerobot
| 672
|
Limited Range of Motion in 'Elbow Flex' Motor on SO-100 Follower Arm
|
# Issue: Limited Range of Motion in 'Elbow Flex' Motor on SO-100 Follower Arm
## Description
In my build of the SO-100 arm, the follower arm exhibits an issue where the motor labeled **'elbow flex'** is restricted to a movement range of approximately **90 degrees from the rest position**.
## Steps Taken to Troubleshoot
I have attempted the following troubleshooting steps:
- **Checked the servo separately**: The servo itself functions correctly and can move the full 360-degree range without issues.
- **Tested manual movement**: Manually tested the servo under normal teleoperation conditions with the weight of the arm.
- **Re-calibrated multiple times**: Repeated calibration to see if the issue persists.
- **Modified calibration JSON manually**: Editing the JSON file generated after calibration had no effect. The **homing_offset** field is the only one that causes any noticeable changes, but it only shifts the relative position of the follower to the leader, which is not a viable solution.
- **Swapped servos**: Replaced the servo with a new one to rule out hardware failure, but the issue remains.
## Expected Behavior
The **'elbow flex'** motor should be able to move the full intended range, similar to the leader arm, without being restricted to 90 degrees.
## Actual Behavior
The motor is constrained to only about **90 degrees of movement** from its rest position, despite the servo itself being capable of full rotation.
## Additional Notes
- The issue seems to persist despite changes in hardware and re-calibration.
- There may be an issue with how the calibration data is applied or interpreted.
- Any insights into possible firmware, software, or mechanical constraints would be appreciated.
---
Would appreciate any help or guidance on resolving this issue!
|
https://github.com/huggingface/lerobot/issues/672
|
closed
|
[
"question",
"robots",
"stale"
] | 2025-02-01T15:01:59Z
| 2025-10-20T02:31:48Z
| null |
ParzivalExtrimis
|
pytorch/pytorch
| 146,241
|
How to perform BF16 matrix multiplication so that multiplication is done in BF16 and summation is done in FP32 efficiently using pytorch API?
|
### 🚀 The feature, motivation and pitch
NVIDIA's cutlass library can perform BF16 matrix multiplication so that multiplication is done in BF16 and summation is done in FP32 for improved numerical stability. For example, consider the following snippet from [this code example from flash-attention](https://github.com/Dao-AILab/flash-attention/blob/02541ac9e8382f4d8e17f1f2ba0d7de2c792390c/csrc/flash_attn/src/flash_fwd_kernel.h#L319) calling it:
```
FLASH_NAMESPACE::gemm</*A_in_regs=*/Kernel_traits::Is_Q_in_regs>(
acc_s, tSrQ, tSrK, tSsQ, tSsK, tiled_mma, smem_tiled_copy_Q, smem_tiled_copy_K,
smem_thr_copy_Q, smem_thr_copy_K
);
```
where `tSrQ`, `tSrK`, `tSsQ`, `tSsK` is BF16/FP16, while final result `acc_s` is FP32.
I notice [pytorch's BF16 matrix mulitiplication](https://pytorch.org/docs/stable/notes/numerical_accuracy.html#reduced-precision-reduction-for-fp16-and-bf16-gemms) will use FP32 as intermediate accumulations, but final result is downcast to BF16 anyway. I experimented with the `out` parameter and `autocast`, but neither provided a complete solution.
To be clear, the code below can implement BF16 matrix multiplication so that multiplication is done in BF16 and summation is done in FP32:
```
A = torch.randn((12, 3, 4, 5), dtype=torch.bfloat16)
B = torch.randn((12, 3, 5, 6), dtype=torch.bfloat16)
C = torch.einsum("...ij,...jk->...ijk", A, B).sum(dtype=torch.float32, dim=-2)
```
However, I have serious reservations about the speed and memory efficiency of this approach. I wonder if there is a more PyTorch-native way to call the corresponding CUTLASS API.
### Alternatives
_No response_
### Additional context
_No response_
cc @ptrblck @msaroufim @eqy @jianyuh @nikitaved @pearu @mruberry @walterddr @xwang233 @Lezcano @albanD
|
https://github.com/pytorch/pytorch/issues/146241
|
closed
|
[
"module: cuda",
"triaged",
"module: linear algebra",
"module: python frontend",
"matrix multiplication"
] | 2025-02-01T13:13:18Z
| 2025-04-18T05:02:40Z
| null |
Wongboo
|
pytorch/xla
| 8,660
|
Torch XLA Model all_gather does not work with tensors of different sizes along dimension 0
|
## 🐛 Bug
Torch XLA's `xm.all_gather` works with tensors of the same size along `dim=0`, but if the tensor sizes differ along `dim=0`, it hangs.
## To Reproduce
Save this code in `test_all_gather.py`
```
import torch
import torch_xla.core.xla_model as xm
import torch_xla.runtime as xr
import torch_xla.distributed.xla_backend as xb
import torch.distributed
def test_all_gather():
same = [512, 512, 512, 512, 512, 512, 512, 512]
different = [416, 536, 560, 544, 576, 512, 592, 360]
torch.distributed.init_process_group(backend="xla", init_method="xla://")
rank = torch.distributed.get_rank()
device = xm.xla_device()
input = torch.randn((same[rank], 16), dtype=torch.float32, device=device)
all_inputs = xm.all_gather(input, dim=0, groups=[[0,1,2,3,4,5,6,7]], pin_layout=False)
print(f"!!!!!! rank: {rank}, all_inputs: {all_inputs}")
input = torch.randn((different[rank], 16), dtype=torch.float32, device=device)
all_inputs = xm.all_gather(input, dim=0, groups=[[0,1,2,3,4,5,6,7]], pin_layout=False)
print(f"!!!!!! rank: {rank}, all_inputs: {all_inputs}")
torch.distributed.destroy_process_group()
if __name__ == "__main__":
test_all_gather()
```
```
torchrun --nproc_per_node=8 test_all_gather.py
```
## Expected behavior
It should gather all the tensors from all the devices along `dim=0`
## Environment
Docker image
`us-central1-docker.pkg.dev/tpu-pytorch-releases/docker/xla:r2.5.0_3.10_cuda_12.4`
## Additional context
According to this documentation for `all_gather` https://pytorch.org/docs/stable/distributed.html uneven tensor sizes are supported.
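As a workaround I am considering exchanging the per-rank lengths first, padding every rank's tensor to the global maximum, gathering, and then trimming, roughly like the untested sketch below:
```python
import torch
import torch_xla.core.xla_model as xm

def all_gather_uneven(x: torch.Tensor, dim: int = 0):
    # 1) Share each rank's length along `dim` (these length tensors all have the same shape).
    local_len = torch.tensor([x.shape[dim]], dtype=torch.int32, device=x.device)
    all_lens = xm.all_gather(local_len, dim=0, pin_layout=False)  # shape: (world_size,)
    lens = [int(n) for n in all_lens.cpu()]
    max_len = max(lens)

    # 2) Pad the local tensor to the maximum length with zeros.
    pad = max_len - x.shape[dim]
    if pad > 0:
        pad_shape = list(x.shape)
        pad_shape[dim] = pad
        x = torch.cat([x, torch.zeros(pad_shape, dtype=x.dtype, device=x.device)], dim=dim)

    # 3) Gather the now equally sized tensors, then trim each rank's chunk back.
    gathered = xm.all_gather(x, dim=dim, pin_layout=False)
    chunks = gathered.split(max_len, dim=dim)
    return torch.cat([c.narrow(dim, 0, n) for c, n in zip(chunks, lens)], dim=dim)
```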
|
https://github.com/pytorch/xla/issues/8660
|
open
|
[
"enhancement",
"distributed",
"usability"
] | 2025-01-31T22:02:27Z
| 2025-03-04T22:52:46Z
| 6
|
ajayvohra2005
|
huggingface/sentence-transformers
| 3,207
|
How to increase batch size by using multiple gpus?
|
Hello! My fine-tuned model needs a large batch size to get the best performance. I have multiple GPUs with 40 GB of VRAM each. How can I use them together to enlarge the batch size? Currently I can only set the batch size to 3 per GPU, and it seems the GPUs won't share the data. How can I make the total batch size 24?
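For reference, this is the setup I am imagining, assuming the sentence-transformers v3 trainer API and a `torchrun --nproc_per_node=8 train.py` launch (dataset and model names are placeholders). With 8 processes and a per-device batch of 3 the effective batch is 24, but if the loss relies on in-batch negatives those stay per device under DDP, so `CachedMultipleNegativesRankingLoss` might be the better route. Is this the right direction?
```python
from datasets import load_dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
    losses,
)

model = SentenceTransformer("my-fine-tuned-model")                    # placeholder
train_dataset = load_dataset("my-username/my-pairs", split="train")   # placeholder, e.g. (anchor, positive) columns

args = SentenceTransformerTrainingArguments(
    output_dir="output",
    per_device_train_batch_size=3,   # 3 per GPU x 8 GPUs = effective batch of 24
    num_train_epochs=1,
)

loss = losses.MultipleNegativesRankingLoss(model)
# Alternative that decouples memory from the effective batch (GradCache-style):
# loss = losses.CachedMultipleNegativesRankingLoss(model, mini_batch_size=8)

trainer = SentenceTransformerTrainer(model=model, args=args, train_dataset=train_dataset, loss=loss)
trainer.train()
```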
|
https://github.com/huggingface/sentence-transformers/issues/3207
|
open
|
[] | 2025-01-31T18:00:08Z
| 2025-02-19T10:36:28Z
| null |
13918763630
|
pytorch/torchtitan
| 813
|
HSDP causes loss instability
|
I have a codebase forked from torchtitan with minor changes. FSDP trains very well with minimal instability, but HSDP on the same codebase exhibits loss spikes.
Is there some reason for this you folks can think of? Note that I have implemented gradient accumulation in my fork, though without changing any sharding behavior (just to accumulate the gradients on a larger batchsize)
|
https://github.com/pytorch/torchtitan/issues/813
|
closed
|
[
"question",
"module: fsdp"
] | 2025-01-31T03:27:09Z
| 2025-08-21T03:06:46Z
| null |
apkumar
|
pytorch/vision
| 8,889
|
Torchvision 0.20.1 looks for jpeg9 on MacOS, while depending on libjpeg-turbo which only provides jpeg8
|
### 🐛 Describe the bug
Hi, I tried to create a new conda environment torch + torchvision + torchaudio + blas accelerate on a MacOS 14.
Post installation, when I try to import the torchvision library, I get a warning about missing libjpeg9.
I have added more details below. Just wanted to bring this to your attention for triage and if there is an issue to be fixed. Cheers!
(Replaced full path with CONDA_PREFIX and added newlines to make it clearer)
```bash
mamba create -n env -c pytorch 'pytorch=2.5.1' torchvision torchaudio 'libblas=*=*accelerate'
mamba run -n env python
Python 3.12.8 | packaged by conda-forge | (main, Dec 5 2024, 14:19:53) [Clang 18.1.8 ] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import torchvision
{CONDA_PREFIX}/lib/python3.12/site-packages/torchvision/io/image.py:14: UserWarning: Failed to load image Python extension: 'dlopen({CONDA_PREFIX}/lib/python3.12/site-packages/torchvision/image.so, 0x0006): Library not loaded: @rpath/libjpeg.9.dylib
Referenced from: <367D4265-B20F-34BD-94EB-4F3EE47C385B>{CONDA_PREFIX}/lib/python3.12/site-packages/torchvision/image.so
Reason: tried:
'{CONDA_PREFIX}/lib/python3.12/site-packages/torchvision/../../../libjpeg.9.dylib' (no such file),
'{CONDA_PREFIX}/lib/python3.12/site-packages/torchvision/../../../libjpeg.9.dylib' (no such file),
'{CONDA_PREFIX}/lib/python3.12/lib-dynload/../../libjpeg.9.dylib' (no such file),
'{CONDA_PREFIX}/bin/../lib/libjpeg.9.dylib' (no such file)'
If you don't plan on using image functionality from `torchvision.io`, you can ignore this warning. Otherwise, there might be something wrong with your environment. Did you have `libjpeg` or `libpng` installed before building `torchvision` from source?
warn(
>>>
```
I tried to find the jpeg libraries in the conda environment with find command
```bash
find ${CONDA_PREFIX} -name 'libjpeg*.dylib'
{CONDA_PREFIX}/lib/libjpeg.8.3.2.dylib
{CONDA_PREFIX}/lib/libjpeg.8.dylib
{CONDA_PREFIX}/lib/libjpeg.dylib
```
When I run `otool`, I see that it is linked against jpeg9, while installing libjpeg-turbo as a dependency, which only provides jpeg8.
```
$ otool -L $CONDA_PREFIX/lib/python3.1/site-packages/torchvision/image.so
{CONDA_PREFIX}/lib/python3.1/site-packages/torchvision/image.so:
@rpath/libpng16.16.dylib (compatibility version 56.0.0, current version 56.0.0)
@rpath/libjpeg.9.dylib (compatibility version 15.0.0, current version 15.0.0)
@rpath/libwebp.7.dylib (compatibility version 9.0.0, current version 9.8.0)
@rpath/libc10.dylib (compatibility version 0.0.0, current version 0.0.0)
@rpath/libtorch.dylib (compatibility version 0.0.0, current version 0.0.0)
@rpath/libtorch_cpu.dylib (compatibility version 0.0.0, current version 0.0.0)
@rpath/libtorch_python.dylib (compatibility version 0.0.0, current version 0.0.0)
@rpath/libc++.1.dylib (compatibility version 1.0.0, current version 1.0.0)
/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 1345.100.2)
```
conda packages
```
blas 2.128 accelerate conda-forge
blas-devel 3.9.0 28_h55bc449_accelerate conda-forge
brotli-python 1.1.0 py312hde4cb15_2 conda-forge
bzip2 1.0.8 h99b78c6_7 conda-forge
ca-certificates 2024.12.14 hf0a4a13_0 conda-forge
certifi 2024.12.14 pyhd8ed1ab_0 conda-forge
cffi 1.17.1 py312h0fad829_0 conda-forge
charset-normalizer 3.4.1 pyhd8ed1ab_0 conda-forge
cpython 3.12.8 py312hd8ed1ab_1 conda-forge
filelock 3.17.0 pyhd8ed1ab_0 conda-forge
freetype 2.12.1 hadb7bae_2 conda-forge
giflib 5.2.2 h93a5062_0 conda-forge
gmp 6.3.0 h7bae524_2 conda-forge
gmpy2 2.1.5 py312h524cf62_3 conda-forge
h2 4.1.0 pyhd8ed1ab_1 conda-forge
hpack 4.1.0 pyhd8ed1ab_0 conda-forge
hyperframe 6.1.0 pyhd8ed1ab_0 conda-forge
idna 3.10 pyhd8ed1ab_1 conda-forge
jinja2 3.1.5 pyhd8ed1ab_0 conda-forge
lcms2 2.16 ha0e7c42_0 conda-forge
lerc 4.0.0 h9a09cb3_0 conda-forge
libblas 3.9.0 28_h504e6c8_accelerate conda-forge
libcblas 3.9.0 28_h8d39bcd_accelerate conda-forge
libcxx 19.1.7 ha82da77_0 conda-forge
libdeflate
|
https://github.com/pytorch/vision/issues/8889
|
open
|
[] | 2025-01-30T16:57:13Z
| 2025-09-22T13:02:58Z
| 4
|
IMG-PRCSNG
|
huggingface/optimum
| 2,174
|
Support for ONNX export of SeamlessM4TModel
|
### Feature request
Add SeamlessM4Tv2 Model support to onnx_export_from_model.
### Motivation
Being able to deploy SeamlessM4Tv2 models to production using onnx.
### Your contribution
I got the speech-to-text model exported to ONNX, but I'm not able to generate the audio as expected, even though I'm trying to give the tgt_lang_token_ids as decoder_input_ids. I could help by submitting a PR, but I might start by creating the model_config/model_patcher first if it is needed.
EDIT: I got the speech-to-text model, not the speech-to-speech model. I'd like to export the t2u_model and the vocoder to ONNX, but that seems to be causing problems. Any advice on how to do it?
|
https://github.com/huggingface/optimum/issues/2174
|
closed
|
[
"Stale"
] | 2025-01-30T15:10:31Z
| 2025-03-18T02:07:02Z
| 3
|
AlArgente
|
pytorch/pytorch
| 145,978
|
What is the recommended way to use Distributed Checkpointing Save/Load with HSDP?
|
### 🐛 Describe the bug
There are torch distributed checkpointing examples in [torch/distributed/checkpoint/examples](https://github.com/pytorch/pytorch/tree/main/torch/distributed/checkpoint/examples). All of these examples use FSDP. Running these examples out of the box has no issues, the loaded checkpoint state matches the saved checkpoint state. However, when I convert these examples to run HSDP instead of FSDP, I notice that loaded state no longer matches the saved state.
How I am converting from FSDP to HSDP:
```
model = FSDP(
torch.nn.Linear(4, 4).cuda(dist.get_rank()),
device_mesh=mesh,
sharding_strategy=ShardingStrategy.HYBRID_SHARD
)
```
[Link](https://gist.github.com/gkroiz/fcf5ed19665bc09475057f8bf626e853) to gist of updated [torch/distributed/checkpoint/examples/fsdp_checkpoint_example.py](https://github.com/pytorch/pytorch/blob/main/torch/distributed/checkpoint/examples/fsdp_checkpoint_example.py) with HSDP modifications and printed output.
I also made similar changes to [torch/distributed/checkpoint/examples/stateful_example.py](https://github.com/pytorch/pytorch/blob/main/torch/distributed/checkpoint/examples/stateful_example.py) and saw the same discrepancies between saved and loaded state.
Either (1) I'm setting up HSDP + distributed checkpointing incorrectly or (2) there is a bug with distributed checkpointing. Assuming (1), what is the correct way to set up HSDP + distributed checkpointing?
### Versions
```
my_vm:/workspace# python collect_env.py
/usr/local/lib/python3.10/dist-packages/torch/utils/_pytree.py:185: FutureWarning: optree is installed but the version is too old to support PyTorch Dynamo in C++ pytree. C++ pytree support is disabled. Please consider upgrading optree using `python3 -m pip install --upgrade 'optree>=0.13.0'`.
warnings.warn(
Collecting environment information...
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.30.0
Libc version: glibc-2.35
Python version: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.6.44+-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.5.82
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100 80GB HBM3
GPU 1: NVIDIA H100 80GB HBM3
GPU 2: NVIDIA H100 80GB HBM3
GPU 3: NVIDIA H100 80GB HBM3
GPU 4: NVIDIA H100 80GB HBM3
GPU 5: NVIDIA H100 80GB HBM3
GPU 6: NVIDIA H100 80GB HBM3
GPU 7: NVIDIA H100 80GB HBM3
Nvidia driver version: 550.90.07
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.3.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 208
On-line CPU(s) list: 0-207
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8481C CPU @ 2.70GHz
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 52
Socket(s): 2
Stepping: 8
BogoMIPS: 5399.99
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rtm avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx_vnni avx512_bf16 arat avx512vbmi umip avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid cldemote movdiri movdir64b fsrm md_clear serialize amx_bf16 avx512_fp16 amx_tile amx_int8 arch_capabilities
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 4.9 MiB (104 instances)
L1i cache:
|
https://github.com/pytorch/pytorch/issues/145978
|
open
|
[
"oncall: distributed",
"triaged",
"release notes: distributed (checkpoint)",
"oncall: distributed checkpointing"
] | 2025-01-29T22:24:11Z
| 2025-04-08T15:58:03Z
| null |
gkroiz
|
huggingface/diffusers
| 10,683
|
Would anyone consider a diffusers export_to_frames utility fuction?
|
**Is your feature request related to a problem? Please describe.**
The current `export_to_video` function in Hugging Face's Diffusers library exports a compressed video, but it's not straightforward for users to obtain raw, lossless PNG frames from a list of frames. This can be a problem for users who need to work with individual frames or want to export them in a specific format as part of a workflow.
**Describe the solution you'd like.**
I propose introducing a new function, `export_to_frames`, in `huggingface/diffusers/utils/export_utils.py`. This function would take the frames (either NumPy arrays or PIL Image objects) and export each frame as a separate PNG file in a specified output directory. The function would also allow users to specify the frame rate and output directory.
**Describe alternatives you've considered.**
While users can currently solve this problem on their own by using other libraries or writing custom code, it would be beneficial to provide a simple and standard method for exporting raw, uncompressed PNG frames. This would save users time and effort, and make the Diffusers library more user-friendly.
**Additional context.**
I've included a very rough example implementation of the proposed `export_to_frames` function below:
```python
import os
from typing import List, Union

import imageio
import numpy as np
import PIL.Image


def export_to_frames(
    video_frames: Union[List[np.ndarray], List[PIL.Image.Image]], output_dir: str = None, fps: int = 10
) -> str:
    """
    Export each frame in a list of frames to a directory as lossless PNG files.
    Args:
        video_frames (Union[List[np.ndarray], List[PIL.Image.Image]]): A list of frames.
        output_dir (str, optional): The directory where the frames will be saved. Defaults to None.
        fps (int, optional): The frame rate (kept for API symmetry with export_to_video; unused for still frames).
    Returns:
        str: The path to the output directory.
    """
    # Unlike export_to_video, writing PNGs with imageio does not need an ffmpeg backend.
    if output_dir is None:
        output_dir = "frames"
    os.makedirs(output_dir, exist_ok=True)

    # Normalize everything to uint8 NumPy arrays before writing.
    if isinstance(video_frames[0], np.ndarray):
        video_frames = [(frame * 255).astype(np.uint8) for frame in video_frames]
    elif isinstance(video_frames[0], PIL.Image.Image):
        video_frames = [np.array(frame) for frame in video_frames]

    for i, frame in enumerate(video_frames):
        filename = f"frame_{i:04d}.png"
        imageio.imwrite(os.path.join(output_dir, filename), frame)

    return output_dir
```
This rough function was tested briefly but should be rewritten; I'm only using it for illustrative purposes since it worked. Please let me know if this idea is worth considering further and whether something like this could go into the standard utilities in the future.
|
https://github.com/huggingface/diffusers/issues/10683
|
open
|
[
"stale"
] | 2025-01-29T17:30:21Z
| 2025-03-26T15:04:10Z
| 4
|
lovetillion
|
huggingface/transformers.js
| 1,174
|
How to create a new onnx TTS model like mms-tts-eng
|
### Question
First of all, congratulations on such a great library!
I would like to ask for your guidance and assistance in creating a new onnx model similar to the following one:
https://huggingface.co/Xenova/mms-tts-eng/tree/main
…but for the Malagasy language:
https://huggingface.co/facebook/mms-tts-mlg
Could you provide me with some advice on how to create that model?
Thank you so much.
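For context, the closest approach I have found is exporting the PyTorch checkpoint to ONNX with Optimum and then arranging the files in the same layout as Xenova/mms-tts-eng (the ONNX file under an `onnx/` folder next to the tokenizer/config files). I am not certain about the task name or whether this fully covers the VITS architecture, so please treat the sketch below as an assumption to verify:
```python
from optimum.exporters.onnx import main_export

# Assumed: MMS-TTS checkpoints (VITS) export under the "text-to-audio" task.
main_export(
    model_name_or_path="facebook/mms-tts-mlg",
    output="mms-tts-mlg-onnx",
    task="text-to-audio",
)
```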
|
https://github.com/huggingface/transformers.js/issues/1174
|
closed
|
[
"question"
] | 2025-01-29T16:02:13Z
| 2025-02-05T12:48:57Z
| null |
elloza
|
huggingface/open-r1
| 113
|
What is the GPU resource required to run Open-R1 (Deepseek-R1) locally?
|
I am trying to run it using Ollama with Open WebUI in a Docker container. Does it require a dedicated GPU with high VRAM, or is an integrated GPU enough?
Which model size (8 billion, 9 billion, 12 billion parameters) can be run with how much GPU VRAM?
|
https://github.com/huggingface/open-r1/issues/113
|
open
|
[] | 2025-01-29T14:08:47Z
| 2025-01-29T21:17:17Z
| null |
ruidazeng
|
huggingface/open-r1
| 100
|
What is the compute needed for GRPO for 7B R1-Distill model?
|
Anybody who has tried GRPO over any of the R1-Distill models: what is the minimum GPU compute requirement to run the training?
Let's say for R1-Distill-Qwen-7B ?
I am talking about this from the README:
### GRPO
```
accelerate launch --config_file configs/zero3.yaml src/open_r1/grpo.py \
--output_dir DeepSeek-R1-Distill-Qwen-7B-GRPO \
--model_name_or_path deepseek-ai/DeepSeek-R1-Distill-Qwen-7B \
--dataset_name AI-MO/NuminaMath-TIR \
--max_prompt_length 256 \
--per_device_train_batch_size 1 \
--gradient_accumulation_steps 16 \
--logging_steps 10 \
--bf16
```
|
https://github.com/huggingface/open-r1/issues/100
|
open
|
[] | 2025-01-29T03:01:03Z
| 2025-02-10T09:17:47Z
| null |
iamansinha
|
huggingface/diffusers
| 10,677
|
Support for training with Grayscale images?
|
I am trying to train an unconditional diffusion model on grayscale images using your [pipeline](https://huggingface.co/docs/diffusers/training/unconditional_training). When running training with the default parameters I discovered inferred images that contained colour (specifically green). Where it learnt such colours from I do not know but I would predict the issue lies within the initial processing of the image set:
`images = [augmentations(image.convert("RGB")) for image in examples["image"]]`
as such I created a fork of this [repo ](https://github.com/DavidGill159/diffusers/tree/main/examples/unconditional_image_generation)and changed this line to:
`images = [augmentations(image.convert("L")) for image in examples["image"]]`
I also updated the model configuration (UNet2DModel) to work with single-channel inputs and outputs by setting `in_channels=1` and `out_channels=1` when initialising the model.
Am I on the right track? or does the resolution lie elsewhere? I also noticed the resolution of the inferred images is very poor; not on par with the training set. What parameters can I adjust to improve this?
**Ultimately, I am interested in a diffusion model that focuses more on the textural composition of images, rather than the colour.**
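For reference, here is a sketch of the two places where I handle single-channel data; the block layout and resolution are just what I am experimenting with, not a recommendation:
```python
from diffusers import UNet2DModel
from torchvision import transforms

model = UNet2DModel(
    sample_size=128,     # should match --resolution
    in_channels=1,       # grayscale in
    out_channels=1,      # grayscale out
    layers_per_block=2,
    block_out_channels=(128, 256, 512, 512),
)

# Single-channel normalization: one mean/std instead of three.
augmentations = transforms.Compose([
    transforms.Resize(128),
    transforms.CenterCrop(128),
    transforms.ToTensor(),
    transforms.Normalize([0.5], [0.5]),
])
```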
|
https://github.com/huggingface/diffusers/issues/10677
|
open
|
[
"stale"
] | 2025-01-28T22:25:19Z
| 2025-02-28T15:02:57Z
| 1
|
DavidGill159
|
pytorch/torchtitan
| 811
|
FSDP checkpoints don't load when run is restarted with greater world size
|
A checkpoint is saved from an 8-GPU run with `dp_shard ` set to 8 and all other parallelisms set to 1. My understanding is that this is configured as an FSDP run.
The checkpoint is resumed from 16 GPUs with `dp_shard` now set to 16. When loading the checkpoint, we get this error:
```
[rank0]: Traceback (most recent call last): (RANK 15)
[rank0]:   File "/app/.venv/lib/python3.10/site-packages/torch/distributed/checkpoint/utils.py", line 164, in reduce_scatter
[rank0]:     local_data = map_fun()
[rank0]:   File "/app/.venv/lib/python3.10/site-packages/torch/distributed/checkpoint/logger.py", line 83, in wrapper
[rank0]: result = func(*args, **kwargs)
[rank0]: File "/app/.venv/lib/python3.10/site-packages/torch/distributed/checkpoint/state_dict_loader.py", line 211, in local_step
[rank0]: local_plan = planner.create_local_plan()
[rank0]: File "/app/.venv/lib/python3.10/site-packages/torch/distributed/checkpoint/default_planner.py", line 233, in create_local_plan
[rank0]: return create_default_local_load_plan(
[rank0]: File "/app/.venv/lib/python3.10/site-packages/torch/distributed/checkpoint/default_planner.py", line 354, in create_default_local_load
[rank0]: raise RuntimeError(f"Missing key in checkpoint state_dict: {fqn}.")
[rank0]: RuntimeError: Missing key in checkpoint state_dict: dataloader.dp_rank_15.
```
My understanding is that torch distributed checkpoints are supposed to support dynamic resharding at load time. Does this not work with torchtitan?
I was able to successfully resume a checkpoint going down from 32 GPUs to 16.
|
https://github.com/pytorch/torchtitan/issues/811
|
closed
|
[
"bug",
"documentation",
"enhancement",
"module: fsdp"
] | 2025-01-28T21:38:09Z
| 2025-02-07T01:22:26Z
| 4
|
darkmirage
|
huggingface/diffusers
| 10,675
|
Difference in Flux scheduler configuration max_shift
|
### Describe the bug
Could you please check if the value of 1.16 here...
https://github.com/huggingface/diffusers/blob/658e24e86c4c52ee14244ab7a7113f5bf353186e/src/diffusers/pipelines/flux/pipeline_flux.py#L78
...is intentional or maybe a typo?
`max_shift` is 1.15 both in the model configuration...
https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/scheduler/scheduler_config.json
...and in the original inference code by BFL:
https://github.com/black-forest-labs/flux/blob/d06f82803f5727a91b0cf93fcbb09d920761fba1/src/flux/sampling.py#L214
### Reproduction
-
### Logs
```shell
```
### System Info
-
### Who can help?
@yiyixuxu @DN6
|
https://github.com/huggingface/diffusers/issues/10675
|
closed
|
[
"bug",
"good first issue",
"help wanted",
"contributions-welcome"
] | 2025-01-28T20:35:58Z
| 2025-02-18T06:54:58Z
| 2
|
dxqb
|
huggingface/transformers.js
| 1,171
|
Does the image generation model support using LoRA?
|
### Question
I would like to implement an image generation feature to my website using a image generation model and a LoRA. Is LoRA supported in transformers.js?
|
https://github.com/huggingface/transformers.js/issues/1171
|
open
|
[
"question"
] | 2025-01-28T19:48:38Z
| 2025-02-11T23:11:27Z
| null |
hunkim98
|
pytorch/xla
| 8,642
|
Make Mixtral pallas kernels Dynamo/AOTAutograd traceable
|
Similar to https://github.com/pytorch/xla/issues/8633, we'll need to refactor pallas kernels needed by Mixtral (e.g. GMM) into PyTorch custom ops in order to use scan in Mixtral.
|
https://github.com/pytorch/xla/issues/8642
|
open
|
[
"enhancement",
"pallas"
] | 2025-01-28T19:29:33Z
| 2025-02-13T13:15:27Z
| 1
|
tengyifei
|
huggingface/diffusers
| 10,672
|
Please support callback_on_step_end for following pipelines
|
**Is your feature request related to a problem? Please describe.**
Missing `callback_on_step_end` in these pipelines takes away the capability to show progress in the UI.
**Describe the solution you'd like.**
Please support callback_on_step_end
**Describe alternatives you've considered.**
N.A.
**Additional context.**
1. AuraFlowPipeline
TypeError: AuraFlowPipeline.__call__() got an unexpected keyword argument 'callback_on_step_end'
2. LuminaText2ImgPipeline
|
https://github.com/huggingface/diffusers/issues/10672
|
closed
|
[
"good first issue",
"help wanted",
"contributions-welcome"
] | 2025-01-28T16:26:56Z
| 2025-02-16T17:28:58Z
| 2
|
nitinmukesh
|
huggingface/transformers.js
| 1,170
|
Processing in image encoding for Florence 2
|
### Question
Hi,
While having a look at the code for generation with the Florence 2 model, I noticed something odd. The original inference code uses the [_encode_image](https://huggingface.co/microsoft/Florence-2-base-ft/blob/main/modeling_florence2.py#L2599) method for creating image features. However, looking at the [encode_image](https://github.com/huggingface/transformers.js/blob/main/src/models.js#L1861C1-L1874C6) used in `transformers.js`, I noticed that the postprocessing after the model forward pass is missing. Here's a minimal reproducible example:
```python
import onnxruntime as ort
from transformers import AutoModelForCausalLM, AutoProcessor
from PIL import Image
# The vision encoder was downloaded from:
# https://huggingface.co/onnx-community/Florence-2-base-ft/resolve/main/onnx/vision_encoder.onnx
ONNX_MODEL_PATH = "models/onnx/original/vision_encoder.onnx"
MODEL_NAME = "microsoft/Florence-2-base-ft"
# Image download link:
# https://upload.wikimedia.org/wikipedia/en/7/7d/Lenna_%28test_image%29.png
IMG_PATH = "lena.png"
PROMPT = "<MORE_DETAILED_CAPTION>"
processor = AutoProcessor.from_pretrained(
MODEL_NAME, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
MODEL_NAME, trust_remote_code=True)
image = Image.open(IMG_PATH)
inputs = processor(text=PROMPT, images=image, return_tensors="pt")
hf_out = model._encode_image(inputs["pixel_values"])
ort_vision_tower = ort.InferenceSession(ONNX_MODEL_PATH)
ort_out = ort_vision_tower.run(
None, {"pixel_values": inputs["pixel_values"].numpy()})[0]
print(hf_out.cpu().detach().numpy())
print()
print(ort_out)
```
The feature differences are pretty big:
```
[[[-0.4047455 0.51958734 -0.23121671 ... 1.0019573 -0.46846968
0.5289913 ]
[-0.08135182 -2.0622678 -0.50597775 ... 0.38061845 -0.7858853
-1.247189 ]
[ 0.69417834 -1.926735 -0.691345 ... -0.17574754 -0.98472327
-1.2420652 ]
...
[ 0.018062 1.2185848 -0.04483193 ... 0.61767036 -0.1832848
0.9324351 ]
[-0.13765828 0.7120823 0.12478658 ... -0.44853052 -0.6390534
0.37095645]
[ 0.58084226 1.6617624 -0.43527135 ... -0.92560166 -0.47037867
-0.81996024]]]
[[[-0.52661824 0.508744 -0.24130312 ... 0.91191643 -0.39472336
1.1632534 ]
[-0.18091503 -2.2187433 -0.7923498 ... 0.6103708 -0.49637306
-0.9830185 ]
[ 0.3002218 -1.9726763 -1.1151179 ... -0.11572987 -0.6870862
-0.96058726]
...
[-0.08202907 0.8105656 -0.1748765 ... 1.0833437 -0.41167092
1.2495995 ]
[-0.01531404 0.6044417 -0.06392197 ... -0.30775025 -0.5735508
0.6775356 ]
[ 0.74322057 1.4011574 -0.5277405 ... -0.61488384 -0.40253094
-0.8440974 ]]]
```
Am I missing something here or is this a potential bug?
|
https://github.com/huggingface/transformers.js/issues/1170
|
closed
|
[
"question"
] | 2025-01-27T16:13:28Z
| 2025-03-02T14:37:52Z
| null |
ir2718
|
huggingface/text-generation-inference
| 2,956
|
How to give custom model code for TGI to run.
|
Is there a way to provide custom model inference code for TGI to run during invocation?
|
https://github.com/huggingface/text-generation-inference/issues/2956
|
open
|
[] | 2025-01-27T10:37:55Z
| 2025-01-27T10:37:55Z
| null |
ashwani-bhat
|
huggingface/diffusers
| 10,662
|
Feature Request: Image-to-Image Fine-Tuning Example
|
Hello, and thank you for maintaining this amazing repository!
While working with the Diffusers library, I noticed there is a folder containing fine-tuning examples for text-to-image models but not for image-to-image fine-tuning.
Since image-to-image models have many use cases (e.g., style transfer, image restoration, or domain-specific adaptation), a fine-tuning example for this task would greatly benefit the community and improve accessibility for users looking to customize such models.
Questions:
* Is there any existing implementation or documentation for fine-tuning image-to-image models that I might have missed?
* If not, is there a specific reason this example hasn't been provided yet (e.g., complexity, low demand)?
I'd be happy to contribute or collaborate on this feature if it's considered valuable.
Thank you in advance for your time and response!
|
https://github.com/huggingface/diffusers/issues/10662
|
closed
|
[] | 2025-01-27T08:33:39Z
| 2025-02-07T08:27:44Z
| 6
|
YanivDorGalron
|
pytorch/xla
| 8,632
|
[scan] Avoid re-tracing the combine function on every call
|
## 🚀 Feature
It should be possible to somehow cache the traced graphs in `torch_xla.experimental.scan` so we don't trace on every call.
## Motivation
Today `torch_xla.experimental.scan` and `scan_layers` trace the user function with both AOTAutograd (to get the backward) and LazyTensor (to lower it to HLO). AOTAutograd is very slow, so we can easily become tracing-bound. For example, `python3 examples/train_decoder_only_base.py` takes 1min30s but `python3 examples/train_decoder_only_base.py scan.decoder_with_scan.DecoderWithScan` takes 4min.
## Pitch
We could wait for `torch.scan` to support autograd (c.f. https://github.com/pytorch/xla/pull/7901#issuecomment-2546903424), but that will take a long time. In the meantime, we can implement some simple caching based on the `id` of the input function/module.
The caching should be opt-in because it's only sound if the function is pure. We can add an `assume_pure=True` argument to `scan` so that it only uses the cache when the user confirms that their function is pure.
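A minimal, self-contained sketch of the id-keyed cache idea. This is purely illustrative: `tracer` stands in for the expensive AOTAutograd + LazyTensor tracing step, and none of these names come from torch_xla:
```python
from typing import Any, Callable, Dict

_TRACE_CACHE: Dict[int, Any] = {}

def cached_trace(fn: Callable, tracer: Callable, assume_pure: bool = False):
    """Return a traced version of `fn`, re-using a previous trace when allowed.

    Caching is only sound when `fn` is pure, hence the opt-in flag.
    """
    if not assume_pure:
        return tracer(fn)
    key = id(fn)
    if key not in _TRACE_CACHE:
        _TRACE_CACHE[key] = tracer(fn)
    return _TRACE_CACHE[key]

# Toy usage: the "tracer" here just counts invocations to show the cache working.
calls = {"n": 0}
def fake_tracer(fn):
    calls["n"] += 1
    return fn

combine_fn = lambda carry, x: (carry + x, carry + x)
for _ in range(3):
    cached_trace(combine_fn, fake_tracer, assume_pure=True)
print(calls["n"])  # -> 1: traced once, re-used afterwards
```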
|
https://github.com/pytorch/xla/issues/8632
|
closed
|
[
"enhancement",
"good first issue",
"performance"
] | 2025-01-27T06:30:47Z
| 2025-06-19T20:02:13Z
| 21
|
tengyifei
|
huggingface/finetrainers
| 248
|
How to load full finetune for inference?
|
### Feature request

### Motivation
It seems like there is only a LoRA inference example in the README.
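For what it's worth, a hedged sketch of how a fully fine-tuned transformer could be loaded for inference with diffusers. The checkpoint path, and the assumption that the training run saved a diffusers-format `transformer` directory, are unverified here, and the class names should be checked against the installed diffusers version:
```python
import torch
from diffusers import LTXPipeline, LTXVideoTransformer3DModel

# Assumption: the full finetune saved a diffusers-format transformer under this path.
transformer = LTXVideoTransformer3DModel.from_pretrained(
    "/path/to/finetrainers-output/transformer", torch_dtype=torch.bfloat16
)
pipe = LTXPipeline.from_pretrained(
    "Lightricks/LTX-Video", transformer=transformer, torch_dtype=torch.bfloat16
).to("cuda")

video = pipe(prompt="a cat playing piano", num_frames=49).frames[0]
```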
### Your contribution
Test the full finetune (LTX-Video, CogVideoX).
|
https://github.com/huggingface/finetrainers/issues/248
|
closed
|
[] | 2025-01-27T03:49:57Z
| 2025-01-27T06:27:18Z
| null |
BlackTea-c
|
pytorch/text
| 2,282
|
combining TEXT.build_vocab with BERT Embedding
|
## ❓ Questions and Help
**Description**
Hi, we can use GloVe embeddings when building the vocab, using something like:
```
MIN_FREQ = 2
TEXT.build_vocab(train_data,
min_freq = MIN_FREQ,
vectors = "glove.6B.300d",
unk_init = torch.Tensor.normal_)
```
<!-- Please send questions or ask for help here. -->
However, I want to use BERT embeddings because I need a sophisticated model to compare the performance of multiple embeddings. How can I use BERT with build_vocab?
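One possible workaround, sketched here under the assumption that the legacy torchtext `Vocab.set_vectors` API is in use, is to take BERT's static input-embedding table and align it to the torchtext vocab through BERT's tokenizer. This loses BERT's contextual nature, so it is only a rough substitute for true contextual embeddings:
```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")
embedding_table = bert.get_input_embeddings().weight.detach()  # (vocab_size, 768)

def vectors_for(itos):
    """Build a (len(itos), 768) matrix for a torchtext vocab's index-to-string list."""
    rows = []
    for token in itos:
        ids = tokenizer.encode(token, add_special_tokens=False)
        if ids:
            rows.append(embedding_table[ids].mean(dim=0))  # average sub-word pieces
        else:
            rows.append(torch.zeros(embedding_table.size(1)))
    return torch.stack(rows)

# Hypothetical usage with a built TEXT field (legacy torchtext):
# custom_vectors = vectors_for(TEXT.vocab.itos)
# TEXT.vocab.set_vectors(TEXT.vocab.stoi, custom_vectors, dim=768)
```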
|
https://github.com/pytorch/text/issues/2282
|
open
|
[] | 2025-01-27T02:11:21Z
| 2025-01-27T02:11:21Z
| 0
|
muhalfian
|
huggingface/Google-Cloud-Containers
| 143
|
Route to /generate and /metrics
|
Hello team, thanks for the support :)
Inside the https://github.com/huggingface/text-generation-inference/blob/main/router/src/server.rs file, there is a route definition for Google Cloud, shown below:
```rust
#[cfg(feature = "google")]
{
    tracing::info!("Built with `google` feature");
    tracing::info!(
        "Environment variables `AIP_PREDICT_ROUTE` and `AIP_HEALTH_ROUTE` will be respected."
    );
    if let Ok(env_predict_route) = std::env::var("AIP_PREDICT_ROUTE") {
        app = app.route(&env_predict_route, post(vertex_compatibility));
    }
    if let Ok(env_health_route) = std::env::var("AIP_HEALTH_ROUTE") {
        app = app.route(&env_health_route, get(health));
    }
}
```
Currently, there is no way to access /generate through Vertex AI (VAI), because if we define AIP_PREDICT_ROUTE outside the container, it creates a new path for prediction.
The problem is that new features like JSON generation (https://huggingface.co/docs/text-generation-inference/en/guidance) are only supported through the /generate path.
Can we change this pattern so that if AIP_PREDICT_ROUTE or AIP_HEALTH_ROUTE points to an existing path, nothing is done?
Then we could route the default VAI predict path to /generate and also expose the /metrics path through the VAI health path.
|
https://github.com/huggingface/Google-Cloud-Containers/issues/143
|
closed
|
[
"question"
] | 2025-01-27T02:02:28Z
| 2025-01-31T11:44:05Z
| null |
jk1333
|
huggingface/optimum
| 2,171
|
Adding Phi3 support in BetterTransformer (to use the microsoft/phi-4 model)
|
### Feature request
Hello,
Is it possible to add the Phi3 architecture to the list of BetterTransformer-supported models?
### Motivation
Nan
### Your contribution
Nan
|
https://github.com/huggingface/optimum/issues/2171
|
closed
|
[
"Stale"
] | 2025-01-26T19:10:34Z
| 2025-03-04T02:05:22Z
| 2
|
majdabd
|
huggingface/transformers.js
| 1,167
|
How to create and use a customized voice in a tts pipeline?
|
### Question
Hi transformers.js community!
I am new here and I'd like to ask how to create a new voice and use it inside the current TTS pipeline. I just created a Next.js project and I can run the text-to-speech model from the tutorial with code like the following:
```
const synthesizer = await pipeline('text-to-speech', 'Xenova/speecht5_tts', { quantized: false });
const speaker_embeddings = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/speaker_embeddings.bin';
const out = await synthesizer('Hello, my dog is cute', { speaker_embeddings });
```
Now I want to create a new voice and use it in the pipeline; how should I do this? Can I do it in the same environment (i.e., both speaker creation and speech generation processed in the Next.js web app)? I have searched the web but there aren't any tutorials or demos on this. Looking forward to your answers!
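In case it helps, one common recipe is to compute a 512-dim x-vector for the target speaker offline in Python and save it as raw float32, matching the format of the `speaker_embeddings.bin` file used above. The SpeechBrain import path, the normalization step, and the binary layout are all assumptions here and may need adjusting for your SpeechBrain/transformers.js versions:
```python
import numpy as np
import torch
import torchaudio
from speechbrain.inference.speaker import EncoderClassifier  # older versions: speechbrain.pretrained

# A short, clean recording of the target speaker; the x-vector model expects 16 kHz mono.
wav, sr = torchaudio.load("my_speaker.wav")
wav = torchaudio.functional.resample(wav, sr, 16000)

encoder = EncoderClassifier.from_hparams(source="speechbrain/spkrec-xvect-voxceleb")
with torch.no_grad():
    xvector = encoder.encode_batch(wav)  # shape (1, 1, 512)
embedding = torch.nn.functional.normalize(xvector, dim=-1).squeeze()

# Save as raw float32 so the file can be served and passed as `speaker_embeddings`.
embedding.numpy().astype(np.float32).tofile("speaker_embeddings.bin")
```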
Best!
|
https://github.com/huggingface/transformers.js/issues/1167
|
open
|
[
"question"
] | 2025-01-26T17:44:57Z
| 2025-02-11T02:55:40Z
| null |
gonggqing
|
huggingface/open-r1
| 56
|
How to supervise non-math data?
|
I see the accuracy reward can only check numerical equality. But what if my question is an MCQ asking for an option?
I did a quick check and found it's not working.
```
from math_verify import parse, verify
# Parse the gold and answer
# If you know that gold will only contain latex or expr (no latex env), use
# parse(gold, extraction_config=[LatexExtractionConfig()]) or parse(gold, extraction_config=[ExprExtractionConfig()])
gold = parse("So the answer is B")
answer = parse("B")
print(gold)
print(answer)
# Order here is important!
print(verify(gold, answer))
[]
[]
False
```
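A hedged sketch of what an MCQ-specific reward could look like alongside the math check; the regex and the assumed answer format (a single A–D option letter near the end of the completion) are assumptions, not part of open-r1:
```python
import re

def mcq_accuracy_reward(completion: str, gold_option: str) -> float:
    """Return 1.0 if the last clearly stated option letter matches the gold one."""
    # Look for standalone option letters like "answer is B" or "(B)".
    matches = re.findall(r"\b([A-D])\b", completion.upper())
    if not matches:
        return 0.0
    return 1.0 if matches[-1] == gold_option.upper() else 0.0

print(mcq_accuracy_reward("So the answer is B", "B"))  # 1.0
print(mcq_accuracy_reward("I think it's C", "B"))      # 0.0
```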
|
https://github.com/huggingface/open-r1/issues/56
|
open
|
[] | 2025-01-26T14:30:13Z
| 2025-01-26T17:52:58Z
| null |
Luodian
|
huggingface/diffusers
| 10,655
|
How to use a custom dataset in train_dreambooth_flux.py.
|
Hi. What if I want to train on two images with two different prompts, something like m1.jpeg, m1.txt; m2.jpeg, m2.txt?
The default example only shows all images sharing one instance prompt. Thanks for the help!
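One way this is commonly handled is to build an `imagefolder`-style dataset with a `metadata.jsonl` carrying one prompt per image (this is the Hugging Face datasets convention; whether `train_dreambooth_flux.py` itself exposes a `--dataset_name`/`--caption_column` path should be checked against the script, so treat the CLI side as an assumption):
```python
import json
from pathlib import Path

# data/
#   m1.jpeg  m1.txt
#   m2.jpeg  m2.txt
data_dir = Path("data")
records = []
for img in sorted(data_dir.glob("*.jpeg")):
    caption = img.with_suffix(".txt").read_text().strip()
    records.append({"file_name": img.name, "text": caption})

# `metadata.jsonl` next to the images is the `imagefolder` convention,
# so each image gets its own caption column.
with open(data_dir / "metadata.jsonl", "w") as f:
    for rec in records:
        f.write(json.dumps(rec) + "\n")
```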
|
https://github.com/huggingface/diffusers/issues/10655
|
closed
|
[] | 2025-01-26T11:53:01Z
| 2025-01-27T19:43:55Z
| null |
rooooc
|
huggingface/open-r1
| 46
|
How to train on multi-node, multi-GPU setups
|
https://github.com/huggingface/open-r1/issues/46
|
open
|
[] | 2025-01-26T04:57:11Z
| 2025-02-19T14:00:44Z
| null |
yuepengs
|
|
huggingface/transformers.js
| 1,166
|
Why isn't transformers.js using the filesystem API instead of the Cache API?
|
### Question
I find the Cache API quite limiting when it comes to user experience. I am curious why transformers.js is not utilizing the File System API. Is there any practical difficulty with it?
|
https://github.com/huggingface/transformers.js/issues/1166
|
open
|
[
"question"
] | 2025-01-25T14:12:38Z
| 2025-02-08T12:09:16Z
| null |
Nithur-M
|
huggingface/open-r1
| 23
|
How to contribute
|
Hello there 👋!
Replicating all parts of DeepSeek's R1 pipeline is going to take a community effort, especially with dataset curation and creation. If you would like to contribute, please explore the issues linked below.
|
https://github.com/huggingface/open-r1/issues/23
|
open
|
[] | 2025-01-25T13:55:31Z
| 2025-05-06T13:32:10Z
| null |
lewtun
|