| repo | number | title | body | url | state | labels | created_at | updated_at | comments | user |
|---|---|---|---|---|---|---|---|---|---|---|
huggingface/smolagents
| 842
|
How to pass custom type variables to tools
|
I'm working on a Telegram bot and using the `smolagents` library to create agents that handle reminders. The issue I'm facing is related to passing the `context` object (which is specific to each message received by the bot) to a tool function (`add_reminder`). The `context` object is required to access the `job_queue` for scheduling reminders.
### Problem:
Even though I'm passing the `context` variable through the `additional_args` argument in `agent.run`, the agent doesn't seem to pass this variable directly to the code interpreter. Instead, it redefines the variable as `None`, which causes the rest of the code to fail.
Here's the relevant part of the code:
```python
@tool
def add_reminder(title: str,
                 date_time: datetime.datetime,
                 chat_id: str,
                 context: Any,
                 location: str = None,
                 details: str = None) -> dict:
    '''
    Add a reminder to the job queue.
    Args:
        title: The title of the reminder (str)
        date_time: The time for the reminder
        location: The location of the reminder if it is specified. If not then None (str)
        details: The details of the reminder if it is specified. If not then None (str)
        chat_id: pass the chat_id given to you
        context: pass the context given to you
    '''
    # try:
    reminder = {}
    reminder['Title'] = title
    reminder['Time'] = date_time
    reminder['Location'] = location
    reminder['Details'] = details
    # Convert the reminder time string to a localized datetime object
    timer_date = date_time.replace(tzinfo=None)
    timer_date = tz.localize(timer_date)
    timer_date_string = timer_date.strftime("%H:%M %d/%m/%Y")
    timer_name = f"{title} ({timer_date_string})"
    reminder['run'] = 'once'
    reminder['text'] = reminder_to_text(reminder)
    # Calculate the time remaining in seconds
    now = datetime.datetime.now(tz)
    seconds_until_due = (timer_date - now).total_seconds()
    # Check if the time is in the past
    if seconds_until_due <= 0:
        return {'success': False, 'message': TXT_NOT_ABLE_TO_SCHEDULE_PAST}
    reminder['type'] = 'parent'
    context.job_queue.run_once(
        alarm,
        when=timer_date,
        chat_id=chat_id,
        name=timer_name,
        data=reminder,
    )
    reminder['type'] = '-30'
    context.job_queue.run_once(
        alarm_minus_30,
        when=timer_date - datetime.timedelta(minutes=30),
        chat_id=chat_id,
        name=timer_name,
        data=reminder,
    )
    return {'success': True, 'message': TXT_REMINDER_SCHEDULED, 'response_for_user': reminder['text']}


async def add_reminder_from_input(update, context):
    # Add the reminder
    input = update.message.text
    chat_id = update.effective_chat.id
    now = datetime.datetime.now(tz).strftime("%d/%m/%Y %H:%M")
    logger.info(f'chat_id: {chat_id}, input: {input}')
    agent = CodeAgent(tools=[add_reminder],
                      additional_authorized_imports=['datetime'],
                      model=OpenAIServerModel(model_id='gpt-4o-mini', api_key = OPENAI_TOKEN),
                      verbosity_level=3,
                      max_steps = 2)
    answer = agent.run(TXT_MENU_AGENT_SYSTEM_PROMPT.format(input=input, now=now),
                       additional_args={"context": context, "chat_id": chat_id})
    await send_message(update, context, text=answer)
```
When the agent runs, it generates code like this:
```python
─ Executing parsed code: ──────────────────────────────────────────────────────
from datetime import datetime, timedelta
# Set the reminder details
title = "Meeting with John"
date_time = datetime(2025, 3, 1, 9, 0) # March 1, 2025, at 09:00
chat_id = 6129357493
context = None # This would typically be the provided context object
# Add the reminder
reminder_response = add_reminder(tit
|
https://github.com/huggingface/smolagents/issues/842
|
closed
|
[] | 2025-02-28T23:04:49Z
| 2025-03-01T23:45:40Z
| null |
ebravofm
|
pytorch/xla
| 8,774
|
The "Pytorch/XLA overview" is very long, goes into advanced topics, and is overall intimidating for new users.
|
## 📚 Documentation
The "Pytorch/XLA overview" includes many advanced topics that go beyond an "overview", including how to specifically convert Stable Diffusion to run on TPUs (which is more of a Guide) and how to profile (which is more of a Tutorial). The result is an intimidating introduction for potential users of PyTorch/XLA.
I'd suggest we break the SD section out into a stand-alone guide, and the profiling section into a stand-alone tutorial with a simplified example that has a successful outcome (the current section ends with "we found the problems but there's nothing we can do").
The remaining copy can be redrafted into an "intro", so that users hear about some of the benefits of PyTorch/XLA and are encouraged to keep reading and even try out the platform.
|
https://github.com/pytorch/xla/issues/8774
|
open
|
[
"enhancement",
"documentation"
] | 2025-02-28T20:47:25Z
| 2025-06-03T17:34:09Z
| 2
|
yaoshiang
|
pytorch/xla
| 8,773
|
Document the virtual device mesh
|
## 📚 Documentation
We need to explain what a "mesh" is. The current documentation in https://pytorch.org/xla/master/perf/spmd_basic.html#mesh doesn't explain it very well. For example, it doesn't say what the statement `device_ids is almost always np.array(range(num_devices)).` actually means in practice.
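For reference, here is a minimal sketch of what that sentence corresponds to in code, assuming the `torch_xla.distributed.spmd` API from the linked page; the single `'data'` axis name is just an illustrative choice, not something the doc prescribes:
```python
# Hedged sketch based on the linked SPMD doc, not an official example.
import numpy as np
import torch_xla.runtime as xr
from torch_xla.distributed.spmd import Mesh

num_devices = xr.global_runtime_device_count()
device_ids = np.array(range(num_devices))  # logical device IDs, "almost always" 0..N-1

# Arrange every device along one mesh axis; the axis name is arbitrary.
mesh = Mesh(device_ids, mesh_shape=(num_devices,), axis_names=("data",))
```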
|
https://github.com/pytorch/xla/issues/8773
|
closed
|
[
"enhancement",
"documentation"
] | 2025-02-28T19:16:32Z
| 2025-03-16T23:33:32Z
| 1
|
tengyifei
|
pytorch/xla
| 8,772
|
Parametrize test_aten_xla_tensor tests
|
## 🚀 Feature
Parametrize the test_aten_xla_tensor tests. Inspired by https://github.com/pytorch/xla/pull/8734#discussion_r1968768218.
Example of a test_aten_xla_tensor tests: [test_aten_xla_tensor_1](https://github.com/pytorch/xla/blob/2675e6892c6f955fc2baf88d85dfdfa72062273c/test/cpp/test_aten_xla_tensor_1.cpp)
## Motivation
Decrease and simplify the amount of code we have for testing while increasing readability. Right now the test_aten_xla_tensor tests are split into 6 distinct files, each with over 1000 lines, and 2 with over 5000 lines. This makes the tests hard to read and new tests hard to implement.
Parametrization will hopefully:
1) Significantly decrease the number of lines on the test
2) Significantly increase readability
3) Increase speed for developing tests
## Pitch
Collapse tests that are similar into the same parameterized test.
## Alternatives
There are other parametrization methods for C++ that are less clean than INSTANTIATE_TEST_SUITE_P. We could pursue these if INSTANTIATE_TEST_SUITE_P turns out to be a blocker.
## Additional context
We should utilize [INSTANTIATE_TEST_SUITE_P](https://github.com/google/googletest/blob/main/docs/advanced.md#how-to-write-value-parameterized-tests)
|
https://github.com/pytorch/xla/issues/8772
|
open
|
[
"enhancement",
"usability",
"testing"
] | 2025-02-28T18:19:20Z
| 2025-03-06T03:06:31Z
| 2
|
pgmoka
|
pytorch/pytorch
| 148,196
|
[inductor][triton] Decide how to deprecate "old triton versions"
|
### 🚀 The feature, motivation and pitch
Right now we have a mess of at least 3 "versions" of Triton - i.e. commit ranges that we are compatible with.
This is beneficial for a few reasons:
* Ability to bisect old versions of Triton
* Compatibility with users who have different (i.e. old) versions of Triton installed - also fbcode/oss mismatches,
* Possibly other Triton forks for different hardware, which may be based off of old versions of Triton
But it has some downsides - mainly messy code trying to handle the various versions of Triton. Also, we don't test the old versions, so there's nothing ensuring that these old code paths are actually still correct. We should probably decide on a policy or a way to determine when we can clean up handling for an old version of Triton.
### Alternatives
_No response_
### Additional context
_No response_
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @aakhundov
|
https://github.com/pytorch/pytorch/issues/148196
|
open
|
[
"triaged",
"oncall: pt2",
"module: inductor"
] | 2025-02-28T17:18:12Z
| 2025-03-04T15:39:46Z
| null |
davidberard98
|
huggingface/sentence-transformers
| 3,254
|
How to train a SentenceTransformer with multiple negatives?
|
I have a dataset like: {'anchor': str, 'positive': str, 'negative': list[str]}
It seems to be invalid with the example code:
```python
model = SentenceTransformer(model_path)
extend_position_embeddings(model._first_module().auto_model,max_length)
loss = CachedMultipleNegativesRankingLoss(model, mini_batch_size=16)
training_args = SentenceTransformerTrainingArguments(
output_dir=f"./model_dir/{args.save_name}-{args.data_mode}",
overwrite_output_dir=True,
logging_dir="./logs",
logging_steps=1,
save_strategy='epoch',
save_total_limit=2,
# max_steps=900,
num_train_epochs=3,
warmup_ratio=0.05,
learning_rate=3e-5,
weight_decay=0.01,
gradient_accumulation_steps=16,
per_device_train_batch_size=4,
dataloader_num_workers=1,
batch_sampler=BatchSamplers.NO_DUPLICATES,
fp16=True,
lr_scheduler_type="cosine",
remove_unused_columns=False,
# deepspeed='/mnt/dolphinfs/hdd_pool/docker/user/hadoop-aipnlp/INS/ruanjunhao04/ruanjunhao/chatrag-bench/train/ds3.json',
# gradient_checkpointing=True,
)
trainer = SentenceTransformerTrainer(
model=model,
args=training_args,
train_dataset=dataset,
loss=loss,
)
dataloader = trainer.get_train_dataloader()
for d in dataloader:
import pdb
pdb.set_trace()
trainer.train()
```
```bash
File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-aipnlp/INS/ruanjunhao04/env/rjh/lib/python3.12/site-packages/torch/utils/data/dataloader.py", line 1191, in __init__
self._reset(loader, first_iter=True)
File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-aipnlp/INS/ruanjunhao04/env/rjh/lib/python3.12/site-packages/torch/utils/data/dataloader.py", line 1228, in _reset
self._try_put_index()
File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-aipnlp/INS/ruanjunhao04/env/rjh/lib/python3.12/site-packages/torch/utils/data/dataloader.py", line 1471, in _try_put_index
index = self._next_index()
^^^^^^^^^^^^^^^^^^
File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-aipnlp/INS/ruanjunhao04/env/rjh/lib/python3.12/site-packages/torch/utils/data/dataloader.py", line 691, in _next_index
return next(self._sampler_iter) # may raise StopIteration
^^^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-aipnlp/INS/ruanjunhao04/env/rjh/lib/python3.12/site-packages/sentence_transformers/sampler.py", line 193, in __iter__
value
TypeError: unhashable type: 'list'
```
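One possible direction, as a hedged sketch rather than an official fix: expand the `negative` list into separate string columns, since `(anchor, positive, negative_1, ..., negative_n)` datasets are accepted by the MultipleNegativesRankingLoss family and plain string columns are hashable for the `NO_DUPLICATES` sampler. The column names and the assumption that every row has the same number of negatives are mine:
```python
# Hedged workaround sketch: flatten {'anchor', 'positive', 'negative': list[str]}
# into {'anchor', 'positive', 'negative_1', ..., 'negative_n'}.
# `dataset` is the Hugging Face Dataset used as train_dataset above.
def flatten_negatives(example):
    out = {"anchor": example["anchor"], "positive": example["positive"]}
    for i, neg in enumerate(example["negative"]):
        out[f"negative_{i + 1}"] = neg  # assumes a fixed number of negatives per row
    return out

dataset = dataset.map(flatten_negatives, remove_columns=["negative"])
```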
|
https://github.com/huggingface/sentence-transformers/issues/3254
|
closed
|
[] | 2025-02-28T15:01:19Z
| 2025-06-13T05:04:35Z
| null |
rangehow
|
huggingface/lerobot
| 789
|
How to run eval with the MuJoCo sim?
|
Right now, running eval.py only produces output on the command line. How can I run eval with the MuJoCo sim?
|
https://github.com/huggingface/lerobot/issues/789
|
closed
|
[
"simulation",
"stale"
] | 2025-02-28T10:42:46Z
| 2025-10-08T11:57:42Z
| null |
mmlingyu
|
huggingface/lerobot
| 788
|
offline run convert_dataset_v1_to_v2.py
|
I need help!!!!!
For example, when I run convert_dataset_v1_to_v2.py, it prompts the following:
(screenshot of the error; image not rendered)
and what is train.parquet?
(screenshot; image not rendered)
How can I solve it?
|
https://github.com/huggingface/lerobot/issues/788
|
closed
|
[
"bug",
"question",
"dataset",
"stale"
] | 2025-02-28T06:41:43Z
| 2025-10-09T21:54:09Z
| null |
ximiluuuu
|
pytorch/torchtitan
| 903
|
[Possible PR discussion] Would a PR for training HF models be welcome?
|
Hi! We are in the process of developing a novel training framework for Reinforcement Learning (RL) following TorchTitan. Recently, we've developed a feature to support training directly from Hugging Face (HF) models and loading safetensors in an online, sharded fashion. This can substantially cut down the cost of adapting a new model: all you have to do is implement the parallelism-applying function.
Given this, I wonder whether a PR with the relevant code and a training example for training Hugging Face's Llama model would be welcome. I think this addition would be of great benefit to many in the community.
By the way, during my testing, I found that the HF Llama model demonstrates competitive TPS when compared to the model implemented in TorchTitan.
|
https://github.com/pytorch/torchtitan/issues/903
|
open
|
[
"huggingface integration",
"community help wanted"
] | 2025-02-28T03:13:40Z
| 2025-03-04T08:09:14Z
| 7
|
junjzhang
|
pytorch/torchtitan
| 902
|
Question about Triton in the DeepSeek implementation
|
I noticed that some adaptations related to DeepSeek have already been merged. I would like to understand why Triton is being used for implementation. In certain scenarios, such as on ARM architecture or other privateuse1 backends, Triton is not yet fully supported. Have you considered making the use of Triton an optional configuration? @kwen2501
|
https://github.com/pytorch/torchtitan/issues/902
|
closed
|
[
"question"
] | 2025-02-28T02:55:48Z
| 2025-08-21T03:13:51Z
| null |
zqwenn
|
pytorch/xla
| 8,765
|
Settle on a consistent logging methodology and document it
|
It would be useful for PyTorch/XLA to provide easy-to-use debugging logs. To do so, we need to:
1) Settle on specific logging methodology
2) Document it for further use
3) Document how to activate these logs
|
https://github.com/pytorch/xla/issues/8765
|
open
|
[
"enhancement",
"usability",
"documentation"
] | 2025-02-27T19:28:20Z
| 2025-03-05T20:19:25Z
| 0
|
pgmoka
|
pytorch/xla
| 8,764
|
"Too many open files" error documenting for multi-processing
|
In multiprocessing cases, we can get a "Too many open files" error from too many processes opening at the same time. This can be confusing as this is a common error for file opening. We should add more information to the error to make this issue easier to track.
|
https://github.com/pytorch/xla/issues/8764
|
open
|
[
"enhancement",
"usability",
"documentation"
] | 2025-02-27T19:08:27Z
| 2025-03-05T20:19:12Z
| 0
|
pgmoka
|
pytorch/xla
| 8,763
|
Improve Logging methodology and documentation
|
Standardize the logging method so that it can be leveraged with debugging flags.
Afterwards, document how to get these logs in our documentation.
|
https://github.com/pytorch/xla/issues/8763
|
open
|
[
"enhancement",
"usability",
"documentation"
] | 2025-02-27T18:57:29Z
| 2025-03-11T16:48:58Z
| 0
|
pgmoka
|
pytorch/xla
| 8,762
|
Centralize API guide docs
|
Centralize API guide docs. Right now for users interested in our APIs, there are a couple places they might go to:
- https://github.com/pytorch/xla/blob/6f423d0bb284190cf1b12d8a943a334e57b4df28/docs/source/learn/api-guide.rst
- https://pytorch.org/xla/release/r2.6/learn/api-guide.html
- https://github.com/pytorch/xla/blob/6f423d0bb284190cf1b12d8a943a334e57b4df28/API_GUIDE.md?plain=1#L166
|
https://github.com/pytorch/xla/issues/8762
|
open
|
[
"enhancement",
"documentation"
] | 2025-02-27T18:54:49Z
| 2025-03-05T20:18:36Z
| 0
|
pgmoka
|
pytorch/xla
| 8,761
|
Create full tutorial example for transitioning Pytorch to Pytorch XLA
|
It would be useful for new users to have a basic example showing the differences between the two.
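As a starting point, a minimal sketch of the kind of change such an example could highlight, assuming the classic `torch_xla.core.xla_model` API (this is not an official tutorial snippet):
```python
# Hedged sketch: a plain PyTorch loop where only the device handling changes.
import torch
import torch_xla.core.xla_model as xm

device = xm.xla_device()                # instead of torch.device("cuda")
model = torch.nn.Linear(10, 2).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

for step in range(3):
    x = torch.randn(8, 10).to(device)
    y = torch.randint(0, 2, (8,)).to(device)
    loss = torch.nn.functional.cross_entropy(model(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    xm.mark_step()                      # materialize the lazily built XLA graph
```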
|
https://github.com/pytorch/xla/issues/8761
|
open
|
[
"enhancement",
"documentation"
] | 2025-02-27T18:53:29Z
| 2025-03-28T17:54:03Z
| 3
|
pgmoka
|
pytorch/xla
| 8,760
|
Add profiling documentation
|
[re: issues/8743](https://github.com/pytorch/xla/issues/8743#issuecomment-2686428336)
This issue has a request for adding documentation on the `start_trace` and `stop_trace` API, but we currently don't have any documentation around profiling. Who can I work with to get some profiling documentation written? Thanks!
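In the meantime, a minimal sketch of the kind of snippet such documentation could contain, assuming the `torch_xla.debug.profiler` API (the port number and trace name below are illustrative):
```python
# Hedged sketch, not official profiling documentation.
import torch_xla.debug.profiler as xp

server = xp.start_server(9012)  # expose a profiler port that capture tools can attach to

# Annotate a region of the training loop so it shows up in the captured trace.
with xp.Trace("train_step"):
    pass  # replace with the actual training-step body
```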
|
https://github.com/pytorch/xla/issues/8760
|
open
|
[
"enhancement",
"documentation"
] | 2025-02-27T17:48:34Z
| 2025-03-12T00:08:59Z
| 3
|
mikegre-google
|
huggingface/sentence-transformers
| 3,252
|
How to train sentence transformers with multi machines?
|
The [docs](https://sbert.net/docs/sentence_transformer/training/distributed.html) describe how to train Sentence Transformers with multiple GPUs.
But both my model and my data are huge, and training Sentence Transformers with 8 GPUs in one single machine is still very slow.
Does Sentence Transformers support training using multiple machines, each with 8 GPUs? Do we have any examples?
Thank you very much.
|
https://github.com/huggingface/sentence-transformers/issues/3252
|
open
|
[] | 2025-02-27T13:37:02Z
| 2025-02-27T13:37:02Z
| null |
awmoe
|
huggingface/diffusers
| 10,917
|
Is lumina-2.0 script correct?
|
I wrote a script, based on the one provided [here](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/train_dreambooth_lora_lumina2.py)
It gets stuck at a loss of around 0.5, which I think is quite high, isn't it?
|
https://github.com/huggingface/diffusers/issues/10917
|
open
|
[] | 2025-02-27T11:17:00Z
| 2025-02-28T15:46:43Z
| 3
|
Riko0
|
huggingface/open-r1
| 444
|
How to increase the context window from 4k to 32k on Qwen models?
|
Hello,
I'm trying to distill a subset of the [OpenR1-Math-220k](https://huggingface.co/datasets/open-r1/openr1-220k-math) dataset into my Qwen/Qwen2.5-Math-7B-Instruct. I want to do this via a custom SFT pipeline in order to see if I can match the results obtained in the evaluations.
However, I'm struggling to increase the context window of the Qwen math model from 4k to 32k tokens.
This is what I tried in the config.json of the model:
```
{
"_name_or_path": "Qwen/Qwen2.5-Math-7B-Instruct",
"architectures": [
"Qwen2ForCausalLM"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 32768,
"max_window_layers": 28,
"model_type": "qwen2",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"type": "linear",
"factor": 8.0
},
"rope_theta": 10000.0,
"sliding_window": null,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.48.1",
"use_cache": true,
"use_sliding_window": false,
"vocab_size": 152064
}
```
But the generations obtained with this base model are garbage. Do you have any advice on which parameters are best and how to train the model on bigger context windows than initially released?
Thanks !
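For what it's worth, the same override can also be applied at load time instead of hand-editing config.json; this is only a hedged sketch that copies the values from the config above and is not a recommendation of which scaling setting works best:
```python
# Hedged sketch: apply the context/rope overrides programmatically via transformers.
import torch
from transformers import AutoConfig, AutoModelForCausalLM

config = AutoConfig.from_pretrained("Qwen/Qwen2.5-Math-7B-Instruct")
config.max_position_embeddings = 32768
config.rope_scaling = {"type": "linear", "factor": 8.0}  # same values as the edited config.json

model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-Math-7B-Instruct",
    config=config,
    torch_dtype=torch.bfloat16,
)
```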
|
https://github.com/huggingface/open-r1/issues/444
|
closed
|
[] | 2025-02-27T10:27:43Z
| 2025-07-24T23:56:12Z
| null |
Jeremmmyyyyy
|
huggingface/trl
| 2,972
|
How many H20 (96GB) GPUs are needed to train Qwen7B with the GRPO algorithm?
|
I want to use the GRPO algorithm to train Qwen7B, but I failed using 4 H20 (96GB) GPUs with the trl library. I would like to know how many H20 GPUs are needed.
|
https://github.com/huggingface/trl/issues/2972
|
open
|
[
"β question",
"π GRPO"
] | 2025-02-27T04:12:16Z
| 2025-03-14T02:22:36Z
| null |
Tuziking
|
pytorch/ao
| 1,790
|
An error was encountered setting torch._dynamo.decorators.mark_unbacked
|
Hello, I want the batch dimension to be dynamic, and I use torch._dynamo.mark_dynamic to set it. But I found that a recompile is triggered when the batch is 1 and 2. Then I used torch._dynamo.decorators.mark_unbacked, but it quantizes incorrectly. Can you look at this problem?
My environment:
torch: 2.5.0
torchao: 0.8.0
This is the minimal reproduction code:
```python
import torch
from torchao.quantization.quant_api import (
quantize_,
int8_dynamic_activation_int8_weight
)
torch._logging.set_logs(recompiles=True, recompiles_verbose = True)
class MyModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(128, 256)

    def forward(self, x):
        return self.linear(x)
model = MyModel().cuda().eval()
model = torch.compile(model, fullgraph=True)
# quant
quantize_(model, int8_dynamic_activation_int8_weight())
example_input = torch.randn(2, 64, 128).cuda()
torch._dynamo.decorators.mark_unbacked(example_input, 0)
torch._dynamo.mark_dynamic(example_input, 0)
model(example_input)
x1 = torch.randn(1, 64, 128).cuda()
x2 = torch.randn(2, 64, 128).cuda()
print("input shape: ", x1.shape)
model(x1)
print("input shape: ", x2.shape)
model(x2)
```
This is the error log
<details>
W0227 10:58:38.277000 1279033 torch/fx/experimental/symbolic_shapes.py:5124] [0/0] failed during evaluate_expr(Ne(u0, 1), hint=None, size_oblivious=False, forcing_spec=False
E0227 10:58:38.277000 1279033 torch/fx/experimental/recording.py:298] [0/0] failed while running evaluate_expr(*(Ne(u0, 1), None), **{'fx_node': False})
Traceback (most recent call last):
File "/root/picasso/songh/my_venv/py310/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 2132, in run_node
return node.target(*args, **kwargs)
File "/root/picasso/songh/my_venv/py310/lib/python3.10/site-packages/torchao/utils.py", line 433, in _dispatch__torch_function__
return cls._ATEN_OP_OR_TORCH_FN_TABLE[func](func, types, args, kwargs)
File "/root/picasso/songh/my_venv/py310/lib/python3.10/site-packages/torchao/utils.py", line 412, in wrapper
return func(f, types, args, kwargs)
File "/root/picasso/songh/my_venv/py310/lib/python3.10/site-packages/torchao/quantization/linear_activation_quantized_tensor.py", line 126, in _
return weight_tensor._quantized_linear_op(input_tensor, weight_tensor, bias)
File "/root/picasso/songh/my_venv/py310/lib/python3.10/site-packages/torchao/quantization/linear_activation_quantized_tensor.py", line 83, in _quantized_linear_op
quantized_tensor = input_quant_func(input_tensor, **quant_kwargs)
File "/root/picasso/songh/my_venv/py310/lib/python3.10/site-packages/torchao/quantization/quant_api.py", line 800, in _int8_symm_per_token_reduced_range_quant
return to_affine_quantized_intx(
File "/root/picasso/songh/my_venv/py310/lib/python3.10/site-packages/torchao/dtypes/affine_quantized_tensor.py", line 250, in from_hp_to_intx
scale, zero_point = choose_qparams_affine(
File "/root/picasso/songh/my_venv/py310/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
File "/root/picasso/songh/my_venv/py310/lib/python3.10/site-packages/torchao/quantization/quant_primitives.py", line 738, in choose_qparams_affine
return _choose_qparams_affine(
File "/root/picasso/songh/my_venv/py310/lib/python3.10/site-packages/torch/_ops.py", line 1116, in __call__
return self._op(*args, **(kwargs or {}))
File "/root/picasso/songh/my_venv/py310/lib/python3.10/site-packages/torchao/quantization/quant_primitives.py", line 840, in _choose_qparams_affine
shape_for_reduction, reduction_dims = _get_reduction_params(
File "/root/picasso/songh/my_venv/py310/lib/python3.10/site-packages/torchao/quantization/quant_primitives.py", line 229, in _get_reduction_params
if block_size[i] != input_size[i] and block_size[i] > 1:
File "/root/picasso/songh/my_venv/py310/lib/python3.10/site-packages/torch/__init__.py", line 680, in __bool__
return self.node.bool_()
File "/root/picasso/songh/my_venv/py310/lib/python3.10/site-packages/torch/fx/experimental/sym_node.py", line 511, in bool_
return self.guard_bool("", 0)
File "/root/picasso/songh/my_venv/py310/lib/python3.10/site-packages/torch/fx/experimental/sym_node.py", line 449, in guard_bool
r = self.shape_env.evaluate_expr(self.expr, self.hint, fx_node=self.fx_node)
File "/root/picasso/songh/my_venv/py310/lib/python3.10/site-packages/torch/fx/experimental/recording.py", line 262, in wrapper
return retlog(fn(*args, **kwargs))
File "/root/picasso/songh/my_venv/py310/lib/python3.10/site-packages/torch/fx/experimental/symbolic_shapes.py", line 5122, in evaluate_expr
return self._evaluate_expr(orig_expr, hint, fx_node, size_oblivious, forcing_spec=forcing_spec)
File "/root/picasso/songh/my_venv/py310/lib/python3.10/site-packages/torch/fx/experimental/symbolic_shapes.py", line 5238, in _evaluate_expr
raise self._make_data_dependen
|
https://github.com/pytorch/ao/issues/1790
|
open
|
[
"question",
"quantize_",
"triaged"
] | 2025-02-27T03:10:43Z
| 2025-03-06T19:07:34Z
| null |
songh11
|
pytorch/torchtitan
| 897
|
Moving train.py into the torchtitan submodule makes run_train.sh fail with "Can not find module"
|
### Bug description
Hi team,
I noticed a recent change which moved train.py from the top-level folder of the project to the torchtitan subfolder. This causes run_train.sh to fail with the following error message.
The failure comes from the import "from torchtitan.components.checkpoint import CheckpointManager, TrainState" at the beginning of train.py: train.py can no longer find a module named "torchtitan" because train.py is now part of torchtitan itself.
I fixed it in a hacky way, but I'm looking forward to suggestions on this.
<img width="1208" alt="Image" src="https://github.com/user-attachments/assets/3a4358ad-e5a0-4fae-8ebe-1dfb3589da44" />
Thank you!
```
(/home/jianiw/local/jiani/pytorch-env) [jianiw@devvm7508]~/local/jiani/torchtitan% LOG_RANK=0,1 NGPU=4 ./run_train.sh
+ NGPU=4
+ LOG_RANK=0,1
+ CONFIG_FILE=./torchtitan/models/llama/train_configs/debug_model.toml
+ overrides=
+ '[' 0 -ne 0 ']'
+ PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True
+ torchrun --nproc_per_node=4 --rdzv_backend c10d --rdzv_endpoint=localhost:0 --local-ranks-filter 0,1 --role rank --tee 3 torchtitan/train.py --job.config_file ./torchtitan/models/llama/train_configs/debug_model.toml
W0226 15:57:42.491000 2461839 torch/distributed/run.py:763]
W0226 15:57:42.491000 2461839 torch/distributed/run.py:763] *****************************************
W0226 15:57:42.491000 2461839 torch/distributed/run.py:763] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
W0226 15:57:42.491000 2461839 torch/distributed/run.py:763] *****************************************
[rank0]:Traceback (most recent call last):
[rank0]: File "/data/users/jianiw/jiani/torchtitan/torchtitan/train.py", line 14, in <module>
[rank0]: from torchtitan.components.checkpoint import CheckpointManager, TrainState
[rank0]:ModuleNotFoundError: No module named 'torchtitan'
[rank1]:Traceback (most recent call last):
[rank1]: File "/data/users/jianiw/jiani/torchtitan/torchtitan/train.py", line 14, in <module>
[rank1]: from torchtitan.components.checkpoint import CheckpointManager, TrainState
[rank1]:ModuleNotFoundError: No module named 'torchtitan'
E0226 15:57:44.126000 2461839 torch/distributed/elastic/multiprocessing/api.py:870] failed (exitcode: 1) local_rank: 0 (pid: 2462029) of binary: /home/jianiw/local/jiani/pytorch-env/bin/python
Traceback (most recent call last):
File "/home/jianiw/local/jiani/pytorch-env/bin/torchrun", line 33, in <module>
sys.exit(load_entry_point('torch', 'console_scripts', 'torchrun')())
File "/data/users/jianiw/jiani/pytorch/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 354, in wrapper
return f(*args, **kwargs)
File "/data/users/jianiw/jiani/pytorch/torch/distributed/run.py", line 889, in main
run(args)
File "/data/users/jianiw/jiani/pytorch/torch/distributed/run.py", line 880, in run
elastic_launch(
File "/data/users/jianiw/jiani/pytorch/torch/distributed/launcher/api.py", line 139, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/data/users/jianiw/jiani/pytorch/torch/distributed/launcher/api.py", line 270, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
torchtitan/train.py FAILED
------------------------------------------------------------
Failures:
[1]:
time : 2025-02-26_15:57:43
host : devvm7508.cco0.facebook.com
rank : 1 (local_rank: 1)
exitcode : 1 (pid: 2462030)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
[2]:
time : 2025-02-26_15:57:43
host : devvm7508.cco0.facebook.com
rank : 2 (local_rank: 2)
exitcode : 1 (pid: 2462032)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
[3]:
time : 2025-02-26_15:57:43
host : devvm7508.cco0.facebook.com
rank : 3 (local_rank: 3)
exitcode : 1 (pid: 2462033)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
time : 2025-02-26_15:57:43
host : devvm7508.cco0.facebook.com
rank : 0 (local_rank: 0)
exitcode : 1 (pid: 2462029)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
```
### Versions
Current main branch after #894 merged (I don't t
|
https://github.com/pytorch/torchtitan/issues/897
|
closed
|
[] | 2025-02-27T00:11:02Z
| 2025-03-23T01:42:01Z
| 3
|
jianiw25
|
pytorch/xla
| 8,757
|
Document on how to profile with torch_xla
|
## 📚 Documentation
I found we don't have a doc/guide on how to profile with torch_xla. We should add this because getting a profile is essential for performance analysis.
|
https://github.com/pytorch/xla/issues/8757
|
closed
|
[
"enhancement",
"documentation"
] | 2025-02-26T23:27:20Z
| 2025-12-02T00:18:03Z
| null |
lsy323
|
pytorch/serve
| 3,394
|
Rename open_inference_grpc.proto package name
|
Hi Team,
Starting from 0.10.0, TorchServe introduced [open_inference_grpc.proto](https://github.com/pytorch/serve/blob/v0.10.0/frontend/server/src/main/resources/proto/open_inference_grpc.proto) to allow the PyTorch gRPC APIs to follow the KServe open inference V2 protocol. However, I am wondering why the [package name](https://github.com/pytorch/serve/blob/v0.10.0/frontend/server/src/main/resources/proto/open_inference_grpc.proto#L18) used for the proto is different from what's used in [Kserve](https://github.com/kserve/kserve/blob/master/docs/predict-api/v2/grpc_predict_v2.proto#L16). Having a different package name would require PyTorch models and non-PyTorch models to use different proto definitions even though they both follow the open inference protocol. I am wondering if it is possible to put open_inference_grpc.proto in the same package as what is defined in the KServe grpc_predict_v2.proto?
Thank you.
|
https://github.com/pytorch/serve/issues/3394
|
open
|
[] | 2025-02-26T21:49:57Z
| 2025-02-26T21:50:25Z
| 0
|
jwang20250226
|
huggingface/lerobot
| 779
|
Is there a way for a robot arm with kinesthetic teaching function to collect data using lerobot?
|
Hello, I have a robot arm with a kinesthetic teaching function. I guess I can teach my robot the first time, and collect data from the second time onward using lerobot? I'm here to ask whether this is easy to achieve by modifying the control_robot.py file. Thanks
|
https://github.com/huggingface/lerobot/issues/779
|
closed
|
[
"question",
"stale"
] | 2025-02-26T17:50:51Z
| 2025-10-16T02:28:54Z
| null |
yzzueong
|
huggingface/diffusers
| 10,910
|
ValueError: Attempting to unscale FP16 gradients.
|
### Describe the bug
I encountered the following error when running train_text_to_image_lora.py: ValueError: Attempting to unscale FP16 gradients.
The script I am running is as follows:
export MODEL_NAME="CompVis/stable-diffusion-v1-4"
export DATASET_NAME="lambdalabs/naruto-blip-captions"
accelerate launch --mixed_precision="fp16" train_text_to_image_lora.py \
--pretrained_model_name_or_path=$MODEL_NAME \
--dataset_name=$DATASET_NAME --caption_column="text" \
--resolution=512 --random_flip \
--train_batch_size=1 \
--num_train_epochs=100 --checkpointing_steps=5000 \
--learning_rate=1e-04 --lr_scheduler="constant" --lr_warmup_steps=0 \
--seed=42 \
--output_dir="sd-naruto-model-lora-clean" \
--validation_prompt="cute dragon creature" --report_to="wandb"
How can I resolve this error?
### Reproduction
export MODEL_NAME="CompVis/stable-diffusion-v1-4"
export DATASET_NAME="lambdalabs/naruto-blip-captions"
accelerate launch --mixed_precision="fp16" train_text_to_image_lora.py \
--pretrained_model_name_or_path=$MODEL_NAME \
--dataset_name=$DATASET_NAME --caption_column="text" \
--resolution=512 --random_flip \
--train_batch_size=1 \
--num_train_epochs=100 --checkpointing_steps=5000 \
--learning_rate=1e-04 --lr_scheduler="constant" --lr_warmup_steps=0 \
--seed=42 \
--output_dir="sd-naruto-model-lora-clean" \
--validation_prompt="cute dragon creature" --report_to="wandb"
### Logs
```shell
```
### System Info
Traceback (most recent call last):
File "train_text_to_image_lora.py", line 975, in <module>
main()
File "train_text_to_image_lora.py", line 856, in main
accelerator.clip_grad_norm_(params_to_clip, args.max_grad_norm)
File "/root/miniconda3/lib/python3.8/site-packages/accelerate/accelerator.py", line 2396, in clip_grad_norm_
self.unscale_gradients()
File "/root/miniconda3/lib/python3.8/site-packages/accelerate/accelerator.py", line 2340, in unscale_gradients
self.scaler.unscale_(opt)
File "/root/miniconda3/lib/python3.8/site-packages/torch/amp/grad_scaler.py", line 338, in unscale_
optimizer_state["found_inf_per_device"] = self._unscale_grads_(
File "/root/miniconda3/lib/python3.8/site-packages/torch/amp/grad_scaler.py", line 260, in _unscale_grads_
raise ValueError("Attempting to unscale FP16 gradients.")
ValueError: Attempting to unscale FP16 gradients.
### Who can help?
_No response_
|
https://github.com/huggingface/diffusers/issues/10910
|
closed
|
[
"bug"
] | 2025-02-26T14:43:57Z
| 2025-03-18T17:43:08Z
| 4
|
Messimanda
|
huggingface/transformers.js
| 1,209
|
Is NFD type normalizer supported?
|
### Question
Hi,
I was trying the following code in the browser, which uses [dewdev/language_detection](https://huggingface.co/dewdev/language_detection):
```ts
import { pipeline, Pipeline } from '@huggingface/transformers';
export class DetectLanguage {
private modelid: string | null = null;
private detectPipeline: Pipeline | null = null;
private initialized: boolean = false;
constructor(modelid: string = 'dewdev/language_detection') {
this.modelid = modelid;
}
async initialize() {
try {
this.detectPipeline = await pipeline('text-classification', this.modelid, {
dtype: 'fp32',
device: navigator.gpu? 'webgpu': 'wasm'
});
this.initialized = true;
console.log("Model initialization successful.");
} catch (error) {
console.error('Error initializing language detection model with fallback:', error);
this.initialized = false;
throw error;
}
}
async detect(text: string) {
if (!this.initialized || !this.detectPipeline) {
console.error("Model not initialized.");
return '';
}
try {
const language = await this.detectPipeline(text, { top: 1 });
return language;
} catch (error) {
console.error('Error during language detection:', error);
return '';
}
}
}
async function main() {
const detectLanguage = new DetectLanguage();
await detectLanguage.initialize();
const text = "This is a test sentence.";
const language = await detectLanguage.detect(text);
console.log(`Detected language: ${language}`);
}
// Call the main function
main();
```
The above code brings up the following error:
Error initializing language detection model with fallback: Error: Unknown Normalizer type: NFD
at Normalizer.fromConfig (tokenizers.js:1011:1)
at tokenizers.js:1187:1
at Array.map (<anonymous>)
at new NormalizerSequence (tokenizers.js:1187:1)
at Normalizer.fromConfig (tokenizers.js:993:1)
at new PreTrainedTokenizer (tokenizers.js:2545:1)
at new BertTokenizer (tokenizers.js:3277:8)
at AutoTokenizer.from_pretrained (tokenizers.js:4373:1)
at async Promise.all (:5173/index 0)
at async loadItems (pipelines.js:3413:1)
Here is the normalizer section from the tokenizer:
```json
"normalizer": {
"type": "Sequence",
"normalizers": [
{
"type": "NFD"
},
{
"type": "BertNormalizer",
"clean_text": true,
"handle_chinese_chars": true,
"strip_accents": true,
"lowercase": true
}
]
},
```
Maybe the NFD normalizer is missing.
Is there any way to bypass this error? Can you please let me know?
Thanks
|
https://github.com/huggingface/transformers.js/issues/1209
|
closed
|
[
"question"
] | 2025-02-26T08:48:08Z
| 2025-02-26T14:41:38Z
| null |
adewdev
|
pytorch/FBGEMM
| 3,737
|
How to install this on Windows x64
|
I can't pip install FBGEMM, and I've looked through [https://download.pytorch.org/whl/fbgemm-gpu/](https://download.pytorch.org/whl/fbgemm-gpu/); it seems like all the wheels only support Linux (with 'manylinux' in their names).
I just want to use torchrec on Windows, and I wonder how to install FBGEMM.
thank you
|
https://github.com/pytorch/FBGEMM/issues/3737
|
open
|
[] | 2025-02-26T05:58:05Z
| 2025-05-09T00:50:07Z
| null |
Elllllllvin
|
huggingface/open-r1
| 436
|
Why is the reward low and not increasing in GRPO training? How to solve it?
|
My config:
```yaml
# Model arguments
model_name_or_path: ../experiment/models/Qwen2.5-1.5B-Instruct
#model_revision: main
torch_dtype: bfloat16
attn_implementation: flash_attention_2
# Data training arguments
dataset_name: ../experiment/datasets/NuminaMath-TIR/data
dataset_configs:
- default
system_prompt: "You are a helpful AI Assistant that provides well-reasoned and detailed responses. You first think about the reasoning process as an internal monologue and then provide the user with the answer. Respond in the following format: <think>\n...\n</think>\n<answer>\n...\n</answer>"
# Num processes is less by 1 as vLLM is using 1 GPU
num_processes: 3
# GRPO trainer config
bf16: true
use_vllm: true
vllm_device: auto
vllm_gpu_memory_utilization: 0.7
do_eval: false
gradient_accumulation_steps: 16
gradient_checkpointing: true
gradient_checkpointing_kwargs:
  use_reentrant: false
#hub_model_id: Qwen2.5-1.5B-Open-R1-GRPO
#hub_strategy: every_save
learning_rate: 2.0e-05
log_completions: true
log_level: info
logging_first_step: true
logging_steps: 5
logging_strategy: steps
lr_scheduler_type: cosine
max_prompt_length: 512
max_completion_length: 1024
max_steps: -1
num_generations: 6
num_train_epochs: 1
output_dir: outputs/Qwen2.5-1.5B-Open-R1-GRPO-no-difficulty
overwrite_output_dir: true
per_device_eval_batch_size: 16
per_device_train_batch_size: 8
push_to_hub: false
report_to:
- none
reward_funcs:
- accuracy
- format
#- tag_count
reward_weights:
- 1.0
- 1.0
#- 1.0
save_strategy: "steps"
save_steps: 100
#save_total_limit: 1
seed: 42
warmup_ratio: 0.1
```
|
https://github.com/huggingface/open-r1/issues/436
|
open
|
[] | 2025-02-26T05:12:18Z
| 2025-02-27T01:06:53Z
| null |
AXy1527
|
huggingface/lerobot
| 773
|
How to overwrite the code to collect action data from other robots?
|
Hey! I have a problem when I try to overwrite the code of lerobot to collect action data from my own robot. Here's the detail. My robot is a single six-joint robot arm, so I made a new RobotConfig, which only contains the info of the camera. Then I overwrote the function 'teleop_step' in the file manipulator.py. I also set a default value for the robot pos to test with at first. When I start to record, the observation and action data are fine, but when it comes to calling the function 'save_episode', an error comes up, which I show below. I really want to know what else I am supposed to do to make it work, thanks.
(screenshots of the error; images not rendered)
|
https://github.com/huggingface/lerobot/issues/773
|
closed
|
[
"question",
"stale"
] | 2025-02-26T03:33:09Z
| 2025-10-16T02:28:56Z
| null |
tjh-flash
|
pytorch/data
| 1,456
|
Discussion: DCP APIs and broader contracts for rescalability
|
After much discussion, it was decided that the best approach to implementing rescalability would be to implement rescaling in the base file reader, in order to maintain low overhead and avoid proliferation of logical shard objects (see #1372 , #1455, [torchtitan PR](https://github.com/pytorch/torchtitan/pull/376)). However, this approach necessitates that all nodes above the base node become rescaling-aware: we must decide what behaviors to support and how to make specifying these behaviors friendly to the user.
I have identified four behaviors that I believe a fully capable rescalable pipeline should support, with some correspondence to the existing placement behaviors of DTensors:
1. Drop on rescale. Certain values, such as scalars and RNG states, cannot be repartitioned and it makes no sense to try. These values should be dropped when rescaling but kept otherwise.
2. Sharded save, sharded load. Large buffers (for example, a local shuffling buffer) can be pooled into a single DTensor, which is then resharded over a new number of workers when rescaling. DCP is largely built around supporting this particular behavior, but note that we must now handle cases where the number of workers may not divide the length of the buffer evenly, and we also may not know the length of the buffer in advance.
3. Replicated values. This encompasses any expensive metadata objects that we may want to construct (slowly) once, but load from checkpoint afterwards. These values would ideally be saved from rank 0 only, but loaded back to all workers. DCP supports this behavior for non-DTensor objects.
4. Sharded save, global load. Any state that cannot be resharded simply via (2), such as logical shard state, which must first be accumulated/divided into global pools of visited vs unvisited shards. Local values are saved from each rank, but accumulated globally on load. DCP supports this behavior for non-DTensor objects, by assigning a unique rank-based key for all such objects and recompiling them manually on load.
Note that while the above 4 behaviors raise some questions on DCP support, the larger question revolves around how we want to expose these options to users and/or incorporate them into existing Datasets or Nodes.
|
https://github.com/meta-pytorch/data/issues/1456
|
open
|
[] | 2025-02-25T23:14:45Z
| 2025-04-21T13:03:30Z
| 2
|
daviswer
|
huggingface/lerobot
| 771
|
Example of training a policy with PI0?
|
Is there an example config file for training a policy with the PI0 policy?
|
https://github.com/huggingface/lerobot/issues/771
|
closed
|
[
"question",
"policies"
] | 2025-02-25T19:39:51Z
| 2025-04-03T16:44:44Z
| null |
pqrsqwewrty
|
huggingface/diffusers
| 10,904
|
CLIP Score Evaluation without Pre-processing.
|
I am referring to [Evaluating Diffusion Models](https://huggingface.co/docs/diffusers/main/en/conceptual/evaluation), specifically the quantitative evaluation using CLIP score example.
We have images of shape (6, 512, 512, 3).
CLIP score is calculated using `"openai/clip-vit-base-patch16"`.
However, as far as I can tell, the images are not pre-processed to match the format that `"openai/clip-vit-base-patch16"` was trained on (e.g., images of size 224x224 pixels).
Should the images have been processed before or can we still reliably use the CLIP score with the images in their original format?
Please let me know if I have overlooked or am misunderstanding something. Thanks!
|
https://github.com/huggingface/diffusers/issues/10904
|
open
|
[
"stale"
] | 2025-02-25T16:51:44Z
| 2025-03-28T15:03:20Z
| 1
|
e-delaney
|
huggingface/lerobot
| 769
|
How to convert my ALOHA hdf5 data type to your dataset format?
|
https://github.com/huggingface/lerobot/issues/769
|
closed
|
[
"question",
"dataset",
"stale"
] | 2025-02-25T14:07:13Z
| 2025-10-16T02:28:58Z
| null |
return-sleep
|
|
pytorch/pytorch
| 147,850
|
The issue where opt_output in fx_graph_runnable.py is inconsistent with the actual output when testing run_repro(acc=True)
|
### 🐛 Describe the bug
Conclusion
✅ Use .clone() before modifying tensors from expand(), view(), or as_strided().
✅ Ensure tensors are .contiguous() before operations.
✅ Debug with x.is_contiguous() to check memory layout.
If the issue persists, share a code snippet for further debugging!
### Versions
Conclusion
✅ Use .clone() before modifying tensors from expand(), view(), or as_strided().
✅ Ensure tensors are .contiguous() before operations.
✅ Debug with x.is_contiguous() to check memory layout.
If the issue persists, share a code snippet for further debugging!
|
https://github.com/pytorch/pytorch/issues/147850
|
closed
|
[] | 2025-02-25T12:23:49Z
| 2025-03-03T16:56:35Z
| null |
MovieTrack
|
pytorch/serve
| 3,393
|
map workers and GPUs, deviceIds not considered in ts_config
|
tl;dr: using my existing configuration shows no effect when using the "deviceIds" property.
I am successfully hosting three different models on a server with two GPUs.
Each model can be run on a single gpu, but one is more demanding - so I'd like to control the distribution of workers per gpu.
The deviceIds property seems to be exactly what I'd need for that.
It is described [here](https://github.com/pytorch/serve/tree/master/model-archiver#config-file) for the archiver and [here](https://pytorch.org/serve/configuration.html) for either/and the archivers yaml or the model configuration.
And seems to be implemented [here](https://github.com/pytorch/serve/blob/a9e218ae95fe7690c84b555d0fb9021322c9b049/frontend/archive/src/main/java/org/pytorch/serve/archive/model/ModelConfig.java#L81).
However, using my existing configuration - which successfully controls the worker numbers and timeouts - shows no effect whatsoever when using the deviceIds or deviceType properties. Is this only implemented for the YAML file upon archiving?
Is there a way to set the deviceIds via the API?
Configuration excerpt:
...
"defaultVersion": true,\
"marName": "model.mar",\
"deviceIds": [1,],\
"minWorkers": 4,\
"maxWorkers": 4,\
"batchSize": 1,\
"maxBatchDelay": 50,\
"responseTimeout": 120\
...
------------------------------------------------------------------------------------------
Environment headers
------------------------------------------------------------------------------------------
Torchserve branch:
torchserve==0.12.0
torch-model-archiver==0.12.0
Python version: 3.10 (64-bit runtime)
Python executable: /opt/conda/bin/python
Versions of relevant python libraries:
captum==0.6.0
numpy==2.2.3
pillow==10.3.0
psutil==5.9.8
requests==2.32.0
torch==2.4.0+cu121
torch-model-archiver==0.12.0
torch-workflow-archiver==0.2.15
torchaudio==2.4.0+cu121
torchelastic==0.2.2
torchserve==0.12.0
torchvision==0.19.0+cu121
wheel==0.42.0
torch==2.4.0+cu121
**Warning: torchtext not present ..
torchvision==0.19.0+cu121
torchaudio==2.4.0+cu121
Java Version:
OS: N/A
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: N/A
CMake version: version 3.26.4
Is CUDA available: Yes
CUDA runtime version: 12.1
NVIDIA GPU models and configuration:
NVIDIA RTX 4000 Ada Generation
NVIDIA RTX 4000 Ada Generation
Nvidia driver version: 565.77
Nvidia driver cuda version: 12.7
cuDNN version: 9.1.0
Environment:
library_path (LD_/DYLD_): /usr/local/nvidia/lib:/usr/local/nvidia/lib64
|
https://github.com/pytorch/serve/issues/3393
|
open
|
[] | 2025-02-25T12:23:11Z
| 2025-02-26T14:37:27Z
| 0
|
RuDevKu
|
huggingface/diffusers
| 10,901
|
HunyuanVideo in diffusers uses negative_prompt but generates a wrong video
|
### Describe the bug
Diffusers recently added negative_prompt support for hunyuan_video, but when I use negative_prompt and set **guidance_scale** and **true_cfg_scale**, I get a video with all black frames. Maybe I set the wrong parameters, or saving the video fails.
How can I fix my problem? Thanks
### Reproduction
```python
import torch
import time
from diffusers import HunyuanVideoPipeline, HunyuanVideoTransformer3DModel, AutoencoderKLHunyuanVideo
from diffusers.utils import export_to_video, load_image, load_video
NEGATIVE_PROMPT = "Aerial view, aerial view, overexposed, low quality, deformation, a poor composition, bad hands, bad teeth, bad eyes, bad limbs, distortion"
model_path = "/realpath/hunyuanvideo-community-HunyuanVideo"
pipe = HunyuanVideoPipeline.from_pretrained(model_path, torch_dtype=torch.float16)
pipe.vae.enable_tiling()
pipe.to("cuda")
output = pipe(
prompt="The video shows a man and a woman standing in the snow, wearing winter clothing and holding cups of coffee. ",
negative_prompt=NEGATIVE_PROMPT,
height=480,
width=720,
num_frames=129,
num_inference_steps=10,
true_cfg_scale=6.0,
guidance_scale=1.0,
).frames[0]
export_to_video(output, "diffusers_480p_output.mp4", fps=24)
```
### Logs
```shell
```
### System Info
H20
resolution = 480 * 720
steps=10
### Who can help?
_No response_
|
https://github.com/huggingface/diffusers/issues/10901
|
open
|
[
"bug",
"stale"
] | 2025-02-25T11:08:43Z
| 2025-07-15T07:19:15Z
| 2
|
philipwan
|
huggingface/optimum
| 2,200
|
Bug exporting Whisper?
|
### System Info
Hi! I'm exporting some fine-tuned Whisper models, small and base, fine-tuned in English or Spanish. In some cases I've noticed that the exported tokenizer.json is 2.423KB and in other cases 3.839KB, even for the same language: I have some models in English where the tokenizer.json is 2.423KB and others where it is 3.839KB, and the same for the Spanish ones.
When the tokenizer is 2.423KB I get problems generating the output, as it reaches the max_length of the model, but when the tokenizer file is 3.839KB, the output is as it should be.
The tokenizer from the original models is 2.423KB and works well, but after fine-tuning the size changes. I don't know if this is an expected output.
### Who can help?
@michaelbenayoun @JingyaHuang @echarlaix
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction (minimal, reproducible, runnable)
I have used the following URL to train my models: https://huggingface.co/blog/fine-tune-whisper
The datasets I have used in spanish are:
```py
voxpopuli_spanish = load_dataset(
"facebook/voxpopuli", "es", split="train", streaming=True, trust_remote_code=True
) # I take 133 random instances
common_voice_spanish = load_dataset(
"mozilla-foundation/common_voice_17_0",
"es",
split="train",
streaming=True,
trust_remote_code=True,
) # I take 66 random instances
librispeech_spanish = load_dataset(
"facebook/multilingual_librispeech", "spanish", split="train", streaming=True
) # I take 66 random instances
```
I have used the same datasets for english:
In case of the common_voice and voxpopuli, I just change "es"for "en". For the librispeech:
```py
librispeech_asr = load_dataset(
"openslr/librispeech_asr", split="train.other.500", streaming=True, trust_remote_code=True
)
```
I use other private dataset that I can't share right now, but they are around 200 instances.
For exporting the model I use the following line:
```
optimum-cli export onnx --model whisper-small-es-trained whisper-small-es-onnx --task automatic-speech-recognition --opset 18
```
I have tested using multiple opsets, but I get the same output.
### Expected behavior
I don't know if this behavior is correct, or if the exported tokenizer.json must always be the same.
|
https://github.com/huggingface/optimum/issues/2200
|
open
|
[
"bug"
] | 2025-02-25T09:45:02Z
| 2025-03-05T20:58:30Z
| 1
|
AlArgente
|
huggingface/diffusers
| 10,899
|
Is LoHaConfig supported in the convert_state_dict_to_diffusers method?
|
In the train_text_to_image_lora.py file
unet_lora_config = LoraConfig(
r=cfg.rank,
lora_alpha=cfg.rank,
init_lora_weights="gaussian",
target_modules=["to_k", "to_q", "to_v", "to_out.0"],
)
modified to
unet_lora_config = LoHaConfig(
r=cfg.rank,
alpha=cfg.rank,
target_modules=["to_k", "to_q", "to_v", "to_out.0"],
),
unet_lora_state_dict = convert_state_dict_to_diffusers(
get_peft_model_state_dict(unwrapped_unet)
)
An error occurs on this line. Please tell me how to fix it.
|
https://github.com/huggingface/diffusers/issues/10899
|
open
|
[
"stale"
] | 2025-02-25T08:39:08Z
| 2025-03-27T15:03:17Z
| 2
|
llm8047
|
pytorch/data
| 1,452
|
Open for contribution on utility nodes like `Filter`, `Shuffler`, `Header`, `Cycler`?
|
Hi, do you think this kind of node would be in the scope of torchdata? If so, I'm down to open a PR to add them, with remaining and testing, for sure.
```python
import logging
import random
from collections import deque
from typing import Any, Callable, Deque, Dict, Optional, TypeVar, Optional
from torchdata.nodes import BaseNode
logger = logging.getLogger(__name__)
X = TypeVar("X")
T = TypeVar("T")
U = TypeVar("U")
class Filter(BaseNode[T]):
"""Node that filters items from source node based on predicate function."""
SOURCE_KEY = "source"
def __init__(self, source_node: BaseNode[T], filter_fn: Callable[[T], bool]):
super().__init__()
self.source = source_node
self.filter_fn = filter_fn
def reset(self, initial_state: Optional[Dict[str, Any]] = None):
super().reset(initial_state)
self.source.reset(initial_state.get(self.SOURCE_KEY) if initial_state else None)
def next(self) -> T:
while True:
item = next(self.source)
if self.filter_fn(item):
return item
def get_state(self) -> Dict[str, Any]:
return {self.SOURCE_KEY: self.source.state_dict()}
class Shuffler(BaseNode[T]):
"""Node that shuffles items from source node using a buffer."""
SOURCE_KEY = "source"
def __init__(self, source_node: BaseNode[T], buffer_size: int, seed: Optional[int] = None):
super().__init__()
if buffer_size < 1:
raise ValueError("Buffer size must be at least 1")
self.source = source_node
self.buffer_size = buffer_size
self.buffer: Deque[T] = deque()
self.rng = random.Random(seed)
self._initial_seed = seed
def reset(self, initial_state: Optional[Dict[str, Any]] = None):
super().reset(initial_state)
self.buffer.clear()
if initial_state is not None:
self.source.reset(initial_state.get(self.SOURCE_KEY))
self.rng.setstate(initial_state["rng_state"])
else:
self.source.reset()
if self._initial_seed is not None:
self.rng = random.Random(self._initial_seed)
def _fill_buffer(self) -> bool:
"""Fill buffer with items from source. Returns True if any items were added."""
try:
while len(self.buffer) < self.buffer_size:
self.buffer.append(next(self.source))
return True
except StopIteration:
return len(self.buffer) > 0
def next(self) -> T:
if not self.buffer and not self._fill_buffer():
raise StopIteration
# Randomly select and remove an item from the buffer
idx = self.rng.randrange(len(self.buffer))
item = self.buffer[idx]
self.buffer[idx] = self.buffer[-1]
self.buffer.pop()
# Try to refill buffer
self._fill_buffer()
return item
def get_state(self) -> Dict[str, Any]:
return {self.SOURCE_KEY: self.source.state_dict(), "rng_state": self.rng.getstate()}
class Header(BaseNode[T]):
"""Node that yields only the first N items from source node."""
SOURCE_KEY = "source"
def __init__(self, source_node: BaseNode[T], n: int):
super().__init__()
if n < 0:
raise ValueError("n must be non-negative")
self.source = source_node
self.n = n
self._count = 0
def reset(self, initial_state: Optional[Dict[str, Any]] = None):
super().reset(initial_state)
self.source.reset(initial_state.get(self.SOURCE_KEY) if initial_state else None)
if initial_state is not None:
self._count = initial_state["count"]
else:
self._count = 0
def next(self) -> T:
if self._count >= self.n:
raise StopIteration
item = next(self.source)
self._count += 1
return item
def get_state(self) -> Dict[str, Any]:
return {self.SOURCE_KEY: self.source.state_dict(), "count": self._count}
class Cycler(BaseNode[T]):
"""Node that cycles through source node indefinitely."""
SOURCE_KEY = "source"
def __init__(self, source_node: BaseNode[T]):
super().__init__()
self.source = source_node
self._cycle_count: int = 0
def reset(self, initial_state: Optional[Dict[str, Any]] = None):
super().reset(initial_state)
if initial_state is not None:
self._cycle_count = initial_state["cycle_count"]
self.source.reset(initial_state.get(self.SOURCE_KEY))
else:
self._cycle_count = 0
self.source.reset(None)
def next(self) -> T:
try:
return next(self.source)
except StopIteration:
self._cycle_count += 1
self.source.reset(None)
return next(self.source)
def get_state(self) -> Dict[str, Any]:
return {self.SOURCE_KEY: self.source.state_dict(), "cycle_count": self._cycle_count
|
https://github.com/meta-pytorch/data/issues/1452
|
open
|
[] | 2025-02-25T03:36:59Z
| 2025-02-25T05:08:09Z
| 1
|
keunwoochoi
|
pytorch/torchtitan
| 885
|
Possible to integrate DeepEP?
|
ref: https://github.com/deepseek-ai/DeepEP
|
https://github.com/pytorch/torchtitan/issues/885
|
open
|
[] | 2025-02-25T03:24:56Z
| 2026-01-05T17:13:54Z
| 5
|
airlsyn
|
pytorch/xla
| 8,740
|
Add single processing to Getting Started Instructions
|
In our initial README document, we currently only have instructions on the multi-processing steps for getting started. We should add information on single-processing as well.
|
https://github.com/pytorch/xla/issues/8740
|
closed
|
[
"documentation"
] | 2025-02-25T01:15:38Z
| 2025-03-27T17:30:35Z
| 0
|
pgmoka
|
huggingface/sentence-transformers
| 3,246
|
How to save the merged model trained with peft?
|
I am working on fine-tuning a 7B model, and due to its size we trained it with LoRA by following the guidance (https://sbert.net/examples/training/peft/README.html):
```python
peft_config = LoraConfig(
    task_type=TaskType.FEATURE_EXTRACTION,
    inference_mode=False,
    r=8,
    lora_alpha=32,
    lora_dropout=0.1,
)
model.add_adapter(peft_config)
```
Training works great, and we are looking for guidance on how to merge the LoRA layer with the base model and save the result.
What we have tried:
1. `model.save_pretrained("")` => only saves the LoRA layer
2. using the `peft` library: this doesn't seem to work correctly, as the inference result is the same as the base model.
```
model.save_pretrained(tmp_path)
base_model = SentenceTransformer(model_name_or_path=model_path)
adapter_model = PeftModel.from_pretrained(base_model, adapter_tmp_path)
merged_model = adapter_model.merge_and_unload()
merged_model.config = transformers.AutoConfig.from_pretrained(model_path)
merged_model.save_pretrained(path)
```
We are reaching out for insights about how to merge the sentence transformer trained peft model with the base model. Thanks!
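For reference, a minimal sketch of one way this is sometimes done: load the base model as a `SentenceTransformer`, attach the saved adapter to its underlying transformer via `PeftModel`, merge, and save the full model. The paths and the `model[0].auto_model` access are assumptions based on the usual `Transformer` module layout, not a verified recipe:
```python
from peft import PeftModel
from sentence_transformers import SentenceTransformer

base_path = "path/to/base-7b-model"          # hypothetical base model path
adapter_path = "path/to/saved-lora-adapter"  # hypothetical adapter-only save

st_model = SentenceTransformer(base_path)

# model[0] is usually the Transformer module; auto_model is the underlying HF model
hf_model = PeftModel.from_pretrained(st_model[0].auto_model, adapter_path)

# Fold the LoRA weights into the base weights and swap the merged model back in
st_model[0].auto_model = hf_model.merge_and_unload()

# Save the full, merged SentenceTransformer (no adapter needed at load time)
st_model.save_pretrained("path/to/merged-model")
```
It would be worth checking that the merged model's embeddings match the adapter-loaded model on a few inputs before relying on it.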
|
https://github.com/huggingface/sentence-transformers/issues/3246
|
closed
|
[] | 2025-02-25T00:56:20Z
| 2025-12-05T12:33:48Z
| null |
chz816
|
huggingface/datasets
| 7,420
|
better correspondence between cached and saved datasets created using from_generator
|
### Feature request
At the moment `.from_generator` can only create a dataset that lives in the cache. The cached dataset cannot be loaded with `load_from_disk` because the cache folder is missing `state.json`. So the only way to convert this cached dataset to a regular `Dataset` is to use `save_to_disk`, which needs to create a copy of the cached dataset. For large datasets this can end up wasting a lot of space. In my case the saving operation failed, so I am stuck with a large cached dataset and no clear way to convert it to a `Dataset` that I can use. The requested feature is to provide a way to load a cached dataset using `.load_from_disk`. Alternatively, `.from_generator` could create the dataset at a specified location so that it can be loaded from there with `.load_from_disk`.
### Motivation
I have the following workflow which has exposed some awkwardness about the Datasets saving/caching.
1. I created a cached dataset using `.from_generator` which was cached in a folder. This dataset is rather large (~600GB) with many shards.
2. I tried to save this dataset using `.save_to_disk` to another location so that I can use it later as a `Dataset`. This essentially creates another copy (for a total of 1.2TB!) of what is already in the cache... In my case the saving operation keeps dying for some reason and I am stuck with a cached dataset and no copy.
3. Now I am trying to "save" the existing cached dataset but it is not clear how to access the cached files after `.from_generator` has finished, e.g. from a different process. I should not even be looking at the cache, but I really do not want to waste another 2hr to generate the set only for it to fail again (I already did this a couple of times).
- I tried `.load_from_disk` but it does not work with cached files and complains that this is not a `Dataset` (!).
- I looked at `.from_file` which takes one file, but the cached dataset has many (shards), so I am not sure how to make this work (see the sketch below for one possible way).
- I tried `.load_dataset` but this seems to either try to "download" a copy (of a file which is already in the local file system!) which I will then need to save, or I need to use `streaming=False` to create an `IterableDataset` which I then need to convert (using the cache) to a `Dataset` so that I can save it. With both options I will end up with 3 copies of the same dataset for a total of ~2TB! I am hoping there is another way to do this...
Maybe I am missing something here: I looked at docs and forums but no luck. I have a bunch of arrow files cached by `Dataset.from_generator` and no clean way to make them into a `Dataset` that I can use.
This all could be so much easier if `load_from_disk` could recognize the cached files and produce a `Dataset`: after the cache is created I would not have to "save" it again and I could just load it when I need it. At the moment `load_from_disk` needs `state.json`, which is lacking in the cache folder. So perhaps `.from_generator` could be made to "finalize" (e.g. create `state.json`) the dataset once it is done so that it can be loaded easily. Or provide `.from_generator` with a `save_to_dir` parameter in addition to `cache_dir` which can be used for the whole process, including creating the `state.json` at the end.
As a proof of concept I just created `state.json` by hand and `load_from_disk` worked using the cache! So it seems to be the missing piece here.
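For others hitting this, a possible stop-gap (until something like the above lands) is to rebuild a `Dataset` directly from the cached arrow shards with `Dataset.from_file` and `concatenate_datasets`, then save it once. A sketch — the cache path and shard file pattern are assumptions, and this still writes one extra copy, but it avoids re-running the generator:
```python
import glob
import os

from datasets import Dataset, concatenate_datasets

# Hypothetical location of the arrow shards written by Dataset.from_generator
cache_dir = "path/to/from_generator_cache"
shard_paths = sorted(glob.glob(os.path.join(cache_dir, "*.arrow")))

# Rebuild a Dataset from the cached shards without regenerating them (from_file memory-maps)
ds = concatenate_datasets([Dataset.from_file(p) for p in shard_paths])

# A single save_to_disk then produces a folder that load_from_disk understands
ds.save_to_disk("path/to/final_dataset")
```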
### Your contribution
Time permitting I can look into `.from_generator` to see if adding `state.json` is feasible.
|
https://github.com/huggingface/datasets/issues/7420
|
open
|
[
"enhancement"
] | 2025-02-24T22:14:37Z
| 2026-01-05T15:16:35Z
| 3
|
vttrifonov
|
pytorch/torchtitan
| 883
|
[Evaluation] Minimal support for downstream tasks
|
Hello and thanks for the great work,
For now, torchtitan only evaluates the training loss. Do you plan to provide minimal support for a downstream task, for example a general-knowledge score on MMLU?
The aim would be to provide the minimum necessary to run a downstream task, a bit like the minimal example with a HuggingFace dataset (c4 in this case), while keeping the native PyTorch spirit as much as possible.
If so, can I participate by initiating a PR?
|
https://github.com/pytorch/torchtitan/issues/883
|
closed
|
[
"enhancement",
"high priority",
"triage review"
] | 2025-02-24T16:07:57Z
| 2025-07-10T12:30:00Z
| 14
|
K-H-Ismail
|
huggingface/open-r1
| 413
|
How many resources are required to train deepseek r1 671b using grpo?
|
.
|
https://github.com/huggingface/open-r1/issues/413
|
open
|
[] | 2025-02-24T11:55:12Z
| 2025-02-24T11:55:12Z
| null |
LiuShixing
|
huggingface/safetensors
| 577
|
Could I get safe tensor without lazy loading?
|
### System Info
I see `safe_open` and `deserialize`; it seems that both are lazy loading.
If I want to load safetensors without lazy loading,
how could I do that? Thanks.
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Reproduction
I use sglang, and in sglang's model_loader/weight_utils.py
it loads safetensors like this:
```python
if not is_all_weights_sharded:
    with safe_open(st_file, framework="pt") as f:
        for name in f.keys():  # noqa: SIM118
            param = f.get_tensor(name)
            yield name, param
else:
    result = load_file(st_file, device="cpu")
    for name, param in result.items():
        yield name, param
```
I found that it loads safetensors too slowly (about 20min+), regardless of whether is_all_weights_sharded is True.
If I prefetch the safetensors files before load_model (e.g. `cat * > /dev/null`), it only costs about 5min.
I tried using a ThreadPoolExecutor to parallelize this code, and although get_tensor itself can be quick, loading the weights still costs 20min+, so I suspect the lazy loading. Thanks.
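For what it's worth, one way to avoid lazy access entirely is to read the whole file into memory first and then deserialize from the bytes — a sketch, not sglang's actual code path; whether it helps depends on whether disk I/O (page cache) is the real bottleneck, which the `cat * > /dev/null` experiment suggests:
```python
from safetensors.torch import load

st_file = "model.safetensors"  # hypothetical shard path

# Read the entire file into memory up front (no mmap / lazy access),
# then deserialize all tensors from the in-memory bytes.
with open(st_file, "rb") as fh:
    data = fh.read()

tensors = load(data)  # dict of {name: torch.Tensor}
for name, param in tensors.items():
    print(name, tuple(param.shape))
```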
### Expected behavior
without lazy loading
|
https://github.com/huggingface/safetensors/issues/577
|
open
|
[] | 2025-02-24T07:55:33Z
| 2025-03-13T16:51:49Z
| 1
|
voidxb
|
pytorch/xla
| 8,738
|
support more op in jaten.py
|
## ❓ Questions and Help
Hi, I want to convert the llama2-7b model, and I want to use jlibrary.register_jax_composite to turn some ops into composites.
Right now I need to composite the 2 ops below: torch.nn.RMSNorm and transformers.models.llama.modeling_llama.LlamaRotaryEmbedding.
Do you have plans to add the above 2 ops to jaten.py?
[xla](https://github.com/pytorch/xla/tree/master)/[torchax](https://github.com/pytorch/xla/tree/master/torchax)/[torchax](https://github.com/pytorch/xla/tree/master/torchax/torchax)/[ops](https://github.com/pytorch/xla/tree/master/torchax/torchax/ops)/jaten.py
Another question: after using jlibrary.register_jax_composite, I get a call op in the IR. Do you have plans to replace the call op with a composite op? If so, is there an approximate time for completion?
|
https://github.com/pytorch/xla/issues/8738
|
closed
|
[
"question",
"torchxla2"
] | 2025-02-24T06:10:50Z
| 2025-03-04T06:06:09Z
| null |
raninbowlalala
|
huggingface/trl
| 2,941
|
How to dynamically adjust params during grpo training?
|
How to dynamically adjust params during training? For example, I want to adopt a smaller num_generations(8) at the beginning of grpo training, and enlarge it to 32 and also adopt a larger temperature from the 50th step.
|
https://github.com/huggingface/trl/issues/2941
|
open
|
[
"β question",
"π GRPO"
] | 2025-02-24T02:08:52Z
| 2025-02-24T07:49:10Z
| null |
Tomsawyerhu
|
huggingface/open-r1
| 406
|
How many GPU hours you take to train a simple model?
|
I wonder how many hours you take to use this repo to train a simple model, like DeepSeek-R1-Distill-Qwen-1.5B or DeepSeek-R1-Distill-Qwen-7B, if on 8 H100?
|
https://github.com/huggingface/open-r1/issues/406
|
closed
|
[] | 2025-02-24T00:27:52Z
| 2025-02-24T06:31:31Z
| null |
Red-Scarff
|
huggingface/safetensors
| 576
|
How to access header with python
|
Is there a way to access the header in Python to know the offsets of each tensor data?
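One common approach is to parse the header directly, based on the documented file layout: the first 8 bytes are a little-endian unsigned 64-bit header length, followed by a JSON header whose per-tensor entries contain `dtype`, `shape`, and `data_offsets`. A sketch (file name is illustrative):
```python
import json
import struct

def read_safetensors_header(path: str) -> dict:
    """Return the JSON header of a .safetensors file, including per-tensor data_offsets."""
    with open(path, "rb") as f:
        header_len = struct.unpack("<Q", f.read(8))[0]  # u64 little-endian header size
        header = json.loads(f.read(header_len))
    return header

header = read_safetensors_header("model.safetensors")
for name, info in header.items():
    if name == "__metadata__":
        continue
    # data_offsets are relative to the start of the byte buffer that follows the header
    print(name, info["dtype"], info["shape"], info["data_offsets"])
```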
|
https://github.com/huggingface/safetensors/issues/576
|
closed
|
[] | 2025-02-23T17:42:46Z
| 2025-03-13T16:58:36Z
| null |
justinchuby
|
huggingface/diffusers
| 10,878
|
How to expand peft.LoraConfig
|
If expanding peft.LoraConfig, how should it be modified to accommodate more LoRA variants?
|
https://github.com/huggingface/diffusers/issues/10878
|
open
|
[
"stale"
] | 2025-02-23T14:01:11Z
| 2025-03-25T15:03:28Z
| null |
llm8047
|
huggingface/diffusers
| 10,874
|
Does it support adding LoHa method
|
Does it support adding the LoHa method?
Where can I modify it?
|
https://github.com/huggingface/diffusers/issues/10874
|
open
|
[
"stale"
] | 2025-02-23T12:06:14Z
| 2025-03-25T15:03:41Z
| 3
|
llm8047
|
huggingface/diffusers
| 10,872
|
[Feature request] Please add from_single_file support in SanaTransformer2DModel to support first Sana Apache licensed model
|
**Is your feature request related to a problem? Please describe.**
We all know the Sana model is very good, but unfortunately its LICENSE is restrictive.
Recently a Sana finetuned model was released under the Apache LICENSE. Unfortunately, SanaTransformer2DModel does not support from_single_file, so it cannot be used.
**Describe the solution you'd like.**
```python
import torch
from diffusers import SanaPipeline
from diffusers import SanaTransformer2DModel
model_path = "Efficient-Large-Model/Sana_1600M_1024px_MultiLing"
dtype = torch.float16
transformer = SanaTransformer2DModel.from_single_file (
"Swarmeta-AI/Twig-v0-alpha/Twig-v0-alpha-1.6B-2048x-fp16.pth",
torch_dtype=dtype,
)
pipe = SanaPipeline.from_pretrained(
pretrained_model_name_or_path=model_path,
transformer=transformer,
torch_dtype=dtype,
use_safetensors=True,
)
pipe.to("cuda")
pipe.enable_model_cpu_offload()
pipe.enable_vae_slicing()
pipe.enable_vae_tiling()
inference_params = {
"prompt": "rose flower",
"negative_prompt": "",
"height": 1024,
"width": 1024,
"guidance_scale": 4.0,
"num_inference_steps": 20,
}
image = pipe(**inference_params).images[0]
image.save("sana.png")
```
```
(venv) C:\aiOWN\diffuser_webui>python sana_apache.py
Traceback (most recent call last):
File "C:\aiOWN\diffuser_webui\sana_apache.py", line 6, in <module>
transformer = SanaTransformer2DModel.from_single_file (
AttributeError: type object 'SanaTransformer2DModel' has no attribute 'from_single_file'
```
**Describe alternatives you've considered.**
No alternatives available as far as I know
**Additional context.**
N.A.
|
https://github.com/huggingface/diffusers/issues/10872
|
closed
|
[
"help wanted",
"Good second issue",
"contributions-welcome",
"roadmap"
] | 2025-02-23T11:36:21Z
| 2025-03-10T03:08:32Z
| 5
|
nitinmukesh
|
pytorch/ao
| 1,764
|
[QST] Tensor subclass serialization
|
Pardon the naive question, trying to understand how to implement a basic tensor subclass.
The problem I'm encountering is that the tensor subclass loses its attributes after calling torch.save on a state dict containing the subclass likely due to the use of `swap_tensors`.
Minimal repro:
```python
from io import BytesIO
import torch
from torch._ops import OpOverload
from torchao.dtypes.nf4tensor import _INNER_TENSOR_NAMES_FOR_SHARDING, NF4Tensor, to_nf4
aten = torch.ops.aten
class SimpleTensor(torch.Tensor):
@staticmethod
def __new__(cls, inner_tensor, *args, **kwargs):
kwargs["device"] = inner_tensor.device
kwargs["layout"] = inner_tensor.layout
kwargs["dtype"] = inner_tensor.dtype
kwargs["requires_grad"] = inner_tensor.requires_grad
print(f"New SimpleTensor: {kwargs}")
return torch.Tensor._make_wrapper_subclass(cls, inner_tensor.shape, **kwargs) # type: ignore[attr-defined]
def __init__(self, inner_tensor, *args, **kwargs):
self.inner_tensor = inner_tensor
def __repr__(self):
return f"SimpleTensor({self.inner_tensor.shape})"
def __tensor_flatten__(self):
return ["inner_tensor"], None
def __tensor_unflatten__(inner_tensors, metadata, outer_size, outer_stride):
return SimpleTensor(inner_tensors["inner_tensor"])
@classmethod
def __torch_function__(cls, func, types, args=(), kwargs=None):
kwargs = {} if kwargs is None else kwargs
try:
print(f"calling {func.__name__} with args: {[type(arg) for arg in args]} and kwargs: {kwargs}")
with torch._C.DisableTorchFunctionSubclass():
return func(*args, **kwargs)
except Exception as e:
print(f"ERR: subclass doesn't implement {func}")
raise e
def __torch_dispatch__(self, func: OpOverload, types, args=(), kwargs=None):
FUNCS = [aten.detach.default, aten.copy_.default]
print(f"dispatching {func._schema.name} {func._opname} {func._overloadname} with {len(args)} args: {[type(arg) for arg in args]} and kwargs: {kwargs}")
print(f"Func in impelmented funcs: {func in FUNCS}")
if func is torch.ops.aten.detach.default:
print(f"returning {args[0]}")
return args[0]
if func is aten.copy_.default:
print(f"copying {args[0]} to {args[1]}")
original = args[0]
copy_in = args[1]
original.inner_tensor.copy_(copy_in.inner_tensor)
return
return func(*args, **kwargs)
torch.serialization.add_safe_globals([SimpleTensor])
###
dtype = torch.bfloat16
device = "cuda"
batch_size = 2
in_features = 256
out_features = 128
original_tensor = torch.randn(out_features, in_features, dtype=dtype, device=device)
print("\n=================== SimpleTensor =================================\n")
simple_tensor = SimpleTensor(original_tensor)
try:
print(f"Simple tensor: {simple_tensor.inner_tensor.shape}")
except Exception as e:
print(f"Simple tensor error: {e}")
torch.utils.swap_tensors(original_tensor, simple_tensor)
try:
print(f"Swapped tensor: {original_tensor.inner_tensor.shape}")
except Exception as e:
print(f"Swapped tensor error: {e}")
buffer = BytesIO()
torch.save({"weight": original_tensor}, buffer)
buffer.seek(0)
try:
state_dict = torch.load(buffer)
except Exception as e:
print(f"State load error: {e}")
try:
restored_tensor = state_dict['weight']
print(f"Restored tensor: {restored_tensor.inner_tensor.shape}")
except Exception as e:
print(f"Restored tensor error: {e}")
print("\n=================== NF4Tensor =================================\n")
original_tensor = torch.randn(out_features, in_features, dtype=dtype, device=device)
nf4_tensor = to_nf4(original_tensor)
try:
for name in _INNER_TENSOR_NAMES_FOR_SHARDING:
print(f"NF4 tensor {name}: {getattr(nf4_tensor, name).shape}")
except Exception as e:
print(f"NF4 tensor error: {e}")
torch.utils.swap_tensors(original_tensor, nf4_tensor)
try:
for name in _INNER_TENSOR_NAMES_FOR_SHARDING:
print(f"Swapped tensor {name}: {getattr(original_tensor, name).shape}")
except Exception as e:
print(f"Swapped tensor Error: {e}")
buffer = BytesIO()
torch.save({"weight": original_tensor}, buffer)
buffer.seek(0)
state_dict = torch.load(buffer)
try:
restored_tensor = state_dict['weight']
for name in _INNER_TENSOR_NAMES_FOR_SHARDING:
print(f"State dict {name}: {getattr(restored_tensor, name).shape}")
except Exception as e:
print(f"State dict error: {e}")
```
Running the above prints an error while loading the state dict for `SimpleTensor` with `weights_only=True`, even after registering `SimpleTensor` as safe (`torch.serialization.add_safe_globals([SimpleTensor])`):
```
State load error: Weights only load failed. In PyTorch 2.6, we changed the default value of the `weights_only` argument i
|
https://github.com/pytorch/ao/issues/1764
|
open
|
[
"question"
] | 2025-02-23T03:25:05Z
| 2025-03-01T19:32:57Z
| null |
jeromeku
|
huggingface/lerobot
| 761
|
How to convert from custom dataset format to LeRobotDataset format?
|
I'm trying to train a LeRobot model on some custom data I've recorded on a custom robot, but first, I need to convert that custom data into the correct format for LeRobotDataset. I'm guessing that an example of how to do this is in the `pusht_zarr.py` file.
Questions:
1) Is the example in `pusht_zarr.py` the proper way to do this dataset format conversion?
2) I only care about predicting future actions, so I don't need a `reward` or `success` field for each frame. Can I omit these fields or should I put a dummy value for them? e.g. in these lines of code below in `pusht_zarr.py`, can I omit the `next.reward` and `next.success` fields or must I put some dummy values for them? (and if so, what are the recommended dummy values?)
```
frame = {
    "action": torch.from_numpy(action[i]),
    # Shift reward and success by +1 until the last item of the episode
    "next.reward": reward[i + (frame_idx < num_frames - 1)],
    "next.success": success[i + (frame_idx < num_frames - 1)],
}
```
|
https://github.com/huggingface/lerobot/issues/761
|
closed
|
[] | 2025-02-22T02:35:36Z
| 2025-02-25T19:39:08Z
| null |
pqrsqwewrty
|
huggingface/trl
| 2,922
|
How to support multi-device VLLM inference in the GRPO Trainer
|
https://github.com/huggingface/trl/blob/e5ae703d352b29537159180087ef8bd4b41bf625/trl/trainer/grpo_trainer.py#L439-L461
In the current GRPO implementation, VLLM can only run on a single GPU, which becomes a performance bottleneck. For example, in an 8-GPU setup, the remaining 7 GPUs have to wait for 1 GPU to complete inference, and it also can't accommodate larger models.
How can we enable VLLM to run on multiple GPUs? The only concern is that we need to figure out a way to update the parameters across multiple GPUs each time the model is reloaded:
https://github.com/huggingface/trl/blob/e5ae703d352b29537159180087ef8bd4b41bf625/trl/trainer/grpo_trainer.py#L624-L653
|
https://github.com/huggingface/trl/issues/2922
|
open
|
[
"β¨ enhancement",
"π GRPO"
] | 2025-02-21T09:24:51Z
| 2025-03-14T02:45:21Z
| null |
0x404
|
huggingface/safetensors
| 575
|
How to change the model weights in safetensors?
|
### Feature request
For example, I want to change some weights with shape [K, K, C] into [K, K, C/2]. How can I achieve this kind of hacking?
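A minimal sketch of one way this is sometimes done outside of any framework — load the tensors eagerly, edit the shapes you care about, and write a new file. The file name, tensor name, and the slicing rule (keeping the first C/2 channels) are assumptions for illustration:
```python
from safetensors.torch import load_file, save_file

path = "model.safetensors"       # hypothetical input file
tensors = load_file(path)        # dict of {name: torch.Tensor}

name = "some.layer.weight"       # hypothetical tensor with shape [K, K, C]
w = tensors[name]
tensors[name] = w[:, :, : w.shape[-1] // 2].contiguous()  # now [K, K, C/2]

save_file(tensors, "model_edited.safetensors")
```
Note that any code consuming the file must of course expect the new shape.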
### Motivation
N/A
### Your contribution
N/A
|
https://github.com/huggingface/safetensors/issues/575
|
open
|
[] | 2025-02-21T03:36:27Z
| 2025-03-13T16:59:32Z
| null |
JulioZhao97
|
pytorch/torchtitan
| 875
|
RuntimeError: Got mixed torch.Tensor and DTensor, need to convert all torch.Tensor to DTensor before calling distributed operators
|
When I ran the llama3-8b model with cp on a third party device, I ran into a problem with the error message:
`RuntimeError: npu.npu_fusion_attention.default: got mixed torch.Tensor and DTensor, need to convert all torch.Tensor to DTensor before calling distributed operators.`
npu_fusion_attention is called in the torch.nn.functional.scaled_dot_product_attention function, which is a custom operator. How can I solve this problem? Do I need to register a custom operator somewhere?
|
https://github.com/pytorch/torchtitan/issues/875
|
closed
|
[
"question",
"module: context parallel",
"module: dtensor"
] | 2025-02-21T03:23:27Z
| 2025-02-28T08:30:44Z
| null |
aahehehe
|
huggingface/transformers.js
| 1,201
|
Unable to convert Janus models to ONNX
|
### Question
I see that @xenova has successfully exported Janus-1.3B and Janus-Pro-1B to ONNX, presumably using some version of scripts/convert.py. We are interested in exporting Janus-Pro-7B to ONNX as well, but have not been able to do so using this script (nor any other path). Attempting to convert either of the previous two models encounters the same errors, so hopefully whatever steps were taken to convert those will also enable the 7B version.
The initial error was:
```
ValueError: The checkpoint you are trying to load has model type `multi_modality` but Transformers does not recognize this architecture. This could be because of an issue with the checkpoint, or because your version of Transformers is out of date.
```
This was fixed by installing https://github.com/deepseek-ai/Janus and adding
`from janus.models import MultiModalityCausalLM`
to convert.py.
The error that I'm now stuck at is:
```
KeyError: "Unknown task: any-to-any. Possible values are: `audio-classification` for AutoModelForAudioClassification, `audio-frame-classification` for AutoModelForAudioFrameClassification, `audio-xvector` for AutoModelForAudioXVector, `automatic-speech-recognition` for ('AutoModelForSpeechSeq2Seq', 'AutoModelForCTC'), `depth-estimation` for AutoModelForDepthEstimation, `feature-extraction` for AutoModel, `fill-mask` for AutoModelForMaskedLM, `image-classification` for AutoModelForImageClassification, `image-segmentation` for ('AutoModelForImageSegmentation', 'AutoModelForSemanticSegmentation'), `image-to-image` for AutoModelForImageToImage, `image-to-text` for AutoModelForVision2Seq, `mask-generation` for AutoModel, `masked-im` for AutoModelForMaskedImageModeling, `multiple-choice` for AutoModelForMultipleChoice, `object-detection` for AutoModelForObjectDetection, `question-answering` for AutoModelForQuestionAnswering, `semantic-segmentation` for AutoModelForSemanticSegmentation, `text-to-audio` for ('AutoModelForTextToSpectrogram', 'AutoModelForTextToWaveform'), `text-generation` for AutoModelForCausalLM, `text2text-generation` for AutoModelForSeq2SeqLM, `text-classification` for AutoModelForSequenceClassification, `token-classification` for AutoModelForTokenClassification, `zero-shot-image-classification` for AutoModelForZeroShotImageClassification, `zero-shot-object-detection` for AutoModelForZeroShotObjectDetection"
```
I can't find anything about optimum supporting this task, so it is unclear to me how @xenova was able to get around this.
Any insight or assistance would be greatly appreciated.
|
https://github.com/huggingface/transformers.js/issues/1201
|
open
|
[
"question"
] | 2025-02-20T17:55:00Z
| 2025-08-19T12:55:58Z
| null |
turneram
|
huggingface/datasets
| 7,415
|
Shard Dataset at specific indices
|
I have a dataset of sequences, where each example in the sequence is a separate row in the dataset (similar to LeRobotDataset). When running `Dataset.save_to_disk`, how can I provide the indices at which to shard the dataset, so that no episode spans more than 1 shard? Then, when I run `Dataset.load_from_disk`, how can I load just a subset of the shards to save memory and time on different ranks?
I guess an alternative to this would be, given a loaded `Dataset`, how can I run `Dataset.shard` such that sharding doesn't split any episode across shards?
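In case it helps frame the discussion, a sketch of the manual fallback I would expect: slice the dataset at episode boundaries with `select`, save each slice as its own folder, and have each rank load only its slices. The `boundaries` list (episode start row indices plus the total length) and the directory layout are assumptions for illustration:
```python
from datasets import Dataset, load_from_disk

def save_episode_aligned_shards(ds: Dataset, boundaries: list[int], out_dir: str) -> None:
    """boundaries: row indices where shards may start, e.g. [0, 1200, 2400, ..., len(ds)]."""
    for i, (start, end) in enumerate(zip(boundaries[:-1], boundaries[1:])):
        ds.select(range(start, end)).save_to_disk(f"{out_dir}/shard_{i:05d}")

def load_rank_shards(out_dir: str, shard_ids: list[int]) -> list[Dataset]:
    """On each rank, load only the shards assigned to that rank."""
    return [load_from_disk(f"{out_dir}/shard_{i:05d}") for i in shard_ids]
```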
|
https://github.com/huggingface/datasets/issues/7415
|
open
|
[] | 2025-02-20T10:43:10Z
| 2025-02-24T11:06:45Z
| 3
|
nikonikolov
|
huggingface/trl
| 2,913
|
How to specify the GPU used by vllm
|
https://github.com/huggingface/trl/blob/a92e00e810762548787fadd5c4a5e6fc13a4928a/trl/trainer/grpo_trainer.py#L392
I have an 8-GPU server, of which only the last two GPUs are available, so I set CUDA_VISIBLE_DEVICES=6,7 and the value of torch.cuda.device_count() is 2. I want to load vLLM onto GPU 6, and I set vllm_device=cuda:6, but this line of code keeps giving a ValueError. What should I do?
|
https://github.com/huggingface/trl/issues/2913
|
closed
|
[
"β question"
] | 2025-02-20T10:32:30Z
| 2025-02-21T03:14:13Z
| null |
xiaolizh1
|
huggingface/open-r1
| 381
|
how to set sampling parameters when do evaluation
|
You said you use greedy decoding to reproduce DeepSeek's evaluation results, but I get a different score, so something may not be aligned. I want to know how to set the sampling parameters, and how to see them, when I use `evaluate.py` to do evaluation.
|
https://github.com/huggingface/open-r1/issues/381
|
open
|
[] | 2025-02-20T08:41:26Z
| 2025-02-24T06:57:59Z
| null |
ItGirls
|
huggingface/open-r1
| 380
|
How to set cuda device for your data generation pipeline
|
Hi author, thanks for your work.
When I use your pipeline to generate a dataset (deepseek-ai/DeepSeek-R1-Distill-Qwen-7B),
I find I cannot set the device with os.environ.

It actually always runs on cuda:0; how can I set it correctly? Thank you!
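In case it's useful while waiting for an answer: `CUDA_VISIBLE_DEVICES` generally only takes effect if it is set before CUDA is initialized in the process, i.e. before torch/vLLM start using the GPU (or exported in the shell that launches the script). A sketch of the ordering, with example device ids:
```python
import os

# Must happen before torch / vllm initialize CUDA in this process
os.environ["CUDA_VISIBLE_DEVICES"] = "2,3"

import torch  # noqa: E402  (imported after setting the variable on purpose)

print(torch.cuda.device_count())  # should now report only the selected GPUs
```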
|
https://github.com/huggingface/open-r1/issues/380
|
open
|
[] | 2025-02-20T07:06:44Z
| 2025-02-20T07:06:44Z
| null |
Aristo23333
|
pytorch/xla
| 8,728
|
Debug XLA using GDB
|
## ❓ Questions and Help
I would like to debug XLA code using gdb via C++/Python Debugger, which means that I need a _XLAC.cpython-310-x86_64-linux-gnu.so built in debug mode to have debug symbols, just like DCMAKE_BUILD_TYPE=Debug. I don't know how to get this artifact.
Thanks for your help.
|
https://github.com/pytorch/xla/issues/8728
|
closed
|
[] | 2025-02-20T03:22:41Z
| 2025-02-20T08:00:00Z
| 2
|
yuanfz98
|
huggingface/transformers
| 36,293
|
Bug in v4.49 where the attention mask is ignored during generation (t5-small)
|
### System Info
Hi all!
First, thank you very much for your hard work and for making these features available.
I'm seeing a bug after updating to v4.49 where the output changes even though the attention mask should be masking padded values. Below is a script to reproduce the error.
It will tokenize two prompts, and then call `.generate` on the shorter prompt while trying different slices of the padded `input_ids` and padded `attention_mask`. At some point, the generated response will change for v4.49 but not v4.48.
Environment information
```
- `transformers` version: 4.49.0
- Platform: macOS-15.3-arm64-arm-64bit
- Python version: 3.10.13
- Huggingface_hub version: 0.29.0
- Safetensors version: 0.5.2
- Accelerate version: not installed
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (GPU?): 2.6.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: No
```
output of `uv pip compile requirements.in`
```
transformers==4.48.0 # change this to 4.49.0 to reproduce the error
asttokens==3.0.0
certifi==2025.1.31
charset-normalizer==3.4.1
decorator==5.1.1
exceptiongroup==1.2.2
executing==2.2.0
filelock==3.17.0
fsspec==2025.2.0
huggingface-hub==0.29.0
idna==3.10
ipython==8.32.0
jedi==0.19.2
jinja2==3.1.5
markupsafe==3.0.2
matplotlib-inline==0.1.7
mpmath==1.3.0
networkx==3.4.2
numpy==2.2.3
packaging==24.2
parso==0.8.4
pexpect==4.9.0
prompt-toolkit==3.0.50
ptyprocess==0.7.0
pure-eval==0.2.3
pygments==2.19.1
pyyaml==6.0.2
regex==2024.11.6
requests==2.32.3
safetensors==0.5.2
sentencepiece==0.2.0
stack-data==0.6.3
sympy==1.13.1
tokenizers==0.21.0
torch==2.6.0
tqdm==4.67.1
traitlets==5.14.3
typing-extensions==4.12.2
urllib3==2.3.0
wcwidth==0.2.13
```
### Who can help?
@ArthurZucker
### Information
- [x] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, GenerationConfig
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")
tokenizer = AutoTokenizer.from_pretrained("t5-small")
cfg = GenerationConfig(
max_new_tokens=512,
do_sample=False,
use_cache=True, # same behavior with use_cache=False
)
shortprompt = ("summarize: Transformers v4.49 appears to have a bug where .generate stops respecting "
"the attention_mask after some number of tokens.")
longprompt = ("summarize: I enjoy walking with my cute dog, especially in the early mornings "
"when the air is crisp and the streets are quiet. Watching my dog happily trot along, "
"always brings a smile to my face.")
# ---
print("# Single prompt ---")
inputs = tokenizer(
[shortprompt], return_tensors="pt", padding=True
)
outputs = model.generate(**inputs, generation_config=cfg)
expected = tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]
print(f"short prompt: '{expected}'")
print()
# ---
print("# Double prompt ---")
inputs = tokenizer(
[shortprompt, longprompt], return_tensors="pt", padding=True
)
outputs = model.generate(**inputs, generation_config=cfg)
text = tokenizer.batch_decode(outputs, skip_special_tokens=True)
print(f"short prompt: '{text[0]}'")
print(f"long prompt: '{text[1]}'")
print()
# ---
print("# Single shortprompt with mask ---")
def run_sliced_input(slice_, show_text=False):
shortprompt_tokens = inputs.input_ids[0:1, slice_]
shortprompt_mask = inputs.attention_mask[0:1, slice_]
outputs = model.generate(inputs=shortprompt_tokens, attention_mask=shortprompt_mask, generation_config=cfg)
text = tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]
if show_text:
print(f"'{text}'")
return text != expected
# run a bisect search to find the first slice that fails
import bisect
start = inputs.attention_mask[0].sum().item()
full_range = inputs.attention_mask.size(1)
ends = range(start, full_range)
print(f"searching in range {start} to {full_range}")
first_failure = start + bisect.bisect_left(
[slice(None, end) for end in ends], True, key=run_sliced_input
)
if first_failure == full_range:
print("No failure found in the full range!")
else:
print(f"First failing slice: {first_failure}")
print(f"Output with slice at {first_failure-1}: ", end="")
run_sliced_input(slice(None, first_failure-1), show_text=True)
print(f"Output with slice at {first_failure}: ", end="")
run_sliced_input(slice(None, first_failure), show_text=True)
```
### Expected behavior
version 4.48
```
# Single prompt ---
short prompt: 'v4.49 appears to have a bug where.generate stops respecting the attention_mask after some tokens.'
# Double prompt ---
short prompt: 'v4.49 appears to have a bug w
|
https://github.com/huggingface/transformers/issues/36293
|
closed
|
[
"bug"
] | 2025-02-20T02:16:23Z
| 2025-02-20T16:28:11Z
| null |
bdhammel
|
pytorch/xla
| 8,727
|
Create a site map or centralize links in README
|
## 📚 Documentation
Add repo map to https://github.com/pytorch/xla/blob/master/README.md. Currently we have many helpful links, but they are spread around the repo. We should have a location with these centralized to help people find useful documentation easily.
|
https://github.com/pytorch/xla/issues/8727
|
closed
|
[
"documentation"
] | 2025-02-20T00:04:49Z
| 2025-03-24T18:58:57Z
| 1
|
pgmoka
|
pytorch/xla
| 8,726
|
Add documentation on xla_native_functions.yaml categories
|
## 📚 Documentation
Add more information to https://github.com/pytorch/xla/blob/60160233ad413f030da1e7e383cc85950bcf347c/codegen/xla_native_functions.yaml#L3 on what the different categories mean in terms of lowering operations
|
https://github.com/pytorch/xla/issues/8726
|
open
|
[
"documentation"
] | 2025-02-19T21:57:04Z
| 2025-02-20T12:54:47Z
| 2
|
pgmoka
|
pytorch/xla
| 8,725
|
Add operation lowering unit tests to test_operations.py
|
## 🚀 Feature
We should expand test/test_operations to check that operations are actually being lowered. We have previously seen issues caused by this (see https://github.com/pytorch/xla/issues/4032 and https://github.com/pytorch/xla/issues/8713). An example of this test can be seen in https://github.com/pytorch/xla/pull/8686.
We should study whether it is possible to generalize this test and expand it to cover our other lowered operations.
## Motivation
Improve our unit tests to expand coverage while continuing to be readable
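As a starting point for discussion, a sketch of what a generalized check might look like — it assumes that CPU fallbacks show up as `aten::<op>` counters in the metrics report, as in the linked examples, and it is not a final design (counters would likely need clearing between tests):
```python
import unittest

import torch
import torch_xla.core.xla_model as xm
import torch_xla.debug.metrics as met


class TestOpIsLowered(unittest.TestCase):
    def _assert_no_aten_fallback(self, op_name: str):
        # A CPU fallback would show up as an aten::<op> counter in the metrics report
        self.assertNotIn(f"aten::{op_name}", met.metrics_report())

    def test_mul_is_lowered(self):
        device = xm.xla_device()
        a = torch.randn(4, 4, device=device)
        b = torch.randn(4, 4, device=device)
        (a * b).cpu()  # force graph execution
        self._assert_no_aten_fallback("mul")
```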
|
https://github.com/pytorch/xla/issues/8725
|
open
|
[
"testing"
] | 2025-02-19T20:18:28Z
| 2025-03-04T22:56:09Z
| 1
|
pgmoka
|
pytorch/torchtitan
| 862
|
SimpleFSDP vs. FSDP2
|
Hi @tianyu-l , just came across [SimpleFSDP](https://arxiv.org/pdf/2411.00284) and its [implementation](https://github.com/facebookresearch/capi/blob/main/fsdp.py) (nice project!).
In the paper, SimpleFSDP is extensively compared with FSDP2. May I know if torchtitan is going to support it or there is a way to somehow combine SimpleFSDP and FSDP2?
|
https://github.com/pytorch/torchtitan/issues/862
|
closed
|
[
"question"
] | 2025-02-19T20:16:58Z
| 2025-02-20T08:18:36Z
| null |
yenchenlin
|
huggingface/optimum-nvidia
| 176
|
How to run whisper after #133
|
I see that previously, whisper could be run as follows: [https://github.com/huggingface/optimum-nvidia/blob/whisper-inference/examples/automatic-speech-recognition/whisper.py](https://github.com/huggingface/optimum-nvidia/blob/whisper-inference/examples/automatic-speech-recognition/whisper.py)
But after #133 the code has been significantly refactored. Is there any documentation that shows how to properly run whisper with a tensorRT backend?
```python
from optimum.nvidia.pipelines import pipeline
asr = pipeline("automatic-speech-recognition", model="openai/whisper-base", device=device)
> NotImplementedError: Model type whisper is not currently supported
```
```python
from optimum.nvidia.models.whisper import WhisperForConditionalGeneration
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-base", torch_dtype=torch_dtype)
> AttributeError: type object 'WhisperForConditionalGeneration' has no attribute 'from_pretrained'
```
|
https://github.com/huggingface/optimum-nvidia/issues/176
|
open
|
[] | 2025-02-19T17:45:01Z
| 2025-02-19T17:45:01Z
| null |
huggingfacename
|
pytorch/xla
| 8,722
|
Add args documentation to xla.launch
|
## 📚 Documentation
In https://github.com/pytorch/xla/blob/60160233ad413f030da1e7e383cc85950bcf347c/torch_xla/torch_xla.py#L212, we should have arguments be documented to note that:
1) The callable function's first argument is the process id;
2) The args tuple is passed to the callable function afterwards.
The call pattern is something like: Callable(process_id, *args).
We should make this clear from the method call.
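For illustration, a sketch of the call pattern the docs could show — the `torch_xla.launch(fn, args=...)` signature here is an assumption based on the linked torch_xla.py, and the argument names are illustrative:
```python
import torch_xla

def _mp_fn(index, lr, steps):
    # index is the process id injected by launch; lr and steps come from args
    print(f"process {index}: lr={lr}, steps={steps}")

if __name__ == "__main__":
    # args is forwarded after the process id, i.e. _mp_fn(index, *args)
    torch_xla.launch(_mp_fn, args=(1e-3, 100))
```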
|
https://github.com/pytorch/xla/issues/8722
|
closed
|
[
"documentation"
] | 2025-02-19T17:44:44Z
| 2025-02-20T18:21:22Z
| 1
|
pgmoka
|
huggingface/peft
| 2,388
|
ValueError: Target module Qwen2_5_VisionTransformerPretrainedModel is not supported.
|
## Context
I'm fine-tuning the Qwen2.5-VL model with swift for data extraction using LoRA. I'm not sure what the correct way is to save and upload the adapter so that it can be reloaded correctly.
In short, I followed these steps
```python
# load model
model, processor = get_model_tokenizer(
'Qwen/Qwen2.5-VL-3B-Instruct',
torch_dtype=torch.bfloat16,
use_hf=True,
attn_impl="flash_attn",
)
# get lora
...
model_arch = get_model_arch(model.model_meta.model_arch)
lora_config = LoraConfig(
task_type='CAUSAL_LM',
r=4,
lora_alpha=8,
lora_dropout=0.05,
use_rslora=True,
target_modules=get_multimodal_target_regex(
model_arch,
freeze_llm=False,
freeze_vit=False,
freeze_aligner=True
),
)
model = Swift.prepare_model(model, lora_config)
# training config and run
...
trainer = Seq2SeqTrainer(
model=model,
args=training_args,
data_collator=template.data_collator,
train_dataset=train_dataset,
eval_dataset=val_dataset,
template=template,
callbacks= [
EarlyStoppingCallback(
early_stopping_patience=6,
early_stopping_threshold=0.001
)
]
)
stats = trainer.train()
# push adapter
model.push_to_hub(f"tech4humans/{model_name}", private=True)
```
While debugging, I saw that the PEFT model was loaded with the class `PeftModelForCausalLM`.
## Problem
Then, when I tried to reload the adapter, I got an error from peft
```python
from transformers import Qwen2_5_VLForConditionalGeneration
model = Qwen2_5_VLForConditionalGeneration.from_pretrained("Qwen/Qwen2.5-VL-3B-Instruct", device_map="auto")
model.load_adapter("tech4humans/Qwen2.5-VL-3B-Instruct-r4-tuned")
```
```python
/usr/local/lib/python3.10/dist-packages/peft/tuners/lora/model.py in _create_new_module(lora_config, adapter_name, target, **kwargs)
345 if new_module is None:
346 # no module could be matched
--> 347 raise ValueError(
348 f"Target module {target} is not supported. Currently, only the following modules are supported: "
349 "`torch.nn.Linear`, `torch.nn.Embedding`, `torch.nn.Conv1d`, `torch.nn.Conv2d`, `torch.nn.Conv3d`, ".
ValueError: Target module Qwen2_5_VisionTransformerPretrainedModel(
(patch_embed): Qwen2_5_VisionPatchEmbed(
(proj): Conv3d(3, 1280, kernel_size=(2, 14, 14), stride=(2, 14, 14), bias=False)
)
(rotary_pos_emb): Qwen2_5_VisionRotaryEmbedding()
(blocks): ModuleList(
(0-31): 32 x Qwen2_5_VLVisionBlock(
(norm1): Qwen2RMSNorm((1280,), eps=1e-06)
(norm2): Qwen2RMSNorm((1280,), eps=1e-06)
(attn): Qwen2_5_VLVisionSdpaAttention(
(qkv): Linear(in_features=1280, out_features=3840, bias=True)
(proj): Linear(in_features=1280, out_features=1280, bias=True)
)
(mlp): Qwen2_5_VLMLP(
(gate_proj): Linear(in_features=1280, out_features=3420, bias=True)
(up_proj): Linear(in_features=1280, out_features=3420, bias=True)
(down_proj): Linear(in_features=3420, out_features=1280, bias=True)
(act_fn): SiLU()
)
)
)
(merger): Qwen2_5_VLPatchMerger(
(ln_q): Qwen2RMSNorm((1280,), eps=1e-06)
(mlp): Sequential(
(0): Linear(in_features=5120, out_features=5120, bias=True)
(1): GELU(approximate='none')
(2): Linear(in_features=5120, out_features=2048, bias=True)
)
)
) is not supported. Currently, only the following modules are supported: `torch.nn.Linear`, `torch.nn.Embedding`, `torch.nn.Conv1d`, `torch.nn.Conv2d`, `torch.nn.Conv3d`, `transformers.pytorch_utils.Conv1D`, `torch.nn.MultiheadAttention.`.
```
## Sytem info
```
transformers 4.50.0.dev0
peft 0.14.1.dev0
ms-swift 3.2.0.dev0
Python 3.10.12
CUDA Version: 12.6
```
Am I missing something or doing something wrong? Any pointers would be appreciated. Thanks!
|
https://github.com/huggingface/peft/issues/2388
|
closed
|
[] | 2025-02-19T15:09:17Z
| 2025-04-09T16:23:53Z
| 8
|
samuellimabraz
|
huggingface/trl
| 2,905
|
How to use GRPOTrainer to train a LLM for code generation? What is the format of the dataset?
|
https://github.com/huggingface/trl/issues/2905
|
open
|
[] | 2025-02-19T12:38:13Z
| 2025-02-19T12:38:13Z
| null |
xiangxinhello
|
|
huggingface/open-r1
| 370
|
how to train grpo on 2 nodes(16gpus)
|
How to train GRPO on 2 nodes (16 GPUs)? Many thanks for providing a successful example.
|
https://github.com/huggingface/open-r1/issues/370
|
closed
|
[] | 2025-02-19T09:15:14Z
| 2025-03-26T11:36:03Z
| null |
glennccc
|
huggingface/finetrainers
| 267
|
How to save the best performing checkpoint during LoRA fine-tuning on Hunyuan Video?
|
In the HunyuanVideo training scripts, we can save checkpoints every 500 steps by passing `--checkpointing_steps 500`. The final model is saved through the following code:
```python
if accelerator.is_main_process:
    transformer = unwrap_model(accelerator, self.transformer)
    if self.args.training_type == "lora":
        transformer_lora_layers = get_peft_model_state_dict(transformer)
        self.model_config["pipeline_cls"].save_lora_weights(
            save_directory=self.args.output_dir,
            transformer_lora_layers=transformer_lora_layers,
        )
    else:
        transformer.save_pretrained(os.path.join(self.args.output_dir, "transformer"))
```
(Reference: https://github.com/a-r-r-o-w/finetrainers/blob/4bb10c62324aef4fbac85bb381acb9f6f39a5076/finetrainers/trainer.py#L837C1-L848C95)
My question is: How can I ensure that I save the best performing model during LoRA fine-tuning? The final saved model might not be the best, as the loss could fluctuate during training. The same applies to intermediate checkpoints. Is there a recommended approach for tracking and saving the best-performing model?
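While waiting for guidance, a framework-agnostic sketch of the usual pattern — evaluate at each checkpoint step, keep the best metric seen so far, and copy that checkpoint aside. The checkpoint naming and the source of `val_loss` are assumptions, not finetrainers API:
```python
import shutil
from pathlib import Path

best_metric = float("inf")
best_dir = Path("output/best_checkpoint")

def maybe_save_best(step: int, val_loss: float, output_dir: str = "output") -> None:
    """Copy the latest checkpoint to best_checkpoint/ whenever the validation loss improves."""
    global best_metric
    if val_loss < best_metric:
        best_metric = val_loss
        src = Path(output_dir) / f"checkpoint-{step}"  # assumed checkpoint folder naming
        if best_dir.exists():
            shutil.rmtree(best_dir)
        shutil.copytree(src, best_dir)
```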
|
https://github.com/huggingface/finetrainers/issues/267
|
open
|
[] | 2025-02-19T07:49:11Z
| 2025-02-21T01:39:30Z
| null |
dingangui
|
huggingface/lerobot
| 748
|
[pi0] confusion about the state embedding dimension in `embed_suffix`
|
### System Info
```Shell
- `lerobot` version: 0.1.0
- Platform: Linux-5.14.0-284.86.1.el9_2.x86_64-x86_64-with-glibc2.35
- Python version: 3.11.11
- Huggingface_hub version: 0.28.1
- Dataset version: 3.2.0
- Numpy version: 1.26.4
- PyTorch version (GPU?): 2.6.0+cu124 (True)
- Cuda version: 12040
- Using GPU in script?: Yes
```
### Information
- [x] One of the scripts in the examples/ folder of LeRobot
- [ ] My own task or dataset (give details below)
### Reproduction
In the model definition of `modeling_pi0.py`,[ line 567](https://github.com/huggingface/lerobot/blob/fe483b1d0d4ad8506f61924d905943eaa6d3ece0/lerobot/common/policies/pi0/modeling_pi0.py#L567), we see that
```
# Embed state
state_emb = self.state_proj(state)
state_emb = state_emb.to(dtype=torch.bfloat16)
embs.append(state_emb[:, None, :])
bsize = state_emb.shape[0]
dtype = state_emb.dtype
device = state_emb.device
```
We see that the state embedding dimension is bumped up at the 1st dimension.
The problem is, models like pi0 usually use datasets that have `n_obs_steps`, which is the default of LeRobot's own datasets as well. For example, if I use the `pusht` dataset as specified in this LeRobot example [script](https://github.com/huggingface/lerobot/blob/main/examples/3_train_policy.py), we see that the dimensions of the dataset look something like this
```
image shape torch.Size([64, 2, 3, 96, 96])
state shape torch.Size([64, 2, 2])
action shape torch.Size([64, 16, 2])
```
The first 2 in the dimensions of image and state come from the fact that the dataset gives you two frames of the past in one batch. The 16 in action comes from the fact that diffusion policy has an action horizon of 16 frames in the future.
Now, if we train on a dataset like this or any similar dataset, there would be a dimension mismatch in `embed_suffix`, because it would bump the state embedding and give you something like
```
RuntimeError: Tensors must have same number of dimensions: got 4 and 3
```
For pi0 it's more or less okay, because the default n_obs_steps is usually 1, so you can squeeze out the 1st dimension of state, but the current way doesn't seem very extensible in the future, and it is also not consistent with LeRobot's usual dataset format.
### Expected behavior
I would like to hear some reasoning behind the design choice like this so I can know if I am misunderstanding something.
Thank you very much in advance!
|
https://github.com/huggingface/lerobot/issues/748
|
closed
|
[
"question",
"policies",
"stale"
] | 2025-02-19T03:33:01Z
| 2025-10-20T02:31:45Z
| null |
IrvingF7
|
pytorch/tutorials
| 3,272
|
Introduction to Libuv TCPStore Backend
|
Thanks for the [article](https://github.com/pytorch/tutorials/blob/main/intermediate_source/TCPStore_libuv_backend.rst). Wondering if you can provide some details about the content of the TCPStore and what its role is in c10d.
|
https://github.com/pytorch/tutorials/issues/3272
|
closed
|
[
"question"
] | 2025-02-18T20:56:09Z
| 2025-04-16T17:57:44Z
| null |
githubsgi
|
huggingface/transformers.js
| 1,198
|
whisper: how to get streaming word level timestamps? (automatic-speech-recognition)
|
### Question
## Goal
- streaming
- word level timestamps
## Issue
`on_chunk_start` / `on_chunk_end` are not called when using `return_timestamps: "word"`.
These callbacks only provide timestamps with `return_timestamps: true`
I also tried to decode tokens, as I've seen it in the demo, but that uses callbacks that no longer exist (e.g. `chunk_callback(chunk)` and `callback_function(item)`)
## Setup
```ts
const transcriber = await pipeline(
"automatic-speech-recognition",
"Xenova/whisper-tiny",
{
device: "webgpu",
}
);
```
```ts
token_callback_function: (tokens) => {
const { feature_extractor } = transcriber.processor;
const { config: modelConfig } = transcriber.model;
const time_precision = feature_extractor.config.chunk_length / modelConfig.max_source_positions;
if (tokens) {
const data = transcriber.tokenizer._decode_asr(
[{ tokens, finalised: false }],
{
time_precision,
return_timestamps: true,
force_full_sequences: false,
}
);
console.log("data", data);
}
};
```
Decoding works, but timestamps are null.
<img width="370" alt="Image" src="https://github.com/user-attachments/assets/38779a91-7a2a-43c3-be29-cd785e294378" />
|
https://github.com/huggingface/transformers.js/issues/1198
|
open
|
[
"question"
] | 2025-02-18T15:29:42Z
| 2025-02-20T04:45:48Z
| null |
getflourish
|
huggingface/diffusers
| 10,817
|
auto_pipeline missing SD3 contol nets
|
### Describe the bug
Hey, auto_pipeline seems to be missing the controlnet variants for SD3.
venv\Lib\site-packages\diffusers\pipelines\auto_pipeline.py
### Reproduction
Load an SD3 model checkpoint with a controlnet using any of the auto pipelines; you will just get the non-controlnet variants, as they are not set in the configuration.
### Logs
```shell
```
### System Info
- π€ Diffusers version: 0.32.2
- Platform: Windows-10-10.0.19045-SP0
- Running on Google Colab?: No
- Python version: 3.12.7
- PyTorch version (GPU?): 2.5.1+cu124 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Huggingface_hub version: 0.27.1
- Transformers version: 4.48.0
- Accelerate version: 1.2.1
- PEFT version: not installed
- Bitsandbytes version: 0.45.2
- Safetensors version: 0.5.2
- xFormers version: not installed
- Accelerator: NVIDIA GeForce RTX 3080 Ti, 12288 MiB
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
_No response_
|
https://github.com/huggingface/diffusers/issues/10817
|
closed
|
[
"bug",
"help wanted",
"contributions-welcome"
] | 2025-02-18T12:54:40Z
| 2025-02-24T16:21:03Z
| 3
|
JoeGaffney
|
huggingface/lerobot
| 746
|
How should I run the model on my own datasets in different envs which is not clearly mentioned in the README?
|
I want to run the diffusion model on my own real-world arm datasets, which differ from the example env and input format in observation and action dims.
I've seen some yaml files storing these parameters in earlier versions of the repo, but I can't find them in the newest version. So should I write these params myself in some yaml-like or json-like files, or are there new ways to solve this?
This is my first issue on GitHub, so the format may be informal, but I'm really eager for answers.
Thank you for your answers!!!
|
https://github.com/huggingface/lerobot/issues/746
|
closed
|
[
"question",
"policies",
"dataset",
"stale"
] | 2025-02-18T12:33:07Z
| 2025-10-19T02:32:17Z
| null |
shi-akihi
|
pytorch/pytorch
| 147,374
|
[ONNX] How to export triton custom kernels as custom ops?
|
### 🐛 Describe the bug
I can't export a triton custom op kernel when using torch.onnx.export(dynamo=True).
I have used triton_op and wrap_triton to wrap this triton kernel:
```python
import torch
from torch.library import triton_op, wrap_triton
import triton
from triton import language as tl
@triton.jit
def add_kernel(
in_ptr0,
in_ptr1,
out_ptr,
n_elements,
BLOCK_SIZE: "tl.constexpr",
):
pid = tl.program_id(axis=0)
block_start = pid * BLOCK_SIZE
offsets = block_start + tl.arange(0, BLOCK_SIZE)
mask = offsets < n_elements
x = tl.load(in_ptr0 + offsets, mask=mask)
y = tl.load(in_ptr1 + offsets, mask=mask)
output = x + y
tl.store(out_ptr + offsets, output, mask=mask)
@triton_op("mylib::add", mutates_args={})
def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
output = torch.empty_like(x)
n_elements = output.numel()
def grid(meta):
return (triton.cdiv(n_elements, meta["BLOCK_SIZE"]),)
# NB: we need to wrap the triton kernel in a call to wrap_triton
wrap_triton(add_kernel)[grid](x, y, output, n_elements, 16)
return output
@torch.compile
def f(x, y):
return add(x, y)
x = torch.randn(3, device="cuda")
y = torch.randn(3, device="cuda")
z = f(x, y)
assert torch.allclose(z, x + y)
with torch.no_grad():
torch.onnx.export(f,
(x,y,),
"triton_export.onnx",
export_params=True,
dynamo=True,
opset_version=18,
do_constant_folding=False,
optimize=False,
#custom_translation_table=custom_translation_table,
input_names=["zzq_a","zzq_b"],
output_names=["zzq_out"],
verbose=True)
```
error msg:
```
torch.onnx] Obtain model graph for `<function f at 0x7f646a1b2670>` with `torch.export.export(..., strict=False)`...
[torch.onnx] Obtain model graph for `<function f at 0x7f646a1b2670>` with `torch.export.export(..., strict=False)`... β
[torch.onnx] Obtain model graph for `<function f at 0x7f646a1b2670>` with `torch.export.export`...
[torch.onnx] Obtain model graph for `<function f at 0x7f646a1b2670>` with `torch.export.export`... β
[torch.onnx] Obtain model graph for `<function f at 0x7f646a1b2670>` with Torch Script...
[torch.onnx] Obtain model graph for `<function f at 0x7f646a1b2670>` with Torch Script... β
[torch.onnx] Obtain model graph for `<function f at 0x7f646a1b2670>` with internal Dynamo apis...
[torch.onnx] Obtain model graph for `<function f at 0x7f646a1b2670>` with internal Dynamo apis... β
[torch.onnx] Run decomposition...
[torch.onnx] Run decomposition... β
[torch.onnx] Translate the graph into ONNX...
[torch.onnx] Translate the graph into ONNX... β
Traceback (most recent call last):
File "/usr/bin/python3.9/lib/python3.9/site-packages/torch/onnx/_internal/exporter/_core.py", line 708, in _translate_fx_graph
_handle_call_function_node_with_lowering(
File "/usr/bin/python3.9/lib/python3.9/site-packages/torch/onnx/_internal/exporter/_core.py", line 490, in _handle_call_function_node_with_lowering
raise _errors.DispatchError(
torch.onnx._internal.exporter._errors.DispatchError: No ONNX function found for <torch._higher_order_ops.triton_kernel_wrap.TritonKernelWrapperFunctional object at 0x7f63c5fa01c0>. Failure message: No decompositions registered for the real-valued input
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/bin/python3.9/lib/python3.9/site-packages/torch/onnx/_internal/exporter/_core.py", line 1372, in export
onnx_program = _exported_program_to_onnx_program(
File "/usr/bin/python3.9/lib/python3.9/site-packages/torch/onnx/_internal/exporter/_core.py", line 1008, in _exported_program_to_onnx_program
values = _translate_fx_graph(
File "/usr/bin/python3.9/lib/python3.9/site-packages/torch/onnx/_internal/exporter/_core.py", line 734, in _translate_fx_graph
raise _errors.ConversionError(
torch.onnx._internal.exporter._errors.ConversionError: Error when translating node %triton_kernel_wrapper_functional_proxy : [num_users=1] = call_function[target=torch.ops.higher_order.triton_kernel_wrapper_functional](args = (), kwargs = {kernel_idx: 0, constant_args_idx: 10, grid: [(1, 1, 1)], tma_descriptor_metadata: {}, kwargs: {in_ptr0: %arg0, in_ptr1: %arg1, out_ptr: %empty_like, n_elements: 3, BLOCK_SIZE: 16}, tensors_to_clone: [out_ptr]}). See the stack trace for more information.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/app/torch_ddp/triton_export.py", line 38, in <module>
torch.onnx.export(f,
File "/usr/bin/python3.9/lib/python3.9/site-packages/torch/onnx/__init__.py", line 351, in export
return _compat.export_compat(
File "/usr/bin/python3.9/lib/python3.9/site-packages/torch/onnx/_internal/e
|
https://github.com/pytorch/pytorch/issues/147374
|
closed
|
[
"module: onnx",
"triaged"
] | 2025-02-18T12:11:20Z
| 2025-02-19T22:57:49Z
| null |
zzq96
|
pytorch/xla
| 8,715
|
Pytorch XLA XMP Spawn Error
|
## 🐛 Bug
I'm trying to run a very simple example that just prints "Hello World" from each TPU. I'm running with the torch_xla versions from the vllm-tpu docker image.
## To Reproduce
Steps to reproduce the behavior:
1. Run the docker image for vllm-tpu: https://hub.docker.com/r/vllm/vllm-tpu/tags
Run code:
```
import ray
import torch_xla.core.xla_model as xm
import torch_xla.distributed.xla_multiprocessing as xmp

def train_mp(rank):
    # Get XLA device
    device = xm.xla_device()
    print(f"Hello from rank {rank} on device {device}")

@ray.remote(num_cpus=10, resources={"TPU": 8, "TPU-v6e-8-head": 1})
def run_on_tpu():
    # Spawn 8 processes, one for each TPU core
    xmp.spawn(train_mp, nprocs=8)

if __name__ == "__main__":
    future = run_on_tpu.remote()
    ray.get(future)
```
Error:
```
(pid=3030, ip=10.202.15.237) WARNING:root:libtpu.so and TPU device found. Setting PJRT_DEVICE=TPU.
Traceback (most recent call last):
File "/tmp/ray/session_2025-02-17_21-04-51_192242_540/runtime_resources/working_dir_files/_ray_pkg_b1a1e85c76a92463/experiments/test_xla_infer.py", line 17, in <module>
ray.get(future)
File "/home/ray/anaconda3/lib/python3.10/site-packages/ray/_private/auto_init_hook.py", line 21, in auto_init_wrapper
return fn(*args, **kwargs)
File "/home/ray/anaconda3/lib/python3.10/site-packages/ray/_private/client_mode_hook.py", line 103, in wrapper
return func(*args, **kwargs)
File "/home/ray/anaconda3/lib/python3.10/site-packages/ray/_private/worker.py", line 2691, in get
values, debugger_breakpoint = worker.get_objects(object_refs, timeout=timeout)
File "/home/ray/anaconda3/lib/python3.10/site-packages/ray/_private/worker.py", line 871, in get_objects
raise value.as_instanceof_cause()
ray.exceptions.RayTaskError(ValueError): ray::run_on_tpu() (pid=3030, ip=10.202.15.237)
File "/tmp/ray/session_2025-02-17_21-04-51_192242_540/runtime_resources/working_dir_files/_ray_pkg_b1a1e85c76a92463/experiments/test_xla_infer.py", line 13, in run_on_tpu
xmp.spawn(train_mp, nprocs=8)
File "/home/ray/anaconda3/lib/python3.10/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 39, in spawn
return pjrt.spawn(fn, nprocs, start_method, args)
File "/home/ray/anaconda3/lib/python3.10/site-packages/torch_xla/_internal/pjrt.py", line 209, in spawn
raise ValueError(
ValueError: Unsupported nprocs (8). Please use the environment variable for the hardware you are using (X_NUM_DEVICES where X is CPU, GPU, TPU, NEURONCORE, etc).
```
I've tried some things such as setting `TPU_NUM_DEVICES` in the environment variables to 8 but that didn't help.
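One observation based on the error text (not a verified fix): with the PJRT runtime, `xmp.spawn` appears to accept only `nprocs=None` (use all local devices) or `1`, so letting it auto-detect the cores may be the intended path. A sketch of that variation, with ray dropped for brevity:
```python
import torch_xla.core.xla_model as xm
import torch_xla.distributed.xla_multiprocessing as xmp

def train_mp(rank):
    device = xm.xla_device()
    print(f"Hello from rank {rank} on device {device}")

if __name__ == "__main__":
    # Let PJRT spawn one process per local TPU device instead of forcing nprocs=8
    xmp.spawn(train_mp)
```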
## Expected behavior
I would expect a Hello world from each of the devices
## Environment
- Reproducible on XLA backend [CPU/TPU/CUDA]: TPU
- torch_xla version:
```
Using torch_xla version: 2.6.0+git39e67b5
```
## Additional context
|
https://github.com/pytorch/xla/issues/8715
|
closed
|
[
"distributed"
] | 2025-02-18T06:29:06Z
| 2025-02-20T18:59:56Z
| 3
|
BabyChouSr
|
pytorch/ao
| 1,724
|
[Question] Static Quantization for Open-Source LLMs
|
## Description
Hi, I am a beginner in quantization and would like to experiment with INT8 dynamic and static quantization on open-source LLMs.
* For dynamic quantization, I found that `int8_dynamic_activation_int8_weight` is available in `torchao/quantization/quant_api.py`.
* For static quantization, I did not find an INT8 version. Instead, I only found `float8_static_activation_float8_weight`.
## Questions
* Why is only INT8 dynamic quantization provided? Is there a specific concern that prevents static INT8 quantization?
* If I want to implement INT8 static quantization, can I follow `tutorials/calibration_flow/static_quant.py` as a reference?
* For `float8_static_activation_float8_weight`, it requires a scalar parameter. What would be a recommended way to determine this parameter?
Any insights or guidance would be greatly appreciated. Thanks in advance!
|
https://github.com/pytorch/ao/issues/1724
|
open
|
[
"question",
"quantize_"
] | 2025-02-18T02:32:20Z
| 2025-02-19T13:13:44Z
| null |
yang-ahuan
|
huggingface/lerobot
| 741
|
Inquiry on Implementing NoMaD Model (Transformers and Diffusion Policy)
|
I am planning to implement the NoMaD model, which combines Transformers and Diffusion Policy, within the LeRobot project. Before proceeding, I wanted to check if anyone else is currently working on or has already started implementing this model.
For reference, here are the relevant resources:
Website: https://general-navigation-models.github.io/nomad/
Paper: https://arxiv.org/pdf/2310.07896
Please let me know if there is ongoing work related to this model or if anyone is interested in collaborating.
|
https://github.com/huggingface/lerobot/issues/741
|
closed
|
[
"question",
"stale"
] | 2025-02-17T19:57:23Z
| 2025-10-08T20:56:42Z
| null |
vaishanth-rmrj
|
pytorch/torchtitan
| 852
|
How to define Custom Communication Operations for Custom Operators in Distributed Settings
|
Thank you for your awesome project. I would like to ask how to solve the following issue:
I have implemented the logcumsumexp operator, where the input placement is Shard(-1) and the output placement is Replicate(). To obtain the final result, I need to create a custom all-reduce operator (instead of using the conventional sum). How should I go about implementing this?
More generally, for an operator function `f` with a given input placement `placement1` and output placement `placement2`, where should the various custom communication operations be implemented? I would greatly appreciate it if you could provide some examples.
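For concreteness, a sketch of the communication pattern itself using plain process-group collectives rather than DTensor (so this is only a workaround illustration, not the DTensor-level answer being asked for): each rank computes a local logcumsumexp on its Shard(-1) slice, exchanges per-shard totals, applies a log-space offset, and all-gathers to produce the Replicate() result.
```python
import torch
import torch.distributed as dist

def sharded_logcumsumexp_lastdim(x_local: torch.Tensor, group=None) -> torch.Tensor:
    """x_local is this rank's Shard(-1) slice; returns the full (replicated) result."""
    rank = dist.get_rank(group)
    world = dist.get_world_size(group)

    local = torch.logcumsumexp(x_local, dim=-1)

    # per-shard total = last element of the local cumulative result
    totals = [torch.empty_like(local[..., -1:]) for _ in range(world)]
    dist.all_gather(totals, local[..., -1:].contiguous(), group=group)

    # offset for this rank = logsumexp over the totals of all preceding shards
    if rank > 0:
        offset = torch.logsumexp(torch.stack(totals[:rank], dim=-1), dim=-1)
        local = torch.logaddexp(local, offset)

    # Replicate(): gather every corrected shard and concatenate along the last dim
    shards = [torch.empty_like(local) for _ in range(world)]
    dist.all_gather(shards, local.contiguous(), group=group)
    return torch.cat(shards, dim=-1)
```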
|
https://github.com/pytorch/torchtitan/issues/852
|
closed
|
[
"question",
"module: dtensor"
] | 2025-02-17T16:49:25Z
| 2025-08-21T03:07:29Z
| null |
Doraemonzzz
|
pytorch/serve
| 3,392
|
How to run the benchmark scripts on the local model ?
|
How do I run the benchmark scripts on a local model?
I tried the following, but it fails with `ModelNotFoundException`:
`python benchmark_ab.py --config benchmark_config.json`
```
{
"url": "./model_store/custom_model.mar",
"requests": 100,
"concurrency": 10,
"input": "kitten_small.jpg",
"exec_env": "local",
"device": "cpu"
}
```
|
https://github.com/pytorch/serve/issues/3392
|
closed
|
[] | 2025-02-17T14:16:47Z
| 2025-02-17T14:53:01Z
| null |
ranipakeyur
|
pytorch/torchtitan
| 850
|
"Universal" Checkpointing
|
Is there currently an equivalent of DeepSpeed [Universal Checkpointing](https://github.com/deepspeedai/DeepSpeed/blob/master/blogs/deepspeed-ucp/README.md) for distributed checkpointing, DTensor and FSDP2? That is, how can torch-native tooling be used to convert a checkpoint saved under one sharding / parallelism config to a new config, such that the sharded state dicts can be loaded directly with a new world size?
For example, train a model on 128 GPUs with `FSDP` (`DP128`) and save a checkpoint with 128 sharded state dicts. Resume training on 64 GPUs with `TP2` / `FSDP` (`DP32`).
Manually, one could merge the original checkpoint from 128 shards -> single merged state dict, then reshard to `TP2` followed by partitioning `TP` shards to 32 `DP` partitions for a total of 64 sharded state dicts, then directly load these state dicts on each rank (without having to first materialize the full state dict on any rank).
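For reference, a minimal sketch of what I'd expect the torch-native flow to look like with `torch.distributed.checkpoint` (assuming torch >= 2.3; whether DCP actually reshards across these exact configs is part of the question):
```python
import torch.distributed.checkpoint as dcp
from torch.distributed.checkpoint.state_dict import get_model_state_dict, set_model_state_dict

def save_sharded(model, path: str) -> None:
    # on the old 128-rank FSDP job: every rank writes only its own shards
    state_dict = get_model_state_dict(model)
    dcp.save(state_dict, checkpoint_id=path)

def load_resharded(model, path: str) -> None:
    # `model` is already wrapped with the NEW config (e.g. TP2 + FSDP DP32);
    # DCP reshards while reading, so no rank materializes the full state dict
    state_dict = get_model_state_dict(model)
    dcp.load(state_dict, checkpoint_id=path)
    set_model_state_dict(model, state_dict)
```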
@awgu
|
https://github.com/pytorch/torchtitan/issues/850
|
closed
|
[
"question",
"module: checkpoint"
] | 2025-02-17T12:32:39Z
| 2025-06-05T06:28:04Z
| null |
jeromeku
|
huggingface/lerobot
| 738
|
convert simulation data of insertion from v1 to v2
|
I cannot convert using the script (datasets/v2/convert_dataset_v1_to_v2.py), because it requires a robot config which I don't have.
I just want to convert your data from lerobot/act_aloha_sim_transfer_cube_human.
|
https://github.com/huggingface/lerobot/issues/738
|
closed
|
[
"question",
"dataset",
"stale"
] | 2025-02-17T11:00:38Z
| 2025-10-08T08:59:52Z
| null |
AbdElrahmanMostafaRifaat1432
|
huggingface/open-r1
| 340
|
About the data using in sft, how to set SFTConfig.dataset_text_field?
|
How do I use HuggingFaceH4/Bespoke-Stratos-17k for SFT?
The dataset has two fields, "system" and "conversations". When I download this data to fine-tune an LLM such as Qwen2.5-1.5B-Instruct, how should I organize it? In trl, SFTConfig has a parameter named dataset_text_field whose default value is "text", but no such column exists in Bespoke-Stratos-17k.
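For reference, a minimal sketch of one way to build the missing "text" column (this assumes a ShareGPT-style layout for "conversations" with "from"/"value" keys; the role mapping is a guess and should be checked against an actual sample):
```python
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-1.5B-Instruct")
dataset = load_dataset("HuggingFaceH4/Bespoke-Stratos-17k", split="train")

# assumed role names; verify against one example of the "conversations" column
ROLE_MAP = {"user": "user", "human": "user", "assistant": "assistant", "gpt": "assistant"}

def to_text(example):
    messages = [{"role": "system", "content": example["system"]}]
    for turn in example["conversations"]:
        messages.append({"role": ROLE_MAP.get(turn["from"], turn["from"]),
                         "content": turn["value"]})
    # produce the single string column that SFTConfig's default dataset_text_field="text" expects
    return {"text": tokenizer.apply_chat_template(messages, tokenize=False)}

dataset = dataset.map(to_text, remove_columns=dataset.column_names)
```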
|
https://github.com/huggingface/open-r1/issues/340
|
open
|
[] | 2025-02-17T07:06:14Z
| 2025-02-20T08:59:49Z
| null |
ItGirls
|
huggingface/finetrainers
| 264
|
How to set --precompute_conditions for CogvideoI2V training?
|
I can't find this feature for Image2Video training.
Does it exist?
|
https://github.com/huggingface/finetrainers/issues/264
|
open
|
[] | 2025-02-17T06:00:50Z
| 2025-03-05T03:49:05Z
| null |
BlackTea-c
|
huggingface/diffusers
| 10,805
|
is there inpainiting dataset and parameters example provided for xl training?
|
Hi @patil-suraj, thanks for the convenient script! Is there a code example and a dataset example for running the script: https://github.com/huggingface/diffusers/blob/inpainting-script/examples/inpainting/train_inpainting_sdxl.py ?
|
https://github.com/huggingface/diffusers/issues/10805
|
closed
|
[] | 2025-02-17T01:56:14Z
| 2025-02-17T02:03:09Z
| 2
|
fire2323
|
huggingface/gsplat.js
| 109
|
Info request: How to update individual points in splat?
|
I would like to update the positions of individual points dynamically in order to create animations and effects.
What would be the optimal way to do this?
|
https://github.com/huggingface/gsplat.js/issues/109
|
open
|
[] | 2025-02-16T18:11:14Z
| 2025-02-16T18:43:23Z
| null |
sjovanovic
|
huggingface/diffusers
| 10,803
|
SANARubber a flexible version of SANA with i2i and multidiffusion/regional diffusion
|
### Model/Pipeline/Scheduler description
I made a pipeline that is as reliable as the basic SANA pipeline but more flexible: it runs an array of functions that covers everything the original pipeline does, which makes it easy to combine features when needed.
Here's the link, enjoy:
https://github.com/alexblattner/SANARubber
Example of multidiffusion in SANA:
['bright moon','red','blue','green','black'] (the first prompt is applied to the background)
["0:0-512:512","512:0-1024:512","512:1024-1024:1024","0:512-512:1024"] these are the regions for the remaining prompts
[.7,.7,.7,.7] these are the strengths with which the regional prompts are applied

Again with i2i at strength .5 and the same settings as before (mild changes only):

ENJOY!
### Open source status
- [x] The model implementation is available.
- [ ] The model weights are available (Only relevant if addition is not a scheduler).
### Provide useful links for the implementation
_No response_
|
https://github.com/huggingface/diffusers/issues/10803
|
open
|
[
"stale"
] | 2025-02-16T15:08:11Z
| 2025-03-19T15:03:31Z
| 1
|
alexblattner
|
huggingface/candle
| 2,774
|
Dumb Question: How to do forward hooks ?
|
For example, I want to extract activations from intermediate layers. How do I register forward hooks like in PyTorch, or is there a comparable paradigm in candle for this?
|
https://github.com/huggingface/candle/issues/2774
|
open
|
[] | 2025-02-16T12:41:26Z
| 2025-02-16T12:41:26Z
| null |
pzdkn
|
huggingface/diffusers
| 10,799
|
Effective region mask for controlnet
|
Hi, I just want to ask: is there any way to use ControlNet with a mask, like [this](https://github.com/Mikubill/sd-webui-controlnet/discussions/2831)?
As you know, ComfyUI and the WebUI support an effective region (a mask limiting where the ControlNet takes effect).
But I can't find a way to do this with diffusers.
|
https://github.com/huggingface/diffusers/issues/10799
|
closed
|
[
"stale"
] | 2025-02-15T17:42:20Z
| 2025-04-03T04:01:37Z
| 8
|
Suprhimp
|
huggingface/swift-coreml-diffusers
| 102
|
Question: how to use in my own swift project for inference?
|
How would I run diffusers on-device across all Apple devices in my Swift Xcode project?
|
https://github.com/huggingface/swift-coreml-diffusers/issues/102
|
open
|
[] | 2025-02-15T15:56:36Z
| 2025-02-15T15:56:36Z
| null |
SpyC0der77
|
pytorch/pytorch
| 147,263
|
How to trigger several independent communications simultaneously?
|
For example, when training with 4 GPUs, I divide the GPUs into pairs and create two communication groups: group1 = dist.new_group([0, 1]) and group2 = dist.new_group([2, 3]). When I try to run independent dist.all_gather operations within both communication groups simultaneously, it results in an error. How can I implement this correctly?
```
File "/home/yeleyi/anaconda3/envs/torch/lib/python3.10/site-packages/deepspeed/comm/torch.py", line 209, in all_gather
return torch.distributed.all_gather(tensor_list=tensor_list, tensor=tensor, group=group, async_op=async_op)
File "/home/yeleyi/anaconda3/envs/torch/lib/python3.10/site-packages/torch/distributed/c10d_logger.py", line 72, in wrapper
return func(*args, **kwargs)
File "/home/yeleyi/anaconda3/envs/torch/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py", line 2617, in all_gather
work = group.allgather([tensor_list], [tensor])
torch.distributed.DistBackendError: NCCL error in: ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1691, unhandled system error (run with NCCL_DEBUG=INFO for details), NCCL version 2.19.3
ncclSystemError: System call (e.g. socket, malloc) or external library call failed or device error.
Last error:
socketStartConnect: Connect to 192.168.1.91<48217> failed : Software caused connection abort
node06:1913795:1914481 [2] NCCL INFO Setting affinity for GPU 2 to 0fffff,ff000000,0fffffff
node06:1913796:1914482 [3] NCCL INFO Setting affinity for GPU 3 to 0fffff,ff000000,0fffffff
node06:1913795:1914481 [2] NCCL INFO Channel 00/04 : 0 1
node06:1913795:1914481 [2] NCCL INFO Channel 01/04 : 0 1
node06:1913795:1914481 [2] NCCL INFO Channel 02/04 : 0 1
node06:1913795:1914481 [2] NCCL INFO Channel 03/04 : 0 1
node06:1913795:1914481 [2] NCCL INFO Trees [0] 1/-1/-1->0->-1 [1] -1/-1/-1->0->1 [2] 1/-1/-1->0->-1 [3] -1/-1/-1->0->1
node06:1913795:1914481 [2] NCCL INFO P2P Chunksize set to 131072
node06:1913796:1914482 [3] NCCL INFO Trees [0] -1/-1/-1->1->0 [1] 0/-1/-1->1->-1 [2] -1/-1/-1->1->0 [3] 0/-1/-1->1->-1
node06:1913796:1914482 [3] NCCL INFO P2P Chunksize set to 131072
node06:1913795:1914481 [2] NCCL INFO Channel 00/0 : 0[2] -> 1[3] via P2P/CUMEM
node06:1913796:1914482 [3] NCCL INFO Channel 00/0 : 1[3] -> 0[2] via P2P/CUMEM
node06:1913795:1914481 [2] NCCL INFO Channel 01/0 : 0[2] -> 1[3] via P2P/CUMEM
node06:1913796:1914482 [3] NCCL INFO Channel 01/0 : 1[3] -> 0[2] via P2P/CUMEM
node06:1913795:1914481 [2] NCCL INFO Channel 02/0 : 0[2] -> 1[3] via P2P/CUMEM
node06:1913796:1914482 [3] NCCL INFO Channel 02/0 : 1[3] -> 0[2] via P2P/CUMEM
node06:1913795:1914481 [2] NCCL INFO Channel 03/0 : 0[2] -> 1[3] via P2P/CUMEM
node06:1913796:1914482 [3] NCCL INFO Channel 03/0 : 1[3] -> 0[2] via P2P/CUMEM
node06:1913796:1914482 [3] NCCL INFO Connected all rings
node06:1913796:1914482 [3] NCCL INFO Connected all trees
node06:1913796:1914482 [3] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 512 | 512
node06:1913795:1914481 [2] NCCL INFO Connected all rings
node06:1913796:1914482 [3] NCCL INFO 4 coll channels, 0 nvls channels, 4 p2p channels, 2 p2p channels per peer
node06:1913795:1914481 [2] NCCL INFO Connected all trees
node06:1913795:1914481 [2] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 512 | 512
node06:1913795:1914481 [2] NCCL INFO 4 coll channels, 0 nvls channels, 4 p2p channels, 2 p2p channels per peer
node06:1913795:1914481 [2] NCCL INFO comm 0x1a9590b0 rank 0 nranks 2 cudaDev 2 nvmlDev 2 busId 6c000 commId 0xdd736563a6f28c07 - Init COMPLETE
node06:1913796:1914482 [3] NCCL INFO comm 0x1931a220 rank 1 nranks 2 cudaDev 3 nvmlDev 3 busId 6d000 commId 0xdd736563a6f28c07 - Init COMPLETE
```
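For reference, a minimal, runnable version of the setup described above: every rank must execute both `dist.new_group` calls (group creation is collective over the default group), and each rank then launches `all_gather` only on the group it belongs to, so the two collectives run concurrently on disjoint GPU pairs.
```python
import torch
import torch.distributed as dist

def main():
    dist.init_process_group("nccl")  # launch with: torchrun --nproc_per_node=4 this_file.py
    rank = dist.get_rank()
    torch.cuda.set_device(rank % torch.cuda.device_count())

    group1 = dist.new_group([0, 1])  # every rank must create both groups,
    group2 = dist.new_group([2, 3])  # in the same order
    my_group = group1 if rank in (0, 1) else group2

    t = torch.full((4,), float(rank), device="cuda")
    out = [torch.empty_like(t) for _ in range(2)]
    dist.all_gather(out, t, group=my_group)  # pairs (0,1) and (2,3) gather independently
    print(rank, [x.tolist() for x in out])

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```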
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
|
https://github.com/pytorch/pytorch/issues/147263
|
open
|
[
"oncall: distributed",
"triaged"
] | 2025-02-15T11:47:10Z
| 2025-04-23T20:54:39Z
| null |
Ind1x1
|
huggingface/transformers.js
| 1,194
|
How do I know which ONNX transformation models are available? (Errors when loading models with CDN)
|
### Question
I am using a CDN to load the models, as shown in the code below.
I filtered the models on the Hugging Face Hub the way you recommend (text-generation, transformers.js) and used the id of a model I found. As I understand it, to change the model I only need to change the model id.
However, I get an error for each of the models below.
`Uncaught (in promise) TypeError: Cannot read properties of undefined (reading 'model')`
- **HuggingFaceTB/SmolLM2-135M-Instruct**
- **Xenova/codegen-350M-mono**
...
`Uncaught (in promise) Error: Can't create a session. ERROR_CODE: 1, ERROR_MESSAGE: Deserialize tensor model.layers.4.mlp.gate_proj.MatMul.weight_Q4 failed.Failed to load external data file ""model_q4f16.onnx_data"", error: Module.MountedFiles is not available.`
- **onnx-community/Phi-3.5-mini-instruct-onnx-web**
...
Ultimately, my point is that I don't know which models will actually be usable.
Additionally, I was wondering if there is a way to know in advance which `dtype` and `device` values are supported.
```
import { pipeline } from 'https://cdn.jsdelivr.net/npm/@huggingface/transformers@3.3.3';
generator = await pipeline('text-generation', 'onnx-community/DeepSeek-R1-Distill-Qwen-1.5B-ONNX', {
dtype: "auto",
device: "auto",
});
```
|
https://github.com/huggingface/transformers.js/issues/1194
|
open
|
[
"question"
] | 2025-02-15T10:31:32Z
| 2025-02-16T14:02:08Z
| null |
mz-imhj
|
huggingface/open-r1
| 333
|
how to use tensorboard instead of wandb?
|
https://github.com/huggingface/open-r1/issues/333
|
closed
|
[] | 2025-02-15T08:00:06Z
| 2025-02-15T08:02:35Z
| null |
ngrxmu
|