| repo | number | title | body | url | state | labels | created_at | updated_at | comments | user |
|---|---|---|---|---|---|---|---|---|---|---|
huggingface/peft
| 2,255
|
Is this the right way to check whether a model has been trained as expected?
|
I'd like to check whether my PEFT model has been trained as intended, i.e. whether the PEFT weights have changed, but not the base weights. The following code works, but I'm sure a PEFT specialist will suggest a better way.
```python
import tempfile
import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM
from trl import SFTConfig, SFTTrainer
# Get the base model
model_id = "trl-internal-testing/tiny-Qwen2ForCausalLM-2.5"
model = AutoModelForCausalLM.from_pretrained(model_id)
# Get the base model parameter names
base_param_names = [f"base_model.model.{n}" for n, _ in model.named_parameters()]
# Turn the model into a peft model
lora_config = LoraConfig()
model = get_peft_model(model, lora_config)
# Get the dataset
dataset = load_dataset("trl-internal-testing/zen", "standard_language_modeling", split="train")
with tempfile.TemporaryDirectory() as tmp_dir:
    # Initialize the trainer
    training_args = SFTConfig(output_dir=tmp_dir, report_to="none")
    trainer = SFTTrainer(args=training_args, model=model, train_dataset=dataset)
    # Save the initial parameters to compare them later
    previous_trainable_params = {n: param.clone() for n, param in trainer.model.named_parameters()}
    trainer.train()
    # Check the peft params have changed and the base model params have not changed
    for n, param in previous_trainable_params.items():
        new_param = trainer.model.get_parameter(n)
        if n in base_param_names:  # We expect the base model parameters to be the same
            if not torch.allclose(param, new_param):
                print(f"Parameter {n} has changed, but it should not have changed")
        elif "base_layer" not in n:  # We expect the peft parameters to be different (except for the base layer)
            if torch.allclose(param, new_param):
                print(f"Parameter {n} has not changed, but it should have changed")
```
|
https://github.com/huggingface/peft/issues/2255
|
closed
|
[] | 2024-12-03T17:36:00Z
| 2024-12-04T12:01:37Z
| 5
|
qgallouedec
|
huggingface/peft
| 2,251
|
a guide to add a new fine-tuning method in the doc
|
### Feature request
Hello, I am a researcher working on fine-tuning. Could you publish a guide in the docs on how to add a new fine-tuning method? I think researchers like me would be glad to experiment with their methods on top of this repo.
### Motivation
Researchers like me would be glad to experiment with their methods on top of this repo, but we don't know how to add a new method.
### Your contribution
Yes, but after verifying the feasibility of my method.
|
https://github.com/huggingface/peft/issues/2251
|
closed
|
[] | 2024-12-03T13:46:02Z
| 2024-12-04T02:12:35Z
| 2
|
YF-T
|
pytorch/vision
| 8,777
|
Documentation for the expected input dimension of the model class
|
### 📚 The doc issue
The built-in models are really convenient. However, the documentation usually does not specify the expected input dimensions, so I always find it troublesome to confirm the correct input size for the model class I want to use.
For example:
https://pytorch.org/vision/main/models/generated/torchvision.models.resnet18.html
https://pytorch.org/vision/main/models/generated/torchvision.models.swin_t.html
https://pytorch.org/vision/main/models/generated/torchvision.models.video.swin3d_b.html
Is there clear documentation for this? Or is there a simple rule I can use (e.g., a convention followed when developing these model classes in torchvision that is consistent throughout)?
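For what it's worth, the workaround I currently use is to inspect the weight enum's preset transforms and `meta` dict, which hint at the expected input size. A minimal sketch (treat the `min_size` key name as an assumption; it may not be published for every model family):
```python
from torchvision.models import ResNet18_Weights, Swin_T_Weights

for weights in (ResNet18_Weights.IMAGENET1K_V1, Swin_T_Weights.IMAGENET1K_V1):
    preset = weights.transforms()  # the eval-time preprocessing preset
    print(weights)
    print(preset)  # shows resize/crop sizes, mean/std, interpolation
    print(weights.meta.get("min_size"))  # assumed key: minimum spatial size, if published
```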
### Suggest a potential alternative/fix
_No response_
|
https://github.com/pytorch/vision/issues/8777
|
closed
|
[] | 2024-12-02T17:55:40Z
| 2024-12-03T10:30:23Z
| 2
|
hzhz2020
|
huggingface/diffusers
| 10,076
|
Do we have any script to convert from HF format back to the original format?
|
**Is your feature request related to a problem? Please describe.**
scripts/convert_cogvideox_to_diffusers.py
in this script, we can convert cogvideox -> diffusers. Do we have the opposite script?
cc @yiyixuxu
|
https://github.com/huggingface/diffusers/issues/10076
|
open
|
[
"good first issue",
"contributions-welcome",
"conversion script"
] | 2024-12-02T07:49:34Z
| 2024-12-02T18:22:50Z
| 1
|
foreverpiano
|
huggingface/trl
| 2,424
|
How to calculate the loss of multi-turn dialogue training data?
|
In a single data entry containing multiple turns of dialogue, abbreviated as Q1 + A1 + Q2 + A2, does this project calculate the loss only for the last answer of the multi-turn dialogue, or for each answer?
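To make the two options concrete, here is a toy sketch of the label masks each would correspond to (illustration only, not TRL's actual implementation):
```python
# Toy illustration of the two labeling schemes for Q1 + A1 + Q2 + A2.
# Tokens labeled -100 are ignored by the cross-entropy loss.
tok_q1, tok_a1, tok_q2, tok_a2 = [1, 2], [3, 4], [5, 6], [7, 8]  # dummy token ids
input_ids = tok_q1 + tok_a1 + tok_q2 + tok_a2

# (a) loss on every answer
labels_all = [-100] * len(tok_q1) + tok_a1 + [-100] * len(tok_q2) + tok_a2

# (b) loss only on the last answer
labels_last = [-100] * (len(tok_q1) + len(tok_a1) + len(tok_q2)) + tok_a2

print(labels_all)   # [-100, -100, 3, 4, -100, -100, 7, 8]
print(labels_last)  # [-100, -100, -100, -100, -100, -100, 7, 8]
```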
|
https://github.com/huggingface/trl/issues/2424
|
closed
|
[
"❓ question",
"🏋 SFT"
] | 2024-12-02T07:47:17Z
| 2025-01-20T02:47:34Z
| null |
NUMB1234
|
huggingface/diffusers
| 10,074
|
how to install diffusers 0.32.0
|
The FluxFillPipeline requires diffusers >= 0.32.0, but I don't know how to install that version. Can anyone help me? Thanks in advance.
|
https://github.com/huggingface/diffusers/issues/10074
|
closed
|
[] | 2024-12-02T07:05:24Z
| 2024-12-02T19:11:34Z
| null |
babyta
|
huggingface/diffusers
| 10,070
|
xFormers info, memory efficient attention unavailable
|
### Describe the bug
I just started learning Stable Diffusion on Win11. After I installed xformers, I found that several memory_efficient_attention entries are reported as unavailable. Is it possible to make them available? Thanks for any help.
### Reproduction
xFormers 0.0.28.post3
memory_efficient_attention.ckF: unavailable
memory_efficient_attention.ckB: unavailable
memory_efficient_attention.ck_decoderF: unavailable
memory_efficient_attention.ck_splitKF: unavailable
memory_efficient_attention.cutlassF-pt: available
memory_efficient_attention.cutlassB-pt: available
memory_efficient_attention.fa2F@v2.6.3-24-gbdf733b: available
memory_efficient_attention.fa2B@v2.6.3-24-gbdf733b: available
memory_efficient_attention.fa3F@0.0.0: unavailable
memory_efficient_attention.fa3B@0.0.0: unavailable
memory_efficient_attention.triton_splitKF: available
indexing.scaled_index_addF: available
indexing.scaled_index_addB: available
indexing.index_select: available
sequence_parallel_fused.write_values: available
sequence_parallel_fused.wait_values: available
sequence_parallel_fused.cuda_memset_32b_async: available
sp24.sparse24_sparsify_both_ways: available
sp24.sparse24_apply: available
sp24.sparse24_apply_dense_output: available
sp24._sparse24_gemm: available
sp24._cslt_sparse_mm_search@0.0.0: available
sp24._cslt_sparse_mm@0.0.0: available
swiglu.dual_gemm_silu: available
swiglu.gemm_fused_operand_sum: available
swiglu.fused.p.cpp: available
is_triton_available: True
pytorch.version: 2.5.1+cu124
pytorch.cuda: available
gpu.compute_capability: 8.9
gpu.name: NVIDIA GeForce RTX 4070
dcgm_profiler: unavailable
build.info: available
build.cuda_version: 1204
build.hip_version: None
build.python_version: 3.10.11
build.torch_version: 2.5.1+cu124
build.env.TORCH_CUDA_ARCH_LIST: 6.0+PTX 7.0 7.5 8.0+PTX 9.0a
build.env.PYTORCH_ROCM_ARCH: None
build.env.XFORMERS_BUILD_TYPE: Release
build.env.XFORMERS_ENABLE_DEBUG_ASSERTIONS: None
build.env.NVCC_FLAGS: -allow-unsupported-compiler
build.env.XFORMERS_PACKAGE_FROM: wheel-v0.0.28.post3
build.nvcc_version: 12.4.131
source.privacy: open source
### Logs
_No response_
### System Info
Win11, Python 3.10.6, PyTorch 2.5.1+cu124, xFormers 0.0.28.post3, triton==3.0.0
### Who can help?
_No response_
|
https://github.com/huggingface/diffusers/issues/10070
|
open
|
[
"bug",
"stale"
] | 2024-12-01T16:14:21Z
| 2025-01-01T15:03:09Z
| 1
|
Stareshine
|
huggingface/Google-Cloud-Containers
| 126
|
Deployment error on GKE
|
Hello!
I deployed Gemma 2 2B IT on GKE in Autopilot mode following these instructions: https://cloud.google.com/kubernetes-engine/docs/tutorials/serve-gemma-gpu-tgi#autopilot. I get this error: "Node scale up in zones us-central1-c associated with this pod failed: GCE quota exceeded. Pod is at risk of not being scheduled." I checked the quota and there is enough GPU, but the pod stays in the pending state.
|
https://github.com/huggingface/Google-Cloud-Containers/issues/126
|
closed
|
[
"question"
] | 2024-12-01T14:09:29Z
| 2025-01-07T08:39:07Z
| null |
piksida
|
huggingface/lerobot
| 538
|
Questions about loading a dataset locally, making my own policy, and using headless eval mode
|
Hello, I'm trying to download a dataset from the Hugging Face Hub to local storage and then load it from the local copy. For example, 'aloha_sim_insertion_scripted_image' is stored as many 'episode_000000.parquet' files. How can I load this format with the LeRobotDataset() function, or in some other way?
Second, I want to create my own policy. After reading through the code framework, I think I need to create my policy code files by mimicking the following files:
+ lerobot/common/policies/act/configuration_act.py
+ lerobot/common/policies/act/modeling_act.py
However, I am having some difficulty writing my own policy. The idea I want to implement is to introduce contrastive learning, i.e. the policy teaches the agent to learn from correct samples and stay away from wrong samples. What would need to be modified to implement this idea?
I really need examples of this, and it would be very helpful if you could give me detailed advice!
Finally, my server is headless, which means that when evaluating a policy there is no way to open a MuJoCo viewer. Does the code framework support a headless mode that saves the evaluation video instead?
As a new researcher in this field, it would be great if I could further communicate with you about the above issues. Thank you very much!
Best wishes : )
|
https://github.com/huggingface/lerobot/issues/538
|
closed
|
[
"question",
"stale"
] | 2024-12-01T03:32:06Z
| 2025-10-19T02:32:41Z
| null |
zhouzhq2021
|
huggingface/lerobot
| 536
|
How auto calibration works
|
Are there any details about run_arm_auto_calibration_moss and run_arm_auto_calibration_so100 that we can refer to? I read the code but couldn't fully understand it.
When should we use auto calibration instead of the manual calibration, which calculates the homing_offset from the rotated (90°) pose?
I'd also like to check whether my understanding is correct: for manual calibration, the homing offset includes two terms, 1) the true offset caused by motor installation, and 2) human bias from manually rotating the motor. If that is correct, is there a way to also remove the second term? Considering the use of multiple robots for data collection, I guess removing term (2) is required.
|
https://github.com/huggingface/lerobot/issues/536
|
closed
|
[
"question",
"robots",
"stale"
] | 2024-11-30T18:04:23Z
| 2025-10-08T08:37:24Z
| null |
wzds2015
|
pytorch/torchtitan
| 709
|
First Shard Group Save and Load Checkpoint for HSDP
|
Based on my understanding, the current strategy is:
1. All ranks currently read and load the checkpoint.
2. All ranks also save and write the checkpoint.
I have a question regarding the HSDP case:
If different shard groups write data to storage, could this lead to data corruption?
Ideally, should only the first shard group read the data, broadcast it, and handle writing to ensure consistency?
|
https://github.com/pytorch/torchtitan/issues/709
|
closed
|
[
"question"
] | 2024-11-29T22:20:42Z
| 2025-01-08T07:52:58Z
| null |
qsh-zh
|
huggingface/accelerate
| 3,269
|
🤨Question: What if model has float16 dtype and `mixed_precision` is set to fp16 as well?
|
As the title:
**🤨Question: What if model has float16 dtype and `mixed_precision` is set to fp16 as well?**
- Will it compute in the original float16, as if automatic mixed precision did not exist?
- Or will some modules that are prone to overflow (e.g. BatchNorm, LayerNorm) be upcast to float32, as AMP does for fp32 models under fp16 autocast?
Could someone please help me with this question? ❤
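To make the question concrete, this is the kind of empirical probe I have in mind (it only checks what `torch.autocast` does with an fp16 module; it makes no claim about Accelerate's internals):
```python
import torch

# An fp16 LayerNorm run with and without autocast; the printed output dtypes
# show whether the op is upcast under autocast.
ln = torch.nn.LayerNorm(8).cuda().half()
x = torch.randn(4, 8, device="cuda", dtype=torch.float16)

print(ln(x).dtype)  # dtype without autocast
with torch.autocast(device_type="cuda", dtype=torch.float16):
    print(ln(x).dtype)  # dtype under autocast
```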
|
https://github.com/huggingface/accelerate/issues/3269
|
closed
|
[] | 2024-11-29T17:55:58Z
| 2025-01-07T15:33:26Z
| null |
townwish4git
|
huggingface/chat-macOS
| 36
|
Document how to download and install a local model
|
1st, thanks very much for this work!
I'm a bit of a newbie here.
The 'Get' button takes you to the web page for the example; however, chat-macOS instructions are not part of the options. Also, where do you place the downloaded model for the "add +" option, and where do the models go? Is there a way to configure where models are stored?
Thanks!
|
https://github.com/huggingface/chat-macOS/issues/36
|
open
|
[] | 2024-11-29T17:18:43Z
| 2024-11-29T17:18:43Z
| null |
deepcoder
|
pytorch/rl
| 2,618
|
[Feature Request] Provide documentation on how to use CatFrames with a data collector and replay buffer for images
|
## Motivation
Using CatFrames for inference is fairly straightforward and is already well documented.
That being said, using CatFrames to reconstruct a stack of frames when sampling from the replay buffer is, I find, not so straightforward (subjective) and is not explicitly documented for images (objective).
Using frame stacking for Visual RL is very common practice so I feel like the community would benefit from getting a better documentation on how to use CatFrames **for images**.
## Solution
Provide a clear documentation that explains everything in details (no magic flags/magic values) on how to use CatFrames with a data collector and replay buffer (both extend and sample() method should be shown) for images.
I have created a gist of my attempt to use CatFrames for images and while the inference part works, the stack frames retrieved from the replay buffer do not make sense.
https://gist.github.com/AlexandreBrown/fe378f26a87bdc40c5995dcc7d42f482
Any help on how to make the last part where we sample from the replay buffer return the correct CatFrames data is appreciated.
## Contributions
I am willing to work on the PR for the documentation update if someone can help me get the [MVP script](https://gist.github.com/AlexandreBrown/fe378f26a87bdc40c5995dcc7d42f482) working.
## Checklist
- [x] I have checked that there is no similar issue in the repo (**required**)
|
https://github.com/pytorch/rl/issues/2618
|
open
|
[
"enhancement"
] | 2024-11-29T16:57:42Z
| 2024-11-29T16:58:06Z
| null |
AlexandreBrown
|
pytorch/TensorRT
| 3,307
|
❓ [Question] TensorRT Export Failure with Large Input Sizes
|
## ❓ Question
<!-- Your question -->
I'm trying to export a torch model that processes large inputs (e.g., 8192x2048). I have noticed that `torch_tensorrt.compile` fails with inputs greater than 4096x2048 (I haven't tried them all, only powers of 2). Specifically, the conversion fails for convolution and ReLU operations with a "No valid tactics" and "Illegal memory access" error:
```
2024-11-29 16:56:42,307 - torch_tensorrt [TensorRT Conversion Context] - ERROR - [scopedCudaResources.cpp::~ScopedCudaStream::55] Error Code 1: Cuda Runtime (an illegal memory access was encountered)
2024-11-29 16:56:42,311 - torch_tensorrt [TensorRT Conversion Context] - ERROR - IBuilder::buildSerializedNetwork: Error Code 10: Internal Error (Could not find any implementation for node [CONVOLUTION]-[aten_ops.convolution.default]-[teacher.3/convolution_5] + [RELU]-[aten_ops.relu.default]-[teacher.4/relu_4].)
2024-11-29 16:56:42,312 - [MODEL EXPORT] - ERROR - TensorRT export failed:
Traceback (most recent call last):
File "/nfs/home/bragagnolo/qinstinct-fabric-inspection/tools/launchers.py", line 398, in <module>
export(
File "/nfs/home/bragagnolo/qinstinct-fabric-inspection/.venv/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
File "/nfs/home/bragagnolo/qinstinct-fabric-inspection/tools/launchers.py", line 298, in export
trt_model = torch_tensorrt.compile(model, **compile_spec)
File "/nfs/home/bragagnolo/qinstinct-fabric-inspection/.venv/lib/python3.10/site-packages/torch_tensorrt/_compile.py", line 269, in compile
trt_graph_module = dynamo_compile(
File "/nfs/home/bragagnolo/qinstinct-fabric-inspection/.venv/lib/python3.10/site-packages/torch_tensorrt/dynamo/_compiler.py", line 288, in compile
trt_gm = compile_module(
File "/nfs/home/bragagnolo/qinstinct-fabric-inspection/.venv/lib/python3.10/site-packages/torch_tensorrt/dynamo/_compiler.py", line 464, in compile_module
trt_module = convert_module(
File "/nfs/home/bragagnolo/qinstinct-fabric-inspection/.venv/lib/python3.10/site-packages/torch_tensorrt/dynamo/conversion/_conversion.py", line 142, in convert_module
interpreter_result = interpret_module_to_result(
File "/nfs/home/bragagnolo/qinstinct-fabric-inspection/.venv/lib/python3.10/site-packages/torch_tensorrt/dynamo/conversion/_conversion.py", line 121, in interpret_module_to_result
interpreter_result = interpreter.run()
File "/nfs/home/bragagnolo/qinstinct-fabric-inspection/.venv/lib/python3.10/site-packages/torch_tensorrt/dynamo/conversion/_TRTInterpreter.py", line 635, in run
assert serialized_engine
AssertionError
```
<!-- A clear and concise description of what you have already done. -->
Here attached is the script and full output log: [issue.zip](https://github.com/user-attachments/files/17961259/issue.zip)
## Environment
- PyTorch Version (e.g., 1.0): 2.5.1+cu121
- TorchTensorRT Version: 2.5.0
- CPU Architecture: AMD EPYC 7543 32-Core Processor
- OS (e.g., Linux): Ubuntu 22.04.5 LTS
- How you installed PyTorch (`conda`, `pip`, `libtorch`, source): pip
- Python version: 3.10.12
- CUDA version: Cuda compilation tools, release 12.1, V12.1.66 Build cuda_12.1.r12.1/compiler.32415258_0
- GPU models and configuration: NVIDIA A100-SXM4-80GB, on SLURM with MIG enabled.
Is there any limit to the input size when converting using torch_tensorrt? Any solution to this problem?
Thanks.
|
https://github.com/pytorch/TensorRT/issues/3307
|
open
|
[
"question"
] | 2024-11-29T16:01:14Z
| 2024-12-04T15:53:40Z
| null |
AndreaBrg
|
huggingface/diffusers
| 10,055
|
Training script for a Controlnet based on SD3 does not work
|
### Describe the bug
Hi @sayakpaul and all others :)
The training script for a Control-net based on Stable Diffusion 3 seems to not work.
**RuntimeError: Given groups=1, weight of size [1536, 17, 2, 2], expected input[4, 16, 64, 64] to have 17 channels, but got 16 channels instead**
I tried to follow the documentation on how to train a control net based on SD3.
I used a custom dataset that I also used to train a control net based on SD1.5.
Once I run the script, I receive a tensor channel mismatch error.
### Reproduction
!accelerate launch train_controlnet_sd3.py \
--pretrained_model_name_or_path="stabilityai/stable-diffusion-3-medium-diffusers" \
--output_dir="/home/xxx/models/v1/cn-stablediff-v3_out" \
--dataset_name="StudentYannik/v1-prepared-cn" \
--resolution=512 \
--learning_rate=1e-5 \
--max_train_steps=10000 \
--train_batch_size=4 \
--num_train_epochs=10 \
--gradient_accumulation_steps=4
### Logs
```shell
11/29/2024 14:35:32 - INFO - __main__ - Distributed environment: NO
Num processes: 1
Process index: 0
Local process index: 0
Device: cuda
Mixed precision type: no
You set `add_prefix_space`. The tokenizer needs to be converted from the slow tokenizers
You are using a model of type clip_text_model to instantiate a model of type . This is not supported for all configurations of models and can yield errors.
You are using a model of type clip_text_model to instantiate a model of type . This is not supported for all configurations of models and can yield errors.
You are using a model of type t5 to instantiate a model of type . This is not supported for all configurations of models and can yield errors.
{'base_image_seq_len', 'base_shift', 'max_image_seq_len', 'use_beta_sigmas', 'invert_sigmas', 'use_karras_sigmas', 'use_dynamic_shifting', 'max_shift', 'use_exponential_sigmas'} was not found in config. Values will be initialized to default values.
Downloading shards: 100%|██████████████████████| 2/2 [00:00<00:00, 12539.03it/s]
Loading checkpoint shards: 100%|██████████████████| 2/2 [00:09<00:00, 4.92s/it]
{'mid_block_add_attention'} was not found in config. Values will be initialized to default values.
{'dual_attention_layers', 'qk_norm'} was not found in config. Values will be initialized to default values.
11/29/2024 14:35:54 - INFO - __main__ - Initializing controlnet weights from transformer
{'dual_attention_layers', 'pos_embed_type', 'qk_norm', 'use_pos_embed', 'force_zeros_for_pooled_projection'} was not found in config. Values will be initialized to default values.
11/29/2024 14:36:14 - INFO - __main__ - ***** Running training *****
11/29/2024 14:36:14 - INFO - __main__ - Num examples = 150
11/29/2024 14:36:14 - INFO - __main__ - Num batches each epoch = 38
11/29/2024 14:36:14 - INFO - __main__ - Num Epochs = 1000
11/29/2024 14:36:14 - INFO - __main__ - Instantaneous batch size per device = 4
11/29/2024 14:36:14 - INFO - __main__ - Total train batch size (w. parallel, distributed & accumulation) = 16
11/29/2024 14:36:14 - INFO - __main__ - Gradient Accumulation steps = 4
11/29/2024 14:36:14 - INFO - __main__ - Total optimization steps = 10000
Steps: 0%| | 0/10000 [00:00<?, ?it/s]Traceback (most recent call last):
File "/home/xxxx/repos/control-net/diffusers/examples/controlnet/train_controlnet_sd3.py", line 1412, in <module>
main(args)
File "/home/xxxx/repos/control-net/diffusers/examples/controlnet/train_controlnet_sd3.py", line 1278, in main
control_block_res_samples = controlnet(
File "/home/xxxx/repos/control-net/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/xxxx/repos/control-net/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
File "/home/xxxx/repos/control-net/diffusers/src/diffusers/models/controlnets/controlnet_sd3.py", line 365, in forward
hidden_states = hidden_states + self.pos_embed_input(controlnet_cond)
File "/home/xxxx/repos/control-net/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/xxxx/repos/control-net/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
File "/home/xxxx/repos/control-net/diffusers/src/diffusers/models/embeddings.py", line 266, in forward
latent = self.proj(latent)
File "/home/xxxx/repos/control-net/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/xxxx/repos/control-net/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
File "
|
https://github.com/huggingface/diffusers/issues/10055
|
open
|
[
"bug",
"stale"
] | 2024-11-29T13:46:29Z
| 2025-02-03T15:03:46Z
| 17
|
Putzzmunta
|
huggingface/diffusers
| 10,050
|
Is there any img2img KDiffusion equivalent of StableDiffusionKDiffusionPipeline?
|
### Model/Pipeline/Scheduler description
I'm working on result alignment between diffusers and A1111 webui.
For txt2img, I can achieve this via `StableDiffusionKDiffusionPipeline`; see https://github.com/huggingface/diffusers/issues/3253.
But for img2img, is there any equivalent KDiffusion pipeline?
I'm also trying to implement this by merging `StableDiffusionKDiffusionPipeline` and `StableDiffusionImg2ImgPipeline` together.
Any clarification and help is appreciated.
### Open source status
- [ ] The model implementation is available.
- [ ] The model weights are available (Only relevant if addition is not a scheduler).
### Provide useful links for the implementation
_No response_
|
https://github.com/huggingface/diffusers/issues/10050
|
open
|
[
"stale"
] | 2024-11-29T07:47:11Z
| 2024-12-29T15:03:05Z
| 2
|
juju812
|
huggingface/diffusers
| 10,043
|
F5-TTS Integration
|
### Model/Pipeline/Scheduler description
F5-TTS is a fully non-autoregressive text-to-speech system based on flow matching with Diffusion Transformer (DiT).
It has excellent voice cloning capabilities, and audio generation is of quite high quality.
### Open source status
- [X] The model implementation is available.
- [X] The model weights are available (Only relevant if addition is not a scheduler).
### Provide useful links for the implementation
Paper - https://arxiv.org/abs/2410.06885
Code - https://github.com/SWivid/F5-TTS?tab=readme-ov-file
Weights - https://huggingface.co/SWivid/F5-TTS
Author - @SWivid
|
https://github.com/huggingface/diffusers/issues/10043
|
open
|
[
"help wanted",
"contributions-welcome"
] | 2024-11-28T11:14:18Z
| 2025-11-02T18:46:02Z
| 11
|
nityanandmathur
|
pytorch/pytorch
| 141,746
|
How to specify the port for processes with rank > 1 in the Gloo communication backend?
|
In PyTorch, when performing distributed training with Gloo as the communication backend, you only need to specify master_addr and master_port; the other processes connect actively and use random ports for initialization. Is it possible for the other processes to initialize using a specified port?
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
|
https://github.com/pytorch/pytorch/issues/141746
|
open
|
[
"oncall: distributed",
"triaged"
] | 2024-11-28T02:07:49Z
| 2024-12-19T03:52:52Z
| null |
tecaccc
|
huggingface/lerobot
| 533
|
How to merge multiple recorded datasets?
|
Hi, thank you so much for the automatic resume during data recording; sometimes unstable cameras or other situations (e.g. not having enough time to finish recording) can cause the process to stop.
I was wondering, is there any way to merge multiple recorded datasets? For instance, I have two datasets, 'cube grabbing' and 'cylinder grabbing', each recorded with 50 episodes in the same environment. Is there a tutorial on how to merge them into one larger 100-episode dataset?
BTW, another reason for merging datasets is that storage usage is extremely high before video encoding, so recording a large dataset in one go can be limited by storage, while merging several already-encoded datasets would mitigate this problem.
Thanks
|
https://github.com/huggingface/lerobot/issues/533
|
closed
|
[
"question",
"dataset"
] | 2024-11-28T01:53:28Z
| 2025-10-08T08:33:31Z
| null |
mydhui
|
huggingface/transformers
| 34,981
|
How to Log Training Loss at Step Zero in Hugging Face Trainer or SFT Trainer?
|
### Feature request
log train loss on start
----
I'm using the Hugging Face `Trainer` (or `SFTTrainer`) for fine-tuning, and I want to log the training loss at step 0 (before any training steps are executed). I know there's an `eval_on_start` option for evaluation, but I couldn't find a direct equivalent for logging the training loss at the beginning of training.
Is there a way to log the initial training loss at step zero (before any updates) using `Trainer` or `SFTTrainer`? Ideally, I'd like something similar to `eval_on_start`.
Here’s what I’ve tried so far:
#### Solution 1: Custom Callback
I implemented a custom callback to log the training loss at the start of training:
```python
import wandb
from transformers import TrainerCallback

class TrainOnStartCallback(TrainerCallback):
    def on_train_begin(self, args, state, control, logs=None, **kwargs):
        # Log training loss at step 0
        logs = logs or {}
        logs["train/loss"] = None  # Replace None with an initial value if available
        logs["train/global_step"] = 0
        self.log(logs)

    def log(self, logs):
        print(f"Logging at start: {logs}")
        wandb.log(logs)

# Adding the callback to the Trainer
trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    args=training_args,
    optimizers=(optimizer, scheduler),
    callbacks=[TrainOnStartCallback()],
)
```
This works but feels like overkill. It logs metrics at the start of training, before any steps are taken.
#### Solution 2: Manual Logging
Alternatively, I manually log the training loss before starting training:
```python
wandb.log({"train/loss": None, "train/global_step": 0})
trainer.train()
```
### Question:
Are there any built-in features in `Trainer` or `SFTTrainer` to log the training loss at step zero? Or is a custom callback or manual logging the best solution here? If so, are there better ways to implement this functionality, similar to `eval_on_start` but as a `train_on_start`?
cross: https://discuss.huggingface.co/t/how-to-log-training-loss-at-step-zero-in-hugging-face-trainer-or-sft-trainer/128188
### Motivation
Crucial sanity check
### Your contribution
yes, happy to implement this.
|
https://github.com/huggingface/transformers/issues/34981
|
open
|
[
"Feature request"
] | 2024-11-28T00:24:43Z
| 2024-11-29T07:35:28Z
| null |
brando90
|
huggingface/transformers.js
| 1,055
|
Support for Typescript docs
|
### Question
I have been trying to implement server-side sentiment analysis using this [tutorial](https://huggingface.co/docs/transformers.js/main/en/tutorials/next#prerequisites), but it's in JavaScript. I looked through the docs, but there seems to be no information on implementing it with TypeScript. So far I have integrated TypeScript, but there is one error that is difficult to fix. This is what I have implemented so far:
pipeline.ts
```ts
import { pipeline, PipelineType } from "@huggingface/transformers";
// Use the Singleton pattern to enable lazy construction of the pipeline.
// NOTE: We wrap the class in a function to prevent code duplication (see below).
const P = () => class PipelineSingleton {
  static task: PipelineType = 'text-classification';
  static model = 'Xenova/distilbert-base-uncased-finetuned-sst-2-english';
  static instance: PipelineSingleton | null = null;

  // eslint-disable-next-line @typescript-eslint/no-unsafe-function-type
  static async getInstance(progress_callback: Function | undefined = undefined) {
    if (!this.instance) {
      this.instance = pipeline(this.task, this.model, { progress_callback });
    }
    return this.instance;
  }
}

let PipelineSingleton: ReturnType<typeof P>;
if (process.env.NODE_ENV !== 'production') {
  // When running in development mode, attach the pipeline to the
  // global object so that it's preserved between hot reloads.
  // For more information, see https://vercel.com/guides/nextjs-prisma-postgres
  const globalWithPipeline = global as typeof global & { PipelineSingleton: ReturnType<typeof P> };
  if (!globalWithPipeline.PipelineSingleton) {
    globalWithPipeline.PipelineSingleton = P();
  }
  PipelineSingleton = globalWithPipeline.PipelineSingleton;
} else {
  PipelineSingleton = P();
}
export default PipelineSingleton;
```
request.ts
```ts
import { NextResponse } from 'next/server'
import PipelineSingleton from './pipeline';
export async function GET(request: Request) {
  // Extract the text parameter from the query string
  const url = new URL(request.url);
  const text = url.searchParams.get('text');
  if (!text) {
    return NextResponse.json({
      error: 'Missing text parameter',
    }, { status: 400 });
  }

  // Get the classification pipeline. When called for the first time,
  // this will load the pipeline and cache it for future use.
  const classifier = await PipelineSingleton.getInstance(); // SHOWS THE ERROR - Type 'PipelineSingleton' has no call signatures.ts(2349)

  // Actually perform the classification
  const result = await classifier(text);
  return NextResponse.json(result);
}
```
The problem is in the route handler (request.ts above) when calling the classifier method. TypeScript shows the error:
> This expression is not callable.
> Type 'PipelineSingleton' has no call signatures.ts(2349)
So this probably means that my TypeScript implementation is incorrect for the pipeline. Would appreciate any help on this. TIA.
|
https://github.com/huggingface/transformers.js/issues/1055
|
open
|
[
"question"
] | 2024-11-26T21:38:54Z
| 2024-11-27T02:20:59Z
| null |
SadmanYasar
|
huggingface/datasets
| 7,299
|
Efficient Image Augmentation in Hugging Face Datasets
|
### Describe the bug
I'm using the Hugging Face datasets library to load images in batches and would like to apply a torchvision transform to handle the inconsistent image sizes in the dataset and do some on-the-fly image augmentation. The only approach I can think of is the collate_fn, but that seems quite inefficient.
I'm new to the Hugging Face datasets library, and I didn't find anything about this in the documentation or in the issues here on GitHub.
Is there an existing way to add image transformations directly to the dataset loading pipeline?
### Steps to reproduce the bug
```python
from datasets import load_dataset
from torch.utils.data import DataLoader

def collate_fn(batch):
    images = [item['image'] for item in batch]
    texts = [item['text'] for item in batch]
    return {
        'images': images,
        'texts': texts
    }

dataset = load_dataset("Yuki20/pokemon_caption", split="train")
dataloader = DataLoader(dataset, batch_size=4, collate_fn=collate_fn)
# Output shows varying image sizes:
# [(1280, 1280), (431, 431), (789, 789), (769, 769)]
```
### Expected behavior
I'm looking for a way to resize images on-the-fly when loading the dataset, similar to PyTorch's Dataset.__getitem__ functionality. This would be more efficient than handling resizing in the collate_fn.
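For context, this is roughly the kind of on-the-fly hook I am hoping for; a minimal sketch using `with_transform` (untested here, and it assumes the image column is named `image` as in the snippet above):
```python
from datasets import load_dataset
from torchvision import transforms

# Resize + tensor conversion, applied lazily at access time
tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])

def apply_transforms(batch):
    # the transform receives a batch dict; resize each PIL image on the fly
    batch["pixel_values"] = [tfm(img.convert("RGB")) for img in batch["image"]]
    return batch

dataset = load_dataset("Yuki20/pokemon_caption", split="train")
dataset = dataset.with_transform(apply_transforms)

sample = dataset[0]  # the transform runs here, lazily
```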
### Environment info
- `datasets` version: 3.1.0
- Platform: Linux-6.5.0-41-generic-x86_64-with-glibc2.35
- Python version: 3.11.10
- `huggingface_hub` version: 0.26.2
- PyArrow version: 18.0.0
- Pandas version: 2.2.3
- `fsspec` version: 2024.9.0
|
https://github.com/huggingface/datasets/issues/7299
|
open
|
[] | 2024-11-26T16:50:32Z
| 2024-11-26T16:53:53Z
| 0
|
fabiozappo
|
huggingface/lerobot
| 527
|
Is there a `select_actions` abstraction?
|
This line references a `select_actions` function which doesn't seem to exist. This functionality (abstracting away access to the future action queue, instead of just returning the first action) would be useful. Did it exist at some point, or will it in the future?
https://github.com/huggingface/lerobot/blob/96c7052777aca85d4e55dfba8f81586103ba8f61/lerobot/common/policies/act/modeling_act.py#L102
|
https://github.com/huggingface/lerobot/issues/527
|
closed
|
[
"question",
"policies",
"stale"
] | 2024-11-26T14:22:31Z
| 2025-10-08T08:33:51Z
| null |
genemerewether
|
huggingface/diffusers
| 10,025
|
attention mask for transformer Flux
|
### Describe the bug
Is it possible to get back the `attention_mask` argument in the flux attention processor
```
hidden_states = F.scaled_dot_product_attention(query, key, value, dropout_p=0.0, is_causal=False,attn_mask=attention_mask)
```
https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py#L1910
in order to tweak things a bit? Otherwise the `attention_mask` argument is unused.
Thanks a lot
### Reproduction
pip install diffusers
### Logs
_No response_
### System Info
Ubuntu
### Who can help?
@yiyixuxu @sayakpaul @DN6 @asomoza
|
https://github.com/huggingface/diffusers/issues/10025
|
closed
|
[
"bug"
] | 2024-11-26T08:51:20Z
| 2024-12-05T00:22:37Z
| 19
|
christopher5106
|
huggingface/accelerate
| 3,263
|
How to load checkpoint shards one by one to avoid OOM error?
|
### System Info
```Shell
- `Accelerate` version: 1.1.0
- Platform: Linux-5.10.112-005.ali5000.al8.x86_64-x86_64-with-glibc2.17
- `accelerate` bash location: /home/admin/anaconda3/envs/llama_factory/bin/accelerate
- Python version: 3.10.14
- Numpy version: 1.26.4
- PyTorch version (GPU?): 2.4.1+cu121 (True)
- PyTorch XPU available: False
- PyTorch NPU available: False
- PyTorch MLU available: False
- PyTorch MUSA available: False
- System RAM: 128.00 GB
- GPU type: NVIDIA H20
- `Accelerate` default config:
- compute_environment: LOCAL_MACHINE
- distributed_type: MULTI_GPU
- mixed_precision: no
- use_cpu: False
- debug: False
- num_processes: 8
- machine_rank: 0
- num_machines: 1
- gpu_ids: all
- rdzv_backend: static
- same_network: True
- main_training_function: main
- enable_cpu_affinity: False
- downcast_bf16: no
- tpu_use_cluster: False
- tpu_use_sudo: False
- tpu_env: []
```
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] One of the scripts in the examples/ folder of Accelerate or an officially supported `no_trainer` script in the `examples` folder of the `transformers` repo (such as `run_no_trainer_glue.py`)
- [X] My own task or dataset (give details below)
### Reproduction
My code can run on 1/2/3/4 GPU(s), but errors occur when I try to use more GPUs.
The command I use :
`accelerate launch --multi_gpu --gpu_ids 0,1,2,3,4,5,6,7,8 --num_processes 8 --main_process_port 2525 ./train_args_multi.py --batch_size 4 --save_name tmp_model_multi`
The code where errors occur:
```
accelerator = Accelerator()
device = accelerator.device
print('Device: ', device)
model = MyModel(path=path, device=device).to(device)
random.seed(seed)
torch.manual_seed(seed)
np.random.seed(seed)
train_data, train_loader = data_provider(train_data_path, batch_size, num_workers=num_workers, flag='train')
test_data, test_loader = data_provider(test_data_path, batch_size, num_workers=num_workers, flag='test')
model_optim = optim.Adam(trained_parameters, lr=learning_rate)
print('Preparing for accelerator...')
model, model_optim, train_loader, test_loader = accelerator.prepare(model, model_optim, train_loader, test_loader)
```
### Expected behavior
Errors occur when loading checkpoint shards (as the bar shows below):
```
$accelerate launch --multi_gpu --num_processes 8 --gpu_ids 0,1,2,3,4,5,6,7 --main_process_port 25252 ./train_args_multi.py --batch_size 4 --save_name tmp_model_multi
Device: cuda:0
Device: cuda:6
Loading checkpoint shards: 0%| | 0/4 [00:00<?, ?it/s$
Device: cuda:5
Device: cuda:3
Device: cuda:4
Device: cuda:7
Device: cuda:1
Device: cuda:2
Loading checkpoint shards: 50%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████ | 2/4 [00:11<00:12, 6....r(args)
File "/home/admin/anaconda3/envs/llama_factory/lib/python3.10/site-packages/accelerate/commands/launch.py", line 793, in multi_gpu_launcher
distrib_run.run(args)
File "/home/admin/anaconda3/envs/llama_factory/lib/python3.10/site-packages/torch/distributed/run.py", line 892, in run
elastic_launch(
File "/home/admin/anaconda3/envs/llama_factory/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 133, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/home/admin/anaconda3/envs/llama_factory/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 264, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
======================================================
./train_args_multi.py FAILED
------------------------------------------------------
Failures:
<NO_OTHER_FAILURES>
------------------------------------------------------
Root Cause (first observed failure):
[0]:
time : 2024-11-26_16:17:47
host : pe-resource-pool033093226243.center
rank : 5 (local_rank: 5)
exitcode : -9 (pid: 84403)
error_file: <N/A>
traceback : Signal 9 (SIGKILL) received by PID 84403
======================================================
(llama_factory)
```
I found that the memory ran out (not CUDA memory) when loading the models b
|
https://github.com/huggingface/accelerate/issues/3263
|
closed
|
[] | 2024-11-26T08:25:37Z
| 2025-01-06T15:06:50Z
| null |
amoyplane
|
pytorch/torchtitan
| 700
|
Is `autocast` needed with FSDP2?
|
Hi, is it necessary to wrap the forward pass in `autocast` when using FSDP2? I noticed that the `torchtitan` training loop does not.
If I wrap in `torch.autocast(device_type="cuda", dtype=torch.bfloat16)` my matmuls will be `bfloat16`, but my softmaxes (say) will be in `float32`. This behavior requires the autocast wrapper:
```python
t = torch.randn(100, device="cuda", dtype=torch.bfloat16)
with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
    out = t.softmax(dim=-1)
out.dtype # torch.float32
# Without autocast:
t.softmax(dim=-1).dtype # torch.bfloat16
```
This is the usual way to do DDP or non-distributed mixed-precision training.
It seems to me that this behavior is lost in the `torchtitan` training loop which doesn't use the `autocast` [context manager](https://github.com/garrett361/torchtitan/blob/3247841423429faf37bdf6918204350db293e482/train.py#L308-L314). Is this not true? Does FSDP2 somehow still perform the upcast for the usual upcasted amp ops like softmax? Not seeing how it might do so, and can't test easily at the moment.
I believe I correctly understand that `MixedPrecisionPolicy` controls the `dtype`s that weights are held in, reductions are performed in, and whether to cast a given module's outputs to a certain `dtype`, but that is all orthogonal to the dispatcher flags that `autocast` controls, IIUC.
Relates to #600 and #591. Also, I believe [OLMo uses autocast with FSDP](https://github.com/allenai/OLMo/blob/9c677c90cc881c37787c71373836d6889ad4de4a/olmo/train.py#L799-L809), but that is FSDP1 last time I checked.
CC @awgu
|
https://github.com/pytorch/torchtitan/issues/700
|
closed
|
[
"question"
] | 2024-11-25T22:32:13Z
| 2024-12-05T15:51:06Z
| null |
garrett361
|
pytorch/vision
| 8,749
|
Pretrained weights for ResNet[18, 34, 50, 101] are incorrect
|
### 🐛 Describe the bug
Hi,
I have been trying to run the pretrained ResNet models. The model weights seem to be incorrect. Below is code to reproduce the erroneous results:
```
import torch
from torchvision.models import resnet18, ResNet18_Weights
from PIL import Image

resnet = resnet18(weights=ResNet18_Weights.IMAGENET1K_V1)
preprocess = ResNet18_Weights.IMAGENET1K_V1.transforms()

# !wget "https://github.com/pytorch/hub/raw/master/images/dog.jpg"
input_image = Image.open('dog.jpg')
# !wget https://upload.wikimedia.org/wikipedia/commons/b/b6/Felis_catus-cat_on_snow.jpg -O cat.jpg
# input_image = Image.open('cat.jpg')

input_tensor = preprocess(input_image)
input_batch = input_tensor.unsqueeze(0)  # create a mini-batch as expected by the model

with torch.no_grad():
    output = resnet(input_batch)
probabilities = torch.nn.functional.softmax(output[0], dim=0)

# !wget https://raw.githubusercontent.com/pytorch/hub/master/imagenet_classes.txt
with open("imagenet_classes.txt", "r") as f:
    categories = [s.strip() for s in f.readlines()]

# Show top categories per image
top5_prob, top5_catid = torch.topk(probabilities, 5)
for i in range(top5_prob.size(0)):
    print(categories[top5_catid[i]], top5_prob[i].item())
```
Output is the same for both "cat.jpg" and "dog.jpg" for ResNet18:
```
bucket 0.008743884041905403
plunger 0.006772771943360567
hook 0.005883160978555679
paper towel 0.005243286956101656
ashcan 0.005110109690576792
```
These predictions are clearly incorrect. Through non-comprehensive testing, the garbage output occurs for the following model weights:
```
ResNet18_Weights.IMAGENET1K_V1
ResNet34_Weights.IMAGENET1K_V1
ResNet50_Weights.IMAGENET1K_V1
ResNet101_Weights.IMAGENET1K_V1
```
while the output for the following model weights are correct:
```
ResNet50_Weights.IMAGENET1K_V2
ResNet101_Weights.IMAGENET1K_V2
```
My guess is that the pretrained weight files are linked incorrectly for the V1 models.
### Versions
Collecting environment information...
PyTorch version: 2.5.1
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Rocky Linux 9.4 (Blue Onyx) (x86_64)
GCC version: (GCC) 11.4.1 20231218 (Red Hat 11.4.1-3)
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.34
Python version: 3.12.3 | packaged by conda-forge | (main, Apr 15 2024, 18:38:13) [GCC 12.3.0] (64-bit runtime)
Python platform: Linux-5.14.0-427.42.1.el9_4.x86_64-x86_64-with-glibc2.34
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A10
GPU 1: NVIDIA A10
Nvidia driver version: 550.54.15
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 96
On-line CPU(s) list: 0-95
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 6342 CPU @ 2.80GHz
CPU family: 6
Model: 106
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 2
Stepping: 6
CPU(s) scaling MHz: 98%
CPU max MHz: 3500.0000
CPU min MHz: 800.0000
BogoMIPS: 5600.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect wbnoinvd dtherm ida arat pln pts vnmi avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid fsrm md_clear pconfig flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 2.3 MiB (48 instances)
L1i cache: 1.5 MiB (48 instances)
L2 cache:
|
https://github.com/pytorch/vision/issues/8749
|
closed
|
[] | 2024-11-25T22:17:58Z
| 2024-11-27T18:24:38Z
| 3
|
longyuxi
|
huggingface/lerobot
| 525
|
Train a RL agent (without initial dataset)
|
Hi,
I'm currently working on trying to integrate the following environment in the repo : https://github.com/perezjln/gym-lowcostrobot
I would like to use it for learning a RL agent in sim and try it out on the real robot after.
However, the current training script requires a local or online pre-recorded dataset. Is there a way to avoid this and pass an option to not load a dataset?
Thank you in advance
|
https://github.com/huggingface/lerobot/issues/525
|
closed
|
[
"enhancement",
"question",
"simulation"
] | 2024-11-25T20:02:38Z
| 2025-04-07T16:19:01Z
| null |
alexcbb
|
huggingface/chat-ui
| 1,592
|
Add Markdown support for user messages
|
## Describe your feature request
In PR #1562, a WYSIWYG editor was added to the text input area; however, when a message is sent, it is displayed as unrendered markdown. The idea is to use `marked` to conditionally render certain elements in the user's sent message as markdown and leave others untouched.
The WYSIWYG editor currently converts the following into markdown:
- bold
- italic
- code blocks
- code spans
The sent user messages should display those specific elements converted into markdown, and leave the rest untouched and unconverted, such as headings.
## Screenshots
An example of how a user message is currently displayed:

## Implementation idea
The idea is to create a custom `renderer` which might be done using `marked` to be used when the message sender is the `user`.
The renderer allows certain modifications, such as explicitly specifying what it should and should not convert, something like:
```typescript
const renderer = new marked.Renderer();
renderer.list = (body, _ordered) => {
  return body;
};
renderer.heading = (text: string, _level: number) => {
  return text;
};
// continue to disable unwanted features
// enable what we need
renderer.code = (code: string) => `<pre><code>${code}</code></pre>`;
renderer.codespan = (text: string) => `<code>${text}</code>`;
renderer.strong = (text: string) => `<strong>${text}</strong>`;
renderer.em = (text: string) => `<em>${text}</em>`;
```
However any other implementation ideas are welcome!
|
https://github.com/huggingface/chat-ui/issues/1592
|
open
|
[
"enhancement"
] | 2024-11-25T17:26:10Z
| 2024-11-27T20:42:19Z
| 2
|
Mounayer
|
huggingface/accelerate
| 3,260
|
How to Properly Resume Multi-GPU Training with accelerate launch Without OOM or Loss Issues?
|
I encountered an issue while running multi-GPU training using `accelerate launch`. I am using 4 GPUs for training, and during the process, I save my model state using:
```python
accelerator.save_state(state_path)
```
Later, I attempt to resume training by loading the model parameters with:
```python
accelerator.load_state(state_path)
```
However, when I start training again, I observe multiple strange processes on the first GPU, which causes an OOM (out of memory) error, as shown in the attached figure.
To address this, I tried adding the following line before:
```python
accelerator.load_state(state_path)
```
The updated code looks like this:
```python
if self.accelerator.is_main_process:
self.accelerator.load_state(state_path)
```
I then used:
```python
accelerator.wait_for_everyone()
```
afterward to synchronize the model state across all four GPUs. While this resolved the issue of multiple processes on the first GPU, the model's loss increases significantly. It seems that the trained weights are not being properly synchronized across all GPUs.
Could anyone please suggest how to correctly resume training in a multi-GPU setup with `accelerate launch`, ensuring the model weights are properly loaded and synchronized across all devices? Thank you!


|
https://github.com/huggingface/accelerate/issues/3260
|
closed
|
[] | 2024-11-25T17:19:06Z
| 2025-05-29T10:26:13Z
| null |
tqxg2018
|
pytorch/xla
| 8,413
|
Review documentation in the docs/source/contribute directory
|
## 📚 Documentation
Review content in the docs/source/learn directory to improve readability and ensure it aligns with Google documentation standards.
|
https://github.com/pytorch/xla/issues/8413
|
closed
|
[
"documentation"
] | 2024-11-25T17:13:51Z
| 2025-06-02T21:59:49Z
| 2
|
mikegre-google
|
huggingface/chat-ui
| 1,589
|
Models using OpenAI endpoint have caching enabled
|
When using models that are currently using the OpenAI endpoint type on HuggingChat (Nemotron, llama 3.2, qwen coder) they seem to have caching enabled.
This means retrying will just reload the previous response extremely quickly. This is not the intended behaviour and does not match what is happening when using the TGI endpoint.
|
https://github.com/huggingface/chat-ui/issues/1589
|
closed
|
[
"huggingchat"
] | 2024-11-25T12:47:01Z
| 2025-03-12T12:56:00Z
| 1
|
nsarrazin
|
pytorch/pytorch
| 141,473
|
How to use torch.compile + HF model?
|
### 🐛 Describe the bug
Problem: there seem to be two ways of using torch.compile with an HF model, and neither works for all the ways model inference can be called: `generate()`, `forward()`, and `__call__()`.
## Option 1: `model = torch.compile(model)`
This works if we use either `forward()` or the `__call__()` methods. But, if we try to call the `.generate()` method (which is the more popular API for inferencing and calls `forward()` internally), we notice that we DON'T seem to be using the compiled model (ex. `TORCH_LOGS="dynamo"` gives no output).
Simple reproducible example (custom class with `generate` and `forward` like implementations):
```
import torch

class MyModule(torch.nn.Module):
    def __init__(self):
        super().__init__()

    def forward(self, input):
        # return average of the inputs
        return torch.Tensor(torch.sum(input)/len(input))

    def generate(self, max_tokens, input):
        for i in range(max_tokens):
            output = self(input)  # Doesn't work with either call or forward
            input = torch.cat((input, output.view(1)))
        return input

model = MyModule()
model = torch.compile(model)
input = torch.rand(4)
output = model.generate(input=input, max_tokens=3)  # THIS DOES NOT WORK!!!
#output = model.forward(input=input)  # THIS WORKS
```
or use any HF model compile followed by generate:
```
model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")
model = torch.compile(model)
output = model.generate(input_ids, max_new_tokens=100)
```
The problem is that the output of `torch.compile(model)` is an `OptimizedModule` object with the `__call__()` set to the compiled forward and `orig_mod` set to `model` itself.
When `compiled_model.generate()` is called, this accesses the generate through the `__getattr__()` function which gets the model's generate. That `generate` calls `self()`, which calls the original model's forward instead of the compiled forward.
## Option 2: `model.compile()`
The other option is to use the `torch.nn.Module`'s compile, which does an inplace modification where the compiled forward is stored in `_compiled_call_impl` variable and used when `__call__()` is done. But, this only works with the `__call__()` method and does NOT work with the `forward()` method. If the `generate()` internally uses call, then generate works.
```
model.compile()
output = model.generate(input=input, max_tokens=3) # Works
#output = model.forward(input_ids) # DOES NOT WORK
```
Problem is that neither of these approaches works with both `generate()` and `forward()` methods.
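A third variant I have been looking at, which I have not fully verified, is compiling only the bound `forward` method so that `__call__`, `forward()`, and therefore `generate()` all resolve to the compiled function. Sketch:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-125m")

# Replace the bound forward with its compiled version; __call__ dispatches to
# the instance attribute, so generate() should also hit the compiled path.
model.forward = torch.compile(model.forward)

input_ids = tokenizer("Hello", return_tensors="pt").input_ids
out_gen = model.generate(input_ids, max_new_tokens=8)
out_fwd = model(input_ids)
```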
As an aside, I tried a couple of unsuccessful possible fixes:
- Tried if Option 1 could be fixed somehow by setting the `orig_mod.forward` to the compiled forward but that causes infinite recursion because of the circular dependency
- I also tried changing `TorchDynamoContext.__call__()` (in `eval_frame.py`) in the nn.Module case, to internally do `model.compile` instead of creating an OptimizedModule. This fixes things slightly, for ex. Option 1 works if it generate uses `call` instead of `forward`, but obviously, not really a solution.
cc: @chanderg
### Error logs
_No response_
### Versions
Collecting environment information...
PyTorch version: 2.6.0.dev20241103+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.30.2
Libc version: glibc-2.35
Python version: 3.10.12 (main, Jul 29 2024, 16:56:48) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.14.0-284.73.1.el9_2.x86_64-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.6.68
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-80GB
GPU 1: NVIDIA A100-SXM4-80GB
GPU 2: NVIDIA A100-SXM4-80GB
GPU 3: NVIDIA A100-SXM4-80GB
GPU 4: NVIDIA A100-SXM4-80GB
GPU 5: NVIDIA A100-SXM4-80GB
GPU 6: NVIDIA A100-SXM4-80GB
GPU 7: NVIDIA A100-SXM4-80GB
Nvidia driver version: 525.105.17
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.4.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s):
|
https://github.com/pytorch/pytorch/issues/141473
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamo"
] | 2024-11-25T05:19:31Z
| 2024-11-26T04:21:05Z
| null |
SilverSoldier
|
huggingface/diffusers
| 10,004
|
how to use kohya sd-scripts flux loras with text encoder keys in diffusers?
|
The LoRA weights that result from setting train text encoder to true are incompatible with diffusers' load_lora_weights. The script networks/convert_flux_lora.py does not convert the text encoder keys either.
|
https://github.com/huggingface/diffusers/issues/10004
|
open
|
[
"contributions-welcome"
] | 2024-11-23T20:54:30Z
| 2025-03-16T15:39:25Z
| null |
neuron-party
|
pytorch/pytorch
| 141,422
|
What is "recompilation profiler" in doc? (Seems to have a dangling link)
|
### 📚 The doc issue
https://pytorch.org/docs/stable/torch.compiler_faq.html says:

But by clicking on it, it jumps to nowhere. I would appreciate it if I could know how to debug this excessive recompilation issue.
### Suggest a potential alternative/fix
_No response_
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
|
https://github.com/pytorch/pytorch/issues/141422
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamo"
] | 2024-11-23T06:01:44Z
| 2024-11-26T23:22:21Z
| null |
fzyzcjy
|
pytorch/torchtitan
| 696
|
[question] Need clarification on the purpose and performance benefits of GarbageCollection class
|
For the [impl](https://github.com/pytorch/torchtitan/blob/5525d7723175a1b4477bde3034a96f803b6c3fae/torchtitan/utils.py#L104)
I have several questions about the motivation and use cases for this class:
1. Could you provide examples of scenarios where this class improves performance compared to the default Python GC?
2. To my understanding, during backward, activation CUDA memory should be released in a timely manner as the backward pass walks the computational graph; does GarbageCollection affect how CUDA memory is released?
3. What are the tradeoffs of disabling automatic GC (gc.disable())?
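For context, a minimal sketch of the general pattern being asked about (stdlib-only; this is an illustration of the idea, not the torchtitan implementation):
```python
# Disable automatic GC and trigger collections at fixed step intervals, so collection
# pauses happen at predictable points instead of in the middle of an iteration.
import gc

class PeriodicGC:
    def __init__(self, every_n_steps: int = 100):
        self.every_n_steps = every_n_steps
        gc.disable()   # tradeoff: reference cycles are only reclaimed when collect() runs
        gc.collect()   # start from a clean state

    def step(self, step: int) -> None:
        if step > 0 and step % self.every_n_steps == 0:
            gc.collect()
```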
|
https://github.com/pytorch/torchtitan/issues/696
|
closed
|
[
"documentation",
"question"
] | 2024-11-23T04:39:20Z
| 2024-11-26T00:25:12Z
| null |
qsh-zh
|
huggingface/transformers.js
| 1,050
|
How to lengthen the Whisper max audio length?
|
### Question
I'm working from the [webgpu-whisper](https://github.com/huggingface/transformers.js/tree/main/examples/webgpu-whisper) demo, and I'm having a hard time lengthening the maximum audio input allowed. I made the following changes:
```js
-const MAX_AUDIO_LENGTH = 30; // seconds
+const MAX_AUDIO_LENGTH = 120; // seconds
-const MAX_NEW_TOKENS = 64;
+const MAX_NEW_TOKENS = 624;
```
This seems to allow for longer input, but after 30 seconds I get the following error:
```
Attempting to extract features for audio longer than 30 seconds. If using a pipeline to extract transcript from a long audio clip, remember to specify `chunk_length_s` and/or `stride_length_s`.
```
I can't seem to find where to add [stride_length_s](https://huggingface.co/docs/transformers.js/main/en/api/pipelines#pipelinesautomaticspeechrecognitionpipelinetype--code-promise--automaticspeechrecognitionoutputarray--automaticspeechrecognitionoutput----code) in the demo code, however. Could someone point me in the right direction?
|
https://github.com/huggingface/transformers.js/issues/1050
|
closed
|
[
"question"
] | 2024-11-22T17:50:50Z
| 2024-11-26T03:59:03Z
| null |
stinoga
|
huggingface/diffusers
| 9,996
|
Flux.1 cannot load standard transformer in nf4
|
### Describe the bug
loading different flux transformer models is fine except for nf4.
it works for the 1% of fine-tunes provided on Huggingface, but it doesn't work for the 99% of standard fine-tunes available on CivitAI.
example of such a model: <https://civitai.com/models/118111?modelVersionId=1009051>
*note* i'm using `FluxTransformer2DModel` directly as it's easiest for reproduction, plus the majority of flux fine-tunes are provided as transformer-only, not full models. but where a full model does exist, it's exactly the same problem using `FluxPipeline`
### Reproduction
```py
import torch
import bitsandbytes as bnb
import diffusers
print(f'torch=={torch.__version__} diffusers=={diffusers.__version__} bnb=={bnb.__version__}')
kwargs = { 'low_cpu_mem_usage': True, 'torch_dtype': torch.bfloat16, 'cache_dir': '/mnt/models/huggingface' }
files = [
'flux-c4pacitor_v2alpha-f1s-bf16.safetensors',
'flux-iniverse_v2-f1d-fp8.safetensors',
'flux-copax_timeless_xplus_mix2-nf4.safetensors',
]
for f in files:
    print(f)
    try:
        transformer = diffusers.FluxTransformer2DModel.from_single_file(f, **kwargs)
        print(transformer.__class__)
    except Exception as e:
        print(e)
        transformer = None
    torch.cuda.empty_cache()
```
### Logs
```shell
in `diffusers/loaders/single_file_utils.py:convert_flux_transformer_checkpoint_to_diffusers`
q, k, v, mlp = torch.split(checkpoint.pop(f"single_blocks.{i}.linear1.weight"), split_size, dim=0)
> RuntimeError: split_with_sizes expects split_sizes to sum exactly to 33030144 (input tensor's size at dimension 0), but got split_sizes=[3072, 3072, 3072, 12288]
```
### System Info
torch==2.5.1+cu124 diffusers==0.32.0.dev0 bnb==0.44.1
### Who can help?
@yiyixuxu @sayakpaul @DN6 @asomoza
|
https://github.com/huggingface/diffusers/issues/9996
|
open
|
[
"bug",
"wip"
] | 2024-11-22T16:55:11Z
| 2024-12-28T19:56:54Z
| 16
|
vladmandic
|
huggingface/diffusers
| 9,990
|
How to diagnose problems in training custom inpaint model
|
### Discussed in https://github.com/huggingface/diffusers/discussions/9989
<div type='discussions-op-text'>
<sup>Originally posted by **Marquess98** November 22, 2024</sup>
What I want to do is perform image inpainting when the input is a set of multimodal images, using sdxl as the pre-trained model. But the results are very poor right now, and I cannot determine whether it is a problem with the code, the dataset, the pre-trained model, or the training parameters.
The infer code snippet is as follows:
```python
noise_scheduler = DDIMScheduler.from_pretrained("stable-diffusion-v1-5/stable-diffusion-v1-5", subfolder="scheduler")
noise_scheduler.set_timesteps(denoise_steps, device=device)
zi = vae.encode(masked_image).latent_dist.sample()
# zi = vae.encode(masked_image).latent_dist.sample()
zi = zi * vae.config.scaling_factor
zd = vae.encode(img2).latent_dist.sample()
zd = zd * vae.config.scaling_factor
zi_m = vae.encode(masked_image).latent_dist.sample()
zi_m = zi_m * vae.config.scaling_factor
noise = torch.randn_like(zi)
denoise_steps = torch.tensor(denoise_steps, dtype=torch.int32, device=device)
timesteps_add, _ = get_timesteps(noise_scheduler, denoise_steps, 1.0, device, denoising_start=None)
start_step = 5
zi_t = noise_scheduler.add_noise(zi, noise, timesteps_add[start_step])
# mask = mask.unsqueeze(1)
m = F.interpolate(mask.to(zi.dtype), size=(zi.shape[2], zi.shape[3]),
                  mode='bilinear', align_corners=False)
input_ids = dataset["prompt_ids"].to(device)
input_ids = input_ids.unsqueeze(0)
encoder_hidden_states = text_encoder(input_ids, return_dict=False)[0]
timesteps = noise_scheduler.timesteps
iterable = tqdm(
    enumerate(timesteps),
    total=len(timesteps),
    leave=False,
    desc=" " * 4 + "Diffusion denoising",
)
# iterable = enumerate(timesteps)
start_step = 1
# -----------------------denoise------------------------
for i, t in iterable:
    if i >= start_step:
        unet_input = torch.cat([zi_t, zi_m, zd, m], dim=1)
        with torch.no_grad():
            noise_pred = unet(unet_input, t,
                              encoder_hidden_states)[0]
        zi_t = noise_scheduler.step(noise_pred, t, zi_t).prev_sample
        # torch.cuda.empty_cache()
decode_rgb = vae.decode(zi_t / vae.config.scaling_factor)
decode_rgb = decode_rgb['sample'].squeeze()
```
And the results for different start_steps are as follows (0, 5, 15 respectively):



Another weird thing is that the decode_rgb range is about [-2, 2]. Shouldn't its range be [-1, 1]?
Currently, I think the problem may lie in either the inference code or the scale of the dataset (about 5000 sets of images so far). Can someone guide me on how to determine which part is the problem?
Any suggestions and ideas will be greatly appreciated !!!!</div>
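As a hedged side note on the [-2, 2] range (an assumption about the cause, not something established in this thread): the VAE decoder output is not guaranteed to stay exactly in [-1, 1], so it is normally clamped and rescaled before visualization, e.g.:
```python
import torch

def latents_to_images(vae, latents):
    """Decode latents and map the decoder output from roughly [-1, 1] to [0, 1] for viewing."""
    decoded = vae.decode(latents / vae.config.scaling_factor).sample
    return (decoded / 2 + 0.5).clamp(0, 1)
```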
|
https://github.com/huggingface/diffusers/issues/9990
|
closed
|
[] | 2024-11-22T03:16:50Z
| 2024-11-23T13:37:53Z
| null |
Marquess98
|
pytorch/executorch
| 7,030
|
how to build a llama2 runner binary with vulkan backends in the server with intel x86 server
|
### 📚 The doc issue
https://pytorch.org/executorch/stable/native-delegates-executorch-vulkan-delegate.html
https://pytorch.org/executorch/stable/build-run-vulkan.html
Dear helpers, the documentation above describes how to build the LLaMA runner binary on Android with the Vulkan backend. However, I can't find how to build the LLaMA runner binary on an Intel x86 server with the Vulkan backend. Could you help me with this issue? Thank you in advance.
### Suggest a potential alternative/fix
_No response_
cc @SS-JIA @manuelcandales
|
https://github.com/pytorch/executorch/issues/7030
|
closed
|
[
"module: vulkan",
"triaged"
] | 2024-11-22T03:16:40Z
| 2025-12-18T21:39:49Z
| null |
l2002924700
|
pytorch/xla
| 8,405
|
Einsum is not added to the supported list for autocast
|
We noticed that einsum is not added to the supported ops list for the low-precision policy in autocast. Is there a reason for that? Does this op have some issues with autocast support?
|
https://github.com/pytorch/xla/issues/8405
|
closed
|
[
"enhancement"
] | 2024-11-21T17:25:01Z
| 2025-02-17T14:31:09Z
| 3
|
avizon-aws
|
pytorch/torchtitan
| 687
|
Question about FSDP2 + FP8 all gather
|
Does FSDP2 work with both FP8 allgather and FP8 linear?
|
https://github.com/pytorch/torchtitan/issues/687
|
closed
|
[
"question"
] | 2024-11-21T17:13:39Z
| 2024-11-21T23:52:06Z
| null |
sbhavani
|
huggingface/Google-Cloud-Containers
| 123
|
Querying PaliGemma VLMs
|
My collaborators and I are trying to use your very useful containers to deploy and use Google's PaliGemma models on GCS/Vertex. I was wondering what the best way is to query the model with images, especially if the images are stored locally. I see that there is an [example showing this for Llama Vision](https://github.com/huggingface/Google-Cloud-Containers/blob/main/examples/vertex-ai/notebooks/deploy-llama-vision-on-vertex-ai/vertex-notebook.ipynb), but it seems like you have to pass in the images as URLs, which may not be feasible for us.
We're getting some success by doing something like this, but unsure if that's the right way:
```py
image_path = "/PATH/rabbit.png"
with open(image_path, "rb") as f:
image = base64.b64encode(f.read()).decode("utf-8")
image = f"data:image/png;base64,{image}"
output = deployed_model.predict(
instances=[
{
"inputs":f"What is the animal wearing?",
"parameters":{"max_new_tokens": 100, "do_sample": False}
}
]
)
#> space suit
```
Please let me know if you need more details! Any assistance would be much appreciated!
|
https://github.com/huggingface/Google-Cloud-Containers/issues/123
|
closed
|
[
"question"
] | 2024-11-21T14:52:41Z
| 2024-12-04T16:31:01Z
| null |
kanishkamisra
|
huggingface/diffusers
| 9,983
|
When using StableDiffusionControlNetImg2ImgPipeline with enable_vae_tiling(), the tile size seems fixed at 512 x 512; where should I set the relevant parameters?
|
```
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
pipe = pipe.to("cuda")
prompt = "a beautiful landscape photograph"
pipe.enable_vae_tiling()
```
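A hedged sketch of where the tile size could be tuned (the attribute names below are assumptions based on AutoencoderKL's tiling implementation, not something confirmed in this issue):
```python
def enable_large_tile_vae(pipe, tile_px: int = 1024, overlap: float = 0.25):
    """Enable VAE tiling and (assumed) tune the tile size on the VAE itself."""
    pipe.enable_vae_tiling()
    pipe.vae.tile_sample_min_size = tile_px        # tile size in pixel space (default is ~512)
    pipe.vae.tile_latent_min_size = tile_px // 8   # matching latent-space tile size
    pipe.vae.tile_overlap_factor = overlap         # overlap between tiles to hide seams
    return pipe
```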
|
https://github.com/huggingface/diffusers/issues/9983
|
closed
|
[] | 2024-11-21T09:21:24Z
| 2024-12-02T08:32:52Z
| null |
reaper19991110
|
huggingface/datatrove
| 305
|
How to read text files
|
Hey all, is there any text reader in the repo?
I have text files where each line is a document/data sample.
Are there any readers that can read this kind of file directly?
|
https://github.com/huggingface/datatrove/issues/305
|
open
|
[] | 2024-11-21T06:55:21Z
| 2025-05-16T10:51:33Z
| null |
srinjoym-cerebras
|
huggingface/diffusers
| 9,979
|
flux img2img controlnet channels error
|
### Describe the bug
When I use flux's img2img controlnet for inference, a channel error occurs.
### Reproduction
```python
import numpy as np
import torch
import cv2
from PIL import Image
from diffusers.utils import load_image
from diffusers import FluxControlNetImg2ImgPipeline, FluxControlNetPipeline
from diffusers import FluxControlNetModel
from controlnet_aux import HEDdetector
base_model = "black-forest-labs/FLUX.1-dev"
controlnet_model = "Xlabs-AI/flux-controlnet-hed-diffusers"
controlnet = FluxControlNetModel.from_pretrained(
controlnet_model,
torch_dtype=torch.bfloat16,
use_safetensors=True,
)
pipe = FluxControlNetImg2ImgPipeline.from_pretrained(
base_model, controlnet=controlnet, torch_dtype=torch.bfloat16
)
pipe.load_lora_weights("./toonystarkKoreanWebtoonFlux_fluxLoraAlpha.safetensors")
pipe.enable_sequential_cpu_offload()
hed = HEDdetector.from_pretrained("lllyasviel/Annotators")
image_source = load_image("./03.jpeg")
control_image = hed(image_source)
control_image = control_image.resize(image_source.size)
if control_image.mode != 'RGB':
    control_image = control_image.convert('RGB')
control_image.save(f"./hed_03.png")
prompt = "bird, cool, futuristic"
image = pipe(
prompt,
image=image_source,
control_image=control_image,
control_guidance_start=0.2,
control_guidance_end=0.8,
controlnet_conditioning_scale=0.5,
num_inference_steps=50,
guidance_scale=6,
).images[0]
image.save("flux.png")
```
### Logs
```shell
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Cell In[13], line 2
1 prompt = "bird, cool, futuristic"
----> 2 image = pipe(
3 prompt,
4 image=image_source,
5 control_image=control_image,
6 control_guidance_start=0.2,
7 control_guidance_end=0.8,
8 controlnet_conditioning_scale=0.5,
9 num_inference_steps=50,
10 guidance_scale=6,
11 ).images[0]
12 image.save("flux.png")
File /opt/conda/lib/python3.11/site-packages/torch/utils/_contextlib.py:115, in context_decorator.<locals>.decorate_context(*args, **kwargs)
112 @functools.wraps(func)
113 def decorate_context(*args, **kwargs):
114 with ctx_factory():
--> 115 return func(*args, **kwargs)
File /opt/conda/lib/python3.11/site-packages/diffusers/pipelines/flux/pipeline_flux_controlnet_image_to_image.py:924, in FluxControlNetImg2ImgPipeline.__call__(self, prompt, prompt_2, image, control_image, height, width, strength, num_inference_steps, timesteps, guidance_scale, control_guidance_start, control_guidance_end, control_mode, controlnet_conditioning_scale, num_images_per_prompt, generator, latents, prompt_embeds, pooled_prompt_embeds, output_type, return_dict, joint_attention_kwargs, callback_on_step_end, callback_on_step_end_tensor_inputs, max_sequence_length)
921 controlnet_cond_scale = controlnet_cond_scale[0]
922 cond_scale = controlnet_cond_scale * controlnet_keep[i]
--> 924 controlnet_block_samples, controlnet_single_block_samples = self.controlnet(
925 hidden_states=latents,
926 controlnet_cond=control_image,
927 controlnet_mode=control_mode,
928 conditioning_scale=cond_scale,
929 timestep=timestep / 1000,
930 guidance=guidance,
931 pooled_projections=pooled_prompt_embeds,
932 encoder_hidden_states=prompt_embeds,
933 txt_ids=text_ids,
934 img_ids=latent_image_ids,
935 joint_attention_kwargs=self.joint_attention_kwargs,
936 return_dict=False,
937 )
939 guidance = (
940 torch.tensor([guidance_scale], device=device) if self.transformer.config.guidance_embeds else None
941 )
942 guidance = guidance.expand(latents.shape[0]) if guidance is not None else None
File /opt/conda/lib/python3.11/site-packages/torch/nn/modules/module.py:1511, in Module._wrapped_call_impl(self, *args, **kwargs)
1509 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]
1510 else:
-> 1511 return self._call_impl(*args, **kwargs)
File /opt/conda/lib/python3.11/site-packages/torch/nn/modules/module.py:1520, in Module._call_impl(self, *args, **kwargs)
1515 # If we don't have any hooks, we want to skip the rest of the logic in
1516 # this function, and just call forward.
1517 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1518 or _global_backward_pre_hooks or _global_backward_hooks
1519 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1520 return forward_call(*args, **kwargs)
1522 try:
1523 result = None
File /opt/conda/lib/python3.11/site-packages/accelerate/hooks.py:170, in add_hook_to_module.<locals>.new_forward(
|
https://github.com/huggingface/diffusers/issues/9979
|
closed
|
[
"bug",
"good first issue",
"help wanted",
"contributions-welcome"
] | 2024-11-21T03:39:12Z
| 2025-04-23T20:43:51Z
| 10
|
wen020
|
huggingface/diffusers
| 9,976
|
ControlNet broken from_single_file
|
### Describe the bug
controlnet loader from_single_file was originally added via #4084
and method `ControlNet.from_single_file()` works for non-converted controlnets.
but for controlnets in safetensors format that contain already converted state_dict, it errors out.
its not reasonable to expect from user to know what is the internal dict structure of the controlnet safetensors file
before he can use it.
even worse, some of the newer controlnets are distributed as single-file-only and are already in diffusers format
which makes them impossible to load in diffusers.
for example: <https://huggingface.co/Laxhar/noob_openpose/tree/main>
this issue was already mentioned several times, each time closed as "works as designed"
when in reality its just a failure that should be addressed as an issue.
see #8474 #9208 #8614 as examples of previous issues
### Reproduction
scenario-1: works with non-converted controlnet
```python
import torch
from diffusers import ControlNetModel
from huggingface_hub import hf_hub_download
local_path = hf_hub_download(repo_id='Aptronym/SDNext', filename='ControlNet11/controlnet11Models_canny.safetensors')
cn = ControlNetModel.from_single_file(local_path, torch_dtype=torch.float16)
print(cn.__class__)
```
scenario-2: fails for the majority of controlnets available on huggingface
```python
import torch
from diffusers import ControlNetModel
from huggingface_hub import hf_hub_download
local_path = hf_hub_download(repo_id='lllyasviel/sd_control_collection', filename='diffusers_xl_canny_small.safetensors')
cn = ControlNetModel.from_single_file(local_path, torch_dtype=torch.float16)
print(cn.__class__)
```
initial failure is nonsense
> OSError: stable-diffusion-v1-5/stable-diffusion-v1-5 does not appear to have a file named config.json.
what's making this worse is that SD15 and SDXL share the same `ControlNet` class, which causes some
confusion about which base repo to look up the config from.
e.g., here we're loading an SDXL controlnet and the error refers to the SD15 repo.
anyhow, trying to force correct config:
```py
cn = ControlNetModel.from_single_file(local_path, torch_dtype=torch.float16, config='diffusers/controlnet-canny-sdxl-1.0-small')
```
results in even worse nonsense failure during loading of state_dict:
> TypeError: is_floating_point(): argument 'input' (position 1) must be Tensor, not NoneType
### System Info
diffusers=0.32.0.dev0
python==3.12.3
torch==2.5.1+cu124
### Who can help?
@yiyixuxu @sayakpaul @DN6 @asomoza
|
https://github.com/huggingface/diffusers/issues/9976
|
closed
|
[
"bug"
] | 2024-11-20T13:46:14Z
| 2024-11-22T12:22:53Z
| 7
|
vladmandic
|
pytorch/xla
| 8,402
|
Kaggle Notebook: model return loss None on TPU
|
## ❓ Questions and Help
Hi, I recieved loss None when training model. Anyone can help?
Simple reproduct kaggle notebook [link](https://www.kaggle.com/code/liondude/notebook548442067d)
```
import os
import time
import pandas as pd
import numpy as np
from tqdm import tqdm
import datasets
import torch
import torch.nn as nn
import torch.optim as optim
import torch_xla as xla
import torch_xla.core.xla_model as xm
import torch_xla.distributed.xla_multiprocessing as xmp
from torch_xla.distributed.fsdp.utils import apply_xla_patch_to_nn_linear
import torch_xla.distributed.parallel_loader as pl
import torch_xla.core.xla_env_vars as xenv
import torch_xla.debug.metrics as met
import torch_xla.distributed.spmd.xla_sharding as xs
from torch_xla.distributed.spmd.xla_sharding import Mesh
import torch_xla.runtime as xr
import re
from datasets import Dataset, load_dataset
import transformers
from transformers.tokenization_utils_base import PreTrainedTokenizerBase, PaddingStrategy
from transformers import AutoConfig, AutoProcessor, AutoTokenizer, AutoModelForCausalLM, DataCollatorWithPadding
from peft import PeftModel, PeftConfig, get_peft_model, LoraConfig, TaskType
from transformers import logging as hf_logging
hf_logging.set_verbosity_error()
os.environ["PJRT_DEVICE"] = "TPU"
class CFG:
NUM_EPOCHS = 1
BATCH_SIZE = 24
DROPOUT = 0.05
MODEL_NAME = 'unsloth/Qwen2.5-7B-Instruct'
SEED = 2024
MAX_LENGTH = 4096
NUM_WARMUP_STEPS = 128
LR_MAX = 2e-4
NUM_LABELS = 3
LORA_RANK = 16
LORA_ALPHA = 16
LORA_MODULES = ['o_proj', 'v_proj',"q_proj", "k_proj"]
FLAGS = {'MAX_INPUT': 64,
'LOGGING_STEPS': 10,
'NUM_EPOCHS': 3,
'BATCH_SIZE': 24,
}
MAX_INPUT=128
MODEL = "unsloth/Qwen2.5-7B-Instruct"
def get_dataset():
tokenizer = AutoTokenizer.from_pretrained(CFG.MODEL_NAME)
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = 'right'
tokenizer.add_eos_token = True
# save tokenizer to load offline during inference
tokenizer.save_pretrained('tokenizer')
max_seq_length = 4096
tokenizer_x = AutoTokenizer.from_pretrained(CFG.MODEL_NAME, max_seq_length=max_seq_length)
tokenizer_x.pad_token_id = tokenizer.eos_token_id
df = datasets.load_dataset('stanfordnlp/imdb', split='train')
# df = df['train']
df = df.remove_columns(['label'])
def preprocess(tasks, train_mode=True):
return {"text": 'this is test'}
df = df.map(preprocess, batched = False, remove_columns=df.column_names)
print(df)
def preprocess_function(example):
x = tokenizer(example["text"], truncation=True, max_length=4096, padding='max_length')
return {
"input_ids": x.input_ids,
"labels": 0,
"attention_mask": x.attention_mask
}
data_train = df.map(preprocess_function, batched=False, num_proc=4).remove_columns(['text'])
return data_train, tokenizer, FLAGS
##############################################################################################################################################
def train(data_train, tokenizer, FLAGS):
# print('rank', rank)
N_SAMPLES = len(data_train)
STEPS_PER_EPOCH = N_SAMPLES // CFG.BATCH_SIZE
METRICS = {
'loss': [],
'accuracy': {'y_true': [], 'y_pred': [] }}
device = xm.xla_device()
print('device', device)
num_devices = xr.global_runtime_device_count() #8
model_axis = 1
mesh_shape = (1, num_devices // model_axis, model_axis) # 2x4 on v3-8, 2x2 on v4-8
device_ids = np.array(range(num_devices))
mesh = Mesh(device_ids, mesh_shape, ('dcn', 'data', 'model'))
print('world_size:', xm.xrt_world_size())
rng = torch.Generator().manual_seed(42)
training_loader = torch.utils.data.DataLoader(data_train,
batch_size=FLAGS['BATCH_SIZE'],
collate_fn=DataCollatorWithPadding(tokenizer=tokenizer),
# sampler=train_sampler,
drop_last=True, generator=rng)
sharding_spec = xs.ShardingSpec(mesh, (('dcn', 'data'), None))
xla_train_loader = pl.MpDeviceLoader(training_loader,
device = xm.xla_device(),
input_sharding=sharding_spec,
device_prefetch_size=16
)
base_model = AutoModelForCausalLM.from_pretrained(MODEL, torch_dtype=torch.bfloat16)
base_model.config.pretraining_tp = 1
tokenizer.pad_token = tokenizer.eos_token # If pad_token is not set
base_model.config.pad_token_id = tokenizer.pad_token_id # Ensure the model respects the pad_token
|
https://github.com/pytorch/xla/issues/8402
|
closed
|
[
"question"
] | 2024-11-20T09:50:51Z
| 2025-02-17T14:32:56Z
| null |
hiwamk
|
pytorch/pytorch
| 141,118
|
Dynamo: how to deal with multiple inheritance (nn.Module/MutableMapping)?
|
### 🐛 Describe the bug
TensorDict is a MutableMapping object, and is treated as such by torch.compile:
```python
import torch
from tensordict import TensorDict
td = TensorDict(a=1, b=2, c=True)
@torch.compile(fullgraph=True)
def add1(td):
    return TensorDict(**td)+1
add1(td)
```
We also have a `TensorDictParams` primitive that acts a bit like ParameterList: it is a TensorDict but also an nn.Module. That's useful when you want to set a TensorDict in an nn.Module and have the leaf tensors included in the state_dict, or dispatch ops like `module.to(...)` to the tensors it contains. However, `_dynamo` looks at it like an nn.Module and not a MutableMapping
```python
import torch
from tensordict import TensorDictParams, TensorDict
td = TensorDictParams(TensorDict(a=1, b=2, c=True))
@torch.compile(fullgraph=True)
def add1(td):
    return TensorDict(**td)+1
add1(td)
```
breaks with
```
File "/Users/vmoens/venv/rl/lib/python3.10/site-packages/torch/_dynamo/variables/dicts.py", line 357, in call_method
dict_vt = BuiltinVariable.call_custom_dict(tx, dict, args[0])
File "/Users/vmoens/venv/rl/lib/python3.10/site-packages/torch/_dynamo/variables/builtin.py", line 1432, in call_custom_dict
unimplemented(f"{user_cls.__name__}(): {args} {kwargs}")
File "/Users/vmoens/venv/rl/lib/python3.10/site-packages/torch/_dynamo/exc.py", line 313, in unimplemented
raise Unsupported(msg, case_name=case_name)
torch._dynamo.exc.Unsupported: dict(): (UnspecializedNNModuleVariable(TensorDictParams),) {}
```
My understanding is that `call_custom_dict` looks at the arg, and in one case it's a `variables.MutableMappingVariable`, which is fine, but in the other it's an `UnspecializedNNModuleVariable`, which isn't a mutable mapping.
So I guess my question is (other than how can we fix this) how does dynamo look at multiple inheritance? Shouldn't there be a way to tell "look, this isn't a bird or a fish but a fish that can fly"?
(note that in this specific case, `smth(**obj)` will call `obj.keys()` followed by `obj.__getitem__` which are ops that compile is happy about - maybe that's what `call_custom_dict` should be doing?)
Here is a MRE:
```python
import torch
from torch import nn
import collections
# class MyWeirdDict(collections.abc.MutableMapping): # Works
class MyWeirdDict(collections.abc.MutableMapping, nn.Module): # breaks
    def __init__(self, **kwargs):
        super().__init__()
        self._items = kwargs
    def keys(self):
        return self._items.keys()
    def __getitem__(self, item):
        return self._items[item]
    def __setitem__(self, key, value):
        self._items[key] = value
    def __delitem__(self, item):
        del self._items[item]
    def __len__(self):
        return len(self._items)
    def __iter__(self):
        yield from self._items
    def __hash__(self):
        return hash(id(self))
    def items(self):
        for k, v in self._items.items():
            yield (k, v)

@torch.compile(fullgraph=True)
def to_weird_dict(td):
    return MyWeirdDict(**td)
d = MyWeirdDict(a=1, b=2, c=3)
to_weird_dict(d)
```
### Error logs
See above
### Versions
nightlies
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
|
https://github.com/pytorch/pytorch/issues/141118
|
closed
|
[
"triaged",
"oncall: pt2",
"module: dynamo",
"dynamo-dicts",
"dynamo-nn-modules"
] | 2024-11-20T09:01:58Z
| 2024-12-10T19:22:18Z
| null |
vmoens
|
pytorch/pytorch
| 141,116
|
How to fuse batchnorm to conv2d in the graph exported by torch.export
|
I used the torch.export to export my CNN model in eval mode,but the op batchnorm still exists. how to eliminate it. Is there some options in torch.export.export function or I should write a fusion pass by myself.
Thanks.
code:
```
import torch
import torch.nn as nn
class CNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(in_channels=16, out_channels=16, kernel_size=3, stride=1, padding=1)
        self.bn1 = nn.BatchNorm2d(16)
    def forward(self, x):
        x = self.conv1(x)
        x = self.bn1(x)
        return x
torch.manual_seed(0)
model=CNN().eval()
input=torch.randn(3,16,224,224)
ep=torch.export.export(model,(input,))
print(ep.graph)
```
graph:
```
graph():
%p_conv1_weight : [num_users=1] = placeholder[target=p_conv1_weight]
%p_conv1_bias : [num_users=1] = placeholder[target=p_conv1_bias]
%p_bn1_weight : [num_users=1] = placeholder[target=p_bn1_weight]
%p_bn1_bias : [num_users=1] = placeholder[target=p_bn1_bias]
%b_bn1_running_mean : [num_users=1] = placeholder[target=b_bn1_running_mean]
%b_bn1_running_var : [num_users=1] = placeholder[target=b_bn1_running_var]
%b_bn1_num_batches_tracked : [num_users=0] = placeholder[target=b_bn1_num_batches_tracked]
%x : [num_users=1] = placeholder[target=x]
%conv2d : [num_users=1] = call_function[target=torch.ops.aten.conv2d.default](args = (%x, %p_conv1_weight, %p_conv1_bias, [1, 1], [1, 1]), kwargs = {})
%_native_batch_norm_legit_no_training : [num_users=1] = call_function[target=torch.ops.aten._native_batch_norm_legit_no_training.default](args = (%conv2d, %p_bn1_weight, %p_bn1_bias, %b_bn1_running_mean, %b_bn1_running_var, 0.1, 1e-05), kwargs = {})
%getitem : [num_users=1] = call_function[target=operator.getitem](args = (%_native_batch_norm_legit_no_training, 0), kwargs = {})
return (getitem,)
```
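One hedged approach (not from the original post; `fuse_conv_bn_eval` is a standard PyTorch utility, but whether it is the intended torch.export workflow is an assumption) is to fold the BatchNorm into the preceding Conv2d before exporting:
```python
import torch
import torch.nn as nn
from torch.nn.utils.fusion import fuse_conv_bn_eval

model = CNN().eval()  # the CNN class from the snippet above
# Fold BN statistics/affine params into the conv weights, then drop the BN layer.
model.conv1 = fuse_conv_bn_eval(model.conv1, model.bn1)
model.bn1 = nn.Identity()

ep = torch.export.export(model, (torch.randn(3, 16, 224, 224),))
print(ep.graph)  # the _native_batch_norm_legit_no_training op should no longer appear
```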
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4
|
https://github.com/pytorch/pytorch/issues/141116
|
open
|
[
"oncall: pt2",
"oncall: export"
] | 2024-11-20T07:46:28Z
| 2024-11-20T19:06:46Z
| null |
TingfengTang
|
pytorch/ao
| 1,315
|
How to trigger torchao unit tests?
|
We plan to run unit tests when we switch to different torch and triton versions.
How should we leverage torchao's unit tests to make sure new torch and triton versions are working?
Thanks!
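A hedged sketch of how this is commonly done (the `test/` path is an assumption based on the repository layout, not something confirmed here): install the candidate torch/triton versions, then run the test suite with pytest.
```python
# Minimal sketch: run the unit tests programmatically after swapping torch/triton versions.
# Equivalent to running `pytest test/ -q -x` from a torchao checkout.
import pytest

exit_code = pytest.main(["test/", "-q", "-x"])  # -x: stop at the first failure
print("unit tests passed" if exit_code == 0 else f"pytest exited with code {exit_code}")
```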
|
https://github.com/pytorch/ao/issues/1315
|
closed
|
[] | 2024-11-19T22:50:34Z
| 2024-12-05T01:43:54Z
| null |
goldhuang
|
huggingface/lerobot
| 515
|
ACT is working, but not Diffusion
|
Hello Team,
Your work is so good. I am currently working on creating some nice policies with the LeRobot repo, architecture, and software. I tried ACT on my robot; it is working fine and is able to execute the tasks it learned during evaluation.
I tried training a Diffusion policy multiple times, with different params and also the default params provided in the repo. I tried PushT in Colab and it works there, but not on the robot. Can you please explain why it's not working, or should I change something else?
I forgot to mention, I used 3 cameras for data collection and training for Diffusion.
Thank you
EDIT (aliberts): format
|
https://github.com/huggingface/lerobot/issues/515
|
closed
|
[
"question",
"policies",
"stale"
] | 2024-11-19T18:58:28Z
| 2025-11-30T02:37:09Z
| null |
Kacchan16
|
huggingface/transformers.js
| 1,042
|
how can i pass embeddings or context to a text2text-generation model
|
### Question
I downloaded the model locally. I found that there doesn't seem to be an API that allows me to pass embeddings. How can I make this model understand the context?
Then I tried to pass the context content to this model, but the model didn't seem to accept it and produced the output below.
The code is like the following:
```js
const model =await pipeline("text2text-generation", "LaMini-Flan-T5-783M")
const result = await model("you are a teacher, who are you?",{})
```
this is model output
```json
[
{
"generated_text": "As an AI language model, I am not a teacher."
}
]
```
I don't know whether it's due to the model itself or that I just haven't found the API for passing the context😕
|
https://github.com/huggingface/transformers.js/issues/1042
|
closed
|
[
"question"
] | 2024-11-19T18:32:45Z
| 2024-11-20T05:34:45Z
| null |
electroluxcode
|
huggingface/transformers.js
| 1,041
|
Full preload example
|
### Question
Hello!
I'm looking for a full "preload model" nodejs example.
Say I do this:
```ts
import { env } from '@huggingface/transformers';
env.allowRemoteModels = false;
env.localModelPath = '/path/to/local/models/';
```
how do I "get" the model to that path? I want to download it when building my docker image
|
https://github.com/huggingface/transformers.js/issues/1041
|
closed
|
[
"question"
] | 2024-11-19T12:34:04Z
| 2024-11-26T12:44:55Z
| null |
benjick
|
pytorch/benchmark
| 2,543
|
How to get benchmark statistics?
|
I'm building a CI to test some models on certain types of devices. I want to get benchmark statistics, such as which model cases failed and which tests were skipped and why. These statistics will be used to generate a table like this:
<table>
<tr>
<th rowspan="2">Devices</th>
<th colspan="2">BERT_pytorch</th>
<th colspan="2">hf_GPT2</th>
</tr>
<tr>
<th>train</th>
<th>eval</th>
<th>train</th>
<th>eval</th>
</tr>
<tr>
<th>CPU</th>
<th>✅</th>
<th>✅</th>
<th>✅</th>
<th>✅</th>
</tr>
<tr>
<th>CUDA</th>
<th>✅</th>
<th>✅</th>
<th>✅</th>
<th>✅</th>
</tr>
<tr>
<th>Foo</th>
<th>❌ (failed)</th>
<th>✅</th>
<th>⚠️ (skipped)</th>
<th>✅</th>
</tr>
</table>
So how can I get benchmark statistics? Is there a recommended way to do this? Can anyone give suggestions? Thanks so much!
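A hedged sketch of one way to collect such a pass/fail matrix (the `run.py` entry point and its `-d`/`-t` flags are assumptions based on the repository README; adjust to your checkout):
```python
import subprocess

MODELS = ["BERT_pytorch", "hf_GPT2"]
DEVICES = ["cpu", "cuda"]
TESTS = ["train", "eval"]

results = {}
for model in MODELS:
    for device in DEVICES:
        for test in TESTS:
            # Run each benchmark case in a subprocess and record its exit status.
            proc = subprocess.run(
                ["python", "run.py", model, "-d", device, "-t", test],
                capture_output=True, text=True,
            )
            results[(model, device, test)] = "pass" if proc.returncode == 0 else "fail"

for key, status in results.items():
    print(*key, status)
```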
|
https://github.com/pytorch/benchmark/issues/2543
|
closed
|
[] | 2024-11-19T09:36:22Z
| 2025-02-11T08:15:40Z
| null |
shink
|
pytorch/torchchat
| 1,388
|
eval doc does not pass test
|
### 🐛 Describe the bug
https://github.com/pytorch/torchchat/pull/1383 enables `run-docs evaluation` to extract a test script from the eval documentation
and run the evaluation script. In turn, this extracts the command
```
python3 torchchat.py eval stories15M --tasks wikitext --limit 10
```
from the eval doc as a test to ensure that the doc is in fact correct. This appears to be a correct use of eval to me, yet it fails when running as follows:
https://hud.pytorch.org/pr/pytorch/torchchat/1383#33154706429
```
2024-11-18T18:13:35.1710781Z + python3 torchchat.py eval stories15M --tasks wikitext --limit 10
2024-11-18T18:13:35.1711201Z NumExpr defaulting to 16 threads.
2024-11-18T18:13:35.1711531Z PyTorch version 2.6.0.dev20241002+cu121 available.
2024-11-18T18:13:35.1711768Z
2024-11-18T18:13:35.1711939Z Downloading builder script: 0% 0.00/5.67k [00:00<?, ?B/s]
2024-11-18T18:13:35.1712401Z Downloading builder script: 100% 5.67k/5.67k [00:00<00:00, 37.1MB/s]
2024-11-18T18:13:35.1712808Z Traceback (most recent call last):
2024-11-18T18:13:35.1713182Z File "/pytorch/torchchat/torchchat.py", line 100, in <module>
2024-11-18T18:13:35.1713552Z eval_main(args)
2024-11-18T18:13:35.1713905Z File "/pytorch/torchchat/torchchat/usages/eval.py", line 238, in main
2024-11-18T18:13:35.1714340Z builder_args = BuilderArgs.from_args(args)
2024-11-18T18:13:35.1714667Z ^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-11-18T18:13:35.1715101Z File "/pytorch/torchchat/torchchat/cli/builder.py", line 169, in from_args
2024-11-18T18:13:35.1715520Z return cls(
2024-11-18T18:13:35.1715827Z run_cmd_or_die(f"docker exec -t {container_name} /exec")
2024-11-18T18:13:35.1716580Z File "/home/ec2-user/actions-runner/_work/torchchat/torchchat/test-infra/.github/scripts/run_with_env_secrets.py", line 39, in run_cmd_or_die
2024-11-18T18:13:35.1717388Z raise RuntimeError(f"Command {cmd} failed with exit code {exit_code}")
2024-11-18T18:13:35.1718153Z RuntimeError: Command docker exec -t c2e4cff2805edb5848301b09ed712578d726414222642162007e0e16e7c48ba1 /exec failed with exit code 1
2024-11-18T18:13:35.1718786Z ^^^^
2024-11-18T18:13:35.1719026Z File "<string>", line 24, in __init__
2024-11-18T18:13:35.1719475Z File "/pytorch/torchchat/torchchat/cli/builder.py", line 76, in __post_init__
2024-11-18T18:13:35.1719926Z raise RuntimeError(
2024-11-18T18:13:35.1720431Z RuntimeError: need to specified a valid checkpoint path, checkpoint dir, gguf path, DSO path, or PTE path
```
### Versions
github runner, environment as configured by pytorch test infra
|
https://github.com/pytorch/torchchat/issues/1388
|
closed
|
[
"documentation"
] | 2024-11-19T05:38:54Z
| 2024-12-10T04:41:51Z
| 2
|
mikekgfb
|
pytorch/ao
| 1,310
|
[NF4] Various bugs in how NF4 handles `.to()` to move to a different device
|
Reproduction
```python
import torch
from torch import nn
from torchao.dtypes.nf4tensor import to_nf4
x = torch.randn(1024, 1024)
x_nf4 = to_nf4(x)
print(x_nf4.cuda()) # this will dequantize NF4 -> unwanted
print(x_nf4.to(device="cuda")) # this will raise error
print(x_nf4.to("cuda")) # this will do the right thing
# .cpu() does not move .nf4 to CPU, because call_from_inner_tensors does not call the method on .nf4
x = torch.randn(1024, 1024).cuda()
x_nf4 = to_nf4(x).cpu()
print(x_nf4.quantized_data.device) # cpu
print(x_nf4.nf4.device) # cuda:0
print(x_nf4.to(torch.float32)) # error due to device mismatch
# not working with nn.Module
linear = nn.Linear(1024, 1024)
linear.weight = nn.Parameter(to_nf4(linear.weight.detach()), requires_grad=False)
linear.cuda() # NF4 weight is not moved to CUDA
# linear.to("cuda") # same problem
print(linear.weight.device) # cuda:0
print(linear.weight.quantized_data.device) # cpu
print(linear.weight.to(torch.float32).device) # cpu
```
Summary:
1. `NF4Tensor.cuda()` will dequantize -> this is unwanted
2. `NF4Tensor.to(device="cuda")` will raise `IndexError`, since `args[1]` does not exist
3. `NF4Tensor.cpu()` does not move `.nf4` attribute -> cannot dequantize
4. Does not work with `nn.Module.to(device)`
- IMO, the semantics that `NF4Tensor.to(torch.float32)` dequantizes is the culprit that causes these troubles + it is not consistent with AQT behavior. If `.to(dtype)` did not dequantize (only changed the appearance dtype), we would only need to implement `aten._to_copy` instead of `Tensor.cpu`, `Tensor.to`, and a myriad of others. Though I understand this design is meant to make NF4 feel more like a true dtype.
- I think it makes more sense to designate `NF4Tensor.dequantize()` as the method to dequantize the tensor (also consistent with plain Tensor behavior, though plain `Tensor.dequantize()` will always return FP32), instead of the current situation (`NF4Tensor.dequantize()` is a static method for lookup table, while `NF4Tensor.get_original_weight()` does dequant)
- Changing this is BC-breaking, so we probably leave it as is.
|
https://github.com/pytorch/ao/issues/1310
|
closed
|
[
"bug"
] | 2024-11-19T04:31:35Z
| 2024-11-26T06:19:03Z
| null |
gau-nernst
|
pytorch/torchchat
| 1,385
|
Update dead link in https://github.com/pytorch/torchchat/blob/main/docs/quantization.md
|
### 🐛 Describe the bug
There is a dead link https://github.com/pytorch/torchchat/blob/main/torchchat/utils/quantize.py#L1260-L1266 in https://github.com/pytorch/torchchat/blob/main/docs/quantization.md, like `See the available quantization schemes [here](https://github.com/pytorch/torchchat/blob/main/torchchat/utils/quantize.py#L1260-L1266).`. Could you please help update it so it shows examples of the available quantization schemes?
### Versions
#
|
https://github.com/pytorch/torchchat/issues/1385
|
closed
|
[
"documentation",
"Quantization"
] | 2024-11-19T01:34:54Z
| 2024-12-09T22:37:22Z
| 4
|
yanbing-j
|
pytorch/xla
| 8,390
|
[TPU][torch.compile] How to introduce in-place custom ops through Pallas?
|
## ❓ Questions and Help
Hi torch.xla team, thank you so much for the great work on making PyTorch available on XLA devices! We have had a great experience with it so far.
We are exploring the idea of adding custom Pallas kernels to the graph and using them along with `torch.compile(..., backend='openxla')` for TPUs. However, we have hit a limitation that the operator cannot be in-place, which is very important for performance reasons.
I have stripped down a minimal reproduceable example, happy to provide more details:
```
from typing import List, Callable
import jax
import jax.numpy as jnp
from jax.experimental import pallas as pl
from jax.experimental.pallas import tpu as pltpu
import torch
import torch_xla
from torch_xla.experimental import custom_kernel
from functools import partial
def plus_one_kernel(x_ref, o_ref):
    o_ref[:] = o_ref[:] + 1

@partial(jax.jit, donate_argnums=[0])
def plus_one_pallas(x: jax.Array):
    size = x.shape[0]
    return pl.pallas_call(
        plus_one_kernel,
        grid=(1, 1),
        out_shape=jax.ShapeDtypeStruct(x.shape, x.dtype),
        input_output_aliases={0:0}
    )(x)

@torch.library.custom_op("xla::plus_one_", mutates_args=("x", ))
def plus_one_(x: torch.Tensor) -> None:
    plus_one_pt = torch_xla.experimental.custom_kernel.make_kernel_from_pallas(
        plus_one_pallas, output_shape_dtype_fn = lambda x: [(x.shape, x.dtype)]
    )
    plus_one_pt(x)

def fn(x):
    torch.ops.xla.dynamo_set_buffer_donor_(x, True)
    return plus_one_(x)

fn = torch.compile(fn, backend="openxla")
x = torch.ones(4, dtype=torch.bfloat16, device='xla')
fn(x)
print(x)
```
|
https://github.com/pytorch/xla/issues/8390
|
closed
|
[] | 2024-11-18T19:03:23Z
| 2024-11-18T19:08:50Z
| null |
xinli-sw
|
pytorch/xla
| 8,389
|
Prepare a subsection to educate users on the PyTorch workloads on AI-Hypercomputer
|
## 📚 Documentation
AI-Hypercomputer is where customers and users can find optimized implementation of representative models.
Please add a section in the PyTorchXLA README page (and the html documentation) that introduces this concept and points the users to the following resource: https://github.com/AI-Hypercomputer/tpu-recipes
Keep in mind that the AI-Hypercomputer tpu-recipe repo is WIP and gradually grows in scope.
Read more [context on AI-Hypercomputer ](https://cloud.google.com/blog/products/ai-machine-learning/introducing-cloud-tpu-v5p-and-ai-hypercomputer?e=48754805)
Timeline: would be great to add this documentation to the repo for 2.6 branch cut.
cc @tengyifei
|
https://github.com/pytorch/xla/issues/8389
|
closed
|
[
"documentation"
] | 2024-11-18T18:48:24Z
| 2024-12-10T00:24:25Z
| 1
|
miladm
|
huggingface/transformers.js
| 1,038
|
script.convert tfjs model to onnx support
|
### Question
I'm using tfjs-node to create an image-classifier model;
but I'm stuck on how to convert model.json to a format that can be used by optimum or script.convert to convert it to an onnx file.
I'm able to convert to a graph model using
```
tensorflowjs_converter --input_format=tfjs_layers_model \
  --output_format=tfjs_graph_model \
  ./saved-model/layers-model/model.json \
  ./saved-model/graph-model
```
and then I can convert to an onnx using
```
python3 -m tf2onnx.convert --tfjs ./saved-model/graph-model/model.json --output ./saved-model/model.onnx
```
This works fine when I test in python but I'm unable to use in transformers.js - I probably need to use optimum to convert it?
I tried a number of approaches but was unable to convert to onnx - I then saw script.convert but am having difficulties
- This is an example of the code I'm using to test the model with
```
import onnxruntime as ort
from PIL import Image
import numpy as np
# Load the ONNX model
session = ort.InferenceSession('./saved-model/model.onnx')
# Get input and output names
input_name = session.get_inputs()[0].name
output_name = session.get_outputs()[0].name
# Load and preprocess the image
img = Image.open('./training_images/shirt/00e745c9-97d9-429d-8c3f-d3db7a2d2991.jpg').resize((128, 128))
img_array = np.array(img).astype(np.float32) / 255.0 # Normalize pixel values to [0, 1]
img_array = np.expand_dims(img_array, axis=0) # Add batch dimension
# Run inference
outputs = session.run([output_name], {input_name: img_array})
print(f"Inference outputs: {outputs}")
```
Any guidance on how to go from tfjs model.json to onnx supported by transformers.js would really help me out.
Thanks!
|
https://github.com/huggingface/transformers.js/issues/1038
|
open
|
[
"question"
] | 2024-11-18T15:42:46Z
| 2024-11-19T10:08:28Z
| null |
JohnRSim
|
huggingface/chat-ui
| 1,573
|
Include chat-ui in an existing React application
|
Hello,
Is it possible to integrate / embed chat-ui in an existing application, like a React component?
For example, to add a chat module to an existing website with the UI of chat-ui.
As is the case with Chainlit : https://docs-prerelease.chainlit.io/customisation/react-frontend
|
https://github.com/huggingface/chat-ui/issues/1573
|
open
|
[
"enhancement"
] | 2024-11-18T14:11:58Z
| 2024-11-18T14:15:17Z
| 0
|
martin-prillard
|
huggingface/optimum
| 2,097
|
TFJS support model.json to ONNX conversion
|
### Feature request
Currently using node to create an image-classifier model.json with tfjs
- I don't think Optimum supports converting this format to onnx?
It would be nice to just use optimum and point to model.json.
### Motivation
Currently I'm creating the model converting it to graph and then converting to onnx like this -
```
tensorflowjs_converter --input_format=tfjs_layers_model \
  --output_format=tfjs_graph_model \
  ./saved-model/layers-model/model.json \
  ./saved-model/graph-model
```
```
python3 -m tf2onnx.convert --tfjs ./saved-model/graph-model/model.json --output ./saved-model/model.onnx
```
I'm not sure how to switch to using optimum - do I need to convert model.json to .h5 first and then run it?
- If I try this I run into: huggingface_hub.errors.HFValidationError: Repo id must be in the form 'repo_name' or 'namespace/repo_name': './path_to_save/model.h5'. Use `repo_type` argument if needed
### Your contribution
N/A
|
https://github.com/huggingface/optimum/issues/2097
|
open
|
[
"exporters",
"tflite"
] | 2024-11-18T12:55:05Z
| 2024-11-19T10:22:35Z
| 0
|
JohnRSim
|
huggingface/optimum-benchmark
| 294
|
How to Use a Local Model When Calling the Python API
|

|
https://github.com/huggingface/optimum-benchmark/issues/294
|
closed
|
[] | 2024-11-18T06:36:24Z
| 2024-12-09T12:23:30Z
| null |
WCSY-YG
|
pytorch/xla
| 8,388
|
Need help validating TPU/XLA device support for ComfyUI.
|
## ❓ Questions and Help
I'm working on adding initial XLA support to ComfyUI https://github.com/comfyanonymous/ComfyUI/pull/5657 and would greatly appreciate any feedback or validation from the community. Specifically, I'm looking for:
- Testing across different XLA-compatible hardware (e.g., TPUs or GPUs with XLA support).
- Suggestions for optimizing performance with XLA in this context.
- Identifying any compatibility issues or edge cases that might arise during execution.
If you're familiar with integrating XLA into PyTorch workflows or have experience with related pipelines, your input would be invaluable. Thank you in advance for your help!
|
https://github.com/pytorch/xla/issues/8388
|
open
|
[
"question"
] | 2024-11-17T23:09:49Z
| 2025-02-17T18:13:57Z
| null |
radna0
|
huggingface/lerobot
| 511
|
Minimum Requirements - Running Policies in production/ Training Policies
|
I was wondering what types of hardware policies trained using lerobot can run on. Let's say I wanted to run policies in production on, say, a Raspberry Pi. Is it possible to run training on beefier hardware and then deploy the policies to lower-end hardware? Is it better to record with various cameras or just use the same camera? What is the minimum quality?
You have tutorials on training and evaluating policies but nothing about deploying to production. Would be interesting to see this.
Thank you
|
https://github.com/huggingface/lerobot/issues/511
|
closed
|
[
"question"
] | 2024-11-17T17:34:50Z
| 2025-04-07T16:23:41Z
| null |
rkeshwani
|
huggingface/transformers.js
| 1,035
|
How can I implement partial output in the react demo?
|
### Question
Hello! I am reading the Transformers.js documentation for "[Building a react application](https://huggingface.co/docs/transformers.js/tutorials/react)", but I encountered an issue at [step 4](https://huggingface.co/docs/transformers.js/tutorials/react#step-4-connecting-everything-together).
I don't know how to implement the **partial output** of the translation results, even though the documentation provides the following instructions:
```javascript
let output = await translator(event.data.text, {
tgt_lang: event.data.tgt_lang,
src_lang: event.data.src_lang,
// Allows for partial output
callback_function: x => {
self.postMessage({
status: 'update',
output: translator.tokenizer.decode(x[0].output_token_ids, { skip_special_tokens: true })
});
}
});
```
I have completed all the steps in the tutorial documentation, but I still cannot get the output to work properly. I tried using `console.log` for debugging and found that the `callback_function` is not working, and the main thread is not receiving any messages with the status `update`. I have also not found any information about the `callback_function` in the transformers.js documentation. I apologize for taking up your time, but I sincerely need your help. 🙏
|
https://github.com/huggingface/transformers.js/issues/1035
|
open
|
[
"question"
] | 2024-11-17T11:29:22Z
| 2024-12-02T23:00:13Z
| null |
DikkooXie
|
huggingface/lerobot
| 510
|
Do we have to use Trossen Robotics robots for this repo?
|
Or will any robot work fine?
Also, one more question: do we have to use a depth camera, or will a simple camera work fine?
|
https://github.com/huggingface/lerobot/issues/510
|
closed
|
[
"question",
"robots"
] | 2024-11-17T11:14:52Z
| 2025-04-07T16:27:40Z
| null |
hemangjoshi37a
|
huggingface/diffusers
| 9,942
|
Unable to install pip install diffusers>=0.32.0dev
|
### Describe the bug
I am installing the following version
pip install diffusers>=0.32.0dev
However it does nothing
```
(c:\aitools\CogVideo\cv_venv) C:\aitools\CogVideo>pip install diffusers>=0.32.0dev
(c:\aitools\CogVideo\cv_venv) C:\aitools\CogVideo>
```
I even uninstalled the previous version
```
(c:\aitools\CogVideo\cv_venv) C:\aitools\CogVideo>pip uninstall diffusers
Found existing installation: diffusers 0.31.0
Uninstalling diffusers-0.31.0:
Would remove:
c:\aitools\cogvideo\cv_venv\lib\site-packages\diffusers-0.31.0.dist-info\*
c:\aitools\cogvideo\cv_venv\lib\site-packages\diffusers\*
c:\aitools\cogvideo\cv_venv\scripts\diffusers-cli.exe
Proceed (Y/n)? y
Successfully uninstalled diffusers-0.31.0
```
### Reproduction
Create a conda environment and install using
`pip install diffusers>=0.32.0dev`
So I understand it is not release here
https://pypi.org/project/diffusers/#history
How do I install on Windows 11
I even checked the branch

### Logs
_No response_
### System Info
Python 3.11.10
Windows 11
### Who can help?
_No response_
|
https://github.com/huggingface/diffusers/issues/9942
|
closed
|
[
"bug"
] | 2024-11-17T10:26:19Z
| 2024-11-17T12:27:23Z
| 0
|
nitinmukesh
|
huggingface/candle
| 2,622
|
How to compute `Atan2` for tensors?
|
I am trying to implement DeepPhase in candle, but I am struggling to figure out how to calculate the phase angles from two tensors using an `atan2` operation.
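A hedged sketch (written in PyTorch rather than candle, purely to illustrate the decomposition): `atan2` can be assembled from `atan` plus quadrant corrections using only elementwise ops, which should be reproducible with candle's tensor API.
```python
import math
import torch

def atan2_from_atan(y: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
    # Avoid division by zero; x == 0 still lands near +/- pi/2 after atan.
    safe_x = x + (x == 0).to(x.dtype) * 1e-12
    base = torch.atan(y / safe_x)
    # Add/subtract pi when x < 0 to recover the correct quadrant.
    sign_y = torch.where(y >= 0, torch.ones_like(y), -torch.ones_like(y))
    return base + (x < 0).to(y.dtype) * math.pi * sign_y

y = torch.tensor([1.0, 1.0, -1.0, -1.0])
x = torch.tensor([1.0, -1.0, 1.0, -1.0])
print(torch.allclose(atan2_from_atan(y, x), torch.atan2(y, x)))  # True
```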
|
https://github.com/huggingface/candle/issues/2622
|
open
|
[] | 2024-11-16T16:45:36Z
| 2024-11-17T14:21:50Z
| null |
cryscan
|
pytorch/xla
| 8,387
|
Can Triton be used with XLA/TPU devices?
|
## ❓ Questions and Help
I see that there are docs for Triton support, but only for GPU. Is it possible to use Triton with TPU?
|
https://github.com/pytorch/xla/issues/8387
|
closed
|
[] | 2024-11-16T09:46:06Z
| 2024-12-11T06:21:18Z
| 1
|
radna0
|
pytorch/torchchat
| 1,380
|
What is the future plan of model expansion?
|
### 🚀 The feature, motivation and pitch
I see that torchchat currently only supports a few kinds of models, like Llama-based (or Llama-like) architectures, or models using the pre-defined Transformer architecture. Is there any plan to support other kinds of model architectures in the future? Which kinds of models are you considering adding? If a new model's architecture is not on the supported list, is there a way to run it?
### Alternatives
_No response_
### Additional context
_No response_
### RFC (Optional)
_No response_
|
https://github.com/pytorch/torchchat/issues/1380
|
open
|
[
"enhancement",
"Question",
"triaged"
] | 2024-11-15T23:33:01Z
| 2025-03-31T20:39:15Z
| null |
jenniew
|
huggingface/transformers.js
| 1,032
|
How to identify which models will work with transformers.js?
|
### Question
I've tried multiple models from the MTEB dashboard (e.g. `jinaai/jina-embeddings-v3`, `jinaai/jina-embeddings-v2`, `dunzhang/stella_en_400M_v5`), but none of them work.
It's not clear which models will work.
```ts
const generateGteSmallEmbedding = await pipeline(
  'feature-extraction',
  'dunzhang/stella_en_400M_v5',
);
```
|
https://github.com/huggingface/transformers.js/issues/1032
|
open
|
[
"question"
] | 2024-11-15T22:13:00Z
| 2024-12-22T02:41:43Z
| null |
punkpeye
|
huggingface/datasets
| 7,291
|
Why return_tensors='pt' doesn't work?
|
### Describe the bug
I tried to add input_ids to a dataset with map(), and I used return_tensors='pt', but why did I get the result back as a List?

### Steps to reproduce the bug

### Expected behavior
Sorry for this silly question, I'm a noob at using this tool. But I think it should return a tensor value since I passed that argument?
When I tokenize only one sentence using tokenized_input=tokenizer(input, return_tensors='pt'), it does return tensors. Why doesn't it work in map()?
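A hedged guess at what is going on (an assumption, not confirmed by the maintainers in this thread): map() stores its outputs as Arrow columns, so tensors are converted to lists regardless of return_tensors; the usual fix is to set the output format to torch after mapping, e.g.:
```python
from datasets import Dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # model name is just an example
ds = Dataset.from_dict({"text": ["hello world", "how are you"]})

ds = ds.map(lambda ex: tokenizer(ex["text"], truncation=True, padding="max_length", max_length=8))
ds = ds.with_format("torch", columns=["input_ids", "attention_mask"])

print(type(ds[0]["input_ids"]))  # <class 'torch.Tensor'>
```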
### Environment info
transformers>=4.41.2,<=4.45.0
datasets>=2.16.0,<=2.21.0
accelerate>=0.30.1,<=0.34.2
peft>=0.11.1,<=0.12.0
trl>=0.8.6,<=0.9.6
gradio>=4.0.0
pandas>=2.0.0
scipy
einops
sentencepiece
tiktoken
protobuf
uvicorn
pydantic
fastapi
sse-starlette
matplotlib>=3.7.0
fire
packaging
pyyaml
numpy<2.0.0
|
https://github.com/huggingface/datasets/issues/7291
|
open
|
[] | 2024-11-15T15:01:23Z
| 2024-11-18T13:47:08Z
| 2
|
bw-wang19
|
pytorch/torchtitan
| 679
|
Question about integration with DeepSpeed-Ulysses
|
Hi developers,
Thanks for such a great project that can demonstrate the power of newly released features in torch.
When I want to run the llama2 model with a 128k-long sequence, how can I enable it? I have some experience with DeepSpeed-Ulysses, so the question becomes: does torchtitan support the kind of sequence parallelism used in DeepSpeed-Ulysses?
Thanks!
|
https://github.com/pytorch/torchtitan/issues/679
|
closed
|
[
"question"
] | 2024-11-15T09:56:38Z
| 2024-11-22T00:28:16Z
| null |
zigzagcai
|
pytorch/xla
| 8,385
|
How to write in-place custom ops compatible with torch.compile using pallas
|
## ❓ Questions and Help
I'm trying to implement an in-place operator using Pallas and wrap it as a torch custom op. However, I have found it difficult to make it work with `torch.compile`. More specifically, I'm unclear about how to set donation, input-output aliases, and the op schema. It seems that having an output aliased with the input leads to functionalization problems in the torch compiler.
Thanks!
My script is like this:
```python
from typing import List, Callable
import os
import jax
import jax.numpy as jnp
from jax.experimental import pallas as pl
from jax.experimental.pallas import tpu as pltpu
import torch
import torch_xla
from torch_xla.experimental import custom_kernel
from functools import partial
import torch_xla.debug.profiler as xp
server = xp.start_server(9012)
profile_logdir = "./profile"
xp.trace_detached('localhost:9012', profile_logdir)
os.environ["XLA_SAVE_TENSORS_FILE"] = "./graph.txt"
os.environ["XLA_FLAGS"] = "--xla_dump_to=./graph_hlo/"
os.environ["XLA_DUMP_HLO_GRAPH"]="1"
M = 4096
N = 1024
def plus_one_kernel(x_ref, o_ref):
    o_ref[...] = x_ref[...] + 1

def plus_one_pallas(x: jax.Array):
    return pl.pallas_call(
        plus_one_kernel,
        grid=[2, 2],
        in_specs=[pl.BlockSpec([M, N], lambda i, j: (i, j))],
        out_specs=pl.BlockSpec([M, N], lambda i, j: (i, j)),
        out_shape=jax.ShapeDtypeStruct(x.shape, dtype=jnp.int32),
        input_output_aliases={0:0}
    )(x)

@torch.library.custom_op("xla::plus_one_", mutates_args={})
def plus_one_(x: torch.Tensor) -> torch.Tensor:
    plus_one_pt = torch_xla.experimental.custom_kernel.make_kernel_from_pallas(
        plus_one_pallas, output_shape_dtype_fn = lambda x: [(x.shape, x.dtype)]
    )
    return plus_one_pt(x)

@plus_one_.register_fake
def plus_one_fake(x: torch.Tensor) -> torch.Tensor:
    return x

def fn(x):
    torch.ops.xla.dynamo_set_buffer_donor_(x, True)
    ret = plus_one_(x)
    return ret
fn = torch.compile(fn, backend="openxla")
x = torch.ones([M * 2, N * 2], dtype=torch.int32, device='xla')
ret = fn(x)
print(ret)
```
And it seems it does not change the value of `x`.
|
https://github.com/pytorch/xla/issues/8385
|
open
|
[
"pallas"
] | 2024-11-15T08:34:05Z
| 2025-02-15T05:43:45Z
| null |
soodoshll
|
huggingface/speech-to-speech
| 141
|
I don't want to record in real time; how can I upload an audio clip?
|
I start the server on the remote machine.
On Windows 10, after I launch python listen_and_play.py locally, nothing gets recorded for a while and then the server side just exits???
I want to pass in an audio clip and have it translated; how should I go about that?
|
https://github.com/huggingface/speech-to-speech/issues/141
|
open
|
[] | 2024-11-15T03:58:26Z
| 2024-12-20T04:30:13Z
| null |
dh12306
|
pytorch/torchtitan
| 678
|
Any suggestion for a Llama-3.1-70b (128k seq len) deployment mesh with torchtitan?
|
With a 128k-long sequence, the activation memory increases significantly.
CP8 + TP8 seems necessary (they reduce the activation memory almost linearly), but there is still as much as 50G of activation memory.
Recomputing the activations of the MLP can reduce it by about 9G, while recomputation of the attention layer or the MLP up-projection linear seems rather costly. I noticed that the article at https://arxiv.org/pdf/2410.06511 mentioned that full activation checkpointing was applied to address the activation memory issue, which seems to significantly increase the execution time spent on recomputation?
Does TorchTitan plan to offload the activation values and reload them during the backward pass to reduce activation memory?
|
https://github.com/pytorch/torchtitan/issues/678
|
closed
|
[
"enhancement",
"question"
] | 2024-11-15T03:36:20Z
| 2025-02-26T06:40:07Z
| null |
medivh-xp
|
huggingface/diffusers
| 9,930
|
[PAG] - Adaptive Scale bug
|
### Describe the bug
What is the purpose of the PAG adaptive scale? I was passing a value of, for example, 5.0 for it and 3.0 for the PAG scale; according to the implemented code we end up with a negative number, the scale returns 0, and PAG is not applied. I did not find an explanation of this parameter in the documentation.
So I found this in some ComfyUI documentation: "_This dampening factor reduces the effect of PAG during the later stages of the denoising process, speeding up the overall sampling. A value of 0.0 means no penalty, while 1.0 completely removes PAG_"
Then I realized that I had been passing values above 1.0; however, a value as small as 0.2 is already enough for PAG not to be applied. I suspect this could be a problem.
If you run the code below, you will see that in the third image, where I pass a scale of 0.2 as adaptive_scale, it practically disables PAG in the first generation steps.
I propose a possible solution:
After this code:
https://github.com/huggingface/diffusers/blob/5c94937dc7561767892d711e199f874dc35df041/src/diffusers/pipelines/pag/pag_utils.py#L93
We can change it to:
```python
if self.do_pag_adaptive_scaling:
    signal_scale = self.pag_scale
    if t / self.num_timesteps > self.pag_adaptive_scale:
        signal_scale = 0
    return signal_scale
else:
    return self.pag_scale
```
And inside every PAG pipeline, we would need to replace the `t` variable with the `i` variable, passed as a parameter to this function, so that it receives the index of the current step.
https://github.com/huggingface/diffusers/blob/5c94937dc7561767892d711e199f874dc35df041/src/diffusers/pipelines/pag/pipeline_pag_sd_xl.py#L1253
With this, the logic is no longer "the higher the adaptive scale, the sooner PAG is disabled" but quite the opposite: the adaptive scale tells you exactly at which point in the process PAG is disabled. With a value of 0.5 in a 30-step generation, PAG is disabled from step 15 onwards. The applied scale stays constant up to that cut-off instead of varying over the run.
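A small numeric illustration of the proposed semantics (values chosen arbitrarily):
```python
num_inference_steps = 30
pag_scale = 3.0
pag_adaptive_scale = 0.5  # fraction of the schedule after which PAG is switched off

for i in range(num_inference_steps):
    # Mirrors the proposed check: full strength for the first half of the
    # schedule, zero afterwards.
    applied = pag_scale if i / num_inference_steps <= pag_adaptive_scale else 0.0
    print(i, applied)
```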
I don't know if this was the original purpose of this parameter, but it works well for me.
### Reproduction
```python
from diffusers import AutoPipelineForText2Image
import torch
device = "cuda"
pipeline_sdxl = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    enable_pag=True,
    pag_applied_layers=["mid"],
    torch_dtype=torch.float16
).to(device)
pipeline = AutoPipelineForText2Image.from_pipe(pipeline_sdxl, enable_pag=True).to(device)
pipeline.enable_vae_tiling()
pipeline.enable_model_cpu_offload()
prompt = "an insect robot preparing a delicious meal, anime style"
for i, pag_scale in enumerate([0.0, 3.0, 3.0]):
    generator = torch.Generator(device="cpu").manual_seed(0)
    images = pipeline(
        prompt=prompt,
        num_inference_steps=25,
        guidance_scale=7.0,
        generator=generator,
        pag_scale=pag_scale,
        pag_adaptive_scale=0.0 if i < 2 else 0.2
    ).images[0]
    images.save(f"./data/result_pag_{i+1}.png")
```
### Logs
```shell
N/A
```
### System Info
- 🤗 Diffusers version: 0.32.0.dev0
- Platform: Linux-5.15.153.1-microsoft-standard-WSL2-x86_64-with-glibc2.35
- Running on Google Colab?: No
- Python version: 3.10.11
- PyTorch version (GPU?): 2.4.0+cu121 (True)
- Flax version (CPU?/GPU?/TPU?): 0.10.1 (cpu)
- Jax version: 0.4.35
- JaxLib version: 0.4.35
- Huggingface_hub version: 0.26.2
- Transformers version: 4.46.2
- Accelerate version: 1.1.1
- PEFT version: 0.13.2
- Bitsandbytes version: not installed
- Safetensors version: 0.4.5
- xFormers version: 0.0.27.post2
- Accelerator: NVIDIA GeForce RTX 3060 Ti, 8192 MiB
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@yiyixuxu , @asomoza
|
https://github.com/huggingface/diffusers/issues/9930
|
open
|
[
"bug",
"stale"
] | 2024-11-15T02:00:19Z
| 2024-12-15T15:03:05Z
| 1
|
elismasilva
|
huggingface/safetensors
| 541
|
[Question] Safetensors seem to block the main thread -- but torch.save does not?
|
I have the following code in my training loop:
```python
if rank == 0:
    t = Thread(
        target=save_file,
        args=(model_sd, f"{cfg.model_dir}/model_{step + 1}.safetensors"),
        daemon=True
    )
    t.start()
```
This saves the checkpoint to disk using safetensors. However, I notice that it blocks the training loop, even though the thread should be running in the background.
When I switch the code to use `torch.save`, there's no issue. What should I do?
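One workaround I have been considering, assuming the blocking comes from `save_file` holding the GIL during serialization (I have not verified this), is to hand a CPU copy of the state dict to a separate process instead of a thread; `rank`, `model_sd`, `cfg`, and `step` are the same names as in the snippet above:
```python
import multiprocessing as mp

from safetensors.torch import save_file


def _save_checkpoint(state_dict, path):
    save_file(state_dict, path)


if rank == 0:
    # Copy to CPU first so the tensors can be handed to the child process.
    cpu_sd = {k: v.detach().cpu() for k, v in model_sd.items()}
    p = mp.Process(
        target=_save_checkpoint,
        args=(cpu_sd, f"{cfg.model_dir}/model_{step + 1}.safetensors"),
        daemon=True,
    )
    p.start()
```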
|
https://github.com/huggingface/safetensors/issues/541
|
open
|
[] | 2024-11-15T00:37:55Z
| 2025-02-26T09:51:23Z
| 4
|
vedantroy
|
pytorch/xla
| 8,380
|
How are PJRT asynchronous executions throttled by torch_xla?
|
## 🐛 Bug
Here at AWS we have a single PJRT device plugin for both PyTorch and JAX, and recently we've made improvements to our device plugin to make it work better with JAX. I.e., `PJRT_LoadedExecutable_Execute()` is now fully asynchronous: we queue up an execution and return immediately, expecting the caller to wait on the `returned_future`, whereas before, execution was synchronous and had completed by the time `PJRT_LoadedExecutable_Execute()` returned.
As soon as we switched to the new implementation, we noticed that torch_xla now queues up as many executions as it can without any throttling in PJRT or torch_xla, which easily exhausts device memory. It appears that there are no internal throttling mechanisms anymore, only explicit ones that need to be triggered by user code:
1. when `xm.wait_device_ops()` is called, which calls down to `WaitDeviceOps()`
2. when tensor is read, which internally calls `WaitDeviceOps()`
However, `WaitDeviceOps()` is a heavy hammer because it pauses the world until the entire pipeline is drained. Ideally we do not want to rely on this mechanism for throttling. Also we do not want the user to have to guess when to insert these calls to avoid running out of memory. Some sensible internal throttling mechanism is needed.
The main issue here is that [pjrt_computation_client.cc ](https://github.com/pytorch/xla/blob/master/torch_xla/csrc/runtime/pjrt_computation_client.cc#L744) does not await on the `returned_future` from PJRT. It simply throws it away.
However, according to torch's [lazy_graph_executor](https://github.com/pytorch/pytorch/blob/main/torch/csrc/lazy/core/lazy_graph_executor.h#L164), "only one asynchronous operation can execute at the same time, on a given device." This is controlled by a device lock, which is supposed to be held for the entire duration of the asynchronous execution. However, in torch_xla's [xla_graph_executor.cpp](https://github.com/pytorch/xla/blob/master/torch_xla/csrc/xla_graph_executor.cpp#L826), the device locks acquired by torch are released as soon as `ExecuteComputation()` returns, and `ExecuteComputaton()` does not actually wait for the actual computation to complete. Therefore, torch lazy_graph_executor's throttling mechanism is defeated here.
|
https://github.com/pytorch/xla/issues/8380
|
closed
|
[] | 2024-11-14T18:39:43Z
| 2024-11-27T17:59:21Z
| 7
|
mcuiaws
|
pytorch/torchtitan
| 677
|
Fine-Tuning Llama Model with Large Context and Customized Dataset Using Torchtitan
|
Hi,
I am trying to fine-tune a Llama model with a large context size, and I found that to efficiently shard activations across multiple GPUs, I need to use Torchtitan. Here are some questions related to my setup:
See related issue: [meta-llama/llama-recipes#785](https://github.com/meta-llama/llama-recipes/issues/785)
1. **Custom Dataset Usage**
I created a custom dataset using parquet files and a `custom_dataset.py` file, which is compatible with `llama-recipes`. I'm also using the `DEFAULT_CHATML_CHAT_TEMPLATE`. Could you please provide guidance on how to integrate and use this custom dataset effectively with Torchtitan? (A rough sketch of the kind of data loading I mean is included after this list.)
2. **Fine-Tuning with Pretrained Model**
Is it possible to fine-tune the model starting from a pretrained checkpoint? If so, are there specific steps or configurations needed to achieve this with Torchtitan?
3. **Model Support (Llama-3.2-1B)**
I noticed that Torchtitan currently supports training Llama 3 models (8B, 70B) out of the box. What steps would I need to take if I wanted to train `meta-llama/Llama-3.2-1B` specifically?
4. **Large Context and FSDP Limitation**
I am unable to use FSDP because of the large context sizes I’m working with. Any additional guidance on handling large contexts effectively with Torchtitan would be appreciated.
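As a rough sketch of the data loading referred to in question 1 (the file paths, model name, and `messages` column are placeholders, and this does not go through torchtitan's own dataloader wiring):
```python
from datasets import load_dataset
from transformers import AutoTokenizer

# Placeholder paths/names; the 'messages' column is assumed to hold chat turns.
ds = load_dataset("parquet", data_files={"train": "data/train-*.parquet"}, split="train")
tok = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-1B")


def to_text(example):
    return {"text": tok.apply_chat_template(example["messages"], tokenize=False)}


ds = ds.map(to_text, remove_columns=ds.column_names)
print(ds[0]["text"][:200])
```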
Thank you for your help!
|
https://github.com/pytorch/torchtitan/issues/677
|
closed
|
[
"enhancement",
"question"
] | 2024-11-14T17:29:52Z
| 2024-12-17T16:11:20Z
| null |
Amerehei
|
huggingface/peft
| 2,216
|
How to specify the coefficients of loading lora during inference?
|
https://github.com/huggingface/peft/issues/2216
|
closed
|
[] | 2024-11-14T11:47:00Z
| 2024-11-18T11:30:03Z
| null |
laolongboy
|
|
huggingface/chat-ui
| 1,565
|
Is there any place that uses this environment variable?
|
https://github.com/huggingface/chat-ui/blob/ab349d0634ec4cf68a781fd7afc5e7fdd6bb362f/.env#L59-L65
It seems like it can be deleted.
|
https://github.com/huggingface/chat-ui/issues/1565
|
closed
|
[] | 2024-11-14T11:12:49Z
| 2024-11-14T11:17:04Z
| 2
|
calycekr
|
huggingface/diffusers
| 9,927
|
HeaderTooLarge when training ControlNet with SD3
|
### Describe the bug
Hello, I tried using diffusers to train a ControlNet with SD3, but training does not start and it fails with `safetensors_rust.SafetensorError: Error while deserializing header: HeaderTooLarge`. I don't know how to handle it.
### Reproduction
Follow the README_v3 guide.
### Logs
```shell
(diffusers) [liudongyu@localhost controlnet]$ accelerate launch train_controlnet_sd3.py --pretrained_model_name_or_path=$MODEL_DIR --output_dir=$OUTPUT_DIR --train_data_dir="/home/users/liudongyu/datasets" --resolution=1024 --learning_rate=1e-5 --max_train_steps=20000 --train_batch_size=1 --gradient_accumulation_steps=4
Detected kernel version 3.10.0, which is below the recommended minimum of 5.5.0; this can cause the process to hang. It is recommended to upgrade the kernel to the minimum version or higher.
11/14/2024 15:16:14 - INFO - __main__ - Distributed environment: DistributedType.NO
Num processes: 1
Process index: 0
Local process index: 0
Device: cuda
Mixed precision type: no
You set `add_prefix_space`. The tokenizer needs to be converted from the slow tokenizers
You are using a model of type clip_text_model to instantiate a model of type . This is not supported for all configurations of models and can yield errors.
You are using a model of type clip_text_model to instantiate a model of type . This is not supported for all configurations of models and can yield errors.
You are using a model of type t5 to instantiate a model of type . This is not supported for all configurations of models and can yield errors.
{'max_image_seq_len', 'base_image_seq_len', 'use_dynamic_shifting', 'max_shift', 'base_shift'} was not found in config. Values will be initialized to default values.
Traceback (most recent call last):
File "/home/users/liudongyu/diffuser/diffusers/examples/controlnet/train_controlnet_sd3.py", line 1423, in <module>
main(args)
File "/home/users/liudongyu/diffuser/diffusers/examples/controlnet/train_controlnet_sd3.py", line 982, in main
text_encoder_one, text_encoder_two, text_encoder_three = load_text_encoders(
^^^^^^^^^^^^^^^^^^^
File "/home/users/liudongyu/diffuser/diffusers/examples/controlnet/train_controlnet_sd3.py", line 187, in load_text_encoders
text_encoder_two = class_two.from_pretrained(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/users/liudongyu/anaconda3/envs/diffusers/lib/python3.11/site-packages/transformers/modeling_utils.py", line 3789, in from_pretrained
with safe_open(resolved_archive_file, framework="pt") as f:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
safetensors_rust.SafetensorError: Error while deserializing header: HeaderTooLarge
Traceback (most recent call last):
File "/home/users/liudongyu/anaconda3/envs/diffusers/bin/accelerate", line 8, in <module>
sys.exit(main())
^^^^^^
File "/home/users/liudongyu/anaconda3/envs/diffusers/lib/python3.11/site-packages/accelerate/commands/accelerate_cli.py", line 48, in main
args.func(args)
File "/home/users/liudongyu/anaconda3/envs/diffusers/lib/python3.11/site-packages/accelerate/commands/launch.py", line 1168, in launch_command
simple_launcher(args)
File "/home/users/liudongyu/anaconda3/envs/diffusers/lib/python3.11/site-packages/accelerate/commands/launch.py", line 763, in simple_launcher
raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)
subprocess.CalledProcessError: Command '['/home/users/liudongyu/anaconda3/envs/diffusers/bin/python', 'train_controlnet_sd3.py', '--pretrained_model_name_or_path=stabilityai/stable-diffusion-3-medium-diffusers', '--output_dir=sd3-controlnet-out', '--train_data_dir=/home/users/liudongyu/datasets', '--resolution=1024', '--learning_rate=1e-5', '--max_train_steps=20000', '--train_batch_size=1', '--gradient_accumulation_steps=4']' returned non-zero exit status 1.
```
### System Info
- 🤗 Diffusers version: 0.31.0.dev0
- Platform: Linux-3.10.0-1160.114.2.el7.x86_64-x86_64-with-glibc2.17
- Running on Google Colab?: No
- Python version: 3.11.10
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Huggingface_hub version: 0.25.2
- Transformers version: 4.45.2
- Accelerate version: 1.0.0
- PEFT version: not installed
- Bitsandbytes version: not installed
- Safetensors version: 0.4.5
- xFormers version: not installed
- Accelerator: NVIDIA A100-PCIE-40GB, 40960 MiB
NVIDIA A100 80GB PCIe, 81920 MiB
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help?
_No response_
|
https://github.com/huggingface/diffusers/issues/9927
|
closed
|
[
"bug"
] | 2024-11-14T07:28:03Z
| 2024-11-21T13:02:05Z
| 3
|
Viola-Siemens
|
huggingface/datasets
| 7,290
|
`Dataset.save_to_disk` hangs when using num_proc > 1
|
### Describe the bug
Hi, I've encountered a small issue when saving datasets that can lead to saving taking up to multiple hours.
Specifically, [`Dataset.save_to_disk`](https://huggingface.co/docs/datasets/main/en/package_reference/main_classes#datasets.Dataset.save_to_disk) is a lot slower when using `num_proc>1` than when using `num_proc=1`
The documentation mentions that "Multiprocessing is disabled by default.", but there is no explanation on how to enable it.
### Steps to reproduce the bug
```
import numpy as np
from datasets import Dataset
n_samples = int(4e6)
n_tokens_sample = 100
data_dict = {
    'tokens': np.random.randint(0, 100, (n_samples, n_tokens_sample)),
}
dataset = Dataset.from_dict(data_dict)
dataset.save_to_disk('test_dataset', num_proc=1)
dataset.save_to_disk('test_dataset', num_proc=4)
dataset.save_to_disk('test_dataset', num_proc=8)
```
This results in:
```
>>> dataset.save_to_disk('test_dataset', num_proc=1)
Saving the dataset (7/7 shards): 100%|██████████████| 4000000/4000000 [00:17<00:00, 228075.15 examples/s]
>>> dataset.save_to_disk('test_dataset', num_proc=4)
Saving the dataset (7/7 shards): 100%|██████████████| 4000000/4000000 [01:49<00:00, 36583.75 examples/s]
>>> dataset.save_to_disk('test_dataset', num_proc=8)
Saving the dataset (8/8 shards): 100%|██████████████| 4000000/4000000 [02:11<00:00, 30518.43 examples/s]
```
With larger datasets it can take hours, but I didn't benchmark that for this bug report.
### Expected behavior
I would expect using `num_proc>1` to be faster instead of slower than `num_proc=1`.
### Environment info
- `datasets` version: 3.1.0
- Platform: Linux-5.15.153.1-microsoft-standard-WSL2-x86_64-with-glibc2.35
- Python version: 3.10.12
- `huggingface_hub` version: 0.26.2
- PyArrow version: 18.0.0
- Pandas version: 2.2.3
- `fsspec` version: 2024.6.1
|
https://github.com/huggingface/datasets/issues/7290
|
open
|
[] | 2024-11-14T05:25:13Z
| 2025-11-24T09:43:03Z
| 4
|
JohannesAck
|
pytorch/executorch
| 6,846
|
How to Apply Different Quantization Settings Per Layer in ExecuTorch?
|
Dear @kimishpatel @jerryzh168 @shewu-quic
I want to split a model (e.g., Llama-3.2-3B) into multiple layers and apply different quantization settings (qnn_8a8w, qnn_16a4w, ...) to each layer.
Has such a method been tested in ExecuTorch?
If not, could you suggest how this can be achieved?
Thank you
|
https://github.com/pytorch/executorch/issues/6846
|
open
|
[
"partner: qualcomm",
"triaged",
"module: quantization"
] | 2024-11-14T02:48:39Z
| 2024-12-23T19:32:53Z
| null |
crinex
|
huggingface/trl
| 2,356
|
How to train from scratch? Can you provide the code
|
### System Info
train from scratch
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder
- [ ] My own task or dataset (give details below)
### Reproduction
train from scratch
### Expected behavior
train from scratch
### Checklist
- [X] I have checked that my issue isn't already filed (see [open issues](https://github.com/huggingface/trl/issues?q=is%3Aissue))
- [X] I have included my system information
- [X] Any code provided is minimal, complete, and reproducible ([more on MREs](https://docs.github.com/en/get-started/writing-on-github/working-with-advanced-formatting/creating-and-highlighting-code-blocks))
- [X] Any code provided is properly formatted in code blocks, (no screenshot, [more on code blocks](https://docs.github.com/en/get-started/writing-on-github/working-with-advanced-formatting/creating-and-highlighting-code-blocks))
- [X] Any traceback provided is complete
|
https://github.com/huggingface/trl/issues/2356
|
closed
|
[
"❓ question"
] | 2024-11-14T02:39:41Z
| 2024-12-13T23:00:20Z
| null |
sankexin
|
huggingface/sentence-transformers
| 3,054
|
'scale' hyperparameter in MultipleNegativesRankingLoss
|
I am looking through the MultipleNegativesRankingLoss.py code and I have question about the 'scale' hyperparameter. Also known as the 'temperature', the scale is used to stretch or compress the range of output values from the similarity function. A larger scale creates greater distinction between positive and negative examples in terms of similarity score differences. The line below is how the scale is used in the forward function of the loss.
`scores = self.similarity_fct(embeddings_a, embeddings_b) * self.scale`
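To make the effect of the multiplier concrete, here is a small illustration (the similarity values are made up) of how the scale spreads the cosine scores before they enter the cross-entropy used by the loss:
```python
import torch
import torch.nn.functional as F

# One anchor against its positive (0.9) and two in-batch negatives (0.7, 0.1).
cos_scores = torch.tensor([[0.9, 0.7, 0.1]])
label = torch.tensor([0])  # the positive sits at index 0

for scale in (1.0, 20.0):
    probs = F.softmax(cos_scores * scale, dim=-1)
    loss = F.cross_entropy(cos_scores * scale, label)
    print(f"scale={scale}: probs={probs.squeeze().tolist()}, loss={loss.item():.4f}")
```
Without the multiplier the logits are confined to [-1, 1], so the softmax stays close to uniform and the gradient signal is weak; scaling by 20 sharpens the distribution considerably.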
Currently, the scale is set to 20 for when cosine similarity is used as the distance metric.
Why was 20 selected as the scale for when using cosine similarity on the embeddings? Is this the optimal scale value for cosine similarity? Would this hyperparameter need to be optimized during fine-tuning?
|
https://github.com/huggingface/sentence-transformers/issues/3054
|
closed
|
[
"question"
] | 2024-11-14T00:11:23Z
| 2025-01-16T13:54:45Z
| null |
gnatesan
|
huggingface/diffusers
| 9,924
|
Can we get more schedulers for flow based models such as SD3, SD3.5, and flux
|
It seems that advanced schedulers such as DDIM and DPM++ 2M should be able to work with flow-based models such as SD3, SD3.5, and Flux.
However, I only see two flow-based schedulers in the diffusers codebase:
FlowMatchEulerDiscreteScheduler and
FlowMatchHeunDiscreteScheduler.
I tried to use DPMSolverMultistepScheduler, but it does not generate correct images with flow-based models. Help?
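For reference, swapping in one of the two existing flow-match schedulers follows the usual `from_config` pattern; the model id and dtype below are just an example:
```python
import torch
from diffusers import StableDiffusion3Pipeline, FlowMatchHeunDiscreteScheduler

pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers", torch_dtype=torch.float16
).to("cuda")
# Replace the default flow-match Euler scheduler with the Heun variant.
pipe.scheduler = FlowMatchHeunDiscreteScheduler.from_config(pipe.scheduler.config)
image = pipe("a photo of a cat", num_inference_steps=28).images[0]
```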
|
https://github.com/huggingface/diffusers/issues/9924
|
open
|
[
"wip",
"scheduler"
] | 2024-11-14T00:07:56Z
| 2025-01-14T18:31:12Z
| 40
|
linjiapro
|
pytorch/torchtitan
| 676
|
Very low wps with H200 Gpus
|
Hello, I am running multinode_trainer.slurm (llama3_70b.toml) on 4 nodes with 32 H200 GPUs in total. However, wps is only around 200. Any ideas what could cause this slowness?
[output.txt](https://github.com/user-attachments/files/17740634/output.txt)
[multinode_trainer.slurm.txt](https://github.com/user-attachments/files/17740601/multinode_trainer.slurm.txt)
|
https://github.com/pytorch/torchtitan/issues/676
|
closed
|
[
"question"
] | 2024-11-13T23:59:00Z
| 2025-02-26T04:16:21Z
| null |
aniltrkkn
|
pytorch/xla
| 8,379
|
Confusing text in bazel.md
|
## 📚 Documentation
The bazil.md file contains the following text:
Bazel brings in [pybind11](https://github.com/pybind/pybind11) embeded python and links against it to provide libpython to the plugin using this mechanism. Python headers are also sourced from there instead of depending on the system version. These are satisfied from the "@pybind11//:pybind11_embed", which sets up compiler options for linking with libpython transitively.
From what I can determine:
- `pybind` is a library of headers that defines an API for C++ and Python code to interact
- libpython is a library that provides the core implementation of the Python interpreter
The text above says "Bazel ... links against pybind to provide libpython to the plugin..."
- What does this mean?
- To what plugin does this refer?
- How does Bazel "provide libpython to the plugin"? Does this mean that Bazel uses the libpython library when building the plugin and the plugin uses the API defined in pybind to call into libpython? Why is it important to state how the plugin communicates with libpython?
The text says: "Python headers are also sourced from there instead of depending on the system version. "
- To where does "there" refer?
The text says: "These are satisfied from the "@pybind11//:pybind11_embed", which sets up compiler options for linking with libpython transitively."
- What does "these" refer to?
- What is "@pybind11//:pybind11_embed"?
- What does it mean to link with libpython transitively?
|
https://github.com/pytorch/xla/issues/8379
|
open
|
[
"documentation",
"build"
] | 2024-11-13T23:11:00Z
| 2025-11-13T00:46:46Z
| 3
|
mikegre-google
|
pytorch/executorch
| 6,813
|
How to convert tokenizer of SmolLM model as accepted by executorch
|
Hi,
I am trying to convert the [SmolLm-135M-Instruct](https://huggingface.co/HuggingFaceTB/SmolLM-135M-Instruct) model to the .pte format and then run it on an Android device.
I have been successful in converting the model, but ExecuTorch requires the tokenizer either in .bin format or in .model format (which can then be converted into .bin). However, the Hugging Face repo contains neither a tokenizer.model nor a tokenizer.bin file.
How would I go about converting the tokenizer.json file into the appropriate format?
cc @mergennachin @byjlw
|
https://github.com/pytorch/executorch/issues/6813
|
open
|
[
"triaged",
"module: extension",
"module: user experience"
] | 2024-11-13T11:19:13Z
| 2025-12-18T20:16:46Z
| null |
Arpit2601
|
huggingface/pytorch-image-models
| 2,332
|
[BUG] How to customize the number of classification heads
|
**Describe the bug**
**To Reproduce**
Steps to reproduce the behavior:
```python
from timm.models import create_model

checkpoint_path = "/nas_mm_2/yinxiaofei.yxf/open_source_model/InternViT-300M-448px/tmp/timm__vit_intern300m_patch14_448.ogvl_dist/model.safetensors"
model = create_model('vit_intern300m_patch14_448', checkpoint_path=checkpoint_path, num_classes=3)
```
**Screenshots**
RuntimeError: Error(s) in loading state_dict for VisionTransformer:
Missing key(s) in state_dict: "head.weight", "head.bias".
**Additional context**
If I remove the `num_classes=3` argument, the program runs completely normally.
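A possible workaround sketch (assuming the goal is a fresh 3-class head on top of the distilled backbone) is to load the checkpoint without specifying `num_classes` and attach the classifier afterwards via timm's `reset_classifier`:
```python
from timm.models import create_model

# Load the backbone exactly as in the working case (no num_classes), then add a new head.
model = create_model('vit_intern300m_patch14_448', checkpoint_path=checkpoint_path)
model.reset_classifier(num_classes=3)
```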
|
https://github.com/huggingface/pytorch-image-models/issues/2332
|
closed
|
[
"bug"
] | 2024-11-12T08:08:50Z
| 2024-11-12T15:28:42Z
| null |
JarvisFei
|
pytorch/xla
| 8,371
|
TPU Trillium Base Docker Image cannot initialize
|
## TPU initialization fails
When I start a TPU v6e-4 VM with the v2-alpha-tpuv6e base image and set up a plain pip environment with the torch_xla updates, I can initialize the TPUs without problems. However, when I dockerize my pipeline, it fails to initialize the TPUs. I have tried many torch_xla TPU base images, but none of them manage to initialize. The hang happens every time I get a device via torch_xla.core.xla_model.xla_device().
I have checked the base images below. I suspect the v2-alpha-tpuv6e configuration is crucial; is there a related base Docker image?
> us-central1-docker.pkg.dev/tpu-pytorch-releases/docker/xla:nightly_3.10_tpuvm_20241028
> us-central1-docker.pkg.dev/deeplearning-images/reproducibility/pytorch-tpu-diffusers:v4
## To Reproduce
# DevDockerfile
```dockerfile
FROM us-central1-docker.pkg.dev/tpu-pytorch-releases/docker/xla:nightly_3.10_tpuvm_20241028
# Set environment variables to avoid prompts during installation
ENV DEBIAN_FRONTEND=noninteractive
ENV PYTHONUNBUFFERED=1
RUN apt-get update && apt-get install -y \
    vim \
    curl \
    git \
    bash \
    wget \
    libopenblas-base \
    && rm -rf /var/lib/apt/lists/*
RUN pip3 install --no-cache-dir --pre torch==2.6.0.dev20241028+cpu torchvision==0.20.0.dev20241028+cpu --index-url https://download.pytorch.org/whl/nightly/cpu
RUN pip install "torch_xla[tpu] @ https://storage.googleapis.com/pytorch-xla-releases/wheels/tpuvm/torch_xla-2.6.0.dev20241028-cp310-cp310-linux_x86_64.whl" -f https://storage.googleapis.com/libtpu-releases/index.html
RUN pip install torch_xla[pallas] -f https://storage.googleapis.com/jax-releases/jax_nightly_releases.html -f https://storage.googleapis.com/jax-releases/jaxlib_nightly_releases.html
COPY . .
CMD ["python3", "app.py"]
```
#app.py
```python
# Quite simple to reproduce
import torch_xla.core.xla_model as xm
# Hangs here; the TPU is never initialized.
device = xm.xla_device()
```
Both file are in same directory. Generate docker with
`docker build -f DevDockerfile -t tpu .`
Then run with privileged.
`docker run -ti --rm -p 5000:5000 --privileged tpu`
## Expected behavior
TPU cores cannot be initialized in the Docker environment; they should initialize just as they do when running outside of Docker.
## Environment
- Reproducible on XLA backend [CPU/TPU/CUDA]: TPU
- torch_xla version: torch_xla-2.6.0.dev20241028-cp310
|
https://github.com/pytorch/xla/issues/8371
|
open
|
[
"bug",
"xla:tpu"
] | 2024-11-12T07:38:53Z
| 2025-02-18T12:43:11Z
| 9
|
hsebik
|
huggingface/unity-api
| 30
|
[QUESTION]
|
I have a simple game built in Unity and I'm using this Hugging Face API client for voice parsing. When I build the game and distribute it to many users, how do I handle the API key so that every user can install the game and use voice control without any issues?
|
https://github.com/huggingface/unity-api/issues/30
|
closed
|
[
"question"
] | 2024-11-12T02:35:52Z
| 2024-11-20T01:46:16Z
| null |
harshal-14
|
pytorch/vision
| 8,721
|
make processing of arbitrary inputs to transforms.v2 public and document it
|
### 🚀 The feature
Supporting arbitrary input structures in custom transforms is very important in the case of transform compositions:
```python
tr = Compose([RandomCrop((128, 128)), CustomTransform()])
```
This can be done by inheriting from `torchvision.transforms.v2.Transform` and implementing the **private** `._transform` method, which avoids having to unravel the data structure on your own (since this is done anyway in the `.forward` method).
```python
class CustomTransform(Transform):
    def __init__(self, **kwargs):
        super().__init__()

    def _transform(self, inpt, params):
        if isinstance(inpt, Image):
            transformed_inpt = ...  # transform the image
        elif isinstance(inpt, BoundingBoxes):
            transformed_inpt = ...  # transform the boxes consistently with the image
        else:
            transformed_inpt = inpt  # pass other leaf types through unchanged
        return transformed_inpt
```
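For completeness, such a transform can then be dropped into a composition and applied to an arbitrary input structure. A minimal usage sketch (assuming the schematic `CustomTransform` above is fleshed out, and using the `tv_tensors` wrappers):
```python
import torch
from torchvision import tv_tensors
from torchvision.transforms import v2

sample = {
    "image": tv_tensors.Image(torch.randint(0, 256, (3, 256, 256), dtype=torch.uint8)),
    "boxes": tv_tensors.BoundingBoxes(
        torch.tensor([[10, 10, 100, 100]]), format="XYXY", canvas_size=(256, 256)
    ),
    "label": 3,  # plain Python values are passed through untouched
}

pipeline = v2.Compose([v2.RandomCrop((128, 128)), CustomTransform()])
out = pipeline(sample)  # _transform is called once per leaf of the structure
```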
The method has also been described in this blog post [How to Create Custom Torchvision V2 Transforms](https://christianjmills.com/posts/torchvision-custom-v2-transform-tutorial/index.html), but the official torchvision docs do not yet describe it and instead suggest hard-coding the input structure.
Having to implement a **private** method for this (even though the `Transform` class is public) feels very wrong; it means things could break on our side at any time. I would appreciate it if the `._transform` method were made public -> `.transform`, and if the `Transform` class received proper documentation on how this method should be implemented for custom transforms.
### Motivation, pitch
The `torchvision.transforms.v2` API has now been around for quite some time already and it would be nice to give developers the chance to develop transforms of the same quality and flexibility as the originally implemented ones!
### Alternatives
_No response_
### Additional context
_No response_
|
https://github.com/pytorch/vision/issues/8721
|
closed
|
[] | 2024-11-11T13:48:03Z
| 2024-12-09T12:39:09Z
| 3
|
liopeer
|
huggingface/swift-transformers
| 140
|
How to use customized tokenizer?
|
Hello. I have a question about loading a tokenizer model. I am trying to use a tokenizer I trained myself in a Swift environment. After training, how do I use the resulting .model and .vocab files so that the tokenizer I trained works in Swift through the swift-transformers API? I would appreciate an answer.
|
https://github.com/huggingface/swift-transformers/issues/140
|
open
|
[
"tokenization"
] | 2024-11-11T09:36:14Z
| 2025-09-10T13:19:10Z
| null |
cch1219
|
pytorch/audio
| 3,852
|
Can anyone provide a real-time pretrain model for Visual Speech Recognition?
|
### 📚 The doc issue
I don't have the LRS3 dataset, so I can't use the author's real-time recipe. Could I request the trained model directly? I would like to ask the author whether the trained models can be provided directly, or whether anyone has a download link for LRS3. Thank you!
### Suggest a potential alternative/fix
_No response_
|
https://github.com/pytorch/audio/issues/3852
|
open
|
[] | 2024-11-11T06:19:57Z
| 2024-11-11T06:19:57Z
| 0
|
bernie-122
|