| repo (string) | number (int64) | title (string) | body (string) | url (string) | state (string) | labels (list) | created_at (timestamp, UTC) | updated_at (timestamp, UTC) | comments (int64) | user (string) |
|---|---|---|---|---|---|---|---|---|---|---|
pytorch/torchtitan
| 764
|
FSDP 2 doesn't pad tensors?
|
Hi, I ran my model with FSDP 2, one of the linear layers has a dim that's not divisible by the world size (128), and so I got the following error:
```
torch.Size([...]) is not divisible by FSDP world size 128.
```
FSDP 1 circumvents this issue by padding the tensors. Is this not supported by FSDP 2? If not, will it be supported?
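For illustration, here is a minimal sketch (not part of the issue) of zero-padding a weight's sharded dimension to a multiple of the world size before handing the module to FSDP2; the helper name and the choice of dim 0 are assumptions:
```python
import torch
import torch.nn.functional as F

def pad_dim_to_multiple(weight: torch.Tensor, multiple: int, dim: int = 0) -> torch.Tensor:
    """Zero-pad `dim` of `weight` so its size is divisible by `multiple`."""
    remainder = weight.size(dim) % multiple
    if remainder == 0:
        return weight
    pad = multiple - remainder
    # F.pad takes (left, right) pairs starting from the last dim, so build the
    # spec so that only the right side of `dim` gets padded.
    spec = [0, 0] * (weight.dim() - 1 - dim) + [0, pad]
    return F.pad(weight, spec)

# e.g. a (1000, 512) weight becomes (1024, 512), which shards evenly over 128 ranks
w = torch.randn(1000, 512)
w_padded = pad_dim_to_multiple(w, 128, dim=0)
```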
|
https://github.com/pytorch/torchtitan/issues/764
|
open
|
[
"question",
"module: fsdp"
] | 2024-12-29T21:55:50Z
| 2025-02-13T01:51:43Z
| null |
cassanof
|
pytorch/torchchat
| 1,446
|
Supply Local Weights to an LLM instead of Downloading Weights from HuggingFace
|
### 🚀 The feature, motivation and pitch
I have a local copy of the Llama weights and I want to supply those weights to create a chat application. Please include a CLI flag to do so.
### Alternatives
_No response_
### Additional context
_No response_
### RFC (Optional)
_No response_
|
https://github.com/pytorch/torchchat/issues/1446
|
closed
|
[
"documentation",
"triaged"
] | 2024-12-29T20:14:26Z
| 2025-01-06T01:54:19Z
| 2
|
sgupta1007
|
pytorch/data
| 1,418
|
torch.node datawriter
|
### 📚 The doc issue
Can we add an example/migration file for a `torch.node` data writer (if this is already possible with the current API)?
See:
https://github.com/pytorch/pytorch/issues/140296#issuecomment-2563190801
### Suggest a potential alternative/fix
_No response_
|
https://github.com/meta-pytorch/data/issues/1418
|
open
|
[] | 2024-12-27T13:49:24Z
| 2024-12-27T13:49:24Z
| 0
|
bhack
|
pytorch/pytorch
| 143,906
|
How to correctly asynchronously copy a GPU tensor to a CPU tensor in another process without introducing blocking?
|
### 🐛 Describe the bug
I am developing a distributed PyTorch application designed to asynchronously transfer data from a GPU process to a CPU process, ensuring that GPU computations remain non-blocking. In my current implementation, I utilize the non-blocking copy_ method to transfer data from a GPU tensor to a CPU tensor and then employ dist.isend to send the data to another rank. However, under certain conditions, this setup leads to a deadlock.
```python
import torch
import torch.distributed as dist
import os
def gpu_to_cpu_and_send(rank, size):
    tensor = torch.randn(4096, 8192).cuda(rank)  # On specific GPU
    print(tensor[-1][-1])
    print(f"Rank {rank}: Created tensor on GPU")
    cpu_tensor = torch.zeros(4096, 8192)
    cpu_tensor.copy_(tensor, non_blocking=True)  # Non-blocking GPU to CPU copy
    print(f"Rank {rank}: Copied tensor to CPU (non-blocking)")
    if rank == 0:
        print(f"Rank {rank}: Sending tensor to rank 1")
        dist.isend(tensor=cpu_tensor, dst=1)  # Sending data to rank 1
        print(f"Rank {rank}: Data sent to rank 1")

def receive_data(rank, size):
    received_tensor = torch.zeros(4096, 8192)
    print(f"Rank {rank}: Waiting to receive data")
    dist.recv(tensor=received_tensor, src=0)  # Receiving data from rank 0
    print(f"Rank {rank}: Received data from rank 0")
    print(received_tensor[-1][-1])

def main():
    rank = int(os.environ['RANK'])
    size = int(os.environ['WORLD_SIZE'])
    dist.init_process_group(backend='gloo', rank=rank, world_size=size)
    if rank == 0:
        gpu_to_cpu_and_send(rank, size)
    elif rank == 1:
        receive_data(rank, size)

if __name__ == "__main__":
    main()
```
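As an aside, here is a minimal sketch of one common pattern for this (pinned host buffer plus a dedicated copy stream and event, synchronizing only right before handing the buffer to `dist.isend`); the helper name is illustrative and this is not the fix proposed in the issue:
```python
import torch

def async_gpu_to_cpu(tensor: torch.Tensor):
    """Start a non-blocking D2H copy and return (cpu_buffer, completion_event)."""
    cpu_buf = torch.empty(tensor.shape, dtype=tensor.dtype, pin_memory=True)
    copy_stream = torch.cuda.Stream(device=tensor.device)
    copy_stream.wait_stream(torch.cuda.current_stream())  # wait for producers of `tensor`
    with torch.cuda.stream(copy_stream):
        cpu_buf.copy_(tensor, non_blocking=True)  # truly async only with pinned memory
        done = torch.cuda.Event()
        done.record(copy_stream)
    return cpu_buf, done

# later, just before dist.isend(tensor=cpu_buf, dst=...):
#     done.synchronize()  # guarantees the host buffer is fully written
```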
### Versions
torchrun --nproc_per_node=2 demo.py
Run with Nvidia GPU.
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
|
https://github.com/pytorch/pytorch/issues/143906
|
open
|
[
"needs reproduction",
"oncall: distributed",
"triaged"
] | 2024-12-27T11:22:11Z
| 2025-01-03T18:13:46Z
| null |
zhanghb55
|
huggingface/trl
| 2,523
|
How to solve the situation where the tokenizer of the reward model is inconsistent with the tokenizer of the actor model?
|
https://github.com/huggingface/trl/issues/2523
|
open
|
[
"❓ question"
] | 2024-12-27T09:43:06Z
| 2024-12-28T06:26:16Z
| null |
stephen-nju
|
|
huggingface/peft
| 2,298
|
Qdora support
|
### Feature request
Is it possible to use QDoRA with PEFT?
### Motivation
QDoRA is better than QLoRA and performs like full fine-tuning.
### Your contribution
```
peft_config = LoraConfig(
    r=8,
    lora_alpha=32,
    lora_dropout=0.1,
    qdora=True  # adding qdora
)
```
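For context, a hedged sketch of what a recent PEFT release already exposes: `LoraConfig(use_dora=True)` enables DoRA, and combining it with a bitsandbytes 4-bit base model gives a QDoRA-style setup (the model id and hyperparameters below are placeholders, and whether DoRA composes with a 4-bit base depends on the PEFT version):
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# 4-bit quantized base model (QLoRA-style loading)
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",  # placeholder model id
    quantization_config=bnb_config,
)

peft_config = LoraConfig(
    r=8,
    lora_alpha=32,
    lora_dropout=0.1,
    use_dora=True,  # DoRA decomposition on top of the quantized base
)
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()
```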
|
https://github.com/huggingface/peft/issues/2298
|
closed
|
[] | 2024-12-27T04:47:54Z
| 2025-01-03T12:26:58Z
| 2
|
imrankh46
|
huggingface/smolagents
| 2
|
How to call OpenAI-like models through an API?
|
How to call OpenAI-like models through an API?
|
https://github.com/huggingface/smolagents/issues/2
|
closed
|
[] | 2024-12-27T04:34:35Z
| 2024-12-29T21:58:10Z
| null |
win4r
|
huggingface/datasets
| 7,347
|
Converting Arrow to WebDataset TAR Format for Offline Use
|
### Feature request
Hi,
I've downloaded an Arrow-formatted dataset offline using Hugging Face's datasets library by:
```
import json
from datasets import load_dataset
dataset = load_dataset("pixparse/cc3m-wds")
dataset.save_to_disk("./cc3m_1")
```
Now I need to convert it to WebDataset's TAR format for offline data ingestion.
Is there a straightforward method to achieve this conversion without an internet connection? Can I simply convert it by
```
tar -cvf
```
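(For reference, a plain `tar -cvf` won't produce the per-sample key/extension layout WebDataset expects; below is a hedged sketch of one possible offline conversion path using `webdataset.ShardWriter`. The `"train"` split and the `image`/`caption` column names are assumptions about the saved dataset.)
```python
import io
import json
from datasets import load_from_disk
import webdataset as wds

dataset = load_from_disk("./cc3m_1")["train"]  # assumes a "train" split exists

with wds.ShardWriter("cc3m-%06d.tar", maxcount=10000) as sink:
    for i, example in enumerate(dataset):
        buf = io.BytesIO()
        example["image"].save(buf, format="JPEG")  # assumes a PIL image column named "image"
        sink.write({
            "__key__": f"{i:09d}",
            "jpg": buf.getvalue(),
            "json": json.dumps({"caption": example.get("caption", "")}).encode("utf-8"),
        })
```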
btw, when I tried:
```
import webdataset as wds
from huggingface_hub import get_token
from torch.utils.data import DataLoader
hf_token = get_token()
url = "https://huggingface.co/datasets/timm/imagenet-12k-wds/resolve/main/imagenet12k-train-{{0000..1023}}.tar"
url = f"pipe:curl -s -L {url} -H 'Authorization:Bearer {hf_token}'"
dataset = wds.WebDataset(url).decode()
dataset.save_to_disk("./cc3m_webdataset")
```
an error occurred:
```
AttributeError: 'WebDataset' object has no attribute 'save_to_disk'
```
Thanks a lot!
### Motivation
Converting Arrow to WebDataset TAR Format
### Your contribution
No clue yet
|
https://github.com/huggingface/datasets/issues/7347
|
closed
|
[
"enhancement"
] | 2024-12-27T01:40:44Z
| 2024-12-31T17:38:00Z
| 4
|
katie312
|
huggingface/transformers.js
| 1,118
|
Trying to use custom finetuned Whisper Model with
|
### Question
@xenova I am trying to use our own fine-tuned Whisper model https://huggingface.co/medxcribe/whisper-base.en with
https://huggingface.co/spaces/Xenova/whisper-web. I have uploaded it into a separate repo for reference: https://huggingface.co/medxcribe/whisper-base-onnx.en.
We converted the fine-tuned medxcribe/whisper-base.en using this command:
`pip install onnx==1.17.0
pip install onnxruntime==1.20.1
pip install transformers==4.35.2
optimum-cli export onnx --model medxcribe/whisper-base.en whisper_onnx --task automatic-speech-recognition-with-past --opset 14`
But unfortunately, while loading Whisper-web, we are stuck with the error below:
"Can't create a session"
at t.createSessionFinalize (http://localhost:4173/assets/worker-1c2c88a7.js:1789:105945)
at t.createSession (http://localhost:4173/assets/worker-1c2c88a7.js:1789:106543)
at t.createSession (http://localhost:4173/assets/worker-1c2c88a7.js:1789:98867)
at t.OnnxruntimeWebAssemblySessionHandler.loadModel (http://localhost:4173/assets/worker-1c2c88a7.js:1789:101717)
at Object.createSessionHandler (http://localhost:4173/assets/worker-1c2c88a7.js:9:115048)
at dn.create (http://localhost:4173/assets/worker-1c2c88a7.js:1:14653)
at async constructSession (http://localhost:4173/assets/worker-1c2c88a7.js:1810:22248)
at async Promise.all (index 2)
at async WhisperForConditionalGeneration.from_pretrained (http://localhost:4173/assets/worker-1c2c88a7.js:1810:29662)
at async AutoModelForSpeechSeq2Seq.from_pretrained (http://localhost:4173/assets/worker-1c2c88a7.js:1810:77285)
Any suggestions? At a high level, there seems to be a problem with the generated ONNX files.
|
https://github.com/huggingface/transformers.js/issues/1118
|
open
|
[
"question"
] | 2024-12-26T20:18:36Z
| 2024-12-26T20:18:36Z
| null |
vijaim
|
huggingface/finetrainers
| 153
|
How to generate result of validation and resolution.
|
Hi author,
I am using your Hunyuan fine-tuning bash script to fine-tune a LoRA on my own dataset with an original resolution of 1080p. But I find the model can only run on videos whose height and width are both divisible by 32. Can the model also be trained on 360p or 720p videos, and why?
|
https://github.com/huggingface/finetrainers/issues/153
|
closed
|
[] | 2024-12-26T15:21:22Z
| 2025-01-10T23:38:39Z
| null |
Aristo23333
|
huggingface/lerobot
| 597
|
Inquiry About Support for RDT-1B Model
|
Hi,
I would like to extend my heartfelt thanks for maintaining such an outstanding codebase. Your dedication and hard work have significantly contributed to advancements in the robotics field, and I truly appreciate the resources and support your community provides.
I am reaching out to inquire whether there are any plans to support the RDT-1B model from the [RoboticsDiffusionTransformer](https://github.com/thu-ml/RoboticsDiffusionTransformer) repository within the LeRobot framework. The RDT-1B model appears to offer promising capabilities for robotics applications, and integrating it could potentially enhance the functionalities and performance of projects built on LeRobot.
Could you please let me know if there are any intentions to incorporate this model in the future, or if there are any existing efforts towards this integration? Additionally, if there are ways the community can assist or contribute to this effort, I would be eager to participate.
Thank you once again for all your contributions and support. I look forward to your response.
|
https://github.com/huggingface/lerobot/issues/597
|
closed
|
[
"question",
"policies",
"stale"
] | 2024-12-26T11:12:58Z
| 2025-10-08T20:52:51Z
| null |
Robert-hua
|
huggingface/diffusers
| 10,383
|
[Request] Optimize HunyuanVideo Inference Speed with ParaAttention
|
Hi guys,
First and foremost, I would like to commend you for the incredible work on the `diffusers` library. It has been an invaluable resource for my projects.
I am writing to suggest an enhancement to the inference speed of the `HunyuanVideo` model. We have found that using [ParaAttention](https://github.com/chengzeyi/ParaAttention) can significantly speed up the inference of HunyuanVideo. ParaAttention provides context parallel attention that works with `torch.compile`, supporting Ulysses Style and Ring Style parallelism. I hope we can add a doc or introduction on how to make `HunyuanVideo` in `diffusers` run faster with `ParaAttention`. Besides `HunyuanVideo`, `FLUX`, `Mochi` and `CogVideoX` are also supported.
Steps to Optimize HunyuanVideo Inference with `ParaAttention`:
# Install ParaAttention:
```bash
pip3 install para-attn
# Or visit https://github.com/chengzeyi/ParaAttention.git to see detailed instructions
```
# Example Script:
Here is an example script to run HunyuanVideo with ParaAttention:
```python
import torch
import torch.distributed as dist
from diffusers import HunyuanVideoPipeline, HunyuanVideoTransformer3DModel
from diffusers.utils import export_to_video
dist.init_process_group()
# [rank1]: RuntimeError: Expected mha_graph->execute(handle, variant_pack, workspace_ptr.get()).is_good() to be true, but got false. (Could this error message be improved? If so, please report an enhancement request to PyTorch.)
torch.backends.cuda.enable_cudnn_sdp(False)
model_id = "tencent/HunyuanVideo"
transformer = HunyuanVideoTransformer3DModel.from_pretrained(
    model_id,
    subfolder="transformer",
    torch_dtype=torch.bfloat16,
    revision="refs/pr/18",
)
pipe = HunyuanVideoPipeline.from_pretrained(
    model_id,
    transformer=transformer,
    torch_dtype=torch.float16,
    revision="refs/pr/18",
).to(f"cuda:{dist.get_rank()}")

pipe.vae.enable_tiling(
    # Make it runnable on GPUs with 48GB memory
    # tile_sample_min_height=128,
    # tile_sample_stride_height=96,
    # tile_sample_min_width=128,
    # tile_sample_stride_width=96,
    # tile_sample_min_num_frames=32,
    # tile_sample_stride_num_frames=24,
)

from para_attn.context_parallel import init_context_parallel_mesh
from para_attn.context_parallel.diffusers_adapters import parallelize_pipe
from para_attn.parallel_vae.diffusers_adapters import parallelize_vae

mesh = init_context_parallel_mesh(
    pipe.device.type,
)
parallelize_pipe(
    pipe,
    mesh=mesh,
)
parallelize_vae(pipe.vae, mesh=mesh._flatten())

# pipe.enable_model_cpu_offload(gpu_id=dist.get_rank())

# torch._inductor.config.reorder_for_compute_comm_overlap = True
# pipe.transformer = torch.compile(pipe.transformer, mode="max-autotune-no-cudagraphs")

output = pipe(
    prompt="A cat walks on the grass, realistic",
    height=720,
    width=1280,
    num_frames=129,
    num_inference_steps=30,
    output_type="pil" if dist.get_rank() == 0 else "pt",
).frames[0]

if dist.get_rank() == 0:
    print("Saving video to hunyuan_video.mp4")
    export_to_video(output, "hunyuan_video.mp4", fps=15)
dist.destroy_process_group()
```
Save the above code to `run_hunyuan_video.py` and run it with torchrun:
```bash
torchrun --nproc_per_node=2 run_hunyuan_video.py
```
The generated video on 2xH100:
https://github.com/user-attachments/assets/e67838a7-5261-452e-9bf0-9f186611c3b7
By following these steps, users can leverage `ParaAttention` to achieve faster inference times with `HunyuanVideo` on multiple GPUs.
Thank you for considering this suggestion. I believe it could greatly benefit the community and enhance the performance of `HunyuanVideo`. Please let me know if there are any questions or further clarifications needed.
|
https://github.com/huggingface/diffusers/issues/10383
|
closed
|
[
"roadmap"
] | 2024-12-25T15:07:53Z
| 2025-01-16T18:05:15Z
| 10
|
chengzeyi
|
huggingface/lerobot
| 596
|
How to achieve multiple tasks on the basis of LeRobot ?
|
LeRobot can achieve single tasks (such as inserting, transferring blocks, etc.). How can multiple tasks be achieved on the basis of LeRobot (such as first recognizing and classifying objects, and then putting the objects into boxes in order)?
Please give me some ideas.
|
https://github.com/huggingface/lerobot/issues/596
|
closed
|
[
"question",
"stale"
] | 2024-12-25T12:20:37Z
| 2025-10-17T11:38:20Z
| null |
wangwisdom
|
huggingface/diffusers
| 10,375
|
[low priority] Please fix links in documentation
|
https://huggingface.co/docs/diffusers/main/en/api/pipelines/hunyuan_video
Both links below are broken:
Make sure to check out the Schedulers [guide](https://huggingface.co/docs/diffusers/main/en/using-diffusers/schedulers.md) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading.md#reuse-a-pipeline) section to learn how to efficiently load the same components into multiple pipelines.
|
https://github.com/huggingface/diffusers/issues/10375
|
closed
|
[] | 2024-12-25T09:04:33Z
| 2024-12-28T20:01:27Z
| 0
|
nitinmukesh
|
huggingface/diffusers
| 10,374
|
Is there any plan to support TeaCache for training-free acceleration?
|
TeaCache is a training-free inference acceleration method for visual generation. TeaCache currently supports HunyuanVideo, CogVideoX, Open-Sora, Open-Sora-Plan and Latte. TeaCache can speed up HunyuanVideo 2x without much visual quality degradation. For example, inference for a 720p, 129-frame video takes around 50 minutes on a single A800 GPU, while TeaCache can bring it down to 23 minutes. Thanks for your efforts!
https://github.com/LiewFeng/TeaCache.
|
https://github.com/huggingface/diffusers/issues/10374
|
open
|
[
"wip"
] | 2024-12-25T05:00:23Z
| 2025-01-27T01:28:53Z
| 4
|
LiewFeng
|
huggingface/chat-ui
| 1,633
|
docker run is not working
|
I'm running the following:
```
docker run -p 3000:3000 --env-file env.local huggingface/chat-ui
```
The env file has the following set: `HF_TOKEN`, `MONGODB_URL` and `MODELS`. The container prints the following:
```
Listening on 0.0.0.0:3000
```
However, on hitting `localhost:3000`, I get a blank page with `Not found`.
I can repro this consistently. Can anyone who has been able to get it working with Docker share how?
|
https://github.com/huggingface/chat-ui/issues/1633
|
open
|
[
"support"
] | 2024-12-23T08:36:09Z
| 2025-01-06T07:30:46Z
| 1
|
sebastiangonsal
|
huggingface/peft
| 2,293
|
Is it possible to add LoRA on specific head?
|
### Feature request
Could I add LoRA only to some selected heads on the model?
I read some documentation [here](https://huggingface.co/docs/peft/developer_guides/custom_models), but am still not sure about how to implement my goal.
### Motivation
The current LoRA config allows users to decide which matrices to add LoRA to; more fine-grained control over which heads to add LoRA to would be beneficial for developers.
### Your contribution
I would appreciate some tips on how to implement this.
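For what it's worth, a hedged sketch of the closest built-in mechanism: `target_modules` accepts explicit module names or a regex, so LoRA can be restricted to specific layers' projection modules. Restricting to individual heads inside one fused projection would still need a custom module, since a head is a slice of a weight rather than its own module. The regex below assumes GPT-2's module naming:
```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("gpt2")  # example model

config = LoraConfig(
    r=8,
    lora_alpha=16,
    # only the fused attention projection of layers 0 and 5
    target_modules=r".*\.h\.(0|5)\.attn\.c_attn",
    fan_in_fan_out=True,  # gpt2's c_attn is a Conv1D
)
peft_model = get_peft_model(model, config)
peft_model.print_trainable_parameters()
```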
|
https://github.com/huggingface/peft/issues/2293
|
closed
|
[] | 2024-12-22T19:57:54Z
| 2025-12-14T10:07:49Z
| 12
|
SpeeeedLee
|
huggingface/datasets
| 7,344
|
HfHubHTTPError: 429 Client Error: Too Many Requests for URL when trying to access SlimPajama-627B or c4 on TPUs
|
### Describe the bug
I am trying to run some training on Google's TPUs using Hugging Face's DataLoader on [SlimPajama-627B](https://huggingface.co/datasets/cerebras/SlimPajama-627B) and [c4](https://huggingface.co/datasets/allenai/c4), but I end up running into a `429 Client Error: Too Many Requests for URL` error when I call `load_dataset`. The even odder part is that I am able to successfully run training with the [wikitext dataset](https://huggingface.co/datasets/Salesforce/wikitext). Is there something I need to set up to train specifically with SlimPajama or C4 on TPUs? I am not clear on why I am getting these errors.
### Steps to reproduce the bug
These are the commands you could run to produce the error below but you will require a ClearML account (you can create one [here](https://app.clear.ml/login?redirect=%2Fdashboard)) with a queue setup to run on Google TPUs
```bash
git clone https://github.com/clankur/muGPT.git
cd muGPT
python -m train --config-name=slim_v4-32_84m.yaml +training.queue={NAME_OF_CLEARML_QUEUE}
```
The error I see:
```
Traceback (most recent call last):
File "/home/clankur/conda/envs/jax/lib/python3.10/site-packages/clearml/binding/hydra_bind.py", line 230, in _patched_task_function
return task_function(a_config, *a_args, **a_kwargs)
File "/home/clankur/.clearml/venvs-builds/3.10/task_repository/muGPT.git/train.py", line 1037, in main
main_contained(config, logger)
File "/home/clankur/.clearml/venvs-builds/3.10/task_repository/muGPT.git/train.py", line 840, in main_contained
loader = get_loader("train", config.training_data, config.training.tokens)
File "/home/clankur/.clearml/venvs-builds/3.10/task_repository/muGPT.git/input_loader.py", line 549, in get_loader
return HuggingFaceDataLoader(split, config, token_batch_params)
File "/home/clankur/.clearml/venvs-builds/3.10/task_repository/muGPT.git/input_loader.py", line 395, in __init__
self.dataset = load_dataset(
File "/home/clankur/conda/envs/jax/lib/python3.10/site-packages/datasets/load.py", line 2112, in load_dataset
builder_instance = load_dataset_builder(
File "/home/clankur/conda/envs/jax/lib/python3.10/site-packages/datasets/load.py", line 1798, in load_dataset_builder
dataset_module = dataset_module_factory(
File "/home/clankur/conda/envs/jax/lib/python3.10/site-packages/datasets/load.py", line 1495, in dataset_module_factory
raise e1 from None
File "/home/clankur/conda/envs/jax/lib/python3.10/site-packages/datasets/load.py", line 1479, in dataset_module_factory
).get_module()
File "/home/clankur/conda/envs/jax/lib/python3.10/site-packages/datasets/load.py", line 1034, in get_module
else get_data_patterns(base_path, download_config=self.download_config)
File "/home/clankur/conda/envs/jax/lib/python3.10/site-packages/datasets/data_files.py", line 457, in get_data_patterns
return _get_data_files_patterns(resolver)
File "/home/clankur/conda/envs/jax/lib/python3.10/site-packages/datasets/data_files.py", line 248, in _get_data_files_patterns
data_files = pattern_resolver(pattern)
File "/home/clankur/conda/envs/jax/lib/python3.10/site-packages/datasets/data_files.py", line 340, in resolve_pattern
for filepath, info in fs.glob(pattern, detail=True).items()
File "/home/clankur/conda/envs/jax/lib/python3.10/site-packages/huggingface_hub/hf_file_system.py", line 409, in glob
return super().glob(path, **kwargs)
File "/home/clankur/.clearml/venvs-builds/3.10/lib/python3.10/site-packages/fsspec/spec.py", line 602, in glob
allpaths = self.find(root, maxdepth=depth, withdirs=True, detail=True, **kwargs)
File "/home/clankur/conda/envs/jax/lib/python3.10/site-packages/huggingface_hub/hf_file_system.py", line 429, in find
out = self._ls_tree(path, recursive=True, refresh=refresh, revision=resolved_path.revision, **kwargs)
File "/home/clankur/conda/envs/jax/lib/python3.10/site-packages/huggingface_hub/hf_file_system.py", line 358, in _ls_tree
self._ls_tree(
File "/home/clankur/conda/envs/jax/lib/python3.10/site-packages/huggingface_hub/hf_file_system.py", line 375, in _ls_tree
for path_info in tree:
File "/home/clankur/conda/envs/jax/lib/python3.10/site-packages/huggingface_hub/hf_api.py", line 3080, in list_repo_tree
for path_info in paginate(path=tree_url, headers=headers, params={"recursive": recursive, "expand": expand}):
File "/home/clankur/conda/envs/jax/lib/python3.10/site-packages/huggingface_hub/utils/_pagination.py", line 46, in paginate
hf_raise_for_status(r)
File "/home/clankur/conda/envs/jax/lib/python3.10/site-packages/huggingface_hub/utils/_http.py", line 477, in hf_raise_for_status
raise _format(HfHubHTTPError, str(e), response) from e
huggingface_hub.errors.HfHubHTTPError: 429 Client Error: Too Many Requests for url: https://huggingface.co/api/datasets/cerebras/SlimPajama-627B/tree/2d0accdd58c5d5511943ca1f5ff0e3eb5e293543?recursive=True&
```
|
https://github.com/huggingface/datasets/issues/7344
|
closed
|
[] | 2024-12-22T16:30:07Z
| 2025-01-15T05:32:00Z
| 2
|
clankur
|
huggingface/diffusers
| 10,345
|
safetensor streaming in from_single_file_loading()
|
Can we add support for streaming safetensors while loading with `from_single_file`?
Source: https://github.com/run-ai/runai-model-streamer
example:
```python
from runai_model_streamer import SafetensorsStreamer
file_path = "/path/to/file.safetensors"
with SafetensorsStreamer() as streamer:
    streamer.stream_file(file_path)
    for name, tensor in streamer.get_tensors():
        tensor.to('cuda:0')  # note: torch device strings are lowercase
```
|
https://github.com/huggingface/diffusers/issues/10345
|
closed
|
[
"stale"
] | 2024-12-22T13:27:46Z
| 2025-01-21T15:07:58Z
| 2
|
AbhinavJangra29
|
pytorch/xla
| 8,516
|
how to release tpu memory after del diffusers pipeline
|
## ❓ Questions and Help
I create a pipeline:
`
pipeline = DiffusionPipeline.from_pretrained("stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.bfloat16).to(torch_xla.core.xla_model.xla_device())
pipeline.to('cpu')
pipeline = StableAudioPipeline.from_pretrained("stabilityai/stable-audio-open-1.0", torch_dtype=torch.bfloat16).to(torch_xla.core.xla_model.xla_device()) #which cause tpu memory problem
`
I want to ask how to release TPU memory. Is there a TPU version of `torch.cuda.empty_cache()`?
|
https://github.com/pytorch/xla/issues/8516
|
closed
|
[
"duplicate",
"question",
"xla:tpu"
] | 2024-12-22T11:03:38Z
| 2025-02-13T13:40:42Z
| null |
ghost
|
pytorch/torchchat
| 1,436
|
If scripts need `bash`, don't say to use `sh`
|
### 🐛 Describe the bug
On Debian systems, sh isn't bash, it's [dash](https://en.wikipedia.org/wiki/Almquist_shell#Dash). I haven't tested every script, but https://github.com/pytorch/torchchat/blob/main/docs/quantization.md says to run `sh torchchat/utils/scripts/build_torchao_ops.sh`, but this script fails unless run with bash on my Raspberry Pi 5.
### Versions
Collecting environment information...
PyTorch version: 2.6.0.dev20241218+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Debian GNU/Linux 12 (bookworm) (aarch64)
GCC version: (Debian 12.2.0-14) 12.2.0
Clang version: Could not collect
CMake version: version 3.31.2
Libc version: glibc-2.36
Python version: 3.11.2 (main, Sep 14 2024, 03:00:30) [GCC 12.2.0] (64-bit runtime)
Python platform: Linux-6.6.51+rpt-rpi-2712-aarch64-with-glibc2.36
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: aarch64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 4
On-line CPU(s) list: 0-3
Vendor ID: ARM
Model name: Cortex-A76
Model: 1
Thread(s) per core: 1
Core(s) per cluster: 4
Socket(s): -
Cluster(s): 1
Stepping: r4p1
CPU(s) scaling MHz: 100%
CPU max MHz: 2400.0000
CPU min MHz: 1500.0000
BogoMIPS: 108.00
Flags: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm lrcpc dcpop asimddp
L1d cache: 256 KiB (4 instances)
L1i cache: 256 KiB (4 instances)
L2 cache: 2 MiB (4 instances)
L3 cache: 2 MiB (1 instance)
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; __user pointer sanitization
Vulnerability Spectre v2: Mitigation; CSV2, BHB
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] torch==2.6.0.dev20241218+cpu
[pip3] torchao==0.8.0+git2f97b095
[pip3] torchtune==0.5.0.dev20241218+cpu
[pip3] torchvision==0.22.0.dev20241218
[conda] Could not collect
|
https://github.com/pytorch/torchchat/issues/1436
|
closed
|
[
"bug",
"documentation",
"actionable",
"Quantization",
"triaged"
] | 2024-12-22T06:43:48Z
| 2024-12-23T06:49:43Z
| 2
|
swolchok
|
pytorch/ao
| 1,456
|
[Bug] Unable to Obtain Quantized Weights Independently
|
**Description**
Thank you so much for your excellent work! I have been trying out a few demos to better understand your project.
While running [this demo](https://github.com/pytorch/ao/tree/main/torchao/quantization#a16w8-int8-weightonly-quantization), I attempted to independently print the quantized weight values, scale, and zero points. I noticed that the latter two can be accessed directly, but the quantized weight values cannot. I wanted to confirm whether this might be a bug.
I’ve attached my code snippet and output log below for your reference:
**Code snippet:**
```python
import torch
import torchao
from torchao.quantization import quantize_, int8_weight_only
print(f'Torch version: {torch.__version__}')
print(f'TorchAO version: {torchao.__version__}')
model = torch.nn.Sequential(torch.nn.Linear(2, 4)).cuda().to(torch.bfloat16)
quantize_(model, int8_weight_only())
for name, param in model.named_parameters():
if "weight" in name:
print('Weight Param Overview')
print("parameter shape:", param.shape)
print("parameter values:\n", param)
print('Weight detail:')
print('\nparam.tensor_impl.data:\n', param.tensor_impl.data)
print('\nparam.tensor_impl.data.data:\n', param.tensor_impl.data.data)
print('\nparam.tensor_impl.data.data.data:\n', param.tensor_impl.data.data.data)
print('\nparam.tensor_impl.scale:\n', param.tensor_impl.scale)
print('\nparam.tensor_impl.zero_point:\n', param.tensor_impl.zero_point)
```
**Output log:**
```bash
Torch version: 2.5.1+cu121
TorchAO version: 0.7.0
Weight Param Overview
parameter shape: torch.Size([4, 2])
parameter values:
AffineQuantizedTensor(tensor_impl=PlainAQTTensorImpl(data=tensor([[ 127, -2],
[-127, 6],
[-127, -78],
[ 127, -68]], device='cuda:0', dtype=torch.int8)... , scale=tensor([0.0036, 0.0049, 0.0028, 0.0037], device='cuda:0', dtype=torch.bfloat16)... , zero_point=tensor([0, 0, 0, 0], device='cuda:0')... , _layout=PlainLayout()), block_size=(1, 2), shape=torch.Size([4, 2]), device=cuda:0, dtype=torch.bfloat16, requires_grad=False)
Weight detail:
param.tensor_impl.data:
PlainAQTTensorImpl(data=tensor([[ 127, -2],
[-127, 6],
[-127, -78],
[ 127, -68]], device='cuda:0', dtype=torch.int8)... , scale=tensor([0.0036, 0.0049, 0.0028, 0.0037], device='cuda:0', dtype=torch.bfloat16)... , zero_point=tensor([0, 0, 0, 0], device='cuda:0')... , _layout=PlainLayout())
param.tensor_impl.data.data:
PlainAQTTensorImpl(data=tensor([[ 127, -2],
[-127, 6],
[-127, -78],
[ 127, -68]], device='cuda:0', dtype=torch.int8)... , scale=tensor([0.0036, 0.0049, 0.0028, 0.0037], device='cuda:0', dtype=torch.bfloat16)... , zero_point=tensor([0, 0, 0, 0], device='cuda:0')... , _layout=PlainLayout())
param.tensor_impl.data.data.data:
PlainAQTTensorImpl(data=tensor([[ 127, -2],
[-127, 6],
[-127, -78],
[ 127, -68]], device='cuda:0', dtype=torch.int8)... , scale=tensor([0.0036, 0.0049, 0.0028, 0.0037], device='cuda:0', dtype=torch.bfloat16)... , zero_point=tensor([0, 0, 0, 0], device='cuda:0')... , _layout=PlainLayout())
param.tensor_impl.scale:
tensor([0.0036, 0.0049, 0.0028, 0.0037], device='cuda:0', dtype=torch.bfloat16)
param.tensor_impl.zero_point:
tensor([0, 0, 0, 0], device='cuda:0')
```
From the print output, it can be seen that when I output `param.tensor_impl.data`, the output still includes the `scale` and `zero_point`. However, outputting `param.tensor_impl.scale` and `param.tensor_impl.zero_point` allows me to retrieve their values independently.
If you need any additional information from me, please feel free to let me know. Thanks again!
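For reference, a hedged guess (not confirmed in this thread) at how the raw int8 values are usually reached with the plain layout: the tensor impl exposes them directly, e.g. via `get_plain()`; the exact accessor may differ across torchao versions:
```python
import torch
from torchao.quantization import quantize_, int8_weight_only

model = torch.nn.Sequential(torch.nn.Linear(2, 4)).cuda().to(torch.bfloat16)
quantize_(model, int8_weight_only())

w = model[0].weight  # AffineQuantizedTensor
# get_plain() is assumed here to return (int_data, scale, zero_point)
int_data, scale, zero_point = w.tensor_impl.get_plain()
print(int_data.dtype)  # expected: torch.int8
```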
|
https://github.com/pytorch/ao/issues/1456
|
closed
|
[
"question",
"triaged"
] | 2024-12-22T02:55:18Z
| 2024-12-24T06:53:03Z
| null |
Mingbo-Lee
|
huggingface/accelerate
| 3,309
|
deepspeed zero3 how to save custom model?
|
DeepSpeedEngine(
(module): LLMDecoder(
(model): Qwen2ForSequenceClassification(
(model): Qwen2Model(
(embed_tokens): Embedding(151936, 1536)
(layers): ModuleList(
(0-27): 28 x Qwen2DecoderLayer(
(self_attn): Qwen2SdpaAttention(
(q_proj): Linear(in_features=1536, out_features=1536, bias=True)
(k_proj): Linear(in_features=1536, out_features=256, bias=True)
(v_proj): Linear(in_features=1536, out_features=256, bias=True)
(o_proj): Linear(in_features=1536, out_features=1536, bias=False)
(rotary_emb): Qwen2RotaryEmbedding()
)
(mlp): Qwen2MLP(
(gate_proj): Linear(in_features=1536, out_features=8960, bias=False)
(up_proj): Linear(in_features=1536, out_features=8960, bias=False)
(down_proj): Linear(in_features=8960, out_features=1536, bias=False)
(act_fn): SiLU()
)
(input_layernorm): Qwen2RMSNorm((0,), eps=1e-06)
(post_attention_layernorm): Qwen2RMSNorm((0,), eps=1e-06)
)
)
(norm): Qwen2RMSNorm((0,), eps=1e-06)
(rotary_emb): Qwen2RotaryEmbedding()
)
(score): Linear(in_features=1536, out_features=1, bias=False)
)
)
)
Hello, the above is my model structure. In short, I use a custom LLMDecoder, which has an attribute named `model` that is a Qwen2ForSequenceClassification object.
In this case, how should I save the model with DeepSpeed ZeRO-3?
The following code is not suitable for my model structure; how should I modify it?
unwrapped_model = accelerator.unwrap_model(model)
unwrapped_model.save_pretrained(
args.output_dir,
is_main_process=accelerator.is_main_process,
save_function=accelerator.save,
state_dict=accelerator.get_state_dict(model),
)
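One hedged sketch of a possible adaptation, continuing the variables from the snippet above: gather the full ZeRO-3 state dict, then save the inner Hugging Face model; the `"model."` prefix handling is an assumption about how LLMDecoder names its submodule:
```python
unwrapped = accelerator.unwrap_model(model)      # -> LLMDecoder
full_state = accelerator.get_state_dict(model)   # gathers the ZeRO-3 shards

# Re-key the state dict so it matches Qwen2ForSequenceClassification's own names.
inner_state = {k.removeprefix("model."): v for k, v in full_state.items()}

unwrapped.model.save_pretrained(
    args.output_dir,
    is_main_process=accelerator.is_main_process,
    save_function=accelerator.save,
    state_dict=inner_state,
)
```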
|
https://github.com/huggingface/accelerate/issues/3309
|
closed
|
[] | 2024-12-21T17:01:17Z
| 2025-01-30T15:06:45Z
| null |
NLPJCL
|
pytorch/xla
| 8,515
|
multi_queries_paged_attention_kernel fails with Llama3 70B on a TPU-v4-16 with sequence length of 256
|
I'm running Llama3 70B with vLLM on a TPU-v4-16. When using the flash attention kernel I'm able to go up to 16k, but using multi_queries_paged_attention with a sequence length of 256, it seems that the page table is taking too much SMEM.
@vanbasten23 @WoosukKwon any idea how to address this (I'm familiar with Pallas programming)?
maybe something along the lines of this? https://github.com/vllm-project/vllm/blob/02222a0256f60319f5bcd56d1d036a943d6334f8/vllm/attention/backends/pallas.py#L260
```
Loading safetensors checkpoint shards: 100% Completed | 30/30 [02:03<00:00, 4.13s/it]
INFO 12-21 14:11:07 ray_tpu_executor.py:276] # TPU blocks: 19032, # CPU blocks: 6552
INFO 12-21 14:11:07 tpu_model_runner.py:274] Compiling the model with different input shapes...
(RayWorkerWrapper pid=777, ip=10.130.0.186) INFO 12-21 14:11:08 tpu_model_runner.py:274] Compiling the model with different input shapes...
(RayWorkerWrapper pid=1005) INFO 12-21 14:07:13 tpu.py:27] Cannot use _Backend.FLASH_ATTN backend on TPU. [repeated 6x across cluster]
(RayWorkerWrapper pid=1005) INFO 12-21 14:07:13 selector.py:163] Using Pallas backend. [repeated 6x across cluster]
(RayWorkerWrapper pid=1005) WARNING 12-21 14:07:13 tpu_worker.py:62] Starting to init distributed environment with config: ParallelConfig(pipeline_parallel_size=1, tensor_parallel_size=8, worker_use_ray=False, max_parallel_loading_workers=None, disable_custom_all_reduce=False, tokenizer_pool_config=None, ray_workers_use_nsight=False, p
lacement_group=<ray.util.placement_group.PlacementGroup object at 0x7f05501350f0>, distributed_executor_backend='ray', worker_cls='vllm.worker.tpu_worker.TPUWorker', sd_worker_cls='auto', world_size=8, rank=3) [repeated 6x across cluster]
(RayWorkerWrapper pid=1005) INFO 12-21 14:07:13 parallel_state.py:954] world_size=8 rank=3 local_rank=3 distributed_init_method=tcp://10.130.0.185:57577 backend=gloo [repeated 6x across cluster]
(RayWorkerWrapper pid=1005) INFO 12-21 14:07:13 parallel_state.py:959] attempting to initialize distributed environment [repeated 6x across cluster]
(RayWorkerWrapper pid=1135, ip=10.130.0.186) init_world_group: local_rank=3 [repeated 12x across cluster]
(RayWorkerWrapper pid=1135, ip=10.130.0.186) init_world_group: backend='gloo' [repeated 6x across cluster]
(RayWorkerWrapper pid=1135, ip=10.130.0.186) init_model_parallel_group bla bla: local_rank=3 [repeated 26x across cluster]
(RayWorkerWrapper pid=1135, ip=10.130.0.186) init_model_parallel_group bla bla: backend='gloo' [repeated 13x across cluster]
(RayWorkerWrapper pid=1005) self.cpu_group=<torch.distributed.distributed_c10d.ProcessGroup object at 0x7f051028d330> [repeated 6x across cluster]
INFO 12-21 14:13:02 tpu_model_runner.py:284] batch_size: 1, seq_len: 16
```
|
https://github.com/pytorch/xla/issues/8515
|
open
|
[
"performance",
"pallas",
"xla:tpu"
] | 2024-12-21T14:23:04Z
| 2025-02-13T13:43:19Z
| 2
|
OhadRubin
|
huggingface/diffusers
| 10,334
|
Sana broke on MacOS. Grey images on MPS, NaN's on CPU.
|
### Describe the bug
Just started to play with Sana; I was excited when I saw it was coming to Diffusers, as the NVIDIA-supplied code was full of CUDA-only stuff.
I ran the example code, changing cuda to mps, and got a grey image.

I removed the move to MPS to run it on the CPU, and the script failed with
```
image_processor.py:147: RuntimeWarning: invalid value encountered in cast
```
which suggests the latents contained NaNs on the CPU.
### Reproduction
```py
import torch
from diffusers import SanaPipeline
pipe = SanaPipeline.from_pretrained(
"Efficient-Large-Model/Sana_1600M_1024px_diffusers", torch_dtype=torch.float32
)
pipe.to("mps")
pipe.text_encoder.to(torch.bfloat16)
pipe.transformer = pipe.transformer.to(torch.float16)
image = pipe(prompt='a cyberpunk cat with a neon sign that says "Sana"')[0]
image[0].save("output.png")
```
removed `pipe.to("mps")` to run on the CPU.
### Logs
```shell
*** MPS run ***
(Diffusers) $ python sana_test.py
Loading checkpoint shards: 100%|██████████████████████████████████████████████████████████| 2/2 [00:10<00:00, 5.03s/it]
Loading pipeline components...: 100%|█████████████████████████████████████████████████████| 5/5 [00:10<00:00, 2.18s/it]
Setting `clean_caption=True` requires the Beautiful Soup library but it was not found in your environment. You can install it with pip:
`pip install beautifulsoup4`. Please note that you may need to restart your runtime after installation.
Setting `clean_caption` to False...
The 'batch_size' argument of HybridCache is deprecated and will be removed in v4.49. Use the more precisely named 'max_batch_size' argument instead.
The 'batch_size' attribute of HybridCache is deprecated and will be removed in v4.49. Use the more precisely named 'self.max_batch_size' attribute instead.
Setting `clean_caption=True` requires the Beautiful Soup library but it was not found in your environment. You can install it with pip:
`pip install beautifulsoup4`. Please note that you may need to restart your runtime after installation.
Setting `clean_caption` to False...
100%|███████████████████████████████████████████████████████████████████████████████████| 20/20 [00:49<00:00, 2.48s/it]
(Diffusers) $
***CPU run***
(Diffusers) $ python sana_test.py
Loading checkpoint shards: 100%|██████████████████████████████████████████████████████████| 2/2 [00:06<00:00, 3.13s/it]
Loading pipeline components...: 100%|█████████████████████████████████████████████████████| 5/5 [00:07<00:00, 1.41s/it]
Setting `clean_caption=True` requires the Beautiful Soup library but it was not found in your environment. You can install it with pip:
`pip install beautifulsoup4`. Please note that you may need to restart your runtime after installation.
Setting `clean_caption` to False...
The 'batch_size' argument of HybridCache is deprecated and will be removed in v4.49. Use the more precisely named 'max_batch_size' argument instead.
The 'batch_size' attribute of HybridCache is deprecated and will be removed in v4.49. Use the more precisely named 'self.max_batch_size' attribute instead.
Setting `clean_caption=True` requires the Beautiful Soup library but it was not found in your environment. You can install it with pip:
`pip install beautifulsoup4`. Please note that you may need to restart your runtime after installation.
Setting `clean_caption` to False...
100%|███████████████████████████████████████████████████████████████████████████████████| 20/20 [20:14<00:00, 60.74s/it]
/Volumes/SSD2TB/AI/Diffusers/lib/python3.11/site-packages/diffusers/image_processor.py:147: RuntimeWarning: invalid value encountered in cast
images = (images * 255).round().astype("uint8")
(Diffusers) $
```
### System Info
- 🤗 Diffusers version: 0.32.0.dev0
- Platform: macOS-15.2-arm64-arm-64bit
- Running on Google Colab?: No
- Python version: 3.11.10
- PyTorch version (GPU?): 2.6.0.dev20241219 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Huggingface_hub version: 0.25.0
- Transformers version: 4.47.1
- Accelerate version: 0.34.2
- PEFT version: not installed
- Bitsandbytes version: not installed
- Safetensors version: 0.4.5
- xFormers version: not installed
- Accelerator: Apple M3
- Using GPU in script?: both
- Using distributed or parallel set-up in script?: no
### Who can help?
@pcuenca
|
https://github.com/huggingface/diffusers/issues/10334
|
closed
|
[
"bug",
"stale"
] | 2024-12-21T11:26:40Z
| 2025-01-27T01:26:43Z
| 8
|
Vargol
|
huggingface/peft
| 2,292
|
Cannot import name 'EncoderDecoderCache' from 'transformers'
|
### System Info
transformers==4.39.3; peft==0.14.0
Maybe this is due to a transformers update, so which version can I use?
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder
- [ ] My own task or dataset (give details below)
### Reproduction
from src import models
from src.utils import IImage, resize
import numpy as np
from src.methods import rasg, sd, sr
from PIL import Image
from peft import get_peft_model, LoraConfig, TaskType
inp_model = models.load_inpainting_model('ds8_inp', device='cpu', cache=True)
lora_config = LoraConfig(
    task_type=TaskType.IMAGE_GENERATION,
    inference_mode=True,
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
)
new_model = get_peft_model(inp_model.unet, lora_config)
print(new_model.state_dict().keys())
### Expected behavior
/root/miniconda3/lib/python3.10/site-packages/timm/models/layers/__init__.py:48: FutureWarning: Importing from timm.models.layers is deprecated, please import via timm.layers
warnings.warn(f"Importing from {__name__} is deprecated, please import via timm.layers", FutureWarning)
Traceback (most recent call last):
File "/root/autodl-tmp/workspace/HD-Painter/paratest.py", line 6, in <module>
from peft import get_peft_model, LoraConfig, TaskType
File "/root/miniconda3/lib/python3.10/site-packages/peft/__init__.py", line 22, in <module>
from .auto import (
File "/root/miniconda3/lib/python3.10/site-packages/peft/auto.py", line 32, in <module>
from .mapping import MODEL_TYPE_TO_PEFT_MODEL_MAPPING
File "/root/miniconda3/lib/python3.10/site-packages/peft/mapping.py", line 25, in <module>
from .mixed_model import PeftMixedModel
File "/root/miniconda3/lib/python3.10/site-packages/peft/mixed_model.py", line 29, in <module>
from .peft_model import PeftModel
File "/root/miniconda3/lib/python3.10/site-packages/peft/peft_model.py", line 37, in <module>
from transformers import Cache, DynamicCache, EncoderDecoderCache, PreTrainedModel
ImportError: cannot import name 'Cache' from 'transformers' (/root/miniconda3/lib/python3.10/site-packages/transformers/__init__.py)
|
https://github.com/huggingface/peft/issues/2292
|
closed
|
[] | 2024-12-21T09:00:04Z
| 2025-03-31T06:50:20Z
| 4
|
Huang-jia-xuan
|
pytorch/torchtitan
| 758
|
Checkpoint conversion
|
Hey,
I am trying to evaluate a model trained with torchtitan using the lm eval harness. I am using the VLLM backend. Is there any straightforward way to convert a torchtitan model in the pytorch .pt format to, e.g., a huggingface model to be used in VLLM/lm eval harness? Within the torchtune repo, I was able to find [some code for VLMs](https://github.com/pytorch/torchtune/blob/main/recipes/eleuther_eval.py), but (a) that seems to be hardcoded for LLMs, (b) uses a new inference backend instead of e.g. relying on VLLM, and (c) I feel like there might be an easy way to convert torchtitan checkpoints rather than coming up with such an involved solution.
How did you evaluate downstream task accuracy with torchtitan models?
Thank you very much for your help.
|
https://github.com/pytorch/torchtitan/issues/758
|
closed
|
[
"question",
"module: checkpoint"
] | 2024-12-20T17:57:58Z
| 2025-08-21T02:59:53Z
| null |
MaxiBoether
|
pytorch/xla
| 8,510
|
Input tensor is not an XLA tensor on AWS Trainium instance
|
Hi team, I'm currently testing my training job on an AWS Trainium instance. I encountered the error `Input tensor is not an XLA tensor: torch.FloatTensor` when using the PyTorch Conv1d/Linear modules. I've confirmed that the input tensor has been moved to XLA, as I explicitly called `.to(xm.xla_device())` when passing the input tensor to the module's forward method. However, I found out the error was actually caused by the weight and bias created within those PyTorch modules, e.g. here: https://github.com/pytorch/pytorch/blob/main/torch/nn/modules/conv.py#L375. I printed the device location for self.weight and self.bias and they are on CPU. I have to modify the source Conv1d code to resolve the issue, e.g.:
```
def _conv_forward(self, input: Tensor, weight: Tensor, bias: Optional[Tensor]):
    input = input.to(self.device)
    weight = weight.to(self.device)
    if bias is not None:
        bias = bias.to(self.device)
    if self.padding_mode != 'zeros':
        return F.conv1d(
            F.pad(input, self._reversed_padding_repeated_twice, mode=self.padding_mode),
            weight, bias, self.stride, _single(0), self.dilation, self.groups
        )
    return F.conv1d(input, weight, bias, self.stride, self.padding, self.dilation, self.groups)
```
Does anyone know how to make sure those are on the xla device?
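For comparison, a minimal sketch (not from the issue) of the usual fix: move the whole module to the XLA device so its parameters live there, instead of patching `_conv_forward`:
```python
import torch
import torch_xla.core.xla_model as xm

device = xm.xla_device()

conv = torch.nn.Conv1d(16, 32, kernel_size=3).to(device)  # weight and bias are now XLA tensors
x = torch.randn(8, 16, 128).to(device)
y = conv(x)
xm.mark_step()  # materialize the lazy XLA graph

print(conv.weight.device, y.device)
```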
|
https://github.com/pytorch/xla/issues/8510
|
closed
|
[] | 2024-12-20T17:51:33Z
| 2025-01-08T21:59:14Z
| 4
|
JmeanJmy
|
pytorch/torchtitan
| 757
|
[question]can't disable CP for specific (unsupported) SDPA op
|
## Problem
Currently the context parallel API has five problems.
1. It only supports applying CP to the whole model. If we have some cross-attention in the prep part of the model with an unsupported shape, it's impossible to apply CP, since `_context_parallel` always overrides all SDPA and needs to wrap the whole backward.
2. There is no shard/unshard with gradient support. When I try to apply CP to transformer blocks only and keep the other SDPA calls replicated, the `context_parallel_unshard` in PyTorch has a `no_grad` decorator.
3. Weight gradients inside the CP region are divided by the size of the CP mesh because we reduce them over DP+CP. This may work for an optimizer with norm support, but it makes unit tests harder to write; we have to scale them back to get the same gradients as the model without CP.
4. The length of the sequence must be divisible by the CP size (CP * 2 for round-robin).
5. A replicated input of the CP region may end up with a wrong gradient because its gradient may be `Partial`; we have to check every replicated input and use `to_local(grad_placements=[Partial()])`.
To resolve problem 1 above, I removed the `context_parallel` context to disable the SDPA override and only enabled the `_enable_cp_dispatcher` context; then we can enable CP SDPA iff all inputs are converted to DTensor. Problem 2 is easy to resolve; just write some autograd functions.
Here are my questions:
1. Is there a better way to support a `CP region`?
2. Do you have any plan to support `CP region` officially and resolve the issues above?
|
https://github.com/pytorch/torchtitan/issues/757
|
open
|
[
"enhancement",
"module: context parallel"
] | 2024-12-20T11:00:23Z
| 2025-03-12T10:30:52Z
| 3
|
FindDefinition
|
huggingface/sentence-transformers
| 3,141
|
How to load ModernBERT model correctly?
|
Hi Teams,
I want to ask how to properly load [ModernBERT](https://huggingface.co/blog/modernbert) using SentenceTransformer?
The main difficulty I met is the weight loading of the prediction head, as defined [here](https://github.com/huggingface/transformers/blob/f42084e6411c39b74309af4a7d6ed640c01a4c9e/src/transformers/models/modernbert/modeling_modernbert.py#L1121-L1123), where `ModernBertPredictionHead` is not included in the `AutoModelClass`. I tried to use the following code:
```python
import torch
from sentence_transformers import SentenceTransformer,models
model_name_or_path = "answerdotai/ModernBERT-base"
modules = []
modules.append(models.Transformer(model_name_or_path))
## head
modules.append(models.Dense(768,768,activation_function=torch.nn.GELU()))
modules.append(models.Dense(768,768,activation_function=torch.nn.Identity()))
## pooling
modules.append(models.Pooling(768,pooling_mode="mean"))
## classifier
modules.append(models.Dense(768,1))
model = SentenceTransformer(modules=modules,device="cpu")
```
However, it seems that `Dense` before `Pooling` is not supported and would throw an error:
```
KeyError: 'sentence_embedding'
```
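For reference, a hedged sketch of the usual composition (Transformer + Pooling, dropping the MLM prediction head), which sidesteps the Dense-before-Pooling limitation; it assumes a transformers version with ModernBERT support:
```python
from sentence_transformers import SentenceTransformer, models

word = models.Transformer("answerdotai/ModernBERT-base")
pool = models.Pooling(word.get_word_embedding_dimension(), pooling_mode="mean")

model = SentenceTransformer(modules=[word, pool], device="cpu")
print(model.encode(["hello world"]).shape)  # (1, 768)
```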
|
https://github.com/huggingface/sentence-transformers/issues/3141
|
closed
|
[] | 2024-12-20T06:52:44Z
| 2024-12-24T03:08:47Z
| null |
Hannibal046
|
huggingface/picotron
| 15
|
Difference between picotron and nanotron
|
What is the difference between picotron and [nanotron](https://github.com/huggingface/nanotron)? Why did the Hugging Face team roll out two hybrid-parallelism frameworks?
|
https://github.com/huggingface/picotron/issues/15
|
closed
|
[
"question"
] | 2024-12-19T12:48:57Z
| 2024-12-20T10:17:25Z
| null |
cailun01
|
huggingface/diffusers
| 10,302
|
Using FP8 for inference without CPU offloading can introduce noise.
|
### Describe the bug
If I use ```pipe.enable_model_cpu_offload(device=device)```, the model can perform inference correctly after warming up. However, if I comment out this line, the inference results are noisy.
### Reproduction
```python
from diffusers import (
FluxPipeline,
FluxTransformer2DModel
)
from transformers import T5EncoderModel, CLIPTextModel,CLIPTokenizer,T5TokenizerFast
from optimum.quanto import freeze, qfloat8, quantize
import torch
from diffusers import FlowMatchEulerDiscreteScheduler, AutoencoderKL
dtype = torch.bfloat16
bfl_repo = f"black-forest-labs/FLUX.1-dev"
device = "cuda"
scheduler = FlowMatchEulerDiscreteScheduler.from_pretrained(bfl_repo, subfolder="scheduler", torch_dtype=dtype)
text_encoder = CLIPTextModel.from_pretrained(bfl_repo, subfolder="text_encoder", torch_dtype=dtype)
tokenizer = CLIPTokenizer.from_pretrained(bfl_repo, subfolder="tokenizer", torch_dtype=dtype, clean_up_tokenization_spaces=True)
text_encoder_2 = T5EncoderModel.from_pretrained(bfl_repo, subfolder="text_encoder_2", torch_dtype=dtype)
tokenizer_2 = T5TokenizerFast.from_pretrained(bfl_repo, subfolder="tokenizer_2", torch_dtype=dtype, clean_up_tokenization_spaces=True)
vae = AutoencoderKL.from_pretrained(bfl_repo, subfolder="vae", torch_dtype=dtype)
transformer = FluxTransformer2DModel.from_single_file("https://huggingface.co/Kijai/flux-fp8/blob/main/flux1-dev-fp8.safetensors", torch_dtype=dtype)
quantize(transformer, weights=qfloat8)
freeze(transformer)
quantize(text_encoder_2, weights=qfloat8)
freeze(text_encoder_2)
pipe = FluxPipeline(
    scheduler=scheduler,
    text_encoder=text_encoder,
    tokenizer=tokenizer,
    text_encoder_2=text_encoder_2,
    tokenizer_2=tokenizer_2,
    vae=vae,
    transformer=transformer
).to(device, dtype=dtype)  # edit
# pipe.enable_model_cpu_offload(device=device)

params = {
    "prompt": "a cat",
    "num_images_per_prompt": 1,
    "num_inference_steps": 1,
    "width": 64,
    "height": 64,
    "guidance_scale": 7,
}
image = pipe(**params).images[0]  # warmup

params = {
    "prompt": "a cat",
    "num_images_per_prompt": 1,
    "num_inference_steps": 25,
    "width": 512,
    "height": 512,
    "guidance_scale": 7,
}
image = pipe(**params).images[0]
image.save("1.jpg")
```
### Logs
_No response_
### System Info
WARNING[XFORMERS]: xFormers can't load C++/CUDA extensions. xFormers was built for:
PyTorch 2.5.1+cu121 with CUDA 1201 (you have 2.4.1+cu121)
Python 3.10.15 (you have 3.10.13)
Please reinstall xformers (see https://github.com/facebookresearch/xformers#installing-xformers)
Memory-efficient attention, SwiGLU, sparse and more won't be available.
Set XFORMERS_MORE_DETAILS=1 for more details
Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points.
- 🤗 Diffusers version: 0.32.0.dev0
- Platform: Linux-6.8.0-49-generic-x86_64-with-glibc2.35
- Running on Google Colab?: No
- Python version: 3.10.13
- PyTorch version (GPU?): 2.4.1+cu121 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Huggingface_hub version: 0.26.2
- Transformers version: 4.46.2
- Accelerate version: 0.31.0
- PEFT version: 0.14.0
- Bitsandbytes version: not installed
- Safetensors version: 0.4.3
- xFormers version: 0.0.28.post3
- Accelerator: NVIDIA GeForce RTX 3090, 24576 MiB
NVIDIA GeForce RTX 3090, 24576 MiB
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@yiyixuxu @DN6
|
https://github.com/huggingface/diffusers/issues/10302
|
open
|
[
"bug"
] | 2024-12-19T12:39:06Z
| 2025-03-10T14:18:58Z
| 6
|
todochenxi
|
huggingface/candle
| 2,674
|
[Question] How to create a autograd function like in PyTorch? How to customize forward and backward process?
|
https://github.com/huggingface/candle/issues/2674
|
open
|
[] | 2024-12-19T07:02:04Z
| 2024-12-19T07:02:15Z
| null |
VanderBieu
|
|
huggingface/blog
| 2,551
|
How to process and visualize the segment output tokens?
|
How do I process the segment tokens and generate segmentation masks? What does the output mean?

|
https://github.com/huggingface/blog/issues/2551
|
open
|
[] | 2024-12-19T03:11:15Z
| 2024-12-19T03:11:15Z
| null |
00mmw
|
pytorch/ao
| 1,437
|
Segmentation Fault Running Int8 Quantized Model on GPU
|
Hi! We got into segmentation fault error when trying to run model inference on gpu. Below is a minimal example from the tutorial ([link](https://pytorch.org/docs/stable/quantization.html#post-training-static-quantization)):
```
import torch
import time
# define a floating point model where some layers could be statically quantized
class M(torch.nn.Module):
    def __init__(self):
        super().__init__()
        # QuantStub converts tensors from floating point to quantized
        self.quant = torch.ao.quantization.QuantStub()
        self.conv = torch.nn.Conv2d(1, 1, 1)
        self.relu = torch.nn.ReLU()
        # DeQuantStub converts tensors from quantized to floating point
        self.dequant = torch.ao.quantization.DeQuantStub()

    def forward(self, x):
        # manually specify where tensors will be converted from floating
        # point to quantized in the quantized model
        x = self.quant(x)
        x = self.conv(x)
        x = self.relu(x)
        # manually specify where tensors will be converted from quantized
        # to floating point in the quantized model
        x = self.dequant(x)
        return x

# create a model instance
model_fp32 = M()
# model must be set to eval mode for static quantization logic to work
model_fp32.eval()

input_fp32 = torch.randn(4, 1, 1024, 1024)
time_s = time.time()
with torch.no_grad():
    out = model_fp32(input_fp32)
time_e = time.time()

model_fp32.qconfig = torch.ao.quantization.get_default_qconfig('fbgemm')
model_fp32_fused = torch.ao.quantization.fuse_modules(model_fp32, [['conv', 'relu']])
model_fp32_prepared = torch.ao.quantization.prepare(model_fp32_fused)
model_fp32_prepared(input_fp32)
model_int8 = torch.ao.quantization.convert(model_fp32_prepared)

# run the model, relevant calculations will happen in int8
res = model_int8(input_fp32)

model_int8 = model_int8.to('cuda:0')
input_fp32 = input_fp32.to('cuda:0')
with torch.no_grad():
    out = model_int8(input_fp32)
```
Output:
```
Segmentation fault (core dumped)
```
Inference on CPU is fine for the int8 model. Could someone please advise on the potential reason? Thank you!
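As an aside, a hedged sketch (not a confirmed fix): eager-mode fbgemm/qnnpack quantization targets CPU execution, so a GPU-friendly alternative in this repo is torchao's int8 weight-only quantization, which runs on CUDA; the shapes below are placeholders:
```python
import torch
from torchao.quantization import quantize_, int8_weight_only

model = torch.nn.Sequential(torch.nn.Linear(1024, 1024), torch.nn.ReLU()).cuda().to(torch.bfloat16)
quantize_(model, int8_weight_only())  # quantizes the Linear weights to int8

x = torch.randn(4, 1024, device="cuda", dtype=torch.bfloat16)
with torch.no_grad():
    out = model(x)
print(out.shape)
```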
|
https://github.com/pytorch/ao/issues/1437
|
closed
|
[
"question",
"triaged"
] | 2024-12-18T19:51:48Z
| 2025-01-23T19:16:09Z
| null |
wendywangwwt
|
pytorch/TensorRT
| 3,331
|
❓ [Question] Jetson AGX Orin build and install torch_tensorrt wheel file Failed
|
## ❓ Question
I follow this [tutorial](https://pytorch.org/TensorRT/getting_started/installation.html) to install Torch-TensorRT, but in the last step:
```
cuda_version=$(nvcc --version | grep Cuda | grep release | cut -d ',' -f 2 | sed -e 's/ release //g')
export TORCH_INSTALL_PATH=$(python -c "import torch, os; print(os.path.dirname(torch.__file__))")
export SITE_PACKAGE_PATH=${TORCH_INSTALL_PATH::-6}
export CUDA_HOME=/usr/local/cuda-${cuda_version}/
# replace the MODULE.bazel with the jetpack one
cat toolchains/jp_workspaces/MODULE.bazel.tmpl | envsubst > MODULE.bazel
# build and install torch_tensorrt wheel file
python setup.py --use-cxx11-abi install --user
```
some errors happened:
```
Run this command to start an interactive shell in an identical sandboxed environment:
(exec env - \
LD_LIBRARY_PATH=/usr/lib/gcc/aarch64-linux-gnu/11:/usr/local/cuda-12.6/lib64: \
PATH=/home/lab223/.cache/bazelisk/downloads/sha256/5a4cc979353671e438b9469b833924c2361e25a580cc278a75877aedc27c1c53/bin:/usr/lib/gcc/aarch64-linux-gnu/11:/home/lab223/anaconda3/envs/rnw/bin:/home/lab223/anaconda3/condabin:/usr/local/cuda-12.6/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/snap/bin \
PWD=/proc/self/cwd \
TMPDIR=/tmp \
/home/lab223/.cache/bazel/_bazel_lab223/install/128438993754f9753a1e4f56fdd76124/linux-sandbox -t 15 -w /dev/shm -w /home/lab223/.cache/bazel/_bazel_lab223/3fb6c16c20f38dfc11e57e77e6eea473/sandbox/linux-sandbox/46/execroot/_main -w /tmp -M /home/lab223/.cache/bazel/_bazel_lab223/3fb6c16c20f38dfc11e57e77e6eea473/sandbox/linux-sandbox/46/_hermetic_tmp -m /tmp -S /home/lab223/.cache/bazel/_bazel_lab223/3fb6c16c20f38dfc11e57e77e6eea473/sandbox/linux-sandbox/46/stats.out -D /home/lab223/.cache/bazel/_bazel_lab223/3fb6c16c20f38dfc11e57e77e6eea473/sandbox/linux-sandbox/46/debug.out -- /bin/sh -i)
ERROR: /home/lab223/TensorRT/core/conversion/var/BUILD:20:11: Compiling core/conversion/var/Var.cpp failed: I/O exception during sandboxed execution: /home/lab223/.cache/bazel/_bazel_lab223/3fb6c16c20f38dfc11e57e77e6eea473/sandbox/linux-sandbox/58/execroot/_main/bazel-out/aarch64-opt/bin/external/_main~_repo_rules~libtorch/_virtual_includes/ATen/ATen/core/DeprecatedTypePropertiesRegistry.h (???????)
ERROR: /home/lab223/TensorRT/core/conversion/converters/BUILD:59:11: Compiling core/conversion/converters/NodeConverterRegistry.cpp failed: I/O exception during sandboxed execution: /home/lab223/.cache/bazel/_bazel_lab223/3fb6c16c20f38dfc11e57e77e6eea473/sandbox/linux-sandbox/57/execroot/_main/bazel-out/aarch64-opt/bin/external/_main~_repo_rules~libtorch/_virtual_includes/ATen/ATen/ops/cudnn_batch_norm_ops.h (???????)
ERROR: /home/lab223/TensorRT/core/conversion/converters/BUILD:39:11: Compiling core/conversion/converters/converter_util.cpp failed: I/O exception during sandboxed execution: /home/lab223/.cache/bazel/_bazel_lab223/3fb6c16c20f38dfc11e57e77e6eea473/sandbox/linux-sandbox/56/execroot/_main/external/_main~_repo_rules~libtorch/include/ATen/ops/native_dropout_backward_cpu_dispatch.h (???????)
Target //:libtorchtrt failed to build
INFO: Elapsed time: 1000.299s, Critical Path: 574.06s
INFO: 7984 processes: 7938 internal, 46 linux-sandbox.
ERROR: Build did NOT complete successfully
```
## Environment
> Build information about Torch-TensorRT can be found by turning on debug messages
- PyTorch Version (e.g., 1.0): **2.5.0**
- CPU Architecture: **arm64(Jetson AGX Orin)**
- OS (e.g., Linux): **Linux**
- How you installed PyTorch: **pip**
- Build command you used (if compiling from source): **python setup.py --use-cxx11-abi install --user**
- Are you using local sources or building from archives: **building from archives**
- Python version: **3.10.15**
- CUDA version: **12.6**
- GPU models and configuration: -
- Any other relevant information: Install torch_tensorrt in the model's anaconda virtual environment
## Additional context
It seems to be an I/O exception, but the Jetson still has 11GB of free space. Please help me! Thanks!
|
https://github.com/pytorch/TensorRT/issues/3331
|
open
|
[
"question"
] | 2024-12-18T18:55:56Z
| 2024-12-18T20:30:20Z
| null |
breknddone
|
huggingface/transformers
| 35,316
|
How to use a custom Image Processor?
|
I want to use the processor in the form of `auto_map` but when using `AutoProcessor.from_pretrained`, I am unable to load the custom `ImageProcessor`.
The root cause lies in the use of the `transformers_module` to initialize the class in `ProcessorMixin`.
https://github.com/huggingface/transformers/blob/c7e48053aab09ad11efa2ad12513e9ab56f29563/src/transformers/processing_utils.py#L1018
Even though I have overridden the `_get_arguments_from_pretrained` method, this issue still exists in `__init__`.
https://github.com/huggingface/transformers/blob/c7e48053aab09ad11efa2ad12513e9ab56f29563/src/transformers/processing_utils.py#L383
Perhaps I could avoid inheriting from ProcessorMixin, but I would like to know if there is a more elegant way to achieve this functionality?
|
https://github.com/huggingface/transformers/issues/35316
|
closed
|
[] | 2024-12-18T12:04:33Z
| 2024-12-19T02:53:43Z
| null |
glamourzc
|
huggingface/diffusers
| 10,281
|
Request to implement FreeScale, a new diffusion scheduler
|
### Model/Pipeline/Scheduler description
FreeScale is a tuning-free method for higher-resolution visual generation, unlocking the 8k image generation for pre-trained SDXL! Compared to direct inference by SDXL, FreeScale brings negligible additional memory and time costs.


### Open source status
- [X] The model implementation is available.
- [X] The model weights are available (Only relevant if addition is not a scheduler).
### Provide useful links for the implementation
- Project: http://haonanqiu.com/projects/FreeScale.html
- Paper: https://arxiv.org/abs/2412.09626
- Code: https://github.com/ali-vilab/FreeScale
- Hugging Face Demo: https://huggingface.co/spaces/MoonQiu/FreeScale
The code changes for FreeScale are not complicated, but I do not know how to integrate them into diffusers smoothly. If you have questions about FreeScale, please ask me (@arthur-qiu).
|
https://github.com/huggingface/diffusers/issues/10281
|
open
|
[
"stale",
"consider-for-modular-diffusers"
] | 2024-12-18T06:32:34Z
| 2025-01-17T15:02:49Z
| 1
|
arthur-qiu
|
huggingface/diffusers
| 10,280
|
Safetensors loading uses mmap with multiple processes sharing the same fd cause slow gcsfuse performance
|
### Describe the bug
When I use `StableDiffusionPipeline.from_single_file` to load a safetensors model, I noticed that the loading speed is extremely slow when the file is loaded from GCSFuse (https://cloud.google.com/storage/docs/cloud-storage-fuse/overview).
The reason is that the loader creates multiple processes but they all share the same fd and its file handle. As each process reads different offset of the file, it makes the GCSFuse perform really badly because those reads appear to be random read jumping between offsets. For example:
```
connection.go:420] <- ReadFile (inode 2, PID 77, handle 1, offset 529453056, 262144 bytes)
connection.go:420] <- ReadFile (inode 2, PID 78, handle 1, offset 531812352, 262144 bytes)
connection.go:420] <- ReadFile (inode 2, PID 79, handle 1, offset 534171648, 262144 bytes)
connection.go:420] <- ReadFile (inode 2, PID 50, handle 1, offset 527351808, 4096 bytes)
```
The question I have is why the loading processes share the same fd in the first place. Since `mmap` is already used, even if the processes don't share the same fd, the kernel will still naturally map each process's virtual memory back to the same page cache, so there is no need to share the fd across processes.
If they don't share the fd, GCSFuse will perform much better. Therefore, can we disable the fd sharing?
### Reproduction
Simply using GCSFuse to serve a file to `StableDiffusionPipeline.from_single_file`
### Logs
_No response_
### System Info
N/A
### Who can help?
@yiyixuxu @asomoza
|
https://github.com/huggingface/diffusers/issues/10280
|
closed
|
[
"bug"
] | 2024-12-18T06:02:41Z
| 2025-01-10T10:11:05Z
| 4
|
wlhee
|
pytorch/xla
| 8,497
|
API guide code snippets don't work
|
## 📚 Documentation
Trying to follow the example here: https://github.com/pytorch/xla/blob/master/API_GUIDE.md#running-on-a-single-xla-device
The Python code snippet doesn't work, as `MNIST()`, `nn`, and `optim` are all undefined.
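For concreteness, a minimal sketch of what the snippet seems to assume — `nn`/`optim` are the usual torch imports, and `MNIST()` presumably refers to a small model class defined elsewhere (the class below is only a placeholder guess, not the guide's actual definition):
```python
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim


class MNIST(nn.Module):
    """Placeholder MNIST classifier; the guide's real definition is not shown."""

    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(28 * 28, 128)
        self.fc2 = nn.Linear(128, 10)

    def forward(self, x):
        x = x.view(x.size(0), -1)
        x = F.relu(self.fc1(x))
        return F.log_softmax(self.fc2(x), dim=-1)
```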
|
https://github.com/pytorch/xla/issues/8497
|
closed
|
[
"bug",
"documentation"
] | 2024-12-17T23:14:45Z
| 2025-05-20T15:55:40Z
| 6
|
richardsliu
|
huggingface/optimum-neuron
| 750
|
Document how to use Qwen 2.5
|
### Feature request
Qwen 2.5 7B Instruct on EC2 with HF DL AMI
Qwen 2.5 7B Instruct on Sagemaker with HF DLC Neuronx TGI
Maybe something for the code version too?
Dependency of adding the model to the cache
### Motivation
increase AMI and DLC usage
### Your contribution
doc
|
https://github.com/huggingface/optimum-neuron/issues/750
|
closed
|
[
"Stale"
] | 2024-12-17T16:03:25Z
| 2025-01-22T08:04:54Z
| null |
pagezyhf
|
pytorch/serve
| 3,375
|
503 InternalServerException, prediction failed
|
### 🐛 Describe the bug
Hello, my inference request is returning a 503 InternalServerException, prediction failed. How can I resolve this issue? Below are the specific request, inference response, and torchserve logs. Additional note: I am using Docker to run the service, and the inference works fine with the gRPC API, but not with the HTTP request.
### Error logs


### Installation instructions
docker
### Model Packaging

### config.properties
_No response_
### Versions

### Repro instructions

### Possible Solution
_No response_
|
https://github.com/pytorch/serve/issues/3375
|
closed
|
[] | 2024-12-17T04:02:49Z
| 2024-12-17T08:43:24Z
| 1
|
Jax29
|
pytorch/torchtitan
| 743
|
Model init with HuggingFace model
|
I am writing a simple script to run FSDP2 (`fully_shard`) on the `pythia-1b` model available on HuggingFace. I am currently running the model on 1 node with 2 devices. I was following the meta-device initialisation from the [FSDP2 docs](https://github.com/pytorch/torchtitan/blob/main/docs/fsdp.md). However, I think there is something wrong with my implementation since the peak memory usage with FSDP is same as without FSDP (~ 1GB). Further, I get an OOM on my device when I try with `pythia-2.8b` model. Following is a snippet on how I am initialising the model on a meta device using HuggingFace APIs:
```
model_name = "EleutherAI/pythia-14m"
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
config = AutoConfig.from_pretrained(model_name)
with init_empty_weights():
model = AutoModelForCausalLM.from_config(config)
for module in model.modules():
if isinstance(module, GPTNeoXLayer):
fully_shard(module)
model = fully_shard(model, reshard_after_forward=True)
model = load_checkpoint_and_dispatch(
model, path_to_safe_tensors
)
```
This is not very straightforward since the shards expect `DTensors` when the weights are being loaded via `load_checkpoint_and_dispatch`. I am looking for some suggestions on what would be a good way to make FSDP2 work with HuggingFace models. I don't think accelerate supports FSDP2 yet.
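One alternative I have been looking at is loading a plain full state dict into the sharded model through `torch.distributed.checkpoint.state_dict` instead of `load_checkpoint_and_dispatch`. Roughly (a sketch I have not fully verified; the checkpoint path and the `rank` variable are placeholders):
```python
import torch
from torch.distributed.checkpoint.state_dict import StateDictOptions, set_model_state_dict

# After fully_shard, materialize real (empty) storage for the meta-device parameters.
model.to_empty(device="cuda")

# Load the full (unsharded) weights on rank 0 only; other ranks pass an empty dict.
full_sd = torch.load("pythia_state_dict.pt", map_location="cpu") if rank == 0 else {}

set_model_state_dict(
    model,
    model_state_dict=full_sd,
    options=StateDictOptions(full_state_dict=True, broadcast_from_rank0=True),
)
```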
|
https://github.com/pytorch/torchtitan/issues/743
|
open
|
[
"bug",
"question",
"module: checkpoint",
"huggingface integration"
] | 2024-12-16T05:45:04Z
| 2025-04-22T18:38:22Z
| null |
neeldani
|
pytorch/torchtitan
| 742
|
Low bit Optimizers & FA-3
|
1. Hi, have there been any tests with FA-3 and low-bit optimizers from torchao, such as FP8 Adam or 8-bit Adam? I see divergence in training when resuming an FA-2 checkpoint with FA-3, or when using 8-bit AdamW.
|
https://github.com/pytorch/torchtitan/issues/742
|
open
|
[
"bug",
"question"
] | 2024-12-16T03:56:22Z
| 2025-01-07T00:55:59Z
| null |
asahni-sc
|
huggingface/accelerate
| 3,294
|
How to run accelerate with PYTORCH_ENABLE_MPS_FALLBACK
|
### System Info
```Shell
MacOS
transformers>=4.35.1
datasets[audio]>=2.14.7
accelerate>=0.24.1
matplotlib
wandb
tensorboard
Cython
- `Accelerate` version: 1.2.1
- Platform: macOS-14.7.1-arm64-arm-64bit
- `accelerate` bash location: .venv/bin/accelerate
- Python version: 3.12.3
- Numpy version: 2.0.2
- PyTorch version (GPU?): 2.5.1 (False)
- PyTorch XPU available: False
- PyTorch NPU available: False
- PyTorch MLU available: False
- PyTorch MUSA available: False
- System RAM: 64.00 GB
- `Accelerate` default config:
Not found
```
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] One of the scripts in the examples/ folder of Accelerate or an officially supported `no_trainer` script in the `examples` folder of the `transformers` repo (such as `run_no_trainer_glue.py`)
- [ ] My own task or dataset (give details below)
### Reproduction
How do I set the `PYTORCH_ENABLE_MPS_FALLBACK` environment variable when running a script with accelerate? Accelerate is not picking up the `PYTORCH_ENABLE_MPS_FALLBACK` environment variable when running a script, no matter where the variable is set. I tried setting this variable in the script, on the command line, and in `./zshenv`, and PyTorch still complains that it does not see this variable.
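A minimal sketch of this kind of attempt (script and variable names are placeholders, not the exact code in use):
```python
import os

# Attempt: set the fallback flag before torch is imported anywhere.
os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"

import torch  # noqa: E402

print(os.environ.get("PYTORCH_ENABLE_MPS_FALLBACK"))  # prints "1" in this process
# ...yet the flag still seems absent in the process spawned by `accelerate launch train.py`.
```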
### Expected behavior
Expected the `PYTORCH_ENABLE_MPS_FALLBACK` variable to be visible in the sub-process/thread.
|
https://github.com/huggingface/accelerate/issues/3294
|
closed
|
[] | 2024-12-15T07:03:41Z
| 2025-01-23T15:06:57Z
| null |
mirodil-ml
|
pytorch/audio
| 3,863
|
How to install or download avutil-<VERSION>.dll and others on Windows Python venv not Conda!
|
I am reading this page and there is only information for conda.
I am not using conda but a Python venv.
So how do I install these DLL files, or where can I get them?
https://pytorch.org/audio/stable/installation.html#optional-dependencies
`When searching for FFmpeg installation, TorchAudio looks for library files which have names with version numbers. That is, libavutil.so.<VERSION> for Linux, libavutil.<VERSION>.dylib for macOS, and avutil-<VERSION>.dll for Windows. Many public pre-built binaries follow this naming scheme, but some distributions have un-versioned file names. If you are having difficulties detecting FFmpeg, double check that the library files you installed follow this naming scheme, (and then make sure that they are in one of the directories listed in library search path.)`
I can't find anywhere that these DLL files are distributed.
This is causing me to get this error:
```
File "R:\MMAudio_v1\MMAudio\venv\lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
File "R:\MMAudio_v1\MMAudio\gradio_demo.py", line 60, in video_to_audio
clip_frames, sync_frames, duration = load_video(video, duration)
File "R:\MMAudio_v1\MMAudio\mmaudio\eval_utils.py", line 178, in load_video
reader = StreamingMediaDecoder(video_path)
File "R:\MMAudio_v1\MMAudio\venv\lib\site-packages\torio\io\_streaming_media_decoder.py", line 526, in __init__
self._be = ffmpeg_ext.StreamingMediaDecoder(os.path.normpath(src), format, option)
File "R:\MMAudio_v1\MMAudio\venv\lib\site-packages\torio\_extension\utils.py", line 25, in __getattr__
self._import_once()
File "R:\MMAudio_v1\MMAudio\venv\lib\site-packages\torio\_extension\utils.py", line 39, in _import_once
self.module = self.import_func()
File "R:\MMAudio_v1\MMAudio\venv\lib\site-packages\torio\_extension\utils.py", line 143, in _init_ffmpeg
ext = _find_ffmpeg_extension(ffmpeg_vers)
File "R:\MMAudio_v1\MMAudio\venv\lib\site-packages\torio\_extension\utils.py", line 122, in _find_ffmpeg_extension
raise ImportError(
ImportError: Failed to intialize FFmpeg extension. Tried versions: ['6', '5', '4', '']. Enable DEBUG logging to see more details about the error.
```
|
https://github.com/pytorch/audio/issues/3863
|
closed
|
[] | 2024-12-14T13:15:01Z
| 2024-12-14T13:48:42Z
| null |
FurkanGozukara
|
pytorch/tutorials
| 3,186
|
Writing a gradient tutorial, focused on leaf vs non leaf tensors.
|
There is no tutorial that specifically talks about `requires_grad`, `retain_grad`, and leaf vs. non-leaf tensors and how they interact with each other. Can I write a tutorial specifically about this topic? This would be useful when gradients are used in unusual places, as is the case for the DeepDream algorithm.
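As a rough illustration of the behavior such a tutorial would cover (a minimal sketch, not the tutorial content itself):
```python
import torch

x = torch.randn(3, requires_grad=True)  # leaf tensor created by the user
y = x * 2                                # non-leaf: produced by an operation
y.retain_grad()                          # keep y's gradient after backward
z = y.sum()
z.backward()

print(x.is_leaf, y.is_leaf)  # True False
print(x.grad)                # populated because x is a leaf with requires_grad=True
print(y.grad)                # available only because retain_grad() was called
```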
cc: @albanD
|
https://github.com/pytorch/tutorials/issues/3186
|
closed
|
[
"advanced",
"tutorial-proposal",
"docathon-h1-2025",
"hard"
] | 2024-12-14T06:44:48Z
| 2025-08-20T23:30:53Z
| 5
|
JitheshPavan
|
huggingface/diffusers
| 10,223
|
Where should I obtain the lora-sdxl-dreambooth-id in Inference
|
### Describe the bug
I tried to upload the download link from the README file generated during training, but an error indicated it was incorrect. Where should I obtain the lora-id for Inference?
### Reproduction
README.md:
---
base_model: /data/ziqiang/czc/diffusers/examples/dreambooth/model
library_name: diffusers
license: openrail++
instance_prompt: a photo of sks dog
widget: []
tags:
- text-to-image
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - daniu111/output
<Gallery />
## Model description
These are daniu111/output LoRA adaption weights for /data/ziqiang/czc/diffusers/examples/dreambooth/model.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: /data/ziqiang/czc/diffusers/examples/dreambooth/model/vae.
## Trigger words
You should use a photo of sks dog to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](daniu111/output/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
Inference:
```python
from huggingface_hub.repocard import RepoCard
from diffusers import DiffusionPipeline
import torch

lora_model_id = <"lora-sdxl-dreambooth-id">
card = RepoCard.load(lora_model_id)
base_model_id = card.data.to_dict()["base_model"]

pipe = DiffusionPipeline.from_pretrained(base_model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
pipe.load_lora_weights(lora_model_id)
image = pipe("A picture of a sks dog in a bucket", num_inference_steps=25).images[0]
image.save("sks_dog.png")
```
The lora-dreambooth-sdxl-id seems to need to be uploaded, but I don't know where to obtain this ID.
### Logs
_No response_
### System Info
- 🤗 Diffusers version: 0.32.0.dev0
- Platform: Linux-5.4.0-198-generic-x86_64-with-glibc2.31
- Running on Google Colab?: No
- Python version: 3.12.4
- PyTorch version (GPU?): 2.4.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Huggingface_hub version: 0.26.2
- Transformers version: 4.46.3
- Accelerate version: 1.1.1
- PEFT version: 0.7.0
- Bitsandbytes version: not installed
- Safetensors version: 0.4.5
- xFormers version: 0.0.27.post2
- Accelerator: NVIDIA RTX A6000, 49140 MiB
NVIDIA RTX A6000, 49140 MiB
NVIDIA RTX A6000, 49140 MiB
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@hlky
|
https://github.com/huggingface/diffusers/issues/10223
|
open
|
[
"bug",
"stale"
] | 2024-12-14T06:34:56Z
| 2025-02-07T15:03:24Z
| 5
|
Zarato2122
|
pytorch/torchchat
| 1,424
|
Misaligned AOTI input; potential perf gains by fixing?
|
### 🐛 Describe the bug
Picked up in https://github.com/pytorch/torchchat/pull/1367 and worked around via https://github.com/pytorch/pytorch/pull/143236, it appears the input to the torchchat AOTI runner is not 16-byte aligned.
While the PR from pytorch/pytorch eases this constraint, this may be indicative of potential perf losses (common with misalignment).
Hat tip to @malfet for suggesting this line of investigation.
### Versions
https://github.com/pytorch/torchchat/commit/bb72b096b14f0c9753070f3523e43ed58aa55178
|
https://github.com/pytorch/torchchat/issues/1424
|
open
|
[
"bug",
"actionable",
"Compile / AOTI",
"triaged"
] | 2024-12-14T01:11:30Z
| 2024-12-17T23:35:29Z
| 1
|
Jack-Khuu
|
pytorch/xla
| 8,492
|
How to do multi-machine SPMD/FSDPv2 training with TPU?
|
## ❓ Questions and Help
I saw https://github.com/pytorch/xla/issues/6362, but there's no example training script to be found. For example, if I have multiple TPU v3-8 VMs, how would I achieve this with SPMD/FSDPv2?
I'm currently sending the commands to all TPU VMs this way:
```
python3.10 podrun --include-local -- hostname
```
|
https://github.com/pytorch/xla/issues/8492
|
closed
|
[
"question",
"distributed"
] | 2024-12-13T18:47:39Z
| 2025-05-05T12:34:29Z
| null |
radna0
|
huggingface/lerobot
| 575
|
Gello dataset converter
|
I made a converter for the [Gello](https://wuphilipp.github.io/gello_site/) dataset format (pickles containing dicts with all the observations).
If this is of interest, I am willing to contribute it back here.
The current code can be found [here](https://github.com/tlpss/lerobot/blob/tlpss-dev/lerobot/common/datasets/push_dataset_to_hub/gello_pkl_format.py). It needs some cleanup and maybe a convenient way to specify the mapping of dict keys in case you have a different number of cameras or other sensors. I wanted to see if there is any interest in this before I make the effort to clean it up.
|
https://github.com/huggingface/lerobot/issues/575
|
closed
|
[
"enhancement",
"question",
"dataset",
"stale"
] | 2024-12-13T15:47:58Z
| 2025-10-08T08:50:40Z
| null |
tlpss
|
huggingface/diffusers
| 10,207
|
KolorsPipeline does not support from_single_file
|
```python
from diffusers import KolorsPipeline

KolorsPipeline.from_single_file("models/kolrs-8steps.safetensors")
```
How does KolorsPipeline load a single file model?
|
https://github.com/huggingface/diffusers/issues/10207
|
open
|
[
"stale",
"single_file"
] | 2024-12-13T09:44:46Z
| 2025-01-12T15:02:46Z
| 3
|
Thekey756
|
huggingface/sentence-transformers
| 3,134
|
How to set a proper batchsize when using CachedMultipleNegativesRankingLoss?
|
When using the [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss), I tried different batch size (per_device_train_batch_size) settings and found that 512 was the maximum. When the batch size was greater than 512, a GPU memory OOM happened.
As stated in the document of [CachedMultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cachedmultiplenegativesrankingloss):
> GradCache is a smart way to solve this problem. It achieves the goal by dividing the computation into two stages of embedding and loss calculation, which both can be scaled by mini-batches. As a result, memory of constant size (e.g. that works with batch size = 32) can now process much larger batches (e.g. 65536).
So, I tried CachedMultipleNegativesRankingLoss, and the mini_batch_size of CachedMultipleNegativesRankingLoss can go as high as 2048; a mini_batch_size greater than 2048 will cause a GPU memory OOM.
Nevertheless, when setting the mini_batch_size to 2048, I can still increase the global batch size (per_device_train_batch_size). Generally speaking, a larger batch size achieves better performance in contrastive learning settings. So, I tried different batch sizes (per_device_train_batch_size) and found it can be as large as 1048576 without causing a GPU memory OOM (though the GPU utilization is 100%). So, I am wondering how to set a proper batch size (per_device_train_batch_size): can it be infinitely big?
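For reference, this is roughly how the two knobs are set (a sketch; the model name and `train_dataset` are placeholders):
```python
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import CachedMultipleNegativesRankingLoss

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# mini_batch_size bounds the per-chunk activation memory inside GradCache;
# per_device_train_batch_size is the effective in-batch-negatives batch size for the loss.
loss = CachedMultipleNegativesRankingLoss(model, mini_batch_size=2048)

args = SentenceTransformerTrainingArguments(
    output_dir="out",
    per_device_train_batch_size=65536,
)

# train_dataset: a datasets.Dataset of triplet/pair columns prepared earlier (placeholder).
trainer = SentenceTransformerTrainer(
    model=model, args=args, train_dataset=train_dataset, loss=loss
)
```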
|
https://github.com/huggingface/sentence-transformers/issues/3134
|
open
|
[] | 2024-12-13T09:25:34Z
| 2024-12-27T13:46:17Z
| null |
awmoe
|
huggingface/sentence-transformers
| 3,133
|
How to avoid the long time waiting before start training?
|
Dear developer,
Thanks for the great sentence-transformers library!
I am finetuning the [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2) using my own data following the tutorial from: https://sbert.net/docs/sentence_transformer/training_overview.html
I first finetuned it with a toy dataset containing only hundreds of triplet sentence samples, and everything was ok, and the finetuning was very fast.
After that, I finetuned it with the full, large dataset containing 100 million triplet sentence samples. I found that it had to wait a long time (about 60 minutes) before starting training, and the bigger the data, the longer the wait.
Specifically:
1. It first spent 5 minutes on `Generating train split`.
2. Then it spent 30 minutes on dataset mapping.
3. After that, it printed `Detected kernel version 4.18.0, which is below the recommended minimum of 5.5.0; this can cause the process to hang. It is recommended to upgrade the kernel to the minimum version or higher.`.
4. After that, it waited about 60 minutes before starting the real training.
During the 60 minutes, I found that the GPU was working but the GPU utilization rate was relatively low (30%) and the GPU memory was not used. What's more, during the 60 minutes, no log information was printed. Was it doing something like data preparation or tokenization? Could you tell me what it was doing, and how to avoid this long waiting time?
After the 60-minute wait, it started the real training; the GPU utilization rate was as high as 80%, and the GPU memory usage was around 70GB on an H100. What's more, the training progress bar was printing something like `x/y [69:08:34<130:13:54, 1.09it/s]`, so I knew it was training.
I also have another dataset which is 10 times larger than 100 million triplet sentence samples; I worry that I will have to wait days for training to start if I use that huge dataset.
Could you tell me what it was doing during the 60-minute wait, and how to avoid this long waiting time?
Thank you very much and look forward to your reply.
|
https://github.com/huggingface/sentence-transformers/issues/3133
|
open
|
[] | 2024-12-13T09:10:32Z
| 2024-12-25T03:46:50Z
| null |
awmoe
|
pytorch/torchtitan
| 735
|
[question]FSDP2 have more peak active memory/reserved memory than FSDP1
|
## Environment
OS: Ubuntu
GPU: 8x GPU
torch: torch-2.6.0.dev20241212+cu124
DDP: 4-way Tensor Parallel * 2-way FSDP
## Problem
I'm using FSDP+TP in my model, following the torchtitan code. When I switch from FSDP1 to FSDP2, the memory usage shown by `nvidia-smi` increases by 10GB, and the peak active memory is also much larger than with FSDP1. Is this expected? Which metric in `memory_summary` should I care about to avoid OOM?
Here is the result from `torch.cuda.memory_summary()`. The following tables were generated when **the first step ended**.
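For reference, the peak numbers being compared can also be queried directly (a minimal sketch):
```python
import torch

# Same quantities as the "Peak Usage" column in the tables below.
print(torch.cuda.max_memory_allocated() / 2**20, "MiB peak allocated")
print(torch.cuda.max_memory_reserved() / 2**20, "MiB peak reserved by the caching allocator")

# Peak *active* memory is available via memory_stats().
stats = torch.cuda.memory_stats()
print(stats["active_bytes.all.peak"] / 2**20, "MiB peak active")
```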
* fsdp2
```
|===========================================================================|
| PyTorch CUDA memory summary, device ID 0 |
|---------------------------------------------------------------------------|
| CUDA OOMs: 0 | cudaMalloc retries: 0 |
|===========================================================================|
| Metric | Cur Usage | Peak Usage | Tot Alloc | Tot Freed |
|---------------------------------------------------------------------------|
| Allocated memory | 13975 MiB | 18803 MiB | 2142 GiB | 2128 GiB |
| from large pool | 13959 MiB | 18790 MiB | 2140 GiB | 2127 GiB |
| from small pool | 16 MiB | 17 MiB | 1 GiB | 1 GiB |
|---------------------------------------------------------------------------|
| Active memory | 13975 MiB | 39454 MiB | 2142 GiB | 2128 GiB |
| from large pool | 13959 MiB | 39437 MiB | 2140 GiB | 2127 GiB |
| from small pool | 16 MiB | 18 MiB | 1 GiB | 1 GiB |
|---------------------------------------------------------------------------|
| Requested memory | 13792 MiB | 39306 MiB | 2138 GiB | 2125 GiB |
| from large pool | 13775 MiB | 39289 MiB | 2137 GiB | 2124 GiB |
| from small pool | 16 MiB | 18 MiB | 1 GiB | 1 GiB |
|---------------------------------------------------------------------------|
| GPU reserved memory | 45590 MiB | 45590 MiB | 45590 MiB | 0 B |
| from large pool | 45566 MiB | 45566 MiB | 45566 MiB | 0 B |
| from small pool | 24 MiB | 24 MiB | 24 MiB | 0 B |
|---------------------------------------------------------------------------|
| Non-releasable memory | 377331 KiB | 7818 MiB | 1017 GiB | 1017 GiB |
| from large pool | 375788 KiB | 7813 MiB | 1016 GiB | 1016 GiB |
| from small pool | 1543 KiB | 10 MiB | 1 GiB | 1 GiB |
|---------------------------------------------------------------------------|
| Allocations | 4735 | 4738 | 34212 | 29477 |
| from large pool | 1504 | 1507 | 15954 | 14450 |
| from small pool | 3231 | 3348 | 18258 | 15027 |
|---------------------------------------------------------------------------|
| Active allocs | 4735 | 4738 | 34212 | 29477 |
| from large pool | 1504 | 1507 | 15954 | 14450 |
| from small pool | 3231 | 3348 | 18258 | 15027 |
|---------------------------------------------------------------------------|
| GPU reserved segments | 304 | 304 | 304 | 0 |
| from large pool | 292 | 292 | 292 | 0 |
| from small pool | 12 | 12 | 12 | 0 |
|---------------------------------------------------------------------------|
| Non-releasable allocs | 15 | 135 | 15054 | 15039 |
| from large pool | 13 | 89 | 9160 | 9147 |
| from small pool | 2 | 65 | 5894 | 5892 |
|---------------------------------------------------------------------------|
| Oversize allocations | 0 | 0 | 0 | 0 |
|---------------------------------------------------------------------------|
| Oversize GPU segments | 0 | 0 | 0 | 0 |
|===========================================================================|
```
* fsdp1
```
|===========================================================================|
| PyTorch CUDA memory summary, device ID 0 |
|---------------------------------------------------------------------------|
| CUDA OOMs: 0 | cudaMalloc retries: 0 |
|===========================================================================|
| Metric | Cur Usage | Peak Usage | Tot Alloc | Tot Freed |
|---------------------------------------------------------------------------|
| Allocated memory | 13947 MiB | 18561 MiB | 2156 GiB | 2142 GiB |
| from large pool | 13937 MiB | 18556 MiB | 2155 GiB | 2141 GiB |
|
|
https://github.com/pytorch/torchtitan/issues/735
|
closed
|
[
"question"
] | 2024-12-13T08:42:49Z
| 2024-12-18T11:31:23Z
| null |
FindDefinition
|
pytorch/torchtitan
| 734
|
using fsdp2 wrapper Flux(text to image) model , gradient is inconsistent with fsdp1
|
I use `register_full_backward_hook` to print gradients during backward, like this:
```
def print_grad_hook(name):
    def hook(module, grad_input, grad_output):
        print(f"Layer Name: {name},Grad input: {grad_input},Grad output: {grad_output}")
    return hook

for name, layer in model.named_children():
    layer.register_full_backward_hook(print_grad_hook(name))
```
But I discovered that the last layer's grad is inconsistent between FSDP1 and FSDP2 (the 'Grad output' is consistent).
```
fsdp1 grad:
Layer Name: proj_out,Grad input: (tensor([[[-1.4901e-08, 2.2445e-07, 5.4250e-08, ..., 3.7812e-07,
4.0606e-07, -3.8184e-07]]], device='cuda:0'),),Grad output: (tensor([[[-2.3991e-06, 2.3693e-06, 1.3947e-05, ...,
4.0233e-07, 8.0466e-07]]], device='cuda:0', dtype=torch.bfloat16),)
fsdp2 grad:
Layer Name: proj_out,Grad input: (tensor([[[-0.0000e+00, 2.3842e-07, 5.9605e-08, ..., 8.9407e-07,
4.1723e-07, -3.5763e-07]]], device='cuda:0'),),Grad output: (tensor([[[-2.3991e-06, 2.3693e-06, 1.3947e-05, ...,
4.0233e-07, 8.0466e-07]]], device='cuda:0', dtype=torch.bfloat16),)
```
Below is my code to wrap the Flux model. Currently I'm not using compile or activation checkpointing.
```
for layer_id, transformer_block in model.transformer_blocks.named_children():
    if pp_enabled:
        # For PP, do not reshard after forward to avoid per-microbatch
        # all-gathers, which can be expensive and non-overlapped
        reshard_after_forward = False
    else:
        # As an optimization, do not reshard after forward for the last
        # transformer block since FSDP would prefetch it immediately
        reshard_after_forward = True
    fully_shard(
        transformer_block,
        **fsdp_config,
        reshard_after_forward=reshard_after_forward,
    )

for layer_id, transformer_block in model.single_transformer_blocks.named_children():
    if pp_enabled:
        # For PP, do not reshard after forward to avoid per-microbatch
        # all-gathers, which can be expensive and non-overlapped
        reshard_after_forward = False
    else:
        # As an optimization, do not reshard after forward for the last
        # transformer block since FSDP would prefetch it immediately
        reshard_after_forward = int(layer_id) < len(model.single_transformer_blocks) - 1
    fully_shard(
        transformer_block,
        **fsdp_config,
        reshard_after_forward=reshard_after_forward,
    )

fully_shard(model, **fsdp_config, reshard_after_forward=not pp_enabled)
```
|
https://github.com/pytorch/torchtitan/issues/734
|
closed
|
[
"question"
] | 2024-12-13T07:59:32Z
| 2025-08-21T02:58:13Z
| null |
yanmj0601
|
huggingface/lighteval
| 447
|
[BUG] how to eval large scale model use 1dp+8pp?
|
## Describe the bug
I tired to eval a large scale model use1dp+8pp with accelerate. I use the command like the following:
```
accelerate launch --multi_gpu --num_processes=1 run_evals_accelerate.py \
--model_args="pretrained=<path to model on the hub>" \
--model_parallel \
--tasks <task parameters> \
--output_dir output_dir
```
The error is ```ValueError: You need to use at least 2 processes to use --multi_gpu```
How to solve this problem?
## Version info
lighteval-0.3.0
|
https://github.com/huggingface/lighteval/issues/447
|
closed
|
[
"bug"
] | 2024-12-13T03:56:36Z
| 2025-01-02T11:20:20Z
| null |
mxjmtxrm
|
pytorch/vision
| 8,803
|
OpenGL interoperability
|
### 🚀 The feature
Zero-copy transfer of data between PyTorch and OpenGL on GPU by including "OpenGL interoperability" from CUDA in torchvision.
### Motivation, pitch
I am working on a real-time machine learning graphics project which uses OpenGL both as an intermediate processing step in the model and to visualize the output. Right now transfer of data between PyTorch and OpenGL is a problem for both training and inference.
Without any additional packages, I can copy data from PyTorch CUDA to the CPU and then back to OpenGL on the GPU; this is very simple but slow.
I can instead use CUDA bindings for Python and a separate CUDA Toolkit installation to avoid the data transfer, but this is quite complex, and there are many competing ways and tools for doing this, which makes it hard to navigate.
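For concreteness, the slow copy-through-CPU path looks roughly like this (a sketch using moderngl; sizes and tensor contents are placeholders):
```python
import moderngl
import torch

ctx = moderngl.create_standalone_context()

frame = torch.rand(512, 512, 4, device="cuda")           # image produced by the model on the GPU
tex = ctx.texture((512, 512), components=4, dtype="f4")   # destination OpenGL texture

# Round trip through host memory: GPU (PyTorch) -> CPU -> GPU (OpenGL).
tex.write(frame.cpu().numpy().tobytes())
```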
### Alternatives
_No response_
### Additional context
The 2 main ways I have been using OpenGL from python are with the packages `moderngl` and `PyOpenGL`.
|
https://github.com/pytorch/vision/issues/8803
|
open
|
[] | 2024-12-12T16:04:11Z
| 2024-12-12T16:04:11Z
| 0
|
cajoek
|
huggingface/diffusers
| 10,196
|
How to finetune Flux-dev full params, 80G OOM ...
|
I am using the [train_dreambooth_flux](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/train_dreambooth_flux.py) script to fine-tune the `flux-dev` model with full parameters using DeepSpeed Stage 2. However, I am still encountering out-of-memory issues on an 80GB GPU. Are there any solutions available to address this problem? Thanks!
|
https://github.com/huggingface/diffusers/issues/10196
|
open
|
[
"training"
] | 2024-12-12T09:24:18Z
| 2025-08-20T13:19:20Z
| null |
huangjun12
|
huggingface/chat-ui
| 1,627
|
Cookie “hf-chat” has been rejected because there is an existing “secure” cookie.
|
## Bug description
I use `ghcr.io/huggingface/chat-ui-db:latest` to host `ChatUI` in Docker. If I set `PUBLIC_ORIGIN="http://localhost"` in `.env.local` and visit `ChatUI` through `http://localhost:3000`, it works well. Then I tried to replace `localhost` with my domain name `qiangwulab.sjtu.edu.cn`. For the sake of testing, I modified `/etc/hosts` so that `qiangwulab.sjtu.edu.cn` resolves to `127.0.0.1`. When I visit `ChatUI` through `http://qiangwulab.sjtu.edu.cn:3000`, it does not work, showing a page similar to the one in https://github.com/huggingface/chat-ui/issues/1057. The Firefox console shows
```
Cookie “hf-chat” has been rejected because a non-HTTPS cookie can’t be set as “secure”.
```
https://github.com/huggingface/chat-ui/issues/1057 says that I should use `ALLOW_INSECURE_COOKIES=true`. It still does not work, and the Firefox console shows
```
Cookie “hf-chat” has been rejected because there is an existing “secure” cookie.
```
`ALLOW_INSECURE_COOKIES=true` seems to be legacy. Thus, I also tried `COOKIE_SAMESITE="lax"` and `COOKIE_SECURE=false`. The effect is the same. The Firefox console shows
```
Cookie “hf-chat” has been rejected because there is an existing “secure” cookie.
```
Is it possible to use `http` for a domain name other than `localhost`?
## Steps to reproduce
<!-- Steps to reproduce the issue -->
## Screenshots
<!-- If applicable, add screenshots to help explain your problem. -->
## Context
### Logs
<!-- Add any logs that are relevant to your issue. Could be browser or server logs. Wrap in code blocks. -->
```
// logs here if relevant
```
### Specs
- **OS**: ubuntu 24.04
- **Browser**: firefox
- **chat-ui commit**: ghcr.io/huggingface/chat-ui-db:latest
### Config
<!-- Add the environment variables you've used to setup chat-ui, making sure to redact any secrets. -->
## Notes
<!-- Anything else relevant to help the issue get solved -->
|
https://github.com/huggingface/chat-ui/issues/1627
|
open
|
[
"bug"
] | 2024-12-12T07:04:26Z
| 2024-12-12T07:04:26Z
| 0
|
ljw20180420
|
pytorch/xla
| 8,486
|
2 questions for the composite op feature
|
## ❓ Questions and Help
Glad to see that the [composite op feature](https://github.com/pytorch/xla/blob/master/docs/source/features/stablehlo.md#preserving-high-level-pytorch-operations-in-stablehlo-by-generating-stablehlocomposite) has been added to Torch-XLA. I have tried this feature and have some questions; I hope to get answers/suggestions here:
1. Some redundant IRs (starting from `custom_call`) can't be erased after creating the composite op, e.g. `Gelu`:
```python
import torch
import torch_xla
import torch_xla.core.xla_model as xm
from torch_xla import stablehlo
from torch_xla.experimental.mark_pattern_utils import StableHLOCompositeBuilder
class Example(torch.nn.Module):
    def __init__(self):
        super(Example, self).__init__()
        self.gelu = torch.nn.GELU(approximate="none")
        self.composite_op = StableHLOCompositeBuilder("composite.gelu", {"approximate": "none"})

    def forward(self, x):
        x = self.composite_op.mark_inputs(x)
        y = self.gelu(x)
        y = self.composite_op.mark_outputs(y)
        return y
x = torch.randn(10, device=xm.xla_device())
model = Example().to(xm.xla_device())
print(model(x))
input_args = (x, )
exported = torch.export.export(model, input_args)
# print(exported.graph)
stablehlo_gm = stablehlo.exported_program_to_stablehlo(exported)
stablehlo = stablehlo_gm.get_stablehlo_text()
print(stablehlo)
```
The generated StableHLO is:
```mlir
module @IrToHlo.16 attributes {mhlo.cross_program_prefetches = [], mhlo.input_output_alias = [], mhlo.is_dynamic = false, mhlo.use_auto_spmd_partitioning = false} {
  func.func @main(%arg0: tensor<10xf32>) -> tensor<10xf32> {
    %cst = stablehlo.constant dense<0.707106769> : tensor<10xf32>
    %0 = stablehlo.multiply %arg0, %cst : tensor<10xf32>
    %1 = stablehlo.custom_call @mhlo.erf(%0) {mhlo.attributes = {}, mhlo.version = 1 : i64} : (tensor<10xf32>) -> tensor<10xf32>
    %2 = stablehlo.composite "composite.gelu" %arg0 {composite_attributes = {approximate = "none"}, decomposition = @composite.gelu.impl} : (tensor<10xf32>) -> tensor<10xf32>
    return %2 : tensor<10xf32>
  }
  func.func private @composite.gelu.impl(%arg0: tensor<10xf32>) -> tensor<10xf32> {
    %cst = stablehlo.constant dense<1.000000e+00> : tensor<10xf32>
    %cst_0 = stablehlo.constant dense<0.707106769> : tensor<10xf32>
    %cst_1 = stablehlo.constant dense<5.000000e-01> : tensor<10xf32>
    %0 = stablehlo.multiply %arg0, %cst_1 : tensor<10xf32>
    %1 = stablehlo.multiply %arg0, %cst_0 : tensor<10xf32>
    %2 = stablehlo.custom_call @mhlo.erf(%1) {mhlo.attributes = {}, mhlo.version = 1 : i64} : (tensor<10xf32>) -> tensor<10xf32>
    %3 = stablehlo.add %2, %cst : tensor<10xf32>
    %4 = stablehlo.multiply %0, %3 : tensor<10xf32>
    return %4 : tensor<10xf32>
  }
}
```
The `erf` op in `main` is useless and not erased. I have checked the [composite op pass](https://github.com/pytorch/xla/blob/master/torch_xla/csrc/runtime/stablehlo_composite_helper.cc#L514-L519); it leaves these useless ops to a later `canonicalizer` pass instead of erasing them directly, but the `canonicalizer` didn't handle it... I guess it's caused by the custom call's side effect.
**The question**: Can the composite op pass erase these ops directly? Is there any special reason to avoid erasing them here?
2. The composite op feature doesn't work in training. Even though the proposal for this feature currently targets inference (it works with the export API), I tried to enable it in training locally, and found that it reported a warning:
> UserWarning: xla::mark_tensor: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at /data4/home/luteng/code/pytorch/torch/csrc/autograd/autograd_not_implemented_fallback.cpp:62.)
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
Then the backward graph is not generated.
**The question**: Is there any plan to support the composite op feature in training? It seems the only missing part is adding an Autograd registration for `mark_tensor`, but I'm just an XLA developer and not familiar with PyTorch, so I don't know how to add it...
|
https://github.com/pytorch/xla/issues/8486
|
closed
|
[
"question",
"stablehlo"
] | 2024-12-12T02:37:57Z
| 2025-05-05T12:32:51Z
| null |
Zantares
|
pytorch/ao
| 1,403
|
ImportError: cannot import name 'weight_only_quant_qconfig' from 'torchao.quantization' (R:\CogVideoX_v3\CogVideo\venv\Lib\site-packages\torchao\quantization\__init__.py)
|
I am trying to use [CogVideoX1.5-5B-I2V](https://huggingface.co/THUDM/CogVideoX1.5-5B-I2V) with the following code.
I am on Windows.
Everything is installed, but I am still getting this error (version 0.7.0):
```
Traceback (most recent call last):
File "R:\CogVideoX_v3\CogVideo\inference\gradio_composite_demo\app.py", line 40, in <module>
from torchao.quantization import quantize_, int8_weight_only, weight_only_quant_qconfig
ImportError: cannot import name 'weight_only_quant_qconfig' from 'torchao.quantization' (R:\CogVideoX_v3\CogVideo\venv\Lib\site-packages\torchao\quantization\__init__.py)
Press any key to continue . . .
```
```
import torch
from diffusers import AutoencoderKLCogVideoX, CogVideoXTransformer3DModel, CogVideoXImageToVideoPipeline
from diffusers.utils import export_to_video, load_image
from transformers import T5EncoderModel
from torchao.quantization import quantize_, int8_weight_only
quantization = int8_weight_only
text_encoder = T5EncoderModel.from_pretrained("THUDM/CogVideoX1.5-5B-I2V", subfolder="text_encoder",
torch_dtype=torch.bfloat16)
quantize_(text_encoder, quantization())
transformer = CogVideoXTransformer3DModel.from_pretrained("THUDM/CogVideoX1.5-5B-I2V", subfolder="transformer",
torch_dtype=torch.bfloat16)
quantize_(transformer, quantization())
vae = AutoencoderKLCogVideoX.from_pretrained("THUDM/CogVideoX1.5-5B-I2V", subfolder="vae", torch_dtype=torch.bfloat16)
quantize_(vae, quantization())
# Create pipeline and run inference
pipe = CogVideoXImageToVideoPipeline.from_pretrained(
"THUDM/CogVideoX1.5-5B-I2V",
text_encoder=text_encoder,
transformer=transformer,
vae=vae,
torch_dtype=torch.bfloat16,
)
pipe.enable_model_cpu_offload()
pipe.vae.enable_tiling()
pipe.vae.enable_slicing()
prompt = "A little girl is riding a bicycle at high speed. Focused, detailed, realistic."
image = load_image(image="input.jpg")
video = pipe(
prompt=prompt,
image=image,
num_videos_per_prompt=1,
num_inference_steps=50,
num_frames=81,
guidance_scale=6,
generator=torch.Generator(device="cuda").manual_seed(42),
).frames[0]
export_to_video(video, "output.mp4", fps=8)
```
|
https://github.com/pytorch/ao/issues/1403
|
closed
|
[
"question",
"triaged"
] | 2024-12-11T23:43:15Z
| 2024-12-12T01:45:57Z
| null |
FurkanGozukara
|
huggingface/diffusers
| 10,190
|
How to use fluxfill to repalce background?
|
I want to use FluxFill to change the background, but I find that the prompt is almost useless, and the output image looks more like the original image.
I have tested multiple guidance_scale values, but found that the resulting image is closely related to the original image and only weakly related to the prompt.
|
https://github.com/huggingface/diffusers/issues/10190
|
closed
|
[] | 2024-12-11T10:48:27Z
| 2025-05-23T12:12:28Z
| null |
babyta
|
huggingface/sentence-transformers
| 3,132
|
How to train a model with DDP for TSDAE
|
Hello, I want to train a model using the TSDAE method.
Is there any way to train with DDP (multi-GPU)?
I have already read your sample code.
But I'm not sure how to apply `DenoisingAutoEncoderDataset` with `SentenceTransformerTrainer`.
([[v3] Training refactor - MultiGPU, loss logging, bf16, etc](https://github.com/UKPLab/sentence-transformers/pull/2449))
|
https://github.com/huggingface/sentence-transformers/issues/3132
|
closed
|
[] | 2024-12-11T10:39:30Z
| 2024-12-11T14:04:32Z
| null |
OnAnd0n
|
pytorch/TensorRT
| 3,317
|
❓ [Question] Jetson AGX Orin Install in Jetpack 6.1 Build did NOT complete successfully
|
## ❓ Question
I followed this [tutorial](https://pytorch.org/TensorRT/getting_started/installation.html) to install Torch-TensorRT, but in the last step:
```
# build and install torch_tensorrt wheel file
python setup.py --use-cxx11-abi install --user
```
some errors happened:
```
using CXX11 ABI build
Jetpack version: 6.1
building libtorchtrt cmd=['/usr/bin/bazel', 'build', '//:libtorchtrt', '--compilation_mode=opt', '--distdir=third_party/dist_dir/x86_64-linux-gnu', '--config=linux', '--platforms=//toolchains:jetpack_6.1']
DEBUG: /home/lab223/.cache/bazel/_bazel_lab223/3fb6c16c20f38dfc11e57e77e6eea473/external/rules_python~/python/private/python.bzl:46:10: WARNING: Ignoring toolchain 'python_3_11' from module 'rules_pkg': Toolchain 'python_3_11' from module 'torch_tensorrt' already registered Python version 3.11 and has precedence
INFO: Analyzed target //:libtorchtrt (127 packages loaded, 13849 targets configured).
ERROR: /home/lab223/TensorRT/core/util/BUILD:60:11: Compiling core/util/Exception.cpp failed: (Exit 1): gcc failed: error executing CppCompile command (from target //core/util:exception) /home/lab223/anaconda3/envs/rnw/bin/gcc -U_FORTIFY_SOURCE -fstack-protector -Wall -Wunused-but-set-parameter -Wno-free-nonheap-object -fno-omit-frame-pointer -g0 -O2 '-D_FORTIFY_SOURCE=1' -DNDEBUG ... (remaining 25 arguments skipped)
Use --sandbox_debug to see verbose messages from the sandbox and retain the sandbox build root for debugging
gcc: fatal error: cannot execute 'cc1plus': execvp: No such file or directory
compilation terminated.
Target //:libtorchtrt failed to build
Use --verbose_failures to see the command lines of failed build steps.
INFO: Elapsed time: 8.444s, Critical Path: 4.05s
INFO: 329 processes: 329 internal.
ERROR: Build did NOT complete successfully
```
## Environment
> Build information about Torch-TensorRT can be found by turning on debug messages
- PyTorch Version (e.g., 1.0): **2.5.0**
- CPU Architecture: **arm64(Jetson AGX Orin)**
- OS (e.g., Linux): **Linux**
- How you installed PyTorch: **pip**
- Build command you used (if compiling from source): **python setup.py --use-cxx11-abi install --user**
- Are you using local sources or building from archives: **building from archives**
- Python version: **3.10.15**
- CUDA version: **12.6**
- GPU models and configuration: -
- Any other relevant information: Install torch_tensorrt in the model's anaconda virtual environment
## Additional context
Please help me! Thanks!
|
https://github.com/pytorch/TensorRT/issues/3317
|
open
|
[
"question"
] | 2024-12-11T09:21:09Z
| 2024-12-18T19:16:46Z
| null |
breknddone
|
huggingface/diffusers
| 10,180
|
Can't load multiple loras when using Flux Control LoRA
|
### Describe the bug
I was trying out the FluxControlPipeline with the Control LoRA introduced in #9999, but had issues loading multiple LoRAs.
For example, if I load the depth lora first and then the 8-step lora, it errors on the 8-step lora, and if I load the 8-step lora first and then the depth lora, it errors when loading the depth lora.
### Reproduction
```
from diffusers import FluxControlPipeline
from huggingface_hub import hf_hub_download
import torch
control_pipe = FluxControlPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16).to("cuda")
control_pipe.load_lora_weights("black-forest-labs/FLUX.1-Depth-dev-lora")
control_pipe.load_lora_weights(hf_hub_download("ByteDance/Hyper-SD", "Hyper-FLUX.1-dev-8steps-lora.safetensors"))
```
### Logs
```shell
AttributeError Traceback (most recent call last)
Cell In[6], line 8
5 control_pipe = FluxControlPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16).to("cuda")
7 control_pipe.load_lora_weights("black-forest-labs/FLUX.1-Depth-dev-lora")
----> 8 control_pipe.load_lora_weights(
9 hf_hub_download(
10 "ByteDance/Hyper-SD", "Hyper-FLUX.1-dev-8steps-lora.safetensors"
11 ),
12 adapter_name="HyperFlux",
13 )
File ~/.venv/lib/python3.10/site-packages/diffusers/loaders/lora_pipeline.py:1856, in FluxLoraLoaderMixin.load_lora_weights(self, pretrained_model_name_or_path_or_dict, adapter_name, **kwargs)
1849 transformer_norm_state_dict = {
1850 k: state_dict.pop(k)
1851 for k in list(state_dict.keys())
1852 if "transformer." in k and any(norm_key in k for norm_key in self._control_lora_supported_norm_keys)
1853 }
1855 transformer = getattr(self, self.transformer_name) if not hasattr(self, "transformer") else self.transformer
-> 1856 has_param_with_expanded_shape = self._maybe_expand_transformer_param_shape_or_error_(
1857 transformer, transformer_lora_state_dict, transformer_norm_state_dict
1858 )
1860 if has_param_with_expanded_shape:
1861 logger.info(
1862 "The LoRA weights contain parameters that have different shapes that expected by the transformer. "
1863 "As a result, the state_dict of the transformer has been expanded to match the LoRA parameter shapes. "
1864 "To get a comprehensive list of parameter names that were modified, enable debug logging."
1865 )
File ~/.venv/lib/python3.10/site-packages/diffusers/loaders/lora_pipeline.py:2316, in FluxLoraLoaderMixin._maybe_expand_transformer_param_shape_or_error_(cls, transformer, lora_state_dict, norm_state_dict, prefix)
2314 if isinstance(module, torch.nn.Linear):
2315 module_weight = module.weight.data
-> 2316 module_bias = module.bias.data if hasattr(module, "bias") else None
2317 bias = module_bias is not None
2319 lora_A_weight_name = f"{name}.lora_A.weight"
AttributeError: 'NoneType' object has no attribute 'data'
```
### System Info
- 🤗 Diffusers version: 0.32.0.dev0
- Platform: Linux-5.15.0-124-generic-x86_64-with-glibc2.35
- Running on Google Colab?: No
- Python version: 3.10.12
- PyTorch version (GPU?): 2.5.1+cu124 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Huggingface_hub version: 0.26.5
- Transformers version: 4.47.0
- Accelerate version: 1.2.0
- PEFT version: 0.14.0
- Bitsandbytes version: not installed
- Safetensors version: 0.4.5
- xFormers version: not installed
- Accelerator: NVIDIA H100 80GB HBM3, 81559 MiB
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
@a-r-r-o-w @sayakpaul
|
https://github.com/huggingface/diffusers/issues/10180
|
closed
|
[
"bug",
"help wanted",
"lora"
] | 2024-12-10T21:40:24Z
| 2024-12-20T09:00:33Z
| 11
|
jonathanyin12
|
huggingface/transformers
| 35,186
|
How to convert my Mask2Former model (ResNet-50 backbone) to Hugging Face transformer
|
### System Info
```shell
- `transformers` version: 4.34.0
- Platform: Linux-6.8.0-31-generic-x86_64-with-glibc2.17
- Python version: 3.8.20
- Huggingface_hub version: 0.17.3
- Safetensors version: 0.4.5
- Accelerate version: 0.23.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
```
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I found the following script, but it only supports conversion for the Mask2Former model with a Swin backbone: https://github.com/huggingface/transformers/blob/main/src/transformers/models/mask2former/convert_mask2former_original_pytorch_checkpoint_to_pytorch.py
May I ask for some guidance on how to adjust the script so that it can support the ResNet-50 architecture?
### Expected behavior
```shell
Convert my Mask2Former model (ResNet-50 backbone) to Hugging Face transformer
```
### Checklist
- [X] I have read the migration guide in the readme. ([pytorch-transformers](https://github.com/huggingface/transformers#migrating-from-pytorch-transformers-to-transformers); [pytorch-pretrained-bert](https://github.com/huggingface/transformers#migrating-from-pytorch-pretrained-bert-to-transformers))
- [X] I checked if a related official extension example runs on my machine.
|
https://github.com/huggingface/transformers/issues/35186
|
closed
|
[] | 2024-12-10T19:17:22Z
| 2025-01-18T08:03:21Z
| null |
yujunwei04
|
huggingface/datasets
| 7,318
|
Introduce support for PDFs
|
### Feature request
The idea (discussed in the Discord server with @lhoestq) is to have a Pdf type like Image/Audio/Video. For example, [Video](https://github.com/huggingface/datasets/blob/main/src/datasets/features/video.py) was recently added and shows how to decode a video file encoded in a dictionary like {"path": ..., "bytes": ...} as a VideoReader using decord. We want to do the same with PDF and get a [pypdfium2.PdfDocument](https://pypdfium2.readthedocs.io/en/stable/_modules/pypdfium2/_helpers/document.html#PdfDocument).
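For reference, decoding with pypdfium2 looks roughly like this (a sketch; the file path is a placeholder):
```python
import pypdfium2 as pdfium

# What a decoded Pdf feature could hand back to the user: a pypdfium2.PdfDocument.
doc = pdfium.PdfDocument("example.pdf")  # also accepts raw bytes or a file-like object
print(len(doc), "pages")

page = doc[0]
image = page.render(scale=2).to_pil()     # rasterize a page to a PIL image if needed
```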
### Motivation
In many cases PDFs contain very valuable information beyond text (e.g. images, figures). Support for PDFs would help create datasets where all the information is preserved.
### Your contribution
I can start the implementation of the Pdf type :)
|
https://github.com/huggingface/datasets/issues/7318
|
open
|
[
"enhancement"
] | 2024-12-10T16:59:48Z
| 2024-12-12T18:38:13Z
| 6
|
yabramuvdi
|
huggingface/diffusers
| 10,172
|
Raise an error when `len(gligen_images )` is not equal to `len(gligen_phrases)` in `StableDiffusionGLIGENTextImagePipeline`
|
To whom it may concern,
I found that when using `StableDiffusionGLIGENTextImagePipeline`, no error is raised when `len(gligen_images)` is not equal to `len(gligen_phrases)`. When I dug into the source code, it seems that these two inputs are zipped together in a for loop during preprocessing. I guess this will cause the longer one to be clipped unintentionally. (If my understanding is wrong, feel free to correct me.) Is there any possibility of raising an error, or at least a warning? Thanks in advance.
Source Code: https://github.com/huggingface/diffusers/blob/v0.31.0/src/diffusers/pipelines/stable_diffusion_gligen/pipeline_stable_diffusion_gligen_text_image.py#L689
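To illustrate the clipping behavior described above (a standalone sketch, not the pipeline code):
```python
gligen_phrases = ["a birdhouse", "a red bird", "a tree"]
gligen_images = ["image_a", "image_b"]  # stand-ins for PIL images; one entry short

# zip() silently stops at the shorter input, so the third phrase is dropped.
pairs = list(zip(gligen_phrases, gligen_images))
print(pairs)       # [('a birdhouse', 'image_a'), ('a red bird', 'image_b')]
print(len(pairs))  # 2
```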
|
https://github.com/huggingface/diffusers/issues/10172
|
closed
|
[] | 2024-12-10T14:25:48Z
| 2024-12-11T08:59:44Z
| 1
|
abcdefg133hi
|
huggingface/lerobot
| 568
|
Do I need two SO 100 arms to get started?
|
I have printed and assembled one arm, the follower version. Do I need two arms to record datasets and do testing?
|
https://github.com/huggingface/lerobot/issues/568
|
closed
|
[
"question",
"robots"
] | 2024-12-10T13:31:50Z
| 2025-10-08T08:45:58Z
| null |
rabhishek100
|
pytorch/ao
| 1,397
|
"Where is the overloaded function for torch.nn.functional.linear(aqt, original_weight_tensor, bias)? "
|
Here is an example
int8_dynamic_activation_int8_weight
aqt:
AffineQuantizedTensor(tensor_impl=PlainAQTTensorImpl(data=tensor([[ 5, -2, 24, ..., 17, 73, 54],
[ -30, -19, -53, ..., -9, -33, 55],
[ -7, -20, -28, ..., 47, 71, -15],
...,
[ 36, 8, 40, ..., 13, -10, 45],
[ -38, -12, 47, ..., -22, 0, -29],
[ 20, -127, 52, ..., 18, 27, -36]], dtype=torch.int8)... , scale=tensor([0.0293, 0.0233, 0.0271, 0.0234, 0.0209, 0.0227, 0.0247, 0.0328, 0.0270,
0.0215, 0.0245, 0.0209, 0.0325, 0.0232, 0.0238, 0.0267, 0.0237, 0.0202,
0.0249, 0.0239, 0.0255, 0.0246, 0.0225, 0.0288, 0.0194, 0.0215, 0.0224,
0.0210, 0.0253, 0.0189, 0.0240, 0.0228, 0.0208, 0.0211, 0.0295, 0.0275,
0.0200, 0.0250, 0.0202, 0.0269, 0.0266, 0.0203, 0.0223, 0.0246, 0.0212,
0.0217, 0.0246, 0.0203, 0.0219, 0.0237, 0.0216, 0.0191, 0.0213, 0.0227,
0.0330, 0.0194, 0.0226, 0.0162, 0.0203, 0.0284, 0.0218, 0.0208, 0.0254,
0.0220, 0.0357, 0.0288, 0.0290, 0.0235, 0.0218, 0.0188, 0.0279, 0.0232,
0.0238, 0.0195, 0.0256, 0.0255, 0.0204, 0.0198, 0.0211, 0.0219, 0.0262,
0.0253, 0.0246, 0.0177, 0.0209, 0.0216, 0.0253, 0.0261, 0.0215, 0.0257,
0.0240, 0.0197, 0.0206, 0.0270, 0.0243, 0.0218, 0.0261, 0.0350, 0.0238,
0.0243])... , zero_point=tensor([0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0.])... , _layout=PlainLayout()), block_size=[1, 200], shape=torch.Size([100, 200]), device=cpu, dtype=torch.float32, requires_grad=False)
original_weight_tensor:
AffineQuantizedTensor(tensor_impl=PlainAQTTensorImpl(data=tensor([[ 127, 0, 0, ..., 0, 0, 0],
[ 127, 0, 0, ..., 0, 0, 0],
[ 127, 0, 0, ..., 0, 0, 0],
...,
[ 47, 36, -70, ..., 49, 71, 5],
[ 117, -2, -91, ..., -112, 9, -81],
[ -67, -91, 114, ..., 51, 11, -126]], dtype=torch.int8)... , scale=tensor([7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01,
7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01,
7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01,
7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01,
7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01,
7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01,
7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01,
7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01,
7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01,
7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01,
7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01,
7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01,
7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01,
7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01,
7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01,
7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01,
7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01,
7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01,
7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01,
7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01,
7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01,
7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01,
7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01,
7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01,
7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01, 7.8431e+01,
2.3313e-02, 2.3492e-02, 2.3277e-02, 2.3458e-02, 2.3438e-02, 2.3528e-02,
2.3352e-02, 2.3522e-02, 2.3500e-02, 2.3332e-02, 2.3376e-02, 2.3481e-02,
2.3275e-02, 2.3509e-02, 2.3453e-02, 2.3460e-02, 2.3525e-02, 2.3489e-02,
2.3482e-02, 2.3436e-02, 2.3499e-02, 2.3523e-02, 2.3519e-02, 2.3320e-02,
2.3503e-02, 2.3453e-02, 2.3514e-02, 2.3496e-02, 2.3330e-02, 2.3444e-02,
2.3483e-02, 2.3428e-02, 2.3495e-02, 2.3445e-02, 2.3437e-02, 2.3505e-02,
2.3338e-02, 2.3517e-0
|
https://github.com/pytorch/ao/issues/1397
|
open
|
[] | 2024-12-10T10:05:42Z
| 2024-12-11T06:41:30Z
| null |
Lenan22
|
pytorch/torchtitan
| 724
|
Issue: Loss Discrepancy Between FSDP1 and FSDP2 with AdamW Optimizer
|
We observed a loss discrepancy between FSDP1 and FSDP2 while training with the AdamW optimizer. Are you aware of any known issues with the AdamW optimizer and FSDP2 that might contribute to this behavior?
|
https://github.com/pytorch/torchtitan/issues/724
|
closed
|
[
"question"
] | 2024-12-09T19:45:45Z
| 2025-08-21T02:57:39Z
| null |
Teng-xu
|
pytorch/torchtitan
| 723
|
Context parallelism understanding
|
Hi
We have recently been testing the CP parallelism strategy in a 2D configuration: FSDP + CP.
From what we understand, CP shards the sequence dimension, while the attention kernel still needs to attend over the whole sequence, which means each GPU has to gather the sharded KV cache from the other ranks using some collective communication kernels.
However, we didn't see any such kernels in the trace; we only found the All-Gather for parameters in the pre-forward phase.

Is there anything we misunderstood? Please add your comments to help our understanding.
Thanks.
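To make the expectation concrete, here is a conceptual sketch (not torchtitan's actual implementation) of how a sequence-sharded attention could gather KV with an explicit all-gather. Real CP implementations often overlap this exchange with the attention computation instead of issuing one standalone collective, which can make it harder to spot in a trace:
```python
import torch
import torch.distributed as dist

def gather_kv_for_attention(k_local, v_local, cp_group=None):
    # Each CP rank holds a shard of the sequence; the naive approach is to
    # all-gather the K/V shards so every rank can attend over the full sequence.
    world = dist.get_world_size(cp_group)
    k_parts = [torch.empty_like(k_local) for _ in range(world)]
    v_parts = [torch.empty_like(v_local) for _ in range(world)]
    dist.all_gather(k_parts, k_local, group=cp_group)
    dist.all_gather(v_parts, v_local, group=cp_group)
    # Concatenate along the sequence dimension (assumed here to be dim=1: [B, S, H, D]).
    return torch.cat(k_parts, dim=1), torch.cat(v_parts, dim=1)
```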
|
https://github.com/pytorch/torchtitan/issues/723
|
open
|
[
"question",
"module: context parallel"
] | 2024-12-09T03:07:27Z
| 2024-12-20T21:45:48Z
| null |
jinsong-mao
|
huggingface/transformers
| 35,152
|
how to load the weight of decoder.embed_tokens.weight seperately from the shared weight?
|
### System Info
- `transformers` version: 4.46.3
- Platform: Linux-6.8.0-49-generic-x86_64-with-glibc2.17
- Python version: 3.8.20
- Huggingface_hub version: 0.26.2
- Safetensors version: 0.4.5
- Accelerate version: 1.0.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.4.1+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA RTX A4000
### Who can help?
@ArthurZucker @muellerzr @SunMarc
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I am using T5 1.1 on a seq2seq task that has a source vocab size of 59744 but a target vocab size of only 32. To make the softmax compute each token's probability and score over exactly 32 candidates, I set `model.lm_head` as below:
```python
model.lm_head = torch.nn.Linear(config.d_model, target_vocab_size, bias=False)  # target_vocab_size = 32
```
Everything looks good while the model is training. But after training, I load the safetensors as below:
```python
checkpoint_path = "./resultstest/checkpoint-100"
config = T5Config.from_pretrained("./onlychangelmhead/checkpoint-100/config.json")
model = T5ForConditionalGeneration(config)
model.lm_head = torch.nn.Linear(config.d_model, target_vocab_size, bias=False)
state_dict = load_file(f"{checkpoint_path}/model.safetensors")
model.load_state_dict(state_dict, strict=True)
```
And the issue comes as:
```
Traceback (most recent call last):
File "bs_based_on_massdic_failed.py", line 110, in <module>
model.load_state_dict(state_dict, strict=True)
File "/home/zhi/anaconda3/envs/peptide_completion/lib/python3.8/site-packages/torch/nn/modules/module.py", line 2215, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for T5ForConditionalGeneration:
Missing key(s) in state_dict: "encoder.embed_tokens.weight".
size mismatch for decoder.embed_tokens.weight: copying a param with shape torch.Size([32, 768]) from checkpoint, the shape in current model is torch.Size([59744, 768]).
```
When I print the shapes stored in the safetensors file, `lm_head.weight` looks fine with size `[32, 768]`, but there is no `decoder.embed_tokens` entry, or (I guess) the way I load the safetensors cannot restore the embed_tokens weight from the shared weight properly. So how can I fix this so that the model uses my target vocab size of 32 rather than the source vocab size? I would really appreciate a reply. Best.
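As a diagnostic sketch (reusing the checkpoint path and `model` from the snippet above; this is not a fix for the weight-tying question itself), it can help to list exactly which tensors the checkpoint contains and which keys the model expects:
```python
from safetensors.torch import load_file

state_dict = load_file("./resultstest/checkpoint-100/model.safetensors")
for name, tensor in state_dict.items():
    print(name, tuple(tensor.shape))

# Compare against what the freshly built model expects.
missing_in_ckpt = sorted(set(model.state_dict().keys()) - set(state_dict.keys()))
print("keys the model expects but the checkpoint lacks:", missing_in_ckpt)
```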
### Expected behavior
Fine-tune T5 1.1 on a task with a target vocab size of 32, and load the safetensors properly.
|
https://github.com/huggingface/transformers/issues/35152
|
closed
|
[
"bug"
] | 2024-12-08T15:46:55Z
| 2025-01-22T08:03:52Z
| null |
SoSongzhi
|
pytorch/ao
| 1,390
|
AO and Automated Mixed Precision
|
Can we clarify in the README what the best practices are for using ao at inference with a model/checkpoint trained with PyTorch AMP?
|
https://github.com/pytorch/ao/issues/1390
|
open
|
[
"topic: documentation",
"question"
] | 2024-12-08T13:52:15Z
| 2025-03-17T20:46:24Z
| null |
bhack
|
huggingface/datasets
| 7,311
|
How to get the original dataset name with username?
|
### Feature request
This is related to Ray Data (https://github.com/ray-project/ray/issues/49008), which needs to check, right after `load_dataset`, whether the dataset is the original one and whether Parquet files are already available on the HF Hub.
The current workaround is to get the dataset name, config, and split, then call `load_dataset` again and check the fingerprint. However, this cannot recover the correct dataset name if the name includes a username. So how can I get the dataset name with the username prefix, or is there another way to query whether a dataset is the original one with Parquet available?
@lhoestq
### Motivation
https://github.com/ray-project/ray/issues/49008
### Your contribution
Would like to fix that.
|
https://github.com/huggingface/datasets/issues/7311
|
open
|
[
"enhancement"
] | 2024-12-08T07:18:14Z
| 2025-01-09T10:48:02Z
| null |
npuichigo
|
huggingface/lerobot
| 555
|
To bulid my own policy, but have errors TypeError: '>' not supported between instances of 'int' and 'dict'
|
I extended the ACT policy in the LeRobot framework and created a new policy named myact. I mainly did the following:
- Created the my_act folder under the lerobot/common/policies/ path
- Created 'configuration_my_act.py' and 'modeling_my_act.py' in the my_act folder
- Created lerobot/configs/policy/myact.yaml, with `name: myact` set inside it
But when I'm done and run the following command, I get an error:
xvfb-run python lerobot/scripts/train.py \
hydra.run.dir=mypolicy/train/AlohaInsertion-v0\
policy=myact \
dataset_repo_id=lerobot/aloha_sim_insertion_human \
env=aloha \
env.task=AlohaInsertion-v0
INFO 2024-12-07 17:01:50 n/logger.py:106 Logs will be saved locally.
INFO 2024-12-07 17:01:50 ts/train.py:337 make_dataset
Fetching 56 files: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 56/56 [00:00<00:00, 9842.48it/s]
INFO 2024-12-07 17:01:56 ts/train.py:350 make_env
INFO 2024-12-07 17:01:56 /__init__.py:88 MUJOCO_GL is not set, so an OpenGL backend will be chosen automatically.
INFO 2024-12-07 17:01:57 /__init__.py:96 Successfully imported OpenGL backend: %s
INFO 2024-12-07 17:01:57 /__init__.py:31 MuJoCo library version is: %s
INFO 2024-12-07 17:02:03 ts/train.py:353 make_policy
Error executing job with overrides: ['policy=act', 'dataset_repo_id=lerobot/aloha_sim_insertion_human', 'env=aloha', 'env.task=AlohaInsertion-v0']
Traceback (most recent call last):
File "/root/autodl-tmp/lerobot/lerobot/scripts/train.py", line 677, in train_cli
train(
File "/root/autodl-tmp/lerobot/lerobot/scripts/train.py", line 354, in train
policy = make_policy(
File "/root/autodl-tmp/lerobot/lerobot/common/policies/factory.py", line 105, in make_policy
policy = policy_cls(policy_cfg, dataset_stats)
File "<string>", line 26, in __init__
File "/root/autodl-tmp/lerobot/lerobot/common/policies/act/configuration_act.py", line 158, in __post_init__
if self.n_action_steps > self.chunk_size:
TypeError: '>' not supported between instances of 'int' and 'dict'
Set the environment variable HYDRA_FULL_ERROR=1 for a complete stack trace.
I also hit this error when I ran LeRobot's built-in ACT policy. Do you know how to solve it? Thank you!
|
https://github.com/huggingface/lerobot/issues/555
|
closed
|
[
"enhancement",
"question"
] | 2024-12-07T09:10:35Z
| 2025-04-07T16:08:38Z
| null |
zhouzhq2021
|
huggingface/diffusers
| 10,144
|
Why mochi diffusers video output is worse than mochi official code?
|
### Describe the bug
The quality of video is worse.
### Reproduction
Run the code with official prompt
### Logs
_No response_
### System Info
diffusers@main
### Who can help?
@a-r-r-o-w @yiyixuxu
|
https://github.com/huggingface/diffusers/issues/10144
|
closed
|
[
"bug",
"stale"
] | 2024-12-07T05:53:57Z
| 2025-01-07T15:38:38Z
| 10
|
foreverpiano
|
huggingface/peft
| 2,264
|
Guidance Needed on Two-Stage Fine-Tuning with LoRA(SFT and DPO) for Model Adaptation
|
# I am planning to perform a two-stage fine-tuning process and need some guidance on how to proceed.
## First Stage
1. Load Base Model: I start by loading the base model, qwen1.5 32B.
2. Apply LoRA Fine-Tuning: I then apply LoRA fine-tuning to this base model and obtain a new model state.
3. Save Adapter Model: This fine-tuned model state is saved as adapter_model.safetensors, named qwen1.5_lora_sft.
## Second Stage
1. Load the Model from the First Stage: I load both qwen1.5 32B and qwen1.5_lora_sft. It's crucial that qwen1.5_lora_sft integrates correctly with the base model qwen1.5 32B.
2. Continue Fine-Tuning: On this model, which already includes the LoRA adapter, I continue to apply LoRA and DPO for further fine-tuning.
3. Save the New Adapter Model: After fine-tuning, I need to save the new adapter state, which includes adjustments from both the original LoRA and the new DPO.
## My questions are:
1. How do I load the base model (qwen1.5 32B) together with the LoRA module qwen1.5_lora_sft?
2. How do I continue fine-tuning from the first-stage model, and after DPO training save the LoRA model so that it consists of the base model (qwen1.5 32B) plus only one qwen1.5_lora_sft_dpo module (adapter_model_sft_dpo.safetensors)?
## What I had now
1. base model, qwen1.5 32B model path
2. qwen1.5_lora_sft module path: adapter_model.safetensors
## What I Need
1. qwen1.5_lora_sft_dpo module: adapter_model_sft_dpo.safetensors
## In short, the workflow is
train a base_model to get LoRA_weights_1
base_model_1 = merge(base_model and LoRA_weights_1)
train base_model_1 to get LoRA_weights_2
base_model_2 = merge(base_model_1 and LoRA_weights_2)
How do I split base_model_2 back into base_model and a combined LoRA_weights_1_2?
Thanks!
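A minimal sketch of the stage-2 loading step using standard PEFT APIs (the model id and adapter path are placeholders; this shows how to start DPO from base + SFT LoRA, not how to re-split the final merged model):
```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen1.5-32B")        # assumed model id
model = PeftModel.from_pretrained(base, "path/to/qwen1.5_lora_sft")    # dir with adapter_model.safetensors
model = model.merge_and_unload()   # weights now contain base + SFT LoRA

# A fresh LoRA config can then be attached to `model` for DPO training; saving that
# adapter yields an adapter that applies on top of the merged (base + SFT) weights,
# not on top of the original base model.
```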
|
https://github.com/huggingface/peft/issues/2264
|
closed
|
[] | 2024-12-06T13:35:20Z
| 2025-01-06T10:50:09Z
| 5
|
none0663
|
huggingface/transformers
| 35,118
|
How to load local transformers?
|
transformers==4.47.0.dev0
I want to use my local copy of transformers. I tried `sys.path.insert(0, 'xxx/transformers/src')` and set `PYTHONPATH=xxx/transformers/src`, but neither works.
Please tell me why.
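For reference, a minimal sketch of the `sys.path` approach (the clone path is a placeholder); the key detail is that the insert has to happen before transformers is imported anywhere in the process, including indirectly by other libraries:
```python
import sys

sys.path.insert(0, "/path/to/transformers/src")  # placeholder path to the local clone

import transformers
print(transformers.__version__)
print(transformers.__file__)  # should point into the local clone if the override took effect
```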
|
https://github.com/huggingface/transformers/issues/35118
|
closed
|
[] | 2024-12-06T10:07:57Z
| 2024-12-12T04:05:08Z
| null |
yiyexy
|
pytorch/xla
| 8,466
|
Useful Q8 Kernels For TPUs/XLA Support
|
## ❓ Questions and Help
I'm looking at the repo [KONAKONA666/q8_kernels](https://github.com/KONAKONA666/q8_kernels).
The Q8 functions [are located here](https://github.com/KONAKONA666/q8_kernels/tree/main/q8_kernels/functional) and the [CUDA kernels here](https://github.com/KONAKONA666/q8_kernels/tree/main/csrc). I was curious whether any of these have already been implemented or integrated elsewhere? I'm not particularly familiar with porting custom kernels.
|
https://github.com/pytorch/xla/issues/8466
|
open
|
[
"question",
"fp8"
] | 2024-12-06T07:01:59Z
| 2025-02-13T15:17:36Z
| null |
radna0
|
huggingface/lerobot
| 552
|
Rounding to int32 makes robot less precise. Do we have a solid reason for doing this?
|
### System Info
```Shell
Latest LeRobot. MacOS
```
### Information
- [X] One of the scripts in the examples/ folder of LeRobot
- [ ] My own task or dataset (give details below)
### Reproduction
1) Run teleoperation
2) Measure precision with and without rounding.
at lerobot/common/robot_devices/robots/manipulator.py

### Expected behavior
Smooth movement
|
https://github.com/huggingface/lerobot/issues/552
|
closed
|
[
"bug",
"question",
"stale"
] | 2024-12-05T16:31:49Z
| 2025-10-08T13:08:50Z
| null |
1g0rrr
|
huggingface/tokenizers
| 1,696
|
How to determine the splicing logic in post_processor based on the sentence to be tokenized?
|
For example,
```python
def post_processor(self, token_ids_0, token_ids_1=None):
if "cls" in token_ids_0:
return processors.TemplateProcessing(
single=f"{cls} $A {sep}",
pair=f"{cls} $A {sep} $B {cls}",
special_tokens=[
(cls, cls_token_id),
(sep, sep_token_id),
],
)
else:
return processors.TemplateProcessing(
single=f"{sep} $A {cls}",
pair=f"{sep} $A {cls} $B {sep}",
special_tokens=[
(cls, cls_token_id),
(sep, sep_token_id),
],
)
```
Thx~
|
https://github.com/huggingface/tokenizers/issues/1696
|
open
|
[] | 2024-12-05T14:05:13Z
| 2024-12-05T14:05:13Z
| null |
gongel
|
huggingface/peft
| 2,262
|
Could you provide example code for AdaLoRA finetuning decoder-only model?
|
### Feature request
The current [example of AdaLoRA](https://github.com/huggingface/peft/blob/b2922565c4c4445706a87cf7b988c828b451fe61/examples/conditional_generation/peft_adalora_seq2seq.py) uses **facebook/bart-base**. Since AdaLoRA requires hand-crafted loss calculations, could you give me some hints on how this can be done for a decoder-only LM (e.g., Llama-Instruct)?
Specifically, I would like to mask out the loss on the instruction part or system prompt, focusing only on the assistant response.
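For the masking part specifically (independent of AdaLoRA), a common pattern is to set the label ids of the prompt tokens to -100, which `torch.nn.CrossEntropyLoss` ignores by default; a minimal sketch with placeholder token ids:
```python
import torch

prompt_ids = [1, 2, 3, 4]      # placeholder: tokenized instruction / system prompt
response_ids = [5, 6, 7]       # placeholder: tokenized assistant response

input_ids = torch.tensor([prompt_ids + response_ids])
labels = input_ids.clone()
labels[:, : len(prompt_ids)] = -100   # ignored by the loss, so only the response is scored
# loss = model(input_ids=input_ids, labels=labels).loss
```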
### Motivation
AdaLoRA requires hand-crafted loss calculations, which become complex when you want to mask out system/instruction tokens.
### Your contribution
N.A.
|
https://github.com/huggingface/peft/issues/2262
|
closed
|
[] | 2024-12-05T12:03:31Z
| 2025-01-18T15:03:29Z
| 4
|
SpeeeedLee
|
pytorch/xla
| 8,454
|
how to auto convert back to bfloat16 after conv1 and conv2
|
## ❓ Questions and Help
I have a tensor with dtype torch.bfloat16 on a Kaggle v3-8; after the conv1 and conv2 operations the return type is torch.float32. Is there any way (an environment variable or similar) to convert the return type back to torch.bfloat16?
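A manual workaround sketch, in case no environment variable covers this: cast the output back explicitly (layer shapes are placeholders, and this does not address why the upcast happens):
```python
import torch
import torch_xla.core.xla_model as xm

device = xm.xla_device()
conv1 = torch.nn.Conv2d(3, 16, kernel_size=3).to(device, torch.bfloat16)
x = torch.randn(1, 3, 64, 64, dtype=torch.bfloat16, device=device)

y = conv1(x)
y = y.to(torch.bfloat16)  # explicit cast back; a no-op if the output is already bf16
print(y.dtype)
```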
|
https://github.com/pytorch/xla/issues/8454
|
open
|
[
"question"
] | 2024-12-05T09:59:35Z
| 2025-02-13T14:35:46Z
| null |
ghost
|
huggingface/diffusers
| 10,129
|
Does StableDiffusion3 have an image2image pipeline with ControlNet?
|
I want to use `ControlNet` with `StableDiffusion3`, providing a prompt, an original image, and a control image as inputs. However, I found that the `StableDiffusion3ControlNetPipeline` only supports prompts and control images as inputs. The `StableDiffusionControlNetImg2ImgPipeline` allows for providing a prompt, an original image, and a control image simultaneously, but it is not compatible with the `StableDiffusion3` model. Is there a `StableDiffusion3ControlNetImg2ImgPipeline` available?
|
https://github.com/huggingface/diffusers/issues/10129
|
closed
|
[
"New pipeline/model",
"contributions-welcome"
] | 2024-12-05T09:40:03Z
| 2025-01-02T20:02:33Z
| 1
|
ZHJ19970917
|
huggingface/diffusers
| 10,128
|
Is there any plan to support fastercache?
|
Expect to support fastercache, https://github.com/Vchitect/FasterCache
|
https://github.com/huggingface/diffusers/issues/10128
|
closed
|
[
"wip",
"performance"
] | 2024-12-05T09:11:19Z
| 2025-03-21T04:05:06Z
| 4
|
songh11
|
huggingface/datasets
| 7,306
|
Creating new dataset from list loses information. (Audio Information Lost - either Datatype or Values).
|
### Describe the bug
When creating a dataset from a list of datapoints, information about the individual items is lost.
Specifically, when creating a dataset from a list of datapoints (taken from another dataset), either the datatype is lost or the values are lost. See the examples below.
-> What is the best way to create a dataset from a list of datapoints?
---
e.g.:
**When running this code:**
```python
from datasets import load_dataset, Dataset
commonvoice_data = load_dataset("mozilla-foundation/common_voice_17_0", "it", split="test", streaming=True)
datapoint = next(iter(commonvoice_data))
out = [datapoint]
new_data = Dataset.from_list(out) #this loses datatype information
new_data2= Dataset.from_list(out,features=commonvoice_data.features) #this loses value information
```
**We get the following**:
---
1. `datapoint`: (the original datapoint)
```
'audio': {'path': 'it_test_0/common_voice_it_23606167.mp3', 'array': array([0.00000000e+00, 0.00000000e+00, 0.00000000e+00, ...,
2.21619011e-05, 2.72628222e-05, 0.00000000e+00]), 'sampling_rate': 48000}
```
Original Dataset Features:
```
>>> commonvoice_data.features
'audio': Audio(sampling_rate=48000, mono=True, decode=True, id=None)
```
- Here we see column "audio", has the proper values (both `path` & and `array`) and has the correct datatype (Audio).
----
2. new_data[0]:
```
# Cannot be printed (as it prints the entire array).
```
New Dataset 1 Features:
```
>>> new_data.features
'audio': {'array': Sequence(feature=Value(dtype='float64', id=None), length=-1, id=None), 'path': Value(dtype='string', id=None), 'sampling_rate': Value(dtype='int64', id=None)}
```
- Here we see that the column "audio", has the correct values, but is not the Audio datatype anymore.
---
3. new_data2[0]:
```
'audio': {'path': None, 'array': array([0., 0., 0., ..., 0., 0., 0.]), 'sampling_rate': 48000},
```
New Dataset 2 Features:
```
>>> new_data2.features
'audio': Audio(sampling_rate=48000, mono=True, decode=True, id=None),
```
- Here we see that the column "audio", has the correct datatype, but all the array & path values were lost!
### Steps to reproduce the bug
## Run:
```python
from datasets import load_dataset, Dataset
commonvoice_data = load_dataset("mozilla-foundation/common_voice_17_0", "it", split="test", streaming=True)
datapoint = next(iter(commonvoice_data))
out = [datapoint]
new_data = Dataset.from_list(out) #this loses datatype information
new_data2= Dataset.from_list(out,features=commonvoice_data.features) #this loses value information
```
### Expected behavior
## Expected:
```datapoint == new_data[0]```
AND
```datapoint == new_data2[0]```
### Environment info
- `datasets` version: 3.1.0
- Platform: Linux-6.2.0-37-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- `huggingface_hub` version: 0.26.2
- PyArrow version: 15.0.2
- Pandas version: 2.2.2
- `fsspec` version: 2024.3.1
|
https://github.com/huggingface/datasets/issues/7306
|
open
|
[] | 2024-12-05T09:07:53Z
| 2024-12-05T09:09:38Z
| 0
|
ai-nikolai
|
huggingface/lerobot
| 549
|
Low accuracy for act policy on pushT env
|
The highest success rate I get is 44%, with n_decoder_layers=7. Are there any other tricks for this?
|
https://github.com/huggingface/lerobot/issues/549
|
closed
|
[
"question",
"policies",
"stale"
] | 2024-12-05T06:18:06Z
| 2025-10-19T02:32:37Z
| null |
KongCDY
|
huggingface/Google-Cloud-Containers
| 128
|
Can we use Multi-LORA CPU
|
Hi,
I'm currently following this doc: https://huggingface.co/docs/google-cloud/en/examples/gke-tgi-multi-lora-deployment
After getting the error "Can't scale up due to exceeded quota" and doing some research, I suspect that my free trial ($300) account is not allowed to increase its GPU quota (even after upgrading the account out of the trial, I would still have to contact sales).
Is there any way I can run this on CPU instead?
Thank you
|
https://github.com/huggingface/Google-Cloud-Containers/issues/128
|
open
|
[
"question"
] | 2024-12-05T05:42:51Z
| 2024-12-12T10:06:43Z
| null |
AndrewNgo-ini
|
huggingface/peft
| 2,260
|
Is it possible to support the transformer engine when using Lora in Megatron?
|
### Feature request
I am currently using the Megatron framework and want to use LoRA for training. I saw that Megatron layers are supported in https://github.com/huggingface/peft/blob/main/src/peft/tuners/lora/tp_layer.py, where RowParallelLinear and ColumnParallelLinear are adapted. However, if I use Transformer Engine, the corresponding TELayerNormColumnParallelLinear and TERowParallelLinear are not adapted.
### Motivation
This would improve LoRA support for models built with the Megatron framework.
### Your contribution
I don't have a PR.
|
https://github.com/huggingface/peft/issues/2260
|
closed
|
[] | 2024-12-05T03:24:15Z
| 2025-01-12T15:03:29Z
| 3
|
liulong11
|
huggingface/diffusers
| 10,120
|
memory consumption of dreambooth+SD3
|
Hi, I am running DreamBooth SD3 on a single A100 GPU. I reduced the resolution to 256, but it still needs more memory than a single A100 has. Is this huge memory consumption normal?
```
!python train_dreambooth_sd3.py \
--pretrained_model_name_or_path="stabilityai/stable-diffusion-3-medium-diffusers" \
--instance_data_dir="erhu" \
--output_dir="trained-sd3" \
--mixed_precision="fp16" \
--instance_prompt="a photo of erhu" \
--resolution=256 \
--train_batch_size=1 \
--gradient_accumulation_steps=4 \
--learning_rate=1e-4 \
--report_to="wandb" \
--lr_scheduler="constant" \
--lr_warmup_steps=0 \
--max_train_steps=300 \
--validation_prompt="A photo of erhu on the grass" \
--validation_epochs=25 \
--use_8bit_adam \
--seed="0" \
--push_to_hub
```
`torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 36.00 MiB. GPU 0 has a total capacity of 39.56 GiB of which 2.81 MiB is free. Process 16368 has 39.55 GiB memory in use. Of the allocated memory 38.05 GiB is allocated by PyTorch, and 1021.72 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management `
Thanks
|
https://github.com/huggingface/diffusers/issues/10120
|
closed
|
[
"bug",
"stale",
"training"
] | 2024-12-04T19:39:04Z
| 2025-01-27T01:30:18Z
| 5
|
KolvacS-W
|
pytorch/xla
| 8,451
|
Is it possible to execute jax code in torch_xla?
|
## Is it possible to execute jax code in torch_xla?
After reading the docs, I realized that custom kernels written with JAX Pallas can be used as kernels. I wonder whether it is possible to execute JAX code in torch_xla more generally. It seems `torch_xla._XLAC._xla_tpu_custom_call` only accepts custom kernels. Is there a way to execute JAX IR code?
|
https://github.com/pytorch/xla/issues/8451
|
closed
|
[] | 2024-12-04T17:54:55Z
| 2024-12-08T12:24:51Z
| 2
|
lime-j
|
huggingface/diffusers
| 10,112
|
Detail-Daemon diffusers
|
**Describe the solution you'd like.**
Detail-Daemon: https://github.com/Jonseed/ComfyUI-Detail-Daemon
How can Detail-Daemon be implemented in diffusers, as seen in https://github.com/Jonseed/ComfyUI-Detail-Daemon? Will there be an official component for this in the future?
|
https://github.com/huggingface/diffusers/issues/10112
|
open
|
[
"wip",
"consider-for-modular-diffusers"
] | 2024-12-04T09:14:39Z
| 2025-01-03T18:01:24Z
| 10
|
NicholasCao
|
pytorch/gloo
| 399
|
How to specify ai_family explicitly
|
We note that Gloo supports IPv4 and IPv6 by setting ai_family = AF_UNSPEC and deciding the real family at runtime. However, in our cluster we got an exception about mismatching ai_family values. Our cluster contains both IPv4 and IPv6 network stacks. How can we specify ai_family explicitly?
We run PyTorch and get the exception below.
RuntimeError: [enforce fail at ../third_party/gloo/gloo/transport/tcp/device.cc:276] ss1.ss_family == ss2.ss_family. 2 vs 10
|
https://github.com/pytorch/gloo/issues/399
|
open
|
[] | 2024-12-04T08:30:50Z
| 2025-02-10T09:06:52Z
| null |
NEWPLAN
|
huggingface/lerobot
| 547
|
How to make a custom LeRobotDataset with v2?
|
Hi folks, thanks for the amazing open source work!
I am trying to make a custom dataset to use with the LeRobotDataset format.
The README says to copy the example scripts here, which I've done, and I have a working formatting script of my own.
https://github.com/huggingface/lerobot/blob/8e7d6970eaf5a64b8af6ec45586d201b8ca9ef16/README.md?plain=1#L323
but when it comes time to create the dataset, the `push_dataset_to_hub.py` uses `LeRobotDataset.from_preloaded` which is no longer supported in [dataset V2](https://github.com/huggingface/lerobot/pull/461)
https://github.com/huggingface/lerobot/blob/8e7d6970eaf5a64b8af6ec45586d201b8ca9ef16/lerobot/scripts/push_dataset_to_hub.py#L216
So I'm just wondering what the proper way of loading your own custom local dataset is?
Thank you in advance for your help!
|
https://github.com/huggingface/lerobot/issues/547
|
closed
|
[
"question",
"dataset",
"stale"
] | 2024-12-04T08:00:19Z
| 2025-10-08T08:28:34Z
| null |
alik-git
|
huggingface/lerobot
| 545
|
Poor success rate in complex scenarios
|
Hi, I used a Moss robot to experiment with and train the ACT policy. With a single Lego piece it can finish the grasping task at a high success rate after recording 50+ episodes with different pose and location variants, but generalization to multiple pieces at random locations is not promising.
I then started to add complexity (for example, 6 pieces with different colors, as in the picture below), placed the Lego pieces somewhat randomly, and recorded one continuous episode until all the pieces were grabbed (rather than one piece per episode); furthermore, the pieces were recorded in a fixed order.

Here is what I found:
1. The trained policy does not work if the gripping sequence is randomized; in other words, it has to keep a fixed spatial order, e.g. from upper left to lower right.
2. The trained policy does not work if the [location, color, pose] combination was not seen in the training dataset, especially location combinations.
3. At first I suspected that the fixed iPhone and Mac cameras alone could not provide enough depth perception, so I bought a wide-angle USB camera and mounted it on the gripper; the success rate did not improve.

4. Enlarging the dataset to 120+ episodes did not produce an obvious change.
I was wondering how to improve on this task: is the method I used to record data wrong, or is the generalization ability of ACT limited?
Looking forward to hearing answers or experiences.
|
https://github.com/huggingface/lerobot/issues/545
|
closed
|
[
"question",
"policies",
"stale"
] | 2024-12-04T06:20:31Z
| 2025-10-08T08:28:45Z
| null |
mydhui
|
huggingface/frp
| 14
|
where is the code of frpc-gradio-0.3
|
https://github.com/huggingface/frp/issues/14
|
closed
|
[] | 2024-12-04T05:37:34Z
| 2025-03-11T00:55:39Z
| null |
BoyuanJiang
|
|
pytorch/tutorials
| 3,174
|
💡 [REQUEST] - Tutorial for exporting popular class of models, showing the unique challenges faced and how to address them
|
### 🚀 Describe the improvement or the new tutorial
The gaming community cares about certain classes of models like pose estimation, instance segmentation, video classification. When we try to export OSS implementations of these models, we run into unique challenges with `torch.export`
Currently, we have tutorials showing usage of export and talking about the core export-related concepts to keep in mind with simple examples. We also have `ExportDB` which has information on unsupported constructs with simple examples. However, practically, when running export on many models, its not very clear how does once go about addressing the issues.
This tutorial aims to do the reverse. Pick 4 models which are popular, try to export them, show the errors we run into and how do we solve them. The problems being solved are generic enough to be applicable to a range of models.
### Existing tutorials on this topic
https://pytorch.org/docs/stable/export.html
https://pytorch.org/tutorials/intermediate/torch_export_tutorial.html
https://pytorch.org/docs/stable/generated/exportdb/index.html
### Additional context
https://github.com/pytorch/pytorch/issues/138111
https://github.com/pytorch/pytorch/issues/138120
_No response_
cc @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4
|
https://github.com/pytorch/tutorials/issues/3174
|
closed
|
[
"module: export"
] | 2024-12-03T20:35:42Z
| 2025-01-21T18:22:54Z
| null |
agunapal
|
pytorch/xla
| 8,430
|
Request for Wheel with Older GLIBC
|
## ❓ Questions and Help
Hi, I have installed torch-xla from https://storage.googleapis.com/pytorch-xla-releases/wheels/cuda/12.1/torch_xla-2.5.0-cp311-cp311-manylinux_2_28_x86_64.whl. "manylinux_2_28" indicates that it is compiled with GLIBC 2.28. However, when I installed and tried to import torch_xla, it said GLIBC 2.29 was not found. Upgrading GLIBC on the server is not possible. I kindly request any help on this. It will be very helpful if there is a pre-compiled wheel that can run on a server with GLIBC 2.28 (e.g., RedHat 8)
I have pytorch-2.5.1-cu118 installed in python 3.11. My system is RHEL 8.
|
https://github.com/pytorch/xla/issues/8430
|
open
|
[
"question",
"build"
] | 2024-12-03T17:50:24Z
| 2025-02-13T14:47:34Z
| null |
ASU-ScopeX-Lab
|