| repo | number | title | body | url | state | labels | created_at | updated_at | comments | user |
|---|---|---|---|---|---|---|---|---|---|---|
huggingface/transformers
| 33,489
|
passing past_key_values as a tuple is deprecated, but unclear how to resolve
|
### System Info
Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points.
- `transformers` version: 4.44.2
- Platform: Linux-5.4.0-167-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.24.7
- Safetensors version: 0.4.5
- Accelerate version: 0.34.2
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.1+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: NA
- Using GPU in script?: yes
- GPU type: NVIDIA A40
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import SFTTrainer, SFTConfig
from accelerate import Accelerator
from peft import LoraConfig
import math, os, random
from datetime import datetime
# Select rows to train on
initial_rows = 50000
annealing_rows = 10000
eval_rows = 10000 # Only 10000 rows for evaluation
batch_size = 8
ga = 4
learning_rate=1e-3
def setup_environment():
os.environ['WANDB_DISABLED'] = 'true'
return Accelerator()
def load_model_and_tokenizer():
model_name = "Trelis/80M-0.0090-cosmopedia"
model_kwargs = {
"torch_dtype": torch.bfloat16,
}
tokenizer = AutoTokenizer.from_pretrained("HuggingFaceTB/SmolLM-360M-Instruct")
model = AutoModelForCausalLM.from_pretrained(model_name, **model_kwargs)
return model, tokenizer
def load_and_preprocess_train_dataset(start_idx, num_rows):
dataset = load_dataset("TIGER-Lab/WebInstructSub", split="train",
streaming=True
)
dataset = dataset.skip(start_idx).take(num_rows)
def format_instruction(example):
return {
"messages": [
{"role": "user", "content": example["question"]},
{"role": "assistant", "content": example["answer"]}
]
}
formatted_dataset = dataset.map(format_instruction)
return formatted_dataset
def format_instruction_for_trainer(example):
tokenizer = AutoTokenizer.from_pretrained("HuggingFaceTB/SmolLM-360M-Instruct")
return tokenizer.apply_chat_template(
example["messages"],
truncation=True,
padding="max_length",
max_length=2048,
tokenize=False,
)
def load_and_preprocess_eval_dataset():
dataset = load_dataset("TIGER-Lab/WebInstructSub", split="train")
# Get the total number of rows in the dataset
total_rows = len(dataset)
# Generate a list of random indices
random_indices = random.sample(range(total_rows), eval_rows)
# Select the random rows
dataset = dataset.select(random_indices)
def format_instruction(example):
return {
"messages": [
{"role": "user", "content": example["question"]},
{"role": "assistant", "content": example["answer"]}
]
}
formatted_dataset = dataset.map(format_instruction, remove_columns=dataset.column_names)
return formatted_dataset
def main():
accelerator = setup_environment()
model, tokenizer = load_model_and_tokenizer()
print(model.device)
# Combined training dataset (streaming)
total_rows = initial_rows + annealing_rows
train_dataset = load_and_preprocess_train_dataset(0, total_rows)
# Evaluation dataset (non-streaming, last 1000 rows)
eval_dataset = load_and_preprocess_eval_dataset()
# Calculate steps
num_epochs = 1
total_steps = (total_rows * num_epochs) // (batch_size * ga)
initial_steps = (initial_rows * num_epochs) // (batch_size * ga)
timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
run_name = f"SFT-{total_rows}rows-lr{learning_rate}-{timestamp}"
training_args = SFTConfig(
output_dir=f"./Trelis_local/80M-0.015-cosmopedia-SFT-{run_name}",
run_name=run_name,
logging_dir=f"./logs/{run_name}",
eval_strategy="steps",
save_strategy="steps",
report_to="tensorboard",
num_train_epochs=num_epochs,
per_device_train_batch_size=batch_size,
per_device_eval_batch_size=batch_size,
warmup_steps=20,
logging_steps=int(total_steps * 0.1),
eval_steps=int(total_steps * 0.1),
save_steps=int(total_steps * 0.1),
learning_rate=learning_rate,
bf16=True,
max_steps=total_steps,
gra
|
https://github.com/huggingface/transformers/issues/33489
|
closed
|
[
"bug"
] | 2024-09-14T13:58:18Z
| 2025-11-29T04:50:43Z
| null |
RonanKMcGovern
|
pytorch/PiPPy
| 1,142
|
How to train a model with pippy
|
It seems that the examples here are all for inference; where are the examples for training?
|
https://github.com/pytorch/PiPPy/issues/1142
|
open
|
[] | 2024-09-14T09:27:38Z
| 2024-11-20T07:18:01Z
| null |
sunkun1997
|
pytorch/data
| 1,317
|
StatefulDataloader is slower than Dataloader. Is there any best practice for StatefulDataloader?
|
### 📚 The doc issue
Hello,
Thank you for your awesome implementation of StatefulDataloader.
I compared the speed of Dataloader and StatefulDataloader, and StatefulDataloader is much slower. For example, Dataloader costs 10 ms per iteration, but StatefulDataloader costs about 2 s per iteration.
Is there any best practice for StatefulDataloader?
cc @andrewkho
### Suggest a potential alternative/fix
_No response_
|
https://github.com/meta-pytorch/data/issues/1317
|
closed
|
[] | 2024-09-13T09:43:45Z
| 2024-09-13T09:50:08Z
| 1
|
by2101
|
pytorch/torchtitan
| 577
|
DDP (replicate) + TP?
|
Currently, when there are two device meshes (`tp` and `dp`), torchtitan chooses FSDP as the **only** backend for DP. Ref:
https://github.com/pytorch/torchtitan/blob/d2a4904f58accc683c17c66a360026cb3c8109af/torchtitan/parallelisms/parallelize_llama.py#L97-L98
However, `replicate` should support a >1D mesh and be usable with TP enabled. [Ref](https://github.com/pytorch/pytorch/blob/7dc1788396fc9e2860c0c236e0c0e108e96b83c8/torch/distributed/_composable/replicate.py#L218-L237).
**Q1:** Why does torchtitan not support DDP (replicate) + TP? Is it only an implementation choice?
I have [handwritten DDP + TP in torchtitan](https://github.com/pytorch/torchtitan/compare/main...yzs981130:torchtitan:yzs/ddp_tp) and surprisingly found that the loss never goes down. It seems there are no gradients after `loss.backward()`.

To reproduce, use the branch above and run `run_llama_train.sh` on an 8-GPU machine.
**Q2:** Is it a bug or an intended feature that DDP+TP is not used, and that results in missing gradients?
And collect_env:
```
Collecting environment information...
PyTorch version: 2.5.0.dev20240903+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Debian GNU/Linux 9.13 (stretch) (x86_64)
GCC version: (Debian 6.3.0-18+deb9u1) 6.3.0 20170516
Clang version: Could not collect
CMake version: version 3.21.2
Libc version: glibc-2.24
Python version: 3.10.14 (main, May 6 2024, 19:42:50) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.4.56.bsk.2-amd64-x86_64-with-glibc2.24
Is CUDA available: True
CUDA runtime version: 12.6.20
CUDA_MODULE_LOADING set to: LAZY
...
Nvidia driver version: 560.28.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
...
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] optree==0.12.1
[pip3] pytorch-triton==3.0.0+dedb7bdf33
[pip3] torch==2.5.0.dev20240903+cu118
[pip3] torchaudio==2.5.0.dev20240903+cu118
[pip3] torchdata==0.8.0
[pip3] torchvision==0.20.0.dev20240903+cu118
[conda] numpy 1.26.4 pypi_0 pypi
[conda] optree 0.12.1 pypi_0 pypi
[conda] pytorch-triton 3.0.0+dedb7bdf33 pypi_0 pypi
[conda] torch 2.5.0.dev20240903+cu118 pypi_0 pypi
[conda] torchaudio 2.5.0.dev20240903+cu118 pypi_0 pypi
[conda] torchdata 0.8.0 pypi_0 pypi
[conda] torchvision 0.20.0.dev20240903+cu118 pypi_0 pypi
```
P.S.
- Torch 2.4.0 shares the similar abnormal results
- Using `DistributedDataParallel` (class) rather than `replicate` behaves well
Thanks in advance!
|
https://github.com/pytorch/torchtitan/issues/577
|
closed
|
[
"question"
] | 2024-09-13T08:10:05Z
| 2025-03-19T21:22:12Z
| null |
yzs981130
|
pytorch/xla
| 8,000
|
[RFC] `torch_xla` Backward Compatibility Proposal
|
Recently, we started reducing the torch_xla API footprint in favor of the torch API to improve usability. This RFC focuses on the process for deprecating functions.
## Backward compatibility
We propose to offer a 6 months (2 releases) grace period before completely removing the deprecated API. As is shown in the graph below:
<img width="1052" alt="Screenshot 2024-09-12 at 1 47 03 PM" src="https://github.com/user-attachments/assets/9d91f784-8915-4908-9778-eed28a3ecd22">
Developers should follow the illustrated timeline with the following action:
- Before the version X-1 branch cut, developers check in the API changes and wrap the function to be deprecated with a warning message. The API to be deprecated should still be usable, but it should print the warning message once if any code calls into the function. In this way, starting from version X and version X+1, we should see the deprecation message that mentions `API xxx will be deprecated in release X+2`.
- Before the version X+2 branch cut, developers completely delete the deprecated functions along with the deprecation warning message.
If we follow this timeline, a deprecated API remains usable for two releases, during which we guarantee backward compatibility.
For each deprecated API, mention it in release X's release notes, including the suggested replacement APIs and when the old one will be removed completely.
## Actions to take for deprecation:
### Github actions for API deprecation
Before deprecating any API, create a GitHub issue that includes the following details:
- The function to be deprecated and whether we have a new API as a replacement.
- The proposed timeline before completely removing the function. We need to guarantee the deprecation message lasts for at least 2 releases.
### How to mark a function as deprecated
Here is an example of the code changes needed to deprecate `torch_xla/core/xla_model.py:xrt_world_size()` in favor of `torch_xla/runtime.py:world_size()`. There are two ways to mark a function as deprecated:
- Use deprecated function (full example [PR](https://github.com/pytorch/xla/pull/7679)):
```python
# In torch_xla/core/xla_model.py:
from torch_xla.experimental.deprecation import deprecated
from . import xla_model as this_module
xrt_world_size = deprecated(this_module, torch_xla.runtime.world_size,
'xrt_world_size() will be removed in release 2.7.')
# Remember to comment out or remove the original xrt_world_size in the file.
"""
def xrt_world_size():
...
"""
# In torch_xla/runtime.py
def world_size():
...
```
- Use @mark_deprecated decorator:
```python
# In torch_xla/core/xla_model.py:
from torch_xla.experimental.deprecation import mark_deprecated
@mark_deprecated(torch_xla.runtime.world_size, extra_msg='xrt_world_size() will be removed in release 2.7.')
def xrt_world_size():
...
# In torch_xla/runtime.py, define the new function:
def world_size():
...
```
|
https://github.com/pytorch/xla/issues/8000
|
open
|
[
"documentation",
"2.5 release"
] | 2024-09-12T20:58:55Z
| 2025-07-11T17:38:19Z
| 4
|
zpcore
|
huggingface/lerobot
| 436
|
Image storage format
|
I am quite interested in using `LeRobotDataset` for large scale training. I am interested to get more context on the options for storing images so I am aware of the implications this might have:
- Did you by chance study whether the mp4 video compression has any negative effects on image quality in terms of model performance (or are there any studies you based your decision on)?
- I see that at the moment lerobot supports storing images either in `.mp4` or `.pt`, but not in `arrow` or `parquet` format as many other HF datasets do. Is there any specific reason you didn't add support for `arrow` / `parquet`, which also provide memory mapping? Any idea how pytorch would compare to `arrow` / `parquet` when using datasets of 100s of millions of examples?
|
https://github.com/huggingface/lerobot/issues/436
|
closed
|
[
"question",
"dataset",
"stale"
] | 2024-09-12T16:38:21Z
| 2025-10-23T02:29:14Z
| null |
nikonikolov
|
huggingface/lerobot
| 435
|
Open-X datasets
|
Thanks for the great work! I am interested in converting more of the open-x datasets to `LeRobotDataset`.
- I was wondering if there was any particular reason the entire open-x wasn't added already, e.g. some difficulties you encountered with some specific datasets?
- Do you have any tips on where I should be extra careful when converting from RLDS to `LeRobotDataset`, or is it generally as easy as calling the conversion script?
|
https://github.com/huggingface/lerobot/issues/435
|
closed
|
[
"enhancement",
"question",
"dataset"
] | 2024-09-12T16:29:40Z
| 2025-10-08T08:25:55Z
| null |
nikonikolov
|
huggingface/lerobot
| 432
|
some questions about real world env
|
### System Info
```Shell
all software cfg match author's project
```
### Information
- [ ] One of the scripts in the examples/ folder of LeRobot
- [X] My own task or dataset (give details below)
### Reproduction
I am planning to control my own robot left-arm. I've almost figured out all the parts of the lerobot dataset, and now I want to make my own dataset in the style of aloha_sim_transfer_cube_human rather than the "korch ALOHA teleop hardware system".
My questions are:
1) Must I keep such a high fps (like 50) when collecting data from the camera and arm actions?
2) Actions come from human control of the arm, and state comes from read operations, but how should I set the time gap between action and state?
### Expected behavior
answers from anyone
|
https://github.com/huggingface/lerobot/issues/432
|
closed
|
[
"question"
] | 2024-09-12T09:53:23Z
| 2025-10-08T08:27:48Z
| null |
NNsauce
|
huggingface/chat-ui
| 1,463
|
Some bugs
|
## Bug description
There are several issues that I have with the site, such as slow performance both on mobile and PC. When I try to select specific parts of the text, it jumps back to the original message. Sometimes errors occur that force me to refresh the conversation. When I switch conversations, I have to switch all of my messages to the latest ones.
But I feel it's not my internet that's causing the issue but something on the website.
## Steps to reproduce
The performance is quite mixed, but on mobile it is unusable (Samsung A40).
Try to select any text, and it will direct you to the first message.
The last one I don't know how to replicate, except by being unlucky with it.
### Specs
- **Windows 11**:
- **Librewolf 124.0.1-1**:
|
https://github.com/huggingface/chat-ui/issues/1463
|
open
|
[
"bug"
] | 2024-09-12T08:13:35Z
| 2024-09-12T09:03:58Z
| 0
|
Ruyeex
|
huggingface/transformers.js
| 929
|
what is pipeline?
|
https://github.com/huggingface/transformers.js/issues/929
|
closed
|
[
"question"
] | 2024-09-12T05:09:05Z
| 2024-10-04T10:24:42Z
| null |
chakravarthi-vatala
|
|
pytorch/torchchat
| 1,134
|
Failures when using PyTorch local build vs. binaries
|
### 🐛 Describe the bug
I ran into an issue with loading the tokenizer, which was root caused to me using my local PyTorch build.
After building the aoti runner, I ran the following command: `cmake-out/aoti_run exportedModels/stories15M.so -z /home/angelayi/.torchchat/model-cache/stories15M/tokenizer.model -i "Once upon a time”`
With my local build, the above command ran into the error: `couldn't load /home/angelayi/.torchchat/model-cache/stories15M/tokenizer.model` which is from the sentencepiece tokenizer. Specifying `-l 2` doesn't change anything as this is the default setting.
Changing to `-l 3` results in the following error:
```
terminate called after throwing an instance of 'std::invalid_argument'
what(): invalid encoder line:
zsh: IOT instruction (core dumped) cmake-out/aoti_run ../lucy_stories15M.so -z ../tokenizer.model -l 3 -i
```
After re-running `./install/install_requirements.sh`, which installs the PyTorch version from 08142024, it runs successfully.
So I tried today's nightly (09112024) using `pip3 install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu121`, and this also runs successfully.
Going back to my local PyTorch build, I checked out the commit `26e5572` which corresponds to the cutoff of today's nightly, and built PyTorch locally. This runs into the initial error with the tokenizers.
I still didn't figure out how to run with my local PyTorch build, but quoting Nikita, this is motivation to create a docker/venv story :P
cc @malfet @Jack-Khuu
### Versions
main
|
https://github.com/pytorch/torchchat/issues/1134
|
open
|
[
"bug",
"enhancement"
] | 2024-09-11T23:57:18Z
| 2024-09-12T01:01:24Z
| 0
|
angelayi
|
huggingface/diffusers
| 9,417
|
Suggestion for speeding up `index_for_timestep` by removing sequential `nonzero()` calls in samplers
|
**Is your feature request related to a problem? Please describe.**
First off, thanks for the great codebase and providing so many resources! I just wanted to provide some insight into an improvement I made for myself, in case you'd like to include it for all samplers. I'm using the `FlowMatchEulerDiscreteScheduler` and after profiling, I've noticed that it's unexpectedly slowing down my training speeds. I'll describe the issue and proposed solution here rather than making a PR, since this would touch a lot of code and perhaps someone on the diffusers team would like to implement it.
**Describe the solution you'd like.**
This line in particular is very slow because it is a for loop, `step_indices = [self.index_for_timestep(t, schedule_timesteps) for t in timestep]`, and `self.index_for_timestep()` calls a `nonzero()` function, which is slow.
https://github.com/huggingface/diffusers/blob/b9e2f886cd6e9182f1bf1bf7421c6363956f94c5/src/diffusers/schedulers/scheduling_flow_match_euler_discrete.py#L149
**Describe alternatives you've considered.**
I've changed the code as follows:
```python
# huggingface code
def index_for_timestep(self, timestep, schedule_timesteps=None):
if schedule_timesteps is None:
schedule_timesteps = self.timesteps
indices = (schedule_timesteps == timestep).nonzero()
# The sigma index that is taken for the **very** first `step`
# is always the second index (or the last index if there is only 1)
# This way we can ensure we don't accidentally skip a sigma in
# case we start in the middle of the denoising schedule (e.g. for image-to-image)
pos = 1 if len(indices) > 1 else 0
return indices[pos].item()
```
changed to =>
```python
# my code
def index_for_timestep(self, timestep, schedule_timesteps=None):
if schedule_timesteps is None:
schedule_timesteps = self.timesteps
num_steps = len(schedule_timesteps)
start = schedule_timesteps[0].item()
end = schedule_timesteps[-1].item()
indices = torch.round(((timestep - start) / (end - start)) * (num_steps - 1)).long()
return indices
```
and
```python
# huggingface code
# self.begin_index is None when scheduler is used for training, or pipeline does not implement set_begin_index
if self.begin_index is None:
step_indices = [self.index_for_timestep(t, schedule_timesteps) for t in timestep]
```
changed to =>
```python
# my code
# self.begin_index is None when scheduler is used for training, or pipeline does not implement set_begin_index
if self.begin_index is None:
step_indices = self.index_for_timestep(timestep, schedule_timesteps)
```
**Additional context.**
Just wanted to bring this modification to your attention since it could be a training speedup for folks. 🙂 Especially when someone has a large batch size > 1 and this for loop is occurring with nonzero search operations. Some other small changes might be necessary to ensure compatibility of the function changes, but I suspect it could help everyone. Thanks for the consideration!
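For comparison, a hedged sketch of another vectorized option (not diffusers code; `index_for_timestep_batched` is a hypothetical name): a broadcast-and-argmax lookup also avoids the Python loop and the `nonzero()` sync, and it does not assume the schedule is uniformly spaced, which the rounding approach above does and which may not hold when the flow-match sigmas are shifted.
```python
import torch

def index_for_timestep_batched(timestep, schedule_timesteps):
    # Assumes each value in `schedule_timesteps` appears exactly once and every entry of
    # `timestep` is drawn from that schedule (the training path discussed above).
    # (batch, 1) == (num_steps,) broadcasts to (batch, num_steps); argmax over the 0/1
    # matrix returns the matching index for each row without any host sync.
    matches = timestep.unsqueeze(-1) == schedule_timesteps.unsqueeze(0)
    return matches.int().argmax(dim=-1)

# Quick sanity check against the loop-based lookup on a toy schedule:
schedule = torch.linspace(1000.0, 1.0, 29)
t = schedule[torch.randint(0, 29, (8,))]
print(index_for_timestep_batched(t, schedule))
```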
|
https://github.com/huggingface/diffusers/issues/9417
|
open
|
[
"help wanted",
"wip",
"contributions-welcome",
"performance"
] | 2024-09-11T14:54:37Z
| 2025-02-08T10:26:47Z
| 11
|
ethanweber
|
huggingface/cosmopedia
| 29
|
What is the best way to cite the work?
|
This is absolutely fantastic work. Thank you very much for making it public.
What is the best way to cite this dataset/project? Is there any paper I can cite or should I cite the blog-post?
|
https://github.com/huggingface/cosmopedia/issues/29
|
closed
|
[] | 2024-09-11T14:34:54Z
| 2024-09-11T14:36:15Z
| null |
vijetadeshpande
|
huggingface/diffusers
| 9,416
|
[Schedulers] Add SGMUniform
|
Thanks to @rollingcookies, we can see in this [issue](https://github.com/huggingface/diffusers/issues/9397) that this scheduler works great with the Hyper and probably also the Lightning loras/unets.
It'd be fantastic if someone could contribute this scheduler to diffusers.
Please let me know if someone is willing to do this.
|
https://github.com/huggingface/diffusers/issues/9416
|
closed
|
[
"help wanted",
"contributions-welcome",
"advanced"
] | 2024-09-11T13:59:27Z
| 2024-09-23T23:39:56Z
| 12
|
asomoza
|
huggingface/transformers
| 33,416
|
The examples in the examples directory are mostly for fine-tuning pre-trained models; how to train from scratch?
|
### Model description
no
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
_No response_
|
https://github.com/huggingface/transformers/issues/33416
|
open
|
[
"New model"
] | 2024-09-11T03:32:53Z
| 2024-10-03T23:28:42Z
| null |
zc-Chao
|
pytorch/pytorch
| 135,645
|
[ONNX] How to export the FlashAttention kernel
|
### 🐛 Describe the bug
1. code
```
import sys
import torch
from modeling_intern_vit import FlashAttention # FlashAttention of InternVL2-2B model
sys.path.append("/home/InternVL2-2B")
qkv=torch.load("/home/qkv.pth")
falsh=FlashAttention().eval().cuda()
out=falsh(qkv.cuda())
with torch.no_grad():
torch.onnx.export(
falsh,
(qkv,),
"/home/qkv.onnx",
input_names = ["input0"],
output_names = ["qkv_out"],
opset_version = 11
)
```
3. output
```
out, q, k, v, out_padded, softmax_lse, S_dmask, rng_state = flash_attn_cuda.varlen_fwd(
/usr/local/lib/python3.10/dist-packages/flash_attn/flash_attn_interface.py:90: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
```
5. onnx-file image

6. Needed help
"My goal is to export an ONNX file for the visual part of the InternVL2-2B model, which uses the Flash-Attention module. The ONNX file I export produces inference results that differ significantly from those of PyTorch. I then tried exporting the ONNX file for Flash-Attention alone and testing it. However, the ONNX file only includes inputs and outputs, while the Flash-Attention includes many operations like reshape, which are missing in the exported ONNX file. This is the issue I’m facing. I hope to export a functional ONNX file where the inference results are similar to those obtained from PyTorch. This is my requirement."
### Versions
Collecting environment information...
PyTorch version: 2.4.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.10.12 (main, Jul 29 2024, 16:56:48) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-119-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3090
Nvidia driver version: 550.107.02
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Vendor ID: GenuineIntel
Model name: 11th Gen Intel(R) Core(TM) i7-11700F @ 2.50GHz
CPU family: 6
Model: 167
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 1
Stepping: 1
CPU max MHz: 4900.0000
CPU min MHz: 800.0000
BogoMIPS: 4992.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx avx512f avx512dq rdseed adx smap avx512ifma clflushopt intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid fsrm md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 384 KiB (8 instances)
L1i cache: 256 KiB (8 instances)
L2 cache: 4 MiB (8 instances)
L3 cache: 16 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-15
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: Not aff
|
https://github.com/pytorch/pytorch/issues/135645
|
closed
|
[
"module: onnx",
"triaged",
"onnx-triaged"
] | 2024-09-11T01:40:30Z
| 2024-09-27T01:46:09Z
| null |
scuizhibin
|
huggingface/diffusers
| 9,407
|
callback / cannot yield intermediate images on the fly during inference
|
Hi,
in advance apologies if this has been asked already, or if I'm just misusing the diffusers API.
Using `diffusers==0.30.2`
**What API design would you like to have changed or added to the library? Why?**
I will illustrate straight away the general issue with my use case: I need to call a (FLUX) diffusers pipeline from some endpoint of mine, passing a callback that decodes latents and saves on disk intermediate images obtained from them, at the end of each step. So far, so good: I do manage to get the intermediate images saved on disk. I do this using the pipeline argument `callback_on_step_end`
Now, I'd like to _**yield**_ (in the pythonic meaning) these intermediate images on the fly, as soon as they're available, ie at the end of each inference step. I need to do so from my endpoint. That's where my problem is.
I could not make this idea work using with diffusers callback mechanism.
I mean, I did manage that by subclassing the pipeline, copy-pasting the dunder call method code and overriding it, but this is not maintainable, especially since the FLUX code evolves rapidly nowadays.
Also, note that currently diffusers assigns the result of the call to the callback to a variable and expects it to implement the `.pop` method, which might add constraints (diffusers typically expects a kwarg dict, see [here](https://github.com/huggingface/diffusers/blob/v0.30.2/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py#L1026)).
Another approach I thought of is to monitor the disk contents in a parallel process during the call to the pipeline.
But is there an easier way?
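A hedged sketch of one possible approach (not an official diffusers mechanism): run the pipeline in a worker thread and turn the endpoint into a generator that yields whatever the per-step callback pushes onto a queue. `decode_preview` is a hypothetical helper standing in for the latent-decoding logic already used when saving to disk.
```python
import queue
import threading


def stream_intermediates(pipe, prompt, **pipe_kwargs):
    q = queue.Queue()
    done = object()  # sentinel marking the end of generation

    def on_step_end(pipeline, step, timestep, callback_kwargs):
        # callback_kwargs holds the tensors listed in callback_on_step_end_tensor_inputs
        # (the latents by default); decode_preview is a hypothetical stand-in for the
        # latent-to-image code the existing callback uses when writing files.
        q.put(decode_preview(pipeline, callback_kwargs["latents"], step))
        return callback_kwargs

    def run():
        try:
            result = pipe(prompt, callback_on_step_end=on_step_end, **pipe_kwargs)
            q.put(result.images[0])  # final image, yielded last
        finally:
            q.put(done)

    threading.Thread(target=run, daemon=True).start()
    while True:
        item = q.get()
        if item is done:
            break
        yield item
```
This keeps the stock pipeline `__call__` untouched, at the cost of one extra thread per request.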
**What use case would this enable or better enable? Can you give us a code example?**
This allows manipulating the objects produced by the callback live, instead of having to wait for the whole reverse diffusion to finish.
Thank you
cc @sayakpaul @yiyixuxu
also tagging @asomoza since I saw he is the contributor to the official callback interface
|
https://github.com/huggingface/diffusers/issues/9407
|
closed
|
[] | 2024-09-10T16:32:04Z
| 2024-09-25T12:28:20Z
| 8
|
Clement-Lelievre
|
huggingface/transformers.js
| 928
|
The inference speed on the mobile end is a bit slow
|
### Question
If it is a mobile device that does not support WebGPU, how can we improve the inference speed of the model? I have tried WebWorker, but the results were not satisfactory
|
https://github.com/huggingface/transformers.js/issues/928
|
open
|
[
"question"
] | 2024-09-10T09:14:16Z
| 2024-09-11T08:46:33Z
| null |
Gratifyyy
|
pytorch/tutorials
| 3,050
|
Improve example by adding missing import
|
The [example "Creating a Custom Dataset for your files"](https://github.com/pytorch/tutorials/blob/8a8331eb2796c05113c8a98bc03a7a164407fcbf/beginner_source/basics/data_tutorial.py#L123) is missing the import `from torch.utils.data import Dataset`. Since other imports are shown and the purpose of this example is to show how to create a custom dataset, this import is crucial and should be added.
|
https://github.com/pytorch/tutorials/issues/3050
|
closed
|
[] | 2024-09-10T09:04:01Z
| 2025-04-14T18:43:31Z
| 0
|
avitase
|
pytorch/xla
| 7,987
|
Speeding up computation while using SPMD on large TPU pod
|
## ❓ Questions and Help
When running on a vp-128 TPU pod (even when sharding only by the batch dimension), we are experiencing very low performance compared to the same pod without SPMD.
Do you have any tips on how to increase the performance? Some SPMD arguments? Things we need to think about when using it? Anything that might help, because right now the performance is lower than normal by a significant factor.
@JackCaoG
|
https://github.com/pytorch/xla/issues/7987
|
closed
|
[
"question",
"performance"
] | 2024-09-10T07:59:14Z
| 2025-03-31T15:57:15Z
| null |
dudulightricks
|
huggingface/transformers.js
| 927
|
Error with Using require for ES Modules in @xenova/transformers Package
|
### Question
I'm trying to use require to import the Pipeline class from the @xenova/transformers package, but I encounter the following error:
const { Pipeline } = require('@xenova/transformers');
^
Error [ERR_REQUIRE_ESM]: require() of ES Module D:\Z-charity\dating_app_backend\node_modules@xenova\transformers\src\transformers.js from D:\Z-charity\dating_app_backend\controllers\authController.js not supported.
Instead change the require of transformers.js in D:\Z-charity\dating_app_backend\controllers\authController.js to a dynamic import() which is available in all CommonJS modules.
at Object. (D:\Z-charity\dating_app_backend\controllers\authController.js:10:22) {
code: 'ERR_REQUIRE_ESM'
Issue with Dynamic Import
const getPipeline = async () => {
const { Pipeline } = await import('@xenova/transformers');
return new Pipeline('text-classification', 'xenova/bert-base-uncased');
};
{
"message": "Server error",
"error": "Must implement _call method in subclass"
}
|
https://github.com/huggingface/transformers.js/issues/927
|
closed
|
[
"question"
] | 2024-09-10T06:02:53Z
| 2024-12-08T19:17:31Z
| null |
qamarali205
|
huggingface/transformers.js
| 925
|
V3 - WebGPU Whisper in Chrome Extention
|
### Question
Can [webGPU accelerated whisper](https://huggingface.co/spaces/Xenova/whisper-webgpu) run in a chrome extension?
I checked the space and found the dependency `"@xenova/transformers": "github:xenova/transformers.js#v3"` which I imported in a chrome extension. When I tried to import it, it didn't work.
```
Module not found: Error: Can't resolve '@xenova/transformers' in 'D:\projects\mosaic8\browser-extension\src\utils'
resolve '@xenova/transformers' in 'D:\projects\mosaic8\browser-extension\src\utils'
Parsed request is a module
using description file: D:\projects\mosaic8\browser-extension\package.json (relative path: ./src/utils)
Field 'browser' doesn't contain a valid alias configuration
resolve as module
D:\projects\mosaic8\browser-extension\src\utils\node_modules doesn't exist or is not a directory
D:\projects\mosaic8\browser-extension\src\node_modules doesn't exist or is not a directory
D:\projects\mosaic8\browser-extension\node_modules doesn't exist or is not a directory
looking for modules in D:\projects\mosaic8\node_modules
single file module
using description file: D:\projects\mosaic8\package.json (relative path: ./node_modules/@xenova/transformers)
no extension
Field 'browser' doesn't contain a valid alias configuration
D:\projects\mosaic8\node_modules\@xenova\transformers is not a file
.ts
Field 'browser' doesn't contain a valid alias configuration
D:\projects\mosaic8\node_modules\@xenova\transformers.ts doesn't exist
.tsx
Field 'browser' doesn't contain a valid alias configuration
D:\projects\mosaic8\node_modules\@xenova\transformers.tsx doesn't exist
.js
Field 'browser' doesn't contain a valid alias configuration
D:\projects\mosaic8\node_modules\@xenova\transformers.js doesn't exist
.jsx
Field 'browser' doesn't contain a valid alias configuration
D:\projects\mosaic8\node_modules\@xenova\transformers.jsx doesn't exist
existing directory D:\projects\mosaic8\node_modules\@xenova\transformers
using description file: D:\projects\mosaic8\node_modules\@xenova\transformers\package.json (relative path: .)
using exports field: ./dist/transformers.js
using description file: D:\projects\mosaic8\node_modules\@xenova\transformers\package.json (relative path: ./dist/transformers.js)
no extension
D:\projects\mosaic8\node_modules\@xenova\transformers\dist\transformers.js doesn't exist
.ts
D:\projects\mosaic8\node_modules\@xenova\transformers\dist\transformers.js.ts doesn't exist
.tsx
D:\projects\mosaic8\node_modules\@xenova\transformers\dist\transformers.js.tsx doesn't exist
.js
D:\projects\mosaic8\node_modules\@xenova\transformers\dist\transformers.js.js doesn't exist
.jsx
D:\projects\mosaic8\node_modules\@xenova\transformers\dist\transformers.js.jsx doesn't exist
as directory
D:\projects\mosaic8\node_modules\@xenova\transformers\dist\transformers.js doesn't exist
```
I might be doing something wrong that I'm not aware of. What could the issue here be?
What I can understand is that it is trying to search for a ts/tsx/js/jsx file (as specified in `webpack.config.js`) and it is unable to find it.
|
https://github.com/huggingface/transformers.js/issues/925
|
open
|
[
"question"
] | 2024-09-10T02:52:41Z
| 2025-01-18T16:03:26Z
| null |
chandeldivyam
|
huggingface/diffusers
| 9,402
|
[Flux ControlNet] Add img2img and inpaint pipelines
|
We recently added img2img and inpainting pipelines for Flux thanks to @Gothos contribution.
We also have controlnet support for Flux thanks to @wangqixun.
It'd be nice to have controlnet versions of these pipelines since there's been requests to have them.
Basically, we need to create two new pipelines that add the controlnet support from this [pipeline ](https://github.com/huggingface/diffusers/blob/f28a8c257afe8eeb16b4deb973c6b1829f6aea59/src/diffusers/pipelines/flux/pipeline_flux_controlnet.py) to the corresponding pipellines.
- [X] [Image to image](https://github.com/huggingface/diffusers/blob/f28a8c257afe8eeb16b4deb973c6b1829f6aea59/src/diffusers/pipelines/flux/pipeline_flux_img2img.py)
- [X] [Inpaint](https://github.com/huggingface/diffusers/blob/f28a8c257afe8eeb16b4deb973c6b1829f6aea59/src/diffusers/pipelines/flux/pipeline_flux_inpaint.py)
Related issue: #9158
Let me know if someone is interested in contributing this.
|
https://github.com/huggingface/diffusers/issues/9402
|
closed
|
[
"help wanted",
"Good second issue",
"contributions-welcome"
] | 2024-09-10T02:08:32Z
| 2024-10-25T02:22:19Z
| 11
|
asomoza
|
huggingface/transformers.js
| 924
|
Steps for suppressing strings
|
### Question
What is the syntax for suppressing strings from showing up in the output text? Should I be doing that in my code, or is there a config option for it? I'm trying to remove everything that isn't a word:
```
const suppressedStrings = [
"[BLANK_AUDIO]",
"[CLEARS THROAT]",
"[Coughing]",
"[inaudible]",
"[MUSIC]",
"[MUSIC PLAYING]",
"[Pause]",
"(keyboard clicking)",
];
```
|
https://github.com/huggingface/transformers.js/issues/924
|
open
|
[
"question"
] | 2024-09-09T21:44:16Z
| 2025-01-24T17:53:47Z
| null |
stinoga
|
huggingface/diffusers
| 9,395
|
[Q] Possibly unused `self.final_alpha_cumprod`
|
Hello team, quick question to make sure I understand the behavior of the `step` function in LCM Scheduler.
https://github.com/huggingface/diffusers/blob/a7361dccdc581147620bbd74a6d295cd92daf616/src/diffusers/schedulers/scheduling_lcm.py#L534-L543
Here, it seems that the condition `prev_timestep >= 0` is always `True`, because `timestep` and `self.timesteps[prev_step_index]` cannot be negative. This would mean that `self.final_alpha_cumprod` is never used. Is there a way in which `prev_timestep` can be negative?
|
https://github.com/huggingface/diffusers/issues/9395
|
open
|
[
"stale"
] | 2024-09-09T17:35:08Z
| 2024-11-09T15:03:23Z
| 7
|
fdtomasi
|
huggingface/chat-ui
| 1,458
|
Chat ui sends message prompt 404
|
```
MONGODB_URL='mongodb://localhost:27017'
PLAYWRIGHT_ADBLOCKER='false'
MODELS=`[
{
"name": "Local minicpm",
"tokenizer": "minicpm",
"preprompt": "",
"chatPromptTemplate": "<s>{{preprompt}}{{#each messages}}{{#ifUser}}<|user|>\n{{content}}<|end|>\n<|assistant|>\n{{/ifUser}}{{#ifAssistant}}{{content}}<|end|>\n{{/ifAssistant}}{{/each}}",
"parameters": {
"stop": ["<|end|>", "<|endoftext|>", "<|assistant|>"],
"temperature": 0.7,
"max_new_tokens": 1024,
"truncate": 3071
},
"endpoints": [{
"type" : "openai",
"baseURL": "***/v1/chat/completions",
"defaultHeaders": {
"x-portkey-config": '{ "Authorization": "Bearer apikey" }'
}
}],
},
]`
```
Sending a prompt results in the following error:
```
ERROR (15839): 404 status code (no body)
err: {
"type": "NotFoundError",
"message": "404 status code (no body)",
"stack":
Error: 404 status code (no body)
at APIError.generate (file:///Users/user/Desktop/chat-ui/node_modules/openai/error.mjs:50:20)
at OpenAI.makeStatusError (file:///Users/user/Desktop/chat-ui/node_modules/openai/core.mjs:268:25)
at OpenAI.makeRequest (file:///Users/user/Desktop/chat-ui/node_modules/openai/core.mjs:311:30)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async eval (/Users/user/Desktop/chat-ui/src/lib/server/endpoints/openai/endpointOai.ts:111:36)
at async Module.generateFromDefaultEndpoint (/Users/user/Desktop/chat-ui/src/lib/server/generateFromDefaultEndpoint.ts:11:23)
at async generateTitle (/Users/user/Desktop/chat-ui/src/lib/server/textGeneration/title.ts:53:10)
at async Module.generateTitleForConversation (/Users/user/Desktop/chat-ui/src/lib/server/textGeneration/title.ts:16:19)
"status": 404,
"headers": {
"connection": "keep-alive",
"content-encoding": "gzip",
"content-type": "text/plain; charset=utf-8",
"date": "Mon, 09 Sep 2024 13:29:16 GMT",
"transfer-encoding": "chunked",
"vary": "Accept-Encoding"
}
}
[21:29:16.156] ERROR (15839): 404 status code (no body)
err: {
"type": "NotFoundError",
"message": "404 status code (no body)",
"stack":
Error: 404 status code (no body)
at APIError.generate (file:///Users/user/Desktop/chat-ui/node_modules/openai/error.mjs:50:20)
at OpenAI.makeStatusError (file:///Users/user/Desktop/chat-ui/node_modules/openai/core.mjs:268:25)
at OpenAI.makeRequest (file:///Users/user/Desktop/chat-ui/node_modules/openai/core.mjs:311:30)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async eval (/Users/user/Desktop/chat-ui/src/lib/server/endpoints/openai/endpointOai.ts:111:36)
at async Module.generate (/Users/user/Desktop/chat-ui/src/lib/server/textGeneration/generate.ts:8:30)
at async textGenerationWithoutTitle (/Users/user/Desktop/chat-ui/src/lib/server/textGeneration/index.ts:62:3)
"status": 404,
"headers": {
"connection": "keep-alive",
"content-encoding": "gzip",
"content-type": "text/plain; charset=utf-8",
"date": "Mon, 09 Sep 2024 13:29:16 GMT",
"transfer-encoding": "chunked",
"vary": "Accept-Encoding"
}
}
```
Accessing the endpoint directly through Postman works fine.
|
https://github.com/huggingface/chat-ui/issues/1458
|
open
|
[
"support"
] | 2024-09-09T13:31:56Z
| 2024-09-13T09:32:24Z
| 2
|
nextdoorUncleLiu
|
huggingface/chat-ui
| 1,456
|
could you provide an easy way to force output as json?
|
Currently I use the
preprompt: 'only output json. Do not output anything that is not json. Do not use markdown format. Must begin with {.'
But Llama is not smart enough to output JSON in that form. It always begins with "Here is the JSON answer" or with ``` (markdown format), giving me an invalid JSON string.
It seems a preprompt is not enough to force JSON format. Could you provide an easy way to output just JSON? Or maybe the method is in tools.
|
https://github.com/huggingface/chat-ui/issues/1456
|
open
|
[
"enhancement"
] | 2024-09-09T11:34:17Z
| 2024-10-06T18:35:29Z
| 1
|
ghost
|
pytorch/torchtitan
| 572
|
How to calculate the total batchsize
|
Hi, it is me again~ I have a quick simple question: I am using the following training config with 4 GPUs. What is the total number of tokens per optimizer step? Is it 2 * 2048 or 2 * 2048 * 4?
```
[training]
batch_size = 2
seq_len = 2048
warmup_steps = 2000 # lr scheduler warm up, normally 20% of the train steps
max_norm = 1.0 # grad norm clipping
steps = 10000
data_parallel_degree = -1
tensor_parallel_degree = 1
fp8_linear = ""
compile = false
```
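For what it's worth, a hedged back-of-the-envelope (the per-rank interpretation of `batch_size` and `data_parallel_degree = -1` resolving to all 4 GPUs with TP degree 1 are assumptions):
```python
# If batch_size is the local (per data-parallel rank) batch size, the total number of
# tokens seen per optimizer step is batch_size * seq_len * dp_degree.
batch_size, seq_len, dp_degree = 2, 2048, 4
tokens_per_optimizer_step = batch_size * seq_len * dp_degree
print(tokens_per_optimizer_step)  # 16384, i.e. 2 * 2048 * 4
```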
|
https://github.com/pytorch/torchtitan/issues/572
|
closed
|
[
"question"
] | 2024-09-09T09:47:50Z
| 2024-09-10T05:43:47Z
| null |
zyushun
|
huggingface/diffusers
| 9,392
|
[Scheduler] Add SNR shift following SD3, would the rest of the code need to be modified?
|
**What API design would you like to have changed or added to the library? Why?**
With the increasing resolution of image or video generation, we need to introduce more noise at smaller T, such as SNR shift following SD3. I have observed that CogVideoX's schedule has already implemented [this](https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_ddim_cogvideox.py#L214). If I add this line to the DDPM schedule, would the rest of the code (e.g., noise addition, sampling, etc.) need to be modified? I assume it wouldn't, but I seek a precise response.
**What use case would this enable or better enable? Can you give us a code example?**
```
class DDPMScheduler(SchedulerMixin, ConfigMixin):
def __init__(snr_shift_scale, **kwarg)
# predefine beta and alpha
self.alphas_cumprod = self.alphas_cumprod / (snr_shift_scale + (1 - snr_shift_scale) * self.alphas_cumprod)
# other code
# Other functions are the same as before
```
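A hedged numerical check (a standard linear beta schedule is assumed here, not any particular model's config): the shift only rescales `alphas_cumprod`, which the noise-addition and sampling code reads, and it divides the SNR by the scale uniformly at every timestep.
```python
import torch

num_train_timesteps, snr_shift_scale = 1000, 3.0
betas = torch.linspace(1e-4, 0.02, num_train_timesteps)   # assumed linear beta schedule
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)
shifted = alphas_cumprod / (snr_shift_scale + (1.0 - snr_shift_scale) * alphas_cumprod)

snr = alphas_cumprod / (1.0 - alphas_cumprod)
snr_shifted = shifted / (1.0 - shifted)
# The shifted SNR is exactly snr / snr_shift_scale at every t.
print(torch.allclose(snr_shifted * snr_shift_scale, snr, rtol=1e-4))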
|
https://github.com/huggingface/diffusers/issues/9392
|
open
|
[
"stale"
] | 2024-09-09T09:19:37Z
| 2025-01-05T15:05:04Z
| 7
|
LinB203
|
huggingface/speech-to-speech
| 96
|
How to designate Melo TTS model to use my trained model?
|
Hi,
I am using Melo as the TTS, and I trained it with my own datasets. How do I designate Melo (here in speech-to-speech) to use my model?
Thanks!
|
https://github.com/huggingface/speech-to-speech/issues/96
|
closed
|
[] | 2024-09-08T20:36:23Z
| 2024-09-10T14:42:58Z
| null |
insufficient-will
|
huggingface/huggingface_hub
| 2,526
|
How can I rename folders in a given repo? I need to rename folders
|
### Describe the bug
I am trying to rename as shown below, but it fails :/
```
from huggingface_hub import HfApi
import os
# Initialize the Hugging Face API
api = HfApi()
# Set the repository name
repo_name = "MonsterMMORPG/3D-Cartoon-Style-FLUX"
# Define the folder renaming mappings
folder_renames = {
"Training-Checkpoints-NO-Captions": "Training-Checkpoints-Inconsistent-DATASET-NO-Captions",
"Training-Checkpoints-With-Captions": "Training-Checkpoints-Inconsistent-DATASET-With-Captions"
}
# Function to rename folders
def rename_folder(repo_name, old_name, new_name):
try:
api.move_folder(
repo_id=repo_name,
path_in_repo=old_name,
new_path=new_name,
commit_message=f"Rename folder '{old_name}' to '{new_name}'"
)
print(f"Successfully renamed '{old_name}' to '{new_name}'")
except Exception as e:
print(f"Error renaming '{old_name}' to '{new_name}': {str(e)}")
# Iterate through the folder renaming mappings and rename each folder
for old_name, new_name in folder_renames.items():
rename_folder(repo_name, old_name, new_name)
print("Folder renaming process completed.")
```
### Reproduction
_No response_
### Logs
_No response_
### System info
```shell
latest
```
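As far as I know there is no `move_folder` method on `HfApi`, so one hedged workaround sketch (not an official Hub API) is to copy each file under the old prefix to the new prefix and delete the original; the download/re-upload loop below uses only documented `huggingface_hub` calls, but the prefix names are just the ones from the script above.
```python
from huggingface_hub import HfApi, hf_hub_download

api = HfApi()
repo_id = "MonsterMMORPG/3D-Cartoon-Style-FLUX"
old_prefix = "Training-Checkpoints-NO-Captions"
new_prefix = "Training-Checkpoints-Inconsistent-DATASET-NO-Captions"

for path in api.list_repo_files(repo_id=repo_id):
    if not path.startswith(old_prefix + "/"):
        continue
    local = hf_hub_download(repo_id=repo_id, filename=path)   # fetch the original file
    new_path = new_prefix + path[len(old_prefix):]             # swap the folder prefix
    api.upload_file(path_or_fileobj=local, path_in_repo=new_path, repo_id=repo_id)
    api.delete_file(path_in_repo=path, repo_id=repo_id)        # remove the old copy
```
This creates one commit per upload/delete, so for very large folders batching the operations (or doing the rename locally and re-uploading once) may be preferable.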
|
https://github.com/huggingface/huggingface_hub/issues/2526
|
closed
|
[
"bug"
] | 2024-09-07T17:23:54Z
| 2024-09-09T10:49:26Z
| null |
FurkanGozukara
|
pytorch/xla
| 7,972
|
Registering CUDA custom calls with the C++ FFI
|
## ❓ Questions and Help
Curious how to build and register a CUDA custom call with XLAC - have followed https://jax.readthedocs.io/en/latest/ffi.html and read https://openxla.org/xla/custom_call and wondering what the equivalent process is for torch / whether it is currently supported.
|
https://github.com/pytorch/xla/issues/7972
|
open
|
[
"question"
] | 2024-09-07T01:27:35Z
| 2025-03-31T16:08:31Z
| null |
skrider
|
huggingface/transformers
| 33,359
|
[Docs] How to build offline HTML or Docset files for other documentation viewers?
|
### Feature request
How can I build the docs into HTML files for use with other documentation viewers like [Dash](https://www.kapeli.com/dash) , [Dash-User-Contributions](https://github.com/Kapeli/Dash-User-Contributions)?
I successfully built the PyTorch docs for Dash by working directly in their `docs/` directory. I’m wondering if a similar process exists for Hugging Face libraries.
### Motivation
The Dash docset viewer is very useful for viewing multiple documentation sets in one place, even offline. It would be great to support it and include all Hugging Face libraries.
### Your contribution
I’ve built the PyTorch docs for Dash, so I’m familiar with incorporating and generating docsets.
|
https://github.com/huggingface/transformers/issues/33359
|
closed
|
[
"Documentation",
"Feature request"
] | 2024-09-06T15:51:35Z
| 2024-09-10T23:43:57Z
| null |
ueoo
|
huggingface/transformers
| 33,343
|
How to install transformers==4.45? Two or three days ago I could install it successfully, but today I cannot.
|
### System Info
torch2.2
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
pip install git+https://github.com/huggingface/transformers.git
### Expected behavior
How to install the latest transformers
|
https://github.com/huggingface/transformers/issues/33343
|
closed
|
[
"Installation",
"bug"
] | 2024-09-06T08:23:00Z
| 2024-10-16T08:04:10Z
| null |
HyacinthJingjing
|
pytorch/torchchat
| 1,114
|
What is the future plan of this torchchat project?
|
### 🐛 Describe the bug
Torchchat provides a solution for running LLMs with PyTorch optimizations on servers, desktops, and mobile.
May I know what the future plan for this project is? Are there any new features planned to encourage users to adopt Torchchat as a solution?
|
https://github.com/pytorch/torchchat/issues/1114
|
closed
|
[] | 2024-09-06T06:03:17Z
| 2024-09-09T15:40:39Z
| null |
yanbing-j
|
huggingface/optimum-nvidia
| 149
|
How to use TensorRT model converter
|
Referring to [src/optimum/nvidia/export/converter.py] -> class 'TensorRTModelConverter', which can 'Take a local model and create the TRTLLM checkpoint and engine'.
Questions:
- What are the applicable local model formats? e.g. JAX, HuggingFace, DeepSpeed
- How do I use this script on its own to generate a TRTLLM checkpoint/engine? Could you please share a tutorial, if one exists?
Thank you.
|
https://github.com/huggingface/optimum-nvidia/issues/149
|
open
|
[] | 2024-09-05T18:55:15Z
| 2024-09-05T18:55:15Z
| null |
FortunaZhang
|
huggingface/datasets
| 7,139
|
Using load_dataset to load imagenet-1K, but getting an empty dataset
|
### Describe the bug
```python
def get_dataset(data_path, train_folder="train", val_folder="val"):
traindir = os.path.join(data_path, train_folder)
valdir = os.path.join(data_path, val_folder)
def transform_val_examples(examples):
transform = Compose([
Resize(256),
CenterCrop(224),
ToTensor(),
])
examples["image"] = [transform(image.convert("RGB")) for image in examples["image"]]
return examples
def transform_train_examples(examples):
transform = Compose([
RandomResizedCrop(224),
RandomHorizontalFlip(),
ToTensor(),
])
examples["image"] = [transform(image.convert("RGB")) for image in examples["image"]]
return examples
# @fengsicheng: This way is very slow for big dataset like ImageNet-1K (but can pass the network problem using local dataset)
# train_set = load_dataset("imagefolder", data_dir=traindir, num_proc=4)
# test_set = load_dataset("imagefolder", data_dir=valdir, num_proc=4)
train_set = load_dataset("imagenet-1K", split="train", trust_remote_code=True)
test_set = load_dataset("imagenet-1K", split="test", trust_remote_code=True)
print(train_set["label"])
train_set.set_transform(transform_train_examples)
test_set.set_transform(transform_val_examples)
return train_set, test_set
```
Above is the code, but the output of the print is a list of None:
<img width="952" alt="image" src="https://github.com/user-attachments/assets/c4e2fdd8-3b8f-481e-8f86-9bbeb49d79fb">
### Steps to reproduce the bug
1. just ran the code
2. see the print
### Expected behavior
I do not know how to fix this; can anyone provide help? It is urgent for me.
### Environment info
- `datasets` version: 2.21.0
- Platform: Linux-5.4.0-190-generic-x86_64-with-glibc2.31
- Python version: 3.10.14
- `huggingface_hub` version: 0.24.6
- PyArrow version: 17.0.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.6.1
|
https://github.com/huggingface/datasets/issues/7139
|
open
|
[] | 2024-09-05T15:12:22Z
| 2024-10-09T04:02:41Z
| 2
|
fscdc
|
huggingface/datasets
| 7,138
|
Cache only changed columns?
|
### Feature request
Cache only the actual changes to the dataset i.e. changed columns.
### Motivation
I realized that caching actually saves the complete dataset again.
This is especially problematic for image datasets if one wants to only change another column e.g. some metadata and then has to save 5 TB again.
### Your contribution
Is this even viable in the current architecture of the package?
I quickly looked into it and it seems it would require significant changes.
I would spend some time looking into this but maybe somebody could help with the feasibility and some plan to implement before spending too much time on it?
|
https://github.com/huggingface/datasets/issues/7138
|
open
|
[
"enhancement"
] | 2024-09-05T12:56:47Z
| 2024-09-20T13:27:20Z
| 2
|
Modexus
|
huggingface/lerobot
| 413
|
Compatible off-the-shelf robots?
|
Huge thanks for making all of this available!
Can you recommend any (low-cost) off-the-shelf robots to work with?
|
https://github.com/huggingface/lerobot/issues/413
|
closed
|
[
"question"
] | 2024-09-05T10:21:24Z
| 2025-10-08T08:27:56Z
| null |
danielfriis
|
huggingface/diffusers
| 9,362
|
IndexError: index 29 is out of bounds for dimension 0 with size 29
|
### Describe the bug
I have three problems caused by the same underlying reason.
1) TypeError: unsupported operand type(s) for +=: 'NoneType' and 'int'
# upon completion increase step index by one
self._step_index += 1 <---Error [here](https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_flow_match_euler_discrete.py#L303)
2) IndexError: index 29 is out of bounds for dimension 0 with size 29
sigma_next = self.sigmas[self.step_index + 1] <--- Error [here](https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_flow_match_euler_discrete.py#L295)
3) RuntimeError: Already borrowed
if _truncation is not None:
self._tokenizer.no_truncation() <--- Error here
Example: https://github.com/huggingface/tokenizers/issues/537
The reason, as I understand it, is threads. Do you know how I can solve this problem?
### Reproduction
```
from diffusers import (
FluxPipeline,
FlowMatchEulerDiscreteScheduler,
)
import torch
pipeline = FluxPipeline.from_pretrained(
"black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16
).to("cuda")
seed = 42
height = 720
width = 1280
generator = torch.Generator(device="cuda").manual_seed(seed)
pipeline(
prompt=prompt + ", highly detailed, all is depicted as silhouettes, without words",
guidance_scale=0.,
# num_inference_steps=10,
height=height,
width=width,
generator=generator,
max_sequence_length=256,
).images[0]
```
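A hedged workaround sketch for the threading issue described above (not a diffusers fix): the scheduler's step counter and the fast tokenizer's buffers are shared mutable state, so either give each worker its own scheduler (e.g. cloned with `from_config(pipeline.scheduler.config)`) or serialize pipeline calls with a lock, as in this minimal sketch where `generate` is a hypothetical wrapper:
```python
import threading

_pipeline_lock = threading.Lock()

def generate(pipeline, prompt, **kwargs):
    # Only one request at a time may drive the stateful scheduler/tokenizer,
    # which prevents _step_index from being advanced by two requests at once.
    with _pipeline_lock:
        return pipeline(prompt, **kwargs).images[0]
```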
### Logs
```shell
For example:
Traceback (most recent call last):
File "/opt/conda/lib/python3.10/site-packages/flask/app.py", line 1473, in wsgi_app
response = self.full_dispatch_request()
File "/opt/conda/lib/python3.10/site-packages/flask/app.py", line 882, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/opt/conda/lib/python3.10/site-packages/flask/app.py", line 880, in full_dispatch_request
rv = self.dispatch_request()
File "/opt/conda/lib/python3.10/site-packages/flask/app.py", line 865, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args) # type: ignore[no-any-return]
File "/app/main.py", line 29, in generate_image
image = imagegen.run(**data)
File "/app/image_generator.py", line 102, in run
return generate_image()
File "/app/image_generator.py", line 89, in generate_image
return self.pipeline(
File "/opt/conda/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/diffusers/pipelines/flux/pipeline_flux.py", line 734, in __call__
latents = self.scheduler.step(noise_pred, t, latents, return_dict=False)[0]
File "/opt/conda/lib/python3.10/site-packages/diffusers/schedulers/scheduling_flow_match_euler_discrete.py", line 295, in step
sigma_next = self.sigmas[self.step_index + 1]
TypeError: unsupported operand type(s) for +: 'NoneType' and 'int'
```
### System Info
- 🤗 Diffusers version: 0.31.0.dev0
- Platform: Linux-5.4.0-171-generic-x86_64-with-glibc2.35
- Running on Google Colab?: No
- Python version: 3.10.13
- PyTorch version (GPU?): 2.2.1 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Huggingface_hub version: 0.24.6
- Transformers version: 4.44.2
- Accelerate version: 0.34.0
- PEFT version: 0.12.0
- Bitsandbytes version: not installed
- Safetensors version: 0.4.4
- xFormers version: not installed
- Accelerator: NVIDIA RTX A6000, 46068 MiB
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@yiyixuxu @sayakpaul @DN6
|
https://github.com/huggingface/diffusers/issues/9362
|
open
|
[
"bug",
"stale"
] | 2024-09-04T11:02:49Z
| 2024-11-25T15:04:22Z
| 8
|
Anvarka
|
pytorch/pytorch
| 135,098
|
How to gracefully mask CompositeImplicitAutograd for different backends
|
### 🐛 Describe the bug
I implemented torch.compile’s backend for my hardware via privateUserOne. I also found that torch.compile by default decomposes upsample_nearest2d into a bunch of small operators, just like _upsample_nearest does. But on my hardware, the _unsafe_index operator doesn’t perform well, so I’d like to be able to call the custom upsample_nearest2d operator directly for better performance. I don't know if this is a bug or if there could be a better implementation.
### Error logs
_No response_
### Minified repro
_No response_
### Versions
It is irrelevant to the execution environment and is related to code implementation.
cc @ezyang @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4
|
https://github.com/pytorch/pytorch/issues/135098
|
closed
|
[
"oncall: pt2",
"oncall: export"
] | 2024-09-04T09:11:28Z
| 2024-11-01T06:20:49Z
| null |
yangxiaorun
|
huggingface/tokenizers
| 1,627
|
Rust: How to handle models with `precompiled_charsmap = null`
|
Hi guys,
I'm currently working on https://github.com/supabase/edge-runtime/pull/368, which aims to add a Rust implementation of `pipeline()`.
While coding the `translation` task, I found that I can't load the `Tokenizer` instance for the [Xenova/opus-mt-en-fr](https://huggingface.co/Xenova/opus-mt-en-fr) `onnx` model and the other `opus-mt-*` variants.
<details>
<summary>I got the following:</summary>
```rust
let tokenizer_path = Path::new("opus-mt-en-fr/tokenizer.json");
let tokenizer = Tokenizer::from_file(tokenizer_path).unwrap();
```
```
thread 'main' panicked at /home/kalleby/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokenizers-0.20.0/src/normalizers/mod.rs:143:26:
Precompiled: Error("invalid type: null, expected a borrowed string", line: 1, column: 28)
stack backtrace:
0: rust_begin_unwind
at /rustc/80eb5a8e910e5185d47cdefe3732d839c78a5e7e/library/std/src/panicking.rs:662:5
1: core::panicking::panic_fmt
at /rustc/80eb5a8e910e5185d47cdefe3732d839c78a5e7e/library/core/src/panicking.rs:74:14
2: core::result::unwrap_failed
at /rustc/80eb5a8e910e5185d47cdefe3732d839c78a5e7e/library/core/src/result.rs:1679:5
3: core::result::Result<T,E>::expect
at /rustc/80eb5a8e910e5185d47cdefe3732d839c78a5e7e/library/core/src/result.rs:1059:23
4: <tokenizers::normalizers::NormalizerWrapper as serde::de::Deserialize>::deserialize
at /home/kalleby/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokenizers-0.20.0/src/normalizers/mod.rs:139:25
5: <serde::de::impls::OptionVisitor<T> as serde::de::Visitor>::visit_some
at /home/kalleby/.cargo/registry/src/index.crates.io-6f17d22bba15001f/serde-1.0.207/src/de/impls.rs:916:9
6: <&mut serde_json::de::Deserializer<R> as serde::de::Deserializer>::deserialize_option
at /home/kalleby/.cargo/registry/src/index.crates.io-6f17d22bba15001f/serde_json-1.0.124/src/de.rs:1672:18
7: serde::de::impls::<impl serde::de::Deserialize for core::option::Option<T>>::deserialize
at /home/kalleby/.cargo/registry/src/index.crates.io-6f17d22bba15001f/serde-1.0.207/src/de/impls.rs:935:9
8: <core::marker::PhantomData<T> as serde::de::DeserializeSeed>::deserialize
at /home/kalleby/.cargo/registry/src/index.crates.io-6f17d22bba15001f/serde-1.0.207/src/de/mod.rs:801:9
9: <serde_json::de::MapAccess<R> as serde::de::MapAccess>::next_value_seed
at /home/kalleby/.cargo/registry/src/index.crates.io-6f17d22bba15001f/serde_json-1.0.124/src/de.rs:2008:9
10: serde::de::MapAccess::next_value
at /home/kalleby/.cargo/registry/src/index.crates.io-6f17d22bba15001f/serde-1.0.207/src/de/mod.rs:1874:9
11: <tokenizers::tokenizer::serialization::TokenizerVisitor<M,N,PT,PP,D> as serde::de::Visitor>::visit_map
at /home/kalleby/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokenizers-0.20.0/src/tokenizer/serialization.rs:132:55
12: <&mut serde_json::de::Deserializer<R> as serde::de::Deserializer>::deserialize_struct
at /home/kalleby/.cargo/registry/src/index.crates.io-6f17d22bba15001f/serde_json-1.0.124/src/de.rs:1840:31
13: tokenizers::tokenizer::serialization::<impl serde::de::Deserialize for tokenizers::tokenizer::TokenizerImpl<M,N,PT,PP,D>>::deserialize
at /home/kalleby/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokenizers-0.20.0/src/tokenizer/serialization.rs:62:9
14: <tokenizers::tokenizer::_::<impl serde::de::Deserialize for tokenizers::tokenizer::Tokenizer>::deserialize::__Visitor as serde::de::Visitor>::visit_newtype_struct
at /home/kalleby/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokenizers-0.20.0/src/tokenizer/mod.rs:408:21
15: <&mut serde_json::de::Deserializer<R> as serde::de::Deserializer>::deserialize_newtype_struct
at /home/kalleby/.cargo/registry/src/index.crates.io-6f17d22bba15001f/serde_json-1.0.124/src/de.rs:1723:9
16: tokenizers::tokenizer::_::<impl serde::de::Deserialize for tokenizers::tokenizer::Tokenizer>::deserialize
at /home/kalleby/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokenizers-0.20.0/src/tokenizer/mod.rs:408:21
17: serde_json::de::from_trait
at /home/kalleby/.cargo/registry/src/index.crates.io-6f17d22bba15001f/serde_json-1.0.124/src/de.rs:2478:22
18: serde_json::de::from_str
at /home/kalleby/.cargo/registry/src/index.crates.io-6f17d22bba15001f/serde_json-1.0.124/src/de.rs:2679:5
19: tokenizers::tokenizer::Tokenizer::from_file
at /home/kalleby/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokenizers-0.20.0/src/tokenizer/mod.rs:439:25
20: transformers_rs::pipeline::tasks::seq_to_seq::seq_to_seq
at ./src/pipeline/tasks/seq_to_seq.rs:51:21
21: app::main
at ./examples/app/src/main.rs:78:5
22: core::ops::function::FnOnce::call_on
|
https://github.com/huggingface/tokenizers/issues/1627
|
open
|
[
"Feature Request"
] | 2024-09-04T08:33:06Z
| 2024-10-06T15:34:06Z
| null |
kallebysantos
|
huggingface/optimum
| 2,013
|
Is it possible to convert decoder_model_merged.onnx to TensorRT via the trtexec command?
|
First, I converted whisper-tiny to ONNX via optimum-cli:
`optimum-cli export onnx --model openai/whisper-tiny --task automatic-speech-recognition-with-past whisper-tiny-onnx`
I got the config, encoder, and decoder_merged models.
Then I took the encoder and decoder_merged models and converted them to TensorRT via NGC version 23.09-py3. The encoder converted without problems, but decoder_merged failed during conversion:
`trtexec --onnx=/workspace/models/whisper-tiny-onnx/decoder_model_merged.onnx --saveEngine=/workspace/models/whisper-tiny-onnx/decoder_model_merged.plan`
The error is:
`[5] Assertion failed: (node.output().size() <= static_cast<int32_t>(outputs.size())) && "Node has more output tensors than TRT expected."`

Can someone help me with this, or is there another recommended approach? Please . . .
|
https://github.com/huggingface/optimum/issues/2013
|
closed
|
[] | 2024-09-03T17:52:40Z
| 2024-09-15T10:16:34Z
| 3
|
ccyrene
|
huggingface/lerobot
| 407
|
Multi-Image support for VQ-BeT
|
Hello, I wanted to ask if there is a possibility to have VQ-BeT running on multiple cameras for environments that have different views, like Robomimic? If so, can someone give me pointers on what exactly I need to change? I would be happy to submit a PR once I get it working on my side and finish the ICLR deadline!
Currently, if I understand correctly, we need to change the `VQBeTRgbEncoder`. It seems like it supports multiple camera views, but there is an [assert statement](https://github.com/huggingface/lerobot/blob/27ba2951d128a3db2497d1337031e01fb995ccfe/lerobot/common/policies/vqbet/modeling_vqbet.py#L745) that checks the length of the image views to be 1. Is there a specific reason for this assert statement, i.e., do I need to change something else?
|
https://github.com/huggingface/lerobot/issues/407
|
closed
|
[
"question",
"policies"
] | 2024-09-03T17:00:23Z
| 2025-10-08T08:27:39Z
| null |
bkpcoding
|
pytorch/vision
| 8,626
|
Better decoder docs
|
Our decoding docs are poor, disorganized, and don't have any examples.
We should improve those to clarify what is supported, how, and encourage users to rely on those.
|
https://github.com/pytorch/vision/issues/8626
|
closed
|
[] | 2024-09-03T14:47:11Z
| 2024-10-01T12:19:14Z
| 0
|
NicolasHug
|
huggingface/optimum
| 2,009
|
[Feature request] Add kwargs or additional options for torch.onnx.export
|
### Feature request
In `optimum.exporters.onnx.convert.export_pytorch`, there could be an option to pass additional kwargs that would be forwarded to the `torch.onnx.export` function.
### Motivation
Is such an option possible, or would it break any of the other features? Or is there a reason why no such option is available as of yet?
### Your contribution
I could contribute this if it doesn't break any other features or the current behavior.
|
https://github.com/huggingface/optimum/issues/2009
|
open
|
[
"onnx"
] | 2024-09-03T13:52:50Z
| 2024-10-08T15:27:26Z
| 0
|
martinkorelic
|
huggingface/speech-to-speech
| 74
|
How to integrate it with frontend
|
Hi, What steps should I follow to create a web app UI and integrate it?
Many thanks for considering my request.
|
https://github.com/huggingface/speech-to-speech/issues/74
|
open
|
[] | 2024-09-03T12:18:52Z
| 2024-09-03T13:52:08Z
| null |
shrinivasait
|
huggingface/diffusers
| 9,356
|
pipeline_stable_diffusion_xl_adapter
|
### Describe the bug
I want to rewrite the `__call__` function of pipeline_stable_diffusion_xl_adapter. When I try to use the function `prepare_ip_adapter_image_embeds`, I get an error: "AttributeError: 'NoneType' object has no attribute 'image_projection_layers'". The error tells me that the attribute `self.unet.encoder_hid_proj` is `None`. The pretrained model is 'stabilityai/stable-diffusion-xl-base-1.0'. Is there anything wrong with how I am using it? Thank you.
### Reproduction
```python
model_path = 'stabilityai/stable-diffusion-xl-base-1.0'
adapter = T2IAdapter.from_pretrained("TencentARC/t2i-adapter-openpose-sdxl-1.0",)
scheduler = DDPMScheduler.from_pretrained(model_path, subfolder="scheduler")
pipe = AdapterPosePipeline.from_pretrained(model_path, adapter=adapter, torch_dtype=torch.float16, variant="fp16", scheduler=scheduler).to(device)

image_embeds = self.prepare_ip_adapter_image_embeds(
    image,
    ip_adapter_image_embeds,
    device,
    batch_size * num_images_per_prompt,
    self.do_classifier_free_guidance,
)
```
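My current guess (unverified) is that `encoder_hid_proj` only gets populated once an IP-Adapter is actually loaded into the UNet, so something like the following would need to run before `prepare_ip_adapter_image_embeds`; please correct me if that is not the intended usage:
```python
# Unverified guess: load IP-Adapter weights so the UNet gets an encoder_hid_proj
# (and its image_projection_layers) before prepare_ip_adapter_image_embeds is called.
pipe.load_ip_adapter("h94/IP-Adapter", subfolder="sdxl_models", weight_name="ip-adapter_sdxl.bin")
pipe.set_ip_adapter_scale(0.6)
```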
### Logs
```shell
root@autodl-container-9d8d46936f-161f523c:~/autodl-tmp/COMP5704_Pose_Driven/src# python run.py
/root/miniconda3/lib/python3.12/site-packages/xformers/ops/fmha/flash.py:211: FutureWarning: `torch.library.impl_abstract` was renamed to `torch.library.register_fake`. Please use that instead; we will remove `torch.library.impl_abstract` in a future version of PyTorch.
@torch.library.impl_abstract("xformers_flash::flash_fwd")
/root/miniconda3/lib/python3.12/site-packages/xformers/ops/fmha/flash.py:344: FutureWarning: `torch.library.impl_abstract` was renamed to `torch.library.register_fake`. Please use that instead; we will remove `torch.library.impl_abstract` in a future version of PyTorch.
@torch.library.impl_abstract("xformers_flash::flash_bwd")
/root/miniconda3/lib/python3.12/site-packages/controlnet_aux/mediapipe_face/mediapipe_face_common.py:7: UserWarning: The module 'mediapipe' is not installed. The package will have limited functionality. Please install it using the command: pip install 'mediapipe'
warnings.warn(
Loading pipeline components...: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 7/7 [00:01<00:00, 4.87it/s]
/root/miniconda3/lib/python3.12/site-packages/controlnet_aux/open_pose/body.py:34: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
model_dict = util.transfer(self.model, torch.load(model_path))
/root/miniconda3/lib/python3.12/site-packages/controlnet_aux/open_pose/hand.py:14: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
model_dict = util.transfer(self.model, torch.load(model_path))
/root/miniconda3/lib/python3.12/site-packages/controlnet_aux/open_pose/face.py:325: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling.
|
https://github.com/huggingface/diffusers/issues/9356
|
open
|
[
"bug",
"stale"
] | 2024-09-03T10:25:57Z
| 2024-10-28T15:03:18Z
| 6
|
Yuhan291
|
huggingface/diffusers
| 9,352
|
Text generation?
|
Hi, thanks for this great library!
There seem to be some diffusion models that generate text instead of images. (For example, these two surveys: https://arxiv.org/abs/2303.06574, https://www.semanticscholar.org/paper/Diffusion-models-in-text-generation%3A-a-survey-Yi-Chen/41941f072db18972b610de9979e755afba35f11e). Therefore, it would be great if Diffusers could support this.
|
https://github.com/huggingface/diffusers/issues/9352
|
open
|
[
"wip"
] | 2024-09-03T06:54:38Z
| 2024-11-23T04:57:37Z
| 13
|
fzyzcjy
|
huggingface/speech-to-speech
| 71
|
How to run in ubuntu
|
I am trying to run it locally on my Ubuntu machine. I have an NVIDIA GPU and have already set up CUDA.
```
python s2s_pipeline.py \
--recv_host 0.0.0.0 \
--send_host 0.0.0.0 \
--lm_model_name microsoft/Phi-3-mini-4k-instruct \
--init_chat_role system \
--stt_compile_mode reduce-overhead \
--tts_compile_mode default
```
This is the command I ran in the terminal, but I am getting an error like this:
```
(venv) basal-desktop@basal-desktop:/media/basal-desktop/E/speech-to-speech$ python s2s_pipeline.py --recv_host 0.0.0.0 --send_host 0.0.0.0 --lm_model_name microsoft/Phi-3-mini-4k-instruct --init_chat_role system --stt_compile_mode reduce-overhead --tts_compile_mode default
[nltk_data] Downloading package averaged_perceptron_tagger_eng to
[nltk_data] /home/basal-desktop/nltk_data...
[nltk_data] Package averaged_perceptron_tagger_eng is already up-to-
[nltk_data] date!
Using cache found in /home/basal-desktop/.cache/torch/hub/snakers4_silero-vad_master
2024-09-03 11:20:08,495 - STT.whisper_stt_handler - INFO - Warming up WhisperSTTHandler
You have passed task=transcribe, but also have set `forced_decoder_ids` to [[1, None], [2, 50360]] which creates a conflict. `forced_decoder_ids` will be ignored in favor of task=transcribe.
The attention mask is not set and cannot be inferred from input because pad token is same as eos token.As a consequence, you may observe unexpected behavior. Please pass your input's `attention_mask` to obtain reliable results.
/tmp/tmp1sx5flzq/main.c:5:10: fatal error: Python.h: No such file or directory
5 | #include <Python.h>
| ^~~~~~~~~~
compilation terminated.
/tmp/tmp7dgszafh/main.c:5:10: fatal error: Python.h: No such file or directory
5 | #include <Python.h>
| ^~~~~~~~~~
compilation terminated.
/tmp/tmpgutcpzdq/main.c:5:10: fatal error: Python.h: No such file or directory
5 | #include <Python.h>
| ^~~~~~~~~~
compilation terminated.
/tmp/tmpxya7vifd/main.c:5:10: fatal error: Python.h: No such file or directory
5 | #include <Python.h>
| ^~~~~~~~~~
compilation terminated.
/tmp/tmpoxfa0b57/main.c:5:10: fatal error: Python.h: No such file or directory
5 | #include <Python.h>
| ^~~~~~~~~~
compilation terminated.
/tmp/tmp9sd15wgk/main.c:5:10: fatal error: Python.h: No such file or directory
5 | #include <Python.h>
| ^~~~~~~~~~
compilation terminated.
/tmp/tmpuimau_4j/main.c:5:10: fatal error: Python.h: No such file or directory
5 | #include <Python.h>
| ^~~~~~~~~~
compilation terminated.
/tmp/tmp2hzix58m/main.c:5:10: fatal error: Python.h: No such file or directory
5 | #include <Python.h>
| ^~~~~~~~~~
compilation terminated.
/tmp/tmppnjhbdhp/main.c:5:10: fatal error: Python.h: No such file or directory
5 | #include <Python.h>
| ^~~~~~~~~~
compilation terminated.
/tmp/tmp2dvfaztp/main.c:5:10: fatal error: Python.h: No such file or directory
5 | #include <Python.h>
| ^~~~~~~~~~
compilation terminated.
/tmp/tmpaofqmu2k/main.c:5:10: fatal error: Python.h: No such file or directory
5 | #include <Python.h>
| ^~~~~~~~~~
compilation terminated.
/tmp/tmpcnc1scdn/main.c:5:10: fatal error: Python.h: No such file or directory
5 | #include <Python.h>
| ^~~~~~~~~~
compilation terminated.
/tmp/tmpnsf4b2jl/main.c:5:10: fatal error: Python.h: No such file or directory
5 | #include <Python.h>
| ^~~~~~~~~~
compilation terminated.
/tmp/tmpf_5rg_m_/main.c:5:10: fatal error: Python.h: No such file or directory
5 | #include <Python.h>
| ^~~~~~~~~~
compilation terminated.
/tmp/tmpnf8nvq6n/main.c:5:10: fatal error: Python.h: No such file or directory
5 | #include <Python.h>
| ^~~~~~~~~~
compilation terminated.
/tmp/tmp2f8iezjt/main.c:5:10: fatal error: Python.h: No such file or directory
5 | #include <Python.h>
| ^~~~~~~~~~
compilation terminated.
/tmp/tmp_om2_15p/main.c:5:10: fatal error: Python.h: No such file or directory
5 | #include <Python.h>
| ^~~~~~~~~~
compilation terminated.
/tmp/tmpc0t1q8vd/main.c:5:10: fatal error: Python.h: No such file or directory
5 | #include <Python.h>
| ^~~~~~~~~~
compilation terminated.
/tmp/tmpdsdc_2ef/main.c:5:10: fatal error: Python.h: No such file or directory
5 | #include <Python.h>
| ^~~~~~~~~~
compilation terminated.
/tmp/tmp7h6fpvoc/main.c:5:10: fatal error: Python.h: No such file or directory
5 | #include <Python.h>
| ^~~~~~~~~~
compilation terminated.
/tmp/tmp4qfy9i7j/main.c:5:10: fatal error: Python.h: No such file or directory
5 | #include <Python.h>
| ^~~~~~~~~~
compilation terminated.
/tmp/tmpsjvhjzmz/main.c:5:10: fatal error: Py
|
https://github.com/huggingface/speech-to-speech/issues/71
|
closed
|
[] | 2024-09-03T06:02:45Z
| 2024-10-01T07:45:20Z
| null |
Basal-Analytics
|
huggingface/optimum
| 2,006
|
Support for gemma2-2b-it (Gemma 2) model export in Optimum for OpenVINO
|
### Feature request
Please provide support for gemma2 model export in Optimum for OpenVINO.
Versions: optimum 1.21.4, transformers 4.43.4
### Motivation
I encountered an issue while trying to export a gemma2 model using the optimum library for ONNX export. The error message suggests that the gemma2 model is either a custom or unsupported architecture, and I need to provide a custom export configuration.
The error is:
ValueError: Trying to export a gemma2 model, that is a custom or unsupported architecture, but no custom export configuration was passed as `custom_export_configs`. Please refer to https://huggingface.co/docs/optimum/main/en/exporters/onnx/usage_guides/export_a_model#custom-export-of-transformers-models for an example on how to export custom models. Please open an issue at https://github.com/huggingface/optimum-intel/issues if you would like the model type gemma2 to be supported natively in the OpenVINO export
### Your contribution
It would be great if support for the gemma2 model could be added natively in the optimum library for OpenVINO export. Alternatively, detailed guidance on how to create a custom export configuration for this model would be appreciated.
|
https://github.com/huggingface/optimum/issues/2006
|
open
|
[
"onnx"
] | 2024-09-03T05:54:51Z
| 2025-01-22T15:40:04Z
| 2
|
chakka12345677
|
huggingface/transformers
| 33,270
|
Static KV cache status: How to use it? Does it work for all models?
|
I see that there are many PRs about [StaticCache](https://github.com/huggingface/transformers/pulls?q=is%3Apr+StaticCache), but I couldn't find a clear documentation on how to use it.
#### What I want
* To not have Transformers allocate memory dynamically for the KV cache when using `model.generate()`, as that leads to increased memory usage (due to garbage collection not happening fast/often enough) and worse performance (see the snippet after this list).
* To use that by default always, for every model, for every supported quantization backend (AutoAWQ, AutoGPTQ, AQLM, bitsandbytes, etc).
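To make this concrete, the usage I have in mind looks roughly like the snippet below; I'm not sure whether `cache_implementation="static"` is the intended public API, or whether it works for every architecture and quantization backend:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Sketch: request a static (pre-allocated) KV cache through generate().
model_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16).to("cuda")

inputs = tokenizer("Static KV cache test:", return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=32, cache_implementation="static")
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```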
#### Who can help?
Maybe @gante
|
https://github.com/huggingface/transformers/issues/33270
|
closed
|
[] | 2024-09-03T02:17:54Z
| 2024-11-25T16:17:25Z
| null |
oobabooga
|
huggingface/transformers.js
| 917
|
Where should I get `decoder_model_merged` file from?
|
### Question
Hey,
I'm trying to use `whisper-web` demo with my finetuned model.
After I managed to connect my model to the demo application, I'm getting errors related to this:
https://github.com/xenova/transformers.js/blob/7f5081da29c3f77ee830269ab801344776e61bcb/src/models.js#L771
Basically, when `transformers.js` tries to load a whisper model, it looks for files called `decoder_model_merged.onnx` / `decoder_model_merged_quantized.onnx` / `decoder_model_merged_fp16.onnx`.
The thing is, the conversion script didn't create any of these files.
This is what the conversion script's output looks like:

Please help me figure out what I am missing here.
P.S. Once I get it to work, I'll be happy to open a PR on the `whisper-web` repository that enables using local models together with remote (HF Hub) models.
Thanks !
|
https://github.com/huggingface/transformers.js/issues/917
|
closed
|
[
"question"
] | 2024-09-02T07:30:57Z
| 2025-02-26T12:05:05Z
| null |
abuchnick-aiola
|
huggingface/diffusers
| 9,339
|
SD3 inpainting
|
I found the StableDiffusion3InpaintPipeline; where can I find the weights for SD3 inpainting?
|
https://github.com/huggingface/diffusers/issues/9339
|
closed
|
[
"stale"
] | 2024-09-02T05:00:19Z
| 2024-10-02T15:43:24Z
| 5
|
ucasyjz
|
pytorch/torchtitan
| 566
|
Multi-node training without AWS EFA clusters
|
Thank you so much for releasing code for this great project!
For multi-node training, right now I've only found commands in `multinode_trainer.slurm`, which seem to be specific to AWS EFA slurm clusters.
I'm wondering if it's possible to try multi-node training without the AWS setup, say with simply the IPs of 2 nodes instead?
Thank you very much for your help!
|
https://github.com/pytorch/torchtitan/issues/566
|
closed
|
[
"question"
] | 2024-08-31T22:41:04Z
| 2024-09-04T20:55:50Z
| null |
LeoXinhaoLee
|
huggingface/transformers
| 33,232
|
How to use Hugging Face for training: google-t5/t5-base
|
### Feature request
How do I use Hugging Face for training:
https://github.com/huggingface/transformers/tree/main/examples/pytorch/translation
What is the format and how do I write it?
```python
def batch_collator(data):
    print(data)  # what does `data` look like here?
    return {
        'pixel_values': torch.stack([x['pixel_values'] for x in data]),
        'labels': torch.tensor([x['labels'] for x in data])
    }

trainer = Trainer(
    model=model,
    args=training_args,
    data_collator=batch_collator,  # how should this be written?
    train_dataset=dataset['train'],
)
```
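From what I can tell, for T5 translation the collator is usually `DataCollatorForSeq2Seq` rather than a hand-written function. Here is a sketch of what I think it should look like, assuming the dataset has already been tokenized into `input_ids`/`labels` (please correct me if this is wrong):
```python
from transformers import (AutoTokenizer, AutoModelForSeq2SeqLM,
                          DataCollatorForSeq2Seq, Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("google-t5/t5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("google-t5/t5-base")

# Pads input_ids/attention_mask, and pads labels with -100 so padding is ignored by the loss.
data_collator = DataCollatorForSeq2Seq(tokenizer, model=model)

training_args = TrainingArguments(output_dir="t5-translation", per_device_train_batch_size=8)

trainer = Trainer(
    model=model,
    args=training_args,
    data_collator=data_collator,
    train_dataset=tokenized_dataset["train"],  # assumed: a dataset already tokenized into input_ids/labels
)
```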
### Motivation
None
### Your contribution
None
I have already tried it and it works: https://www.kaggle.com/code/weililong/google-t5-t5-base
I just don't know whether there are any pitfalls.
|
https://github.com/huggingface/transformers/issues/33232
|
open
|
[
"Usage",
"Feature request"
] | 2024-08-31T07:41:18Z
| 2024-09-09T08:45:50Z
| null |
gg22mm
|
pytorch/pytorch
| 134,901
|
How to calculate second derivative using PyTorch with GPU (cuda)
|
### 🚀 The feature, motivation and pitch
I have a Python code segment related to a deep RL algorithm that performs second-order optimization and computes second derivatives with the Hessian and Fisher information matrices. Normally I run the whole code on the GPU (CUDA), but I ran into a computational issue when calculating the second derivative on CUDA:
```
NotImplementedError: the derivative for '_cudnn_rnn_backward' is not implemented. Double backwards is not supported for CuDNN RNNs due to limitations in the CuDNN API. To run double backwards, please disable the CuDNN backend temporarily while running the forward pass of your RNN. For example:
with torch.backends.cudnn.flags(enabled=False):
output = model(inputs)
```
I had to move to CPU for this code segment, and now the code is executing sequentially instead of in parallel, which takes a long time to run:
```
grads = torch.autograd.grad(policy_loss, self.policy.Actor.parameters(), retain_graph=True)
loss_grad = torch.cat([grad.view(-1) for grad in grads])
def Fvp_fim(v = -loss_grad):
with torch.backends.cudnn.flags(enabled=False):
M, mu, info = self.policy.Actor.get_fim(states_batch)
#pdb.set_trace()
mu = mu.view(-1)
filter_input_ids = set([info['std_id']])
t = torch.ones(mu.size(), requires_grad=True, device=mu.device)
mu_t = (mu * t).sum()
Jt = compute_flat_grad(mu_t, self.policy.Actor.parameters(), filter_input_ids=filter_input_ids, create_graph=True)
Jtv = (Jt * v).sum()
Jv = torch.autograd.grad(Jtv, t)[0]
MJv = M * Jv.detach()
mu_MJv = (MJv * mu).sum()
JTMJv = compute_flat_grad(mu_MJv, self.policy.Actor.parameters(), filter_input_ids=filter_input_ids, create_graph=True).detach()
JTMJv /= states_batch.shape[0]
std_index = info['std_index']
JTMJv[std_index: std_index + M.shape[0]] += 2 * v[std_index: std_index + M.shape[0]]
return JTMJv + v * self.damping
```
Above is the main function, where it calculates the second derivative. below are the supportive functions and relevant classes it has used.
```
def compute_flat_grad(output, inputs, filter_input_ids=set(), retain_graph=True, create_graph=False):
if create_graph:
retain_graph = True
inputs = list(inputs)
params = []
for i, param in enumerate(inputs):
if i not in filter_input_ids:
params.append(param)
grads = torch.autograd.grad(output, params, retain_graph=retain_graph, create_graph=create_graph, allow_unused=True)
j = 0
out_grads = []
for i, param in enumerate(inputs):
if (i in filter_input_ids):
out_grads.append(torch.zeros(param.view(-1).shape, device=param.device, dtype=param.dtype))
else:
if (grads[j] == None):
out_grads.append(torch.zeros(param.view(-1).shape, device=param.device, dtype=param.dtype))
else:
out_grads.append(grads[j].view(-1))
j += 1
grads = torch.cat(out_grads)
for param in params:
param.grad = None
return grads
------
import torch
import torch.nn as nn
from agents.models.feature_extracter import LSTMFeatureExtractor
from agents.models.policy import PolicyModule
from agents.models.value import ValueModule
class ActorNetwork(nn.Module):
def __init__(self, args):
super(ActorNetwork, self).__init__()
self.FeatureExtractor = LSTMFeatureExtractor(args)
self.PolicyModule = PolicyModule(args)
def forward(self, s):
lstmOut = self.FeatureExtractor.forward(s)
mu, sigma, action, log_prob = self.PolicyModule.forward(lstmOut)
return mu, sigma, action, log_prob
def get_fim(self, x):
mu, sigma, _, _ = self.forward(x)
if sigma.dim() == 1:
sigma = sigma.unsqueeze(0)
cov_inv = sigma.pow(-2).repeat(x.size(0), 1)
param_count = 0
std_index = 0
id = 0
std_id = id
for name, param in self.named_parameters():
if name == "sigma.weight":
std_id = id
std_index = param_count
param_count += param.view(-1).shape[0]
id += 1
return cov_inv.detach(), mu, {'std_id': std_id, 'std_index': std_index}
```
In the bigger picture, a large number of batches go through this function, and since all of them have to pass through it sequentially, the total running time increases significantly. Is there a way to calculate the second derivative with PyTorch while running on CUDA/GPU?
### Alternatives
_No response_
### Additional context
_No response_
cc @csarofeen @ptrblck @xwang233 @ezyang @albanD @gqchen @pearu @nikitaved @soulitzer @Varal7 @xmfan @mikaylagawarecki @zou3519 @Chillee @samdow @kshitij12345
|
https://github.com/pytorch/pytorch/issues/134901
|
open
|
[
"module: double backwards",
"module: cudnn",
"module: autograd",
"module: rnn",
"triaged",
"module: functorch"
] | 2024-08-31T04:01:40Z
| 2024-09-04T01:48:21Z
| null |
Damika-Anupama
|
huggingface/transformers
| 33,228
|
How to obtain batch index of validation dataset?
|
Hi,
I wanted to know how would we fetch the batch id/index of the eval dataset in ```preprocess_logits_for_metrics()``` ?
Thanks in advance!
|
https://github.com/huggingface/transformers/issues/33228
|
closed
|
[
"Usage"
] | 2024-08-31T00:11:13Z
| 2024-10-13T08:04:26Z
| null |
SoumiDas
|
huggingface/transformers
| 33,210
|
The model's address is https://huggingface.co/Xenova/nllb-200-distilled-600M/tree/main/onnx. I don't know how to load encoder.onnx and decoder.onnx and successfully translate a sentence into another language. Can you help me write inference code to achieve translation through the encoder and decoder? Thank you
|
### Feature request
Hello, the model's address is https://huggingface.co/Xenova/nllb-200-distilled-600M/tree/main/onnx. I don't know how to load encoder.onnx and decoder.onnx and successfully translate a sentence into another language. Can you help me write inference code to achieve translation through the encoder and decoder? Thank you.
### Motivation
Hello, the model's address is https://huggingface.co/Xenova/nllb-200-distilled-600M/tree/main/onnx. I don't know how to load encoder.onnx and decoder.onnx and successfully translate a sentence into another language. Can you help me write inference code to achieve translation through the encoder and decoder? Thank you.
### Your contribution
Hello, the model's address is https://huggingface.co/Xenova/nllb-200-distilled-600M/tree/main/onnx. I don't know how to load encoder.onnx and decoder.onnx and successfully translate a sentence into another language. Can you help me write inference code to achieve translation through the encoder and decoder? Thank you.
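For what it's worth, one route I am considering is going through Optimum instead of loading the Xenova ONNX files directly. This is only a sketch that re-exports the original checkpoint to ONNX and runs the encoder and decoder through onnxruntime; I would still prefer to use the Xenova `onnx/` files as-is:
```python
from optimum.onnxruntime import ORTModelForSeq2SeqLM
from transformers import AutoTokenizer

# Sketch: export facebook/nllb-200-distilled-600M to ONNX on the fly and translate with it.
model_id = "facebook/nllb-200-distilled-600M"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = ORTModelForSeq2SeqLM.from_pretrained(model_id, export=True)

inputs = tokenizer("Hello, how are you?", return_tensors="pt")
generated = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.convert_tokens_to_ids("fra_Latn"),  # target language code
    max_new_tokens=64,
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```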
|
https://github.com/huggingface/transformers/issues/33210
|
open
|
[
"Feature request"
] | 2024-08-30T09:33:01Z
| 2024-10-22T07:18:15Z
| null |
pengpengtao
|
huggingface/dataset-viewer
| 3,054
|
Image URL detection
|
[`is_image_url`](https://github.com/huggingface/dataset-viewer/blob/946b0788fa426007161f2077a70b5ae64b211cf8/libs/libcommon/src/libcommon/utils.py#L131-L134) relies on a filename and extension being present, however, in some cases an image URL does not contain a filename. Example [dataset](https://huggingface.co/datasets/bigdata-pw/SteamScreenshots) and example [URL](https://steamuserimages-a.akamaihd.net/ugc/910172100453203507/062F4787060B2E4E93EFC4631E96183B027A860B/). This could be improved by checking the `content-type` header of the response or checking for strings like "image" in the URL.
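For illustration, a minimal sketch of the header-based fallback (assuming a blocking HTTP HEAD request is acceptable in that code path):
```python
import requests

def is_probably_image_url(url: str, timeout: float = 5.0) -> bool:
    # Fall back to the content-type header when the URL has no recognizable filename/extension.
    try:
        response = requests.head(url, allow_redirects=True, timeout=timeout)
        return response.headers.get("content-type", "").startswith("image/")
    except requests.RequestException:
        return False

print(is_probably_image_url(
    "https://steamuserimages-a.akamaihd.net/ugc/910172100453203507/062F4787060B2E4E93EFC4631E96183B027A860B/"
))
```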
|
https://github.com/huggingface/dataset-viewer/issues/3054
|
open
|
[
"question",
"improvement / optimization",
"P2"
] | 2024-08-29T23:17:55Z
| 2025-07-04T09:37:23Z
| null |
hlky
|
huggingface/transformers.js
| 911
|
Next.js example breaks with v3
|
### Question
Are there steps documented anywhere for running V3 in your app? I'm trying to test it out via these steps:
1. Pointing to the alpha in my `package.json`: `"@huggingface/transformers": "^3.0.0-alpha.10",`
2. `npm i`
3. `cd node_modules/@huggingface/transformers && npm i`
4. copy the [webpack.config.js](https://github.com/xenova/transformers.js/blob/main/webpack.config.js) from the repo into the node_modules/@huggingface/transformers dir.
5. `npm run build` in the node_modules/@huggingface/transformers dir.
I then run my app, and get the following error:
```
ERROR in ../../node_modules/@huggingface/transformers/dist/transformers.js 42256:34-64
Module not found: Error: Can't resolve './' in '/node_modules/@huggingface/transformers/dist'
webpack compiled with 1 error
```
Thanks, I'm excited to test out the latest and greatest!
|
https://github.com/huggingface/transformers.js/issues/911
|
closed
|
[
"question"
] | 2024-08-29T20:17:03Z
| 2025-02-16T12:35:47Z
| null |
stinoga
|
pytorch/xla
| 7,925
|
Prepare a documentation to explain the use cases for `torch.compile`, `torch_xla.compile`, torch_xla eager mode, torchxla2
|
## 📚 Documentation
Author documentation to explain the use cases for `torch.compile`, `torch_xla.compile`, torch_xla eager mode, and torchxla2. Users and customers look for clarity on the utility of each option, the pros/cons, and a small example to demonstrate correct use.
cc @ManfeiBai @JackCaoG @will-cromar @qihqi
|
https://github.com/pytorch/xla/issues/7925
|
closed
|
[
"documentation"
] | 2024-08-29T17:01:06Z
| 2024-09-24T18:33:39Z
| 2
|
miladm
|
pytorch/torchtitan
| 562
|
Pipeline Parallelism + FSDP
|
On `PP + FSDP` and `PP + TP + FSDP`:
- Is there any documentation on how these different parallelisms compose?
- What are the largest training runs these strategies have been tested on?
- Are there benchmarks for how these strategies compare against other distributed training frameworks that expose similar parallelisms?
Particularly interested in how `PP + FSDP` work together as it seems DeepSpeed explicitly disallows `ZeRO 2/3 + PP` (see [here](https://github.com/microsoft/DeepSpeed/blob/4864991f53bd2e12446198bcc655f919eb9157f9/deepspeed/runtime/pipe/engine.py#L77-L78) specifically, and [here](https://github.com/microsoft/DeepSpeed/issues/1110) for discussion).
@wconstab @weifengpy @wanchaol
|
https://github.com/pytorch/torchtitan/issues/562
|
open
|
[
"enhancement",
"question",
"module: pipelining"
] | 2024-08-29T14:19:58Z
| 2025-10-30T06:21:51Z
| null |
jeromeku
|
huggingface/diffusers
| 9,317
|
Finetuning on dataset
|
dear @thedarkzeno and @patil-suraj
Thank you so much for putting your work out there. I wanted to ask: how would training work on a dataset rather than a single instance image, as in train_dreambooth_inpaint? And can I fine-tune models trained with the https://github.com/CompVis/latent-diffusion repository?
Thanks in advance
|
https://github.com/huggingface/diffusers/issues/9317
|
closed
|
[
"stale"
] | 2024-08-29T12:20:51Z
| 2024-10-23T16:10:47Z
| 4
|
ultiwinter
|
pytorch/pytorch
| 134,760
|
How to correctly release the memory of a tensor
|
I have found that the memory increase happens at this line:
[param.copy_(input_param)](https://github.com/pytorch/pytorch/blob/d01a7a9faa5a742a3df7374b97bbc1db1205b6ed/torch/nn/modules/module.py#L2425)
but the memory can't be fully released after the module is used.
What is happening here, and how do I correctly release the memory of a tensor?
[more detail](https://github.com/comfyanonymous/ComfyUI/issues/4655#issuecomment-2317354203)
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki
|
https://github.com/pytorch/pytorch/issues/134760
|
closed
|
[
"module: nn",
"module: memory usage",
"triaged"
] | 2024-08-29T11:39:12Z
| 2024-08-30T08:24:23Z
| null |
huangqiaobo
|
huggingface/optimum-quanto
| 300
|
How to quantize, save and load Stable Diffusion 3 model.
|
```python
import torch
from optimum.quanto import qint2, qint4, qint8, quantize, freeze
from diffusers import StableDiffusion3Pipeline

pipe = StableDiffusion3Pipeline.from_pretrained("stabilityai/stable-diffusion-3-medium-diffusers", torch_dtype=torch.bfloat16)

quantize(pipe.text_encoder, weights=qint4)
freeze(pipe.text_encoder)
quantize(pipe.text_encoder_3, weights=qint4)
freeze(pipe.text_encoder_3)
quantize(pipe.transformer, weights=qint8, exclude="proj_out")
freeze(pipe.transformer)

pipe = pipe.to("cuda")
pipe.save_pretrained("/content/drive/MyDrive/quantized_Stable_diffusion_1")
```
After saving, how can I load this model from this directory and perform text-to-image generation?
|
https://github.com/huggingface/optimum-quanto/issues/300
|
closed
|
[
"Stale"
] | 2024-08-29T06:24:02Z
| 2024-10-06T02:06:30Z
| null |
jainrahul52
|
huggingface/optimum
| 2,002
|
Is it possible to infer the model separately through encoder.onnx and decoder.onnx
|
### Feature request
Is it possible to infer the model separately through encoder.onnx and decoder.onnx
### Motivation
Is it possible to infer the model separately through encoder.onnx and decoder.onnx
### Your contribution
Is it possible to infer the model separately through encoder.onnx and decoder.onnx
|
https://github.com/huggingface/optimum/issues/2002
|
open
|
[
"onnx"
] | 2024-08-29T03:26:20Z
| 2024-10-08T15:28:59Z
| 0
|
pengpengtao
|
pytorch/TensorRT
| 3,124
|
❓ [Question] dynamo conversion failing w/ TRTInterpreter
|
## ❓ Question
I'm able to `torch.export` and generate an ExportedProgram with no issues for my model. Upon compiling with `torch_tensorrt`...
```python
ep = torch.export.load("...")
example_inputs = ep.example_inputs[0]
model = ep.module().to("cuda")
compile_spec = {
"ir": "torch_compile",
"inputs": example_inputs,
"enabled_precisions": enabled_precisions,
"workspace_size": workspace_size,
"min_block_size": min_block_size,
"torch_executed_ops": {},
"sparse_weights": True,
}
optimized_model = torch_tensorrt.compile(model, **compile_spec)
```
... i run into this error:
```
ERROR:torch_tensorrt [TensorRT Conversion Context]:INetworkDefinition::addConstant: Error Code 3: API Usage Error (Parameter check failed, condition: !weights.values == !weights.count. )
Traceback (most recent call last):
...
File ".../lib/python3.10/site-packages/torch_tensorrt/dynamo/conversion/_TRTInterpreter.py", line 479, in run
self._construct_trt_network_def()
File ".../lib/python3.10/site-packages/torch_tensorrt/dynamo/conversion/_TRTInterpreter.py", line 325, in _construct_trt_network_def
super().run()
File ".../lib/python3.10/site-packages/torch/fx/interpreter.py", line 145, in run
self.env[node] = self.run_node(node)
File ".../lib/python3.10/site-packages/torch_tensorrt/dynamo/conversion/_TRTInterpreter.py", line 529, in run_node
trt_node: torch.fx.Node = super().run_node(n)
File ".../lib/python3.10/site-packages/torch/fx/interpreter.py", line 202, in run_node
return getattr(self, n.op)(n.target, args, kwargs)
File ".../lib/python3.10/site-packages/torch_tensorrt/dynamo/conversion/_TRTInterpreter.py", line 638, in call_function
return converter(self.ctx, target, args, kwargs, self._cur_node_name)
File ".../lib/python3.10/site-packages/torch_tensorrt/dynamo/conversion/aten_ops_converters.py", line 242, in aten_ops_cat
return impl.cat.cat(
File ".../lib/python3.10/site-packages/torch_tensorrt/dynamo/conversion/impl/cat.py", line 31, in cat
each_input = get_trt_tensor(ctx, each_input, f"{name}_tensor_{i}")
File ".../lib/python3.10/site-packages/torch_tensorrt/dynamo/conversion/converter_utils.py", line 384, in get_trt_tensor
return create_constant(ctx, input_val, name, dtype, min_rank)
File ".../lib/python3.10/site-packages/torch_tensorrt/dynamo/conversion/converter_utils.py", line 349, in create_constant
constant.name = name
torch._dynamo.exc.BackendCompilerFailed: backend='torch_tensorrt_backend' raised:
AttributeError: 'NoneType' object has no attribute 'name'
```
I'm currently able to cleanly generate an `ExportedProgram` via `torch.export`, and outputs from the trace match the original PyTorch model. In particular, it's unclear to me why `!weights.values == !weights.count` would be an `API Usage Error`, and why there is a discrepancy between torch.compile and how torch_tensorrt interprets/performs the op conversion (torch.compile on the ExportedProgram module works fine).
## What you have already tried
I've narrowed the issue down to a single module that does positional encoding. The output of this module is then concatenated with another tensor, which is where the error above occurs. Without this module, everything works as expected, and I'm able to see about a 5x speedup.
The only unique thing about this module is that it has a buffer and some in-place operations; however, I've dumped and manually inspected the fx Graph and the trace looks correct (buffer lifted as a constant input). Other things I've done: rewriting the forward so that there are no in-place operations, to make graph capture easier.
## Environment
> Build information about Torch-TensorRT can be found by turning on debug messages
- PyTorch Version (e.g., 1.0): 2.4
- CPU Architecture: aarch64
- OS (e.g., Linux): Ubuntu
- How you installed PyTorch (`conda`, `pip`, `libtorch`, source): pip
- Build command you used (if compiling from source): modified bazel build rules + install
- Are you using local sources or building from archives: local build from source
- Python version: 3.10
- CUDA version: 12.4
- GPU models and configuration: Ampere (Jetson Nano, JetPack 6.0)
- Any other relevant information: i compiled torch_tensorrt on HEAD of main as of last Friday (8/23)
## Additional context
cc @narendasan not sure if you have any insight here. thanks!
|
https://github.com/pytorch/TensorRT/issues/3124
|
open
|
[
"question"
] | 2024-08-28T20:09:48Z
| 2024-09-06T19:36:58Z
| null |
patrick-botco
|
pytorch/tutorials
| 3,017
|
💡 [REQUEST] - What is purpose of `out.backward(torch.randn(1, 10))` in neural_networks_tutorial
|
### 🚀 Describe the improvement or the new tutorial
In [neural networks tutorial for beginners](https://pytorch.org/tutorials/beginner/blitz/neural_networks_tutorial.html), we have the following:
Zero the gradient buffers of all parameters and backprops with random gradients:
```
net.zero_grad()
out.backward(torch.randn(1, 10))
```
What is the purpose of this? It is not part of standard ML workflows and can be confusing to beginners. (As evidence, I am helping some people learn the basics of ML and I got questions about this line. That is how I found out about it!)
If there is no good reason for it, then I suggest:
- dropping these few lines
- changing wording of other parts of the page if needed. E.g. 'at this point we covered... calling backward'
### Existing tutorials on this topic
_No response_
### Additional context
_No response_
cc @subramen @albanD
|
https://github.com/pytorch/tutorials/issues/3017
|
open
|
[
"question",
"intro",
"core"
] | 2024-08-28T14:51:46Z
| 2025-04-16T18:24:08Z
| null |
Lovkush-A
|
huggingface/diffusers
| 9,303
|
[Add] VEnhancer - the interpolation and upscaler for CogVideoX-5b
|
### Model/Pipeline/Scheduler description
VEnhancer, a generative space-time enhancement framework that can improve the existing T2V results.
https://github.com/Vchitect/VEnhancer
### Open source status
- [X] The model implementation is available.
- [X] The model weights are available (Only relevant if addition is not a scheduler).
### Provide useful links for the implementation
_No response_
|
https://github.com/huggingface/diffusers/issues/9303
|
open
|
[
"stale"
] | 2024-08-28T14:43:32Z
| 2024-12-11T15:04:32Z
| 3
|
tin2tin
|
huggingface/text-generation-inference
| 2,466
|
Guide on how to use TensorRT-LLM Backend
|
### Feature request
Does any documentation exist, or would it be possible to add documentation, on how to use the TensorRT-LLM backend? #2458 makes mention that the TRT-LLM backend exists, and I can see that there's a Dockerfile for TRT-LLM, but I don't see any guides on how to build/use it.
### Motivation
I would like to run TensorRT-LLM models using TGI.
### Your contribution
I'm willing to test any builds/processes/pipelines that are available.
|
https://github.com/huggingface/text-generation-inference/issues/2466
|
open
|
[] | 2024-08-28T13:24:26Z
| 2025-05-18T16:23:14Z
| null |
michaelthreet
|
huggingface/lerobot
| 390
|
[Feature Request] Add end effector pos field in lerobot dataset?
|
An Aloha-style joint-space dataset limits the data to that specific robot. Can we convert the joint-space data, or add an end-effector field in Cartesian space based on the robot's URDF file?
It may help the robotics community build a more generalized policy.
|
https://github.com/huggingface/lerobot/issues/390
|
closed
|
[
"question",
"dataset",
"robots"
] | 2024-08-28T13:19:15Z
| 2024-08-29T09:55:27Z
| null |
hilookas
|
huggingface/datasets
| 7,129
|
Inconsistent output in documentation example: `num_classes` not displayed in `ClassLabel` output
|
In the documentation for [ClassLabel](https://huggingface.co/docs/datasets/v2.21.0/en/package_reference/main_classes#datasets.ClassLabel), there is an example of usage with the following code:
````
from datasets import Features
features = Features({'label': ClassLabel(num_classes=3, names=['bad', 'ok', 'good'])})
features
````
which expects to output (as stated in the documentation):
````
{'label': ClassLabel(num_classes=3, names=['bad', 'ok', 'good'], id=None)}
````
but it generates the following
````
{'label': ClassLabel(names=['bad', 'ok', 'good'], id=None)}
````
If my understanding is correct, this happens because although num_classes is used during the init of the object, it is afterward ignored:
https://github.com/huggingface/datasets/blob/be5cff059a2a5b89d7a97bc04739c4919ab8089f/src/datasets/features/features.py#L975
I would like to work on this issue if this is something needed 😄
|
https://github.com/huggingface/datasets/issues/7129
|
closed
|
[] | 2024-08-28T12:27:48Z
| 2024-12-06T11:32:02Z
| 0
|
sergiopaniego
|
huggingface/diffusers
| 9,299
|
CUDAGRAPHs for Flux position embeddings
|
@yiyixuxu
Is it possible to refactor the Flux positional embeddings so that we can fully make use of CUDAGRAPHs?
```bash
skipping cudagraphs due to skipping cudagraphs due to cpu device (device_put). Found from :
File "/home/sayak/diffusers/src/diffusers/models/transformers/transformer_flux.py", line 469, in forward
image_rotary_emb = self.pos_embed(ids)
File "/home/sayak/.pyenv/versions/diffusers/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
return forward_call(*args, **kwargs)
File "/home/sayak/diffusers/src/diffusers/models/embeddings.py", line 630, in forward
self.axes_dim[i], pos[:, i], repeat_interleave_real=True, use_real=True, freqs_dtype=freqs_dtype
```
<details>
<summary>Code</summary>
```python
import torch
torch.set_float32_matmul_precision("high")
torch._inductor.conv_1x1_as_mm = True
torch._inductor.coordinate_descent_tuning = True
torch._inductor.epilogue_fusion = False
torch._inductor.coordinate_descent_check_all_directions = True
import diffusers
from platform import python_version
from diffusers import DiffusionPipeline
print(diffusers.__version__)
print(torch.__version__)
print(python_version())
pipe = DiffusionPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16).to("cuda")
pipe.transformer.to(memory_format=torch.channels_last)
pipe.vae.to(memory_format=torch.channels_last)
pipe.transformer = torch.compile(pipe.transformer, mode="max-autotune", fullgraph=True)
pipe.vae.decode = torch.compile(pipe.vae.decode, mode="max-autotune", fullgraph=True)
for _ in range(5):
image = pipe(
"Happy bear",
num_inference_steps=5,
guidance_scale=3.5,
max_sequence_length=512,
generator=torch.manual_seed(42),
height=1024,
width=1024,
).images[0]
```
</details>
If we can fully make use of CUDA graphs, `torch.compile()` would be faster.
|
https://github.com/huggingface/diffusers/issues/9299
|
closed
|
[] | 2024-08-28T11:33:16Z
| 2024-08-29T19:37:17Z
| 0
|
sayakpaul
|
pytorch/pytorch
| 134,668
|
Does tensor parallelism support overlapping communication with computation for gradient computation, and how can this be implemented?
|
### 🚀 The feature, motivation and pitch
I want to know how to overlap communication with computation when computing gradients after row-wise/column-wise sharding of a linear layer. Thanks.
The documentation I am following is:
https://pytorch.org/docs/2.3/distributed.tensor.parallel.html
### Alternatives
_No response_
### Additional context
_No response_
cc @XilunWu @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
|
https://github.com/pytorch/pytorch/issues/134668
|
open
|
[
"oncall: distributed",
"triaged"
] | 2024-08-28T11:06:58Z
| 2024-08-30T17:54:43Z
| null |
Xingzhi107
|
huggingface/transformers.js
| 906
|
Unsupported model type: jais
|
### Question
### System Info
macOS, node v20.10, @xenova/transformers 2.17.2
### Environment/Platform
- [ ] Website/web-app
- [ ] Browser extension
- [x] Server-side (e.g., Node.js, Deno, Bun)
- [ ] Desktop app (e.g., Electron)
- [ ] Other (e.g., VSCode extension)
### Description
```
Error: Unsupported model type: jais
at Function.from_pretrained (file:///node_modules/@xenova/transformers/src/models.js:5526:19)
at async Promise.all (index 1)
at loadItems (file:///node_modules/@xenova/transformers/src/pipelines.js:3279:5)
at pipeline (file:///node_modules/@xenova/transformers/src/pipelines.js:3219:21)
at SearchQueryParser.initializeModel (src/search-engine/query-parser/search-query-parser.ts:27:18)
```
### Reproduction
```javascript
import { Logger } from '@nestjs/common';
export class SearchQueryParser {
private tokenizer: any;
private model: any;
private logger: Logger;
private systemPrompt = '';
constructor() {
this.logger = new Logger('query parser');
this.initializeModel();
}
private async initializeModel() {
const { AutoTokenizer, pipeline } = await import('@xenova/transformers');
this.tokenizer = await AutoTokenizer.from_pretrained(
'omarabb315/Query-5KM-no_synonyms_noon_1',
{
progress_callback: (data) => {
this.logger.verbose(
`${data.status} ${data.file || ''} ${data.progress || ''}`,
);
},
},
);
this.model = await pipeline(
'text-generation',
'omarabb315/Query-5KM-no_synonyms_noon_1',
);
}
async parse(query: string): Promise<any> {
if (!this.model) {
await this.initializeModel();
}
const tokenizerResponse = this.tokenizer.apply_chat_template(
[
{ role: 'system', content: this.systemPrompt },
{ role: 'user', content: query },
],
{
tokenize: false,
add_generation_prompt: true,
},
);
const response = await this.model(tokenizerResponse.toString());
const parsedQuery = response[0].generated_text;
return parsedQuery;
}
}
```
|
https://github.com/huggingface/transformers.js/issues/906
|
closed
|
[
"question"
] | 2024-08-28T09:46:17Z
| 2024-08-28T21:01:10Z
| null |
SherifElfadaly
|
huggingface/trl
| 1,986
|
How to convert DPO data to KTO data
|
### Feature request
How can I convert a DPO-format dataset to a KTO-format dataset?
### Motivation
How can I convert a DPO-format dataset to a KTO-format dataset?
### Your contribution
How can I convert a DPO-format dataset to a KTO-format dataset?
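To make the question concrete, this is the kind of conversion I mean: a sketch assuming DPO rows have `prompt`/`chosen`/`rejected` columns and the KTO trainer wants `prompt`/`completion`/`label`:
```python
from datasets import Dataset

# Toy DPO-style dataset (hypothetical example rows).
dpo_ds = Dataset.from_dict({
    "prompt": ["What is 2+2?"],
    "chosen": ["4"],
    "rejected": ["5"],
})

def dpo_to_kto(batch):
    prompts, completions, labels = [], [], []
    for prompt, chosen, rejected in zip(batch["prompt"], batch["chosen"], batch["rejected"]):
        prompts += [prompt, prompt]
        completions += [chosen, rejected]
        labels += [True, False]  # chosen -> desirable, rejected -> undesirable
    return {"prompt": prompts, "completion": completions, "label": labels}

kto_ds = dpo_ds.map(dpo_to_kto, batched=True, remove_columns=dpo_ds.column_names)
print(kto_ds[0], kto_ds[1])
```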
|
https://github.com/huggingface/trl/issues/1986
|
closed
|
[] | 2024-08-28T06:23:13Z
| 2024-08-28T09:02:35Z
| null |
dotsonliu
|
pytorch/ao
| 763
|
How to reduce autoquant compilation time
|
Autoquant has been popular among the diffusers crowd since its out-of-the-box performance has been the best, but the main issue is that compile times are quite long. There are a few strategies to mitigate this:
1. Tune faster: either with better heuristics or a faster tuning core loop
2. Cache things: It's fine if tuning takes a long time as long as subsequent tunings take less time, so we could have a cache. Right now some users are conflating the kernel autotune cache with an autoquant cache. It probably makes sense to hide the autotune cache.
3. Print progress more verbosely: Since people are waiting for a long time we can print a nice report to make things more appealing
|
https://github.com/pytorch/ao/issues/763
|
open
|
[] | 2024-08-27T20:49:09Z
| 2024-08-28T17:36:03Z
| null |
msaroufim
|
huggingface/datasets
| 7,128
|
Filter Large Dataset Entry by Entry
|
### Feature request
I am not sure if this is a new feature, but I wanted to post this problem here, and hear if others have ways of optimizing and speeding up this process.
Let's say I have a really large dataset that I cannot load into memory. At this point, I am only aware of `streaming=True` to load the dataset. Now, the dataset consists of many tables. Ideally, I would want to have some simple filtering criterion, such that I only see the "good" tables. Here is an example of what the code might look like:
```
from itertools import islice
from datasets import load_dataset

dataset = load_dataset(
"really-large-dataset",
streaming=True
)
# And let's say we process the dataset bit by bit because we want intermediate results
dataset = islice(dataset, 10000)
# Define a function to filter the data
def filter_function(table):
if some_condition:
return True
else:
return False
# Use the filter function on your dataset
filtered_dataset = (ex for ex in dataset if filter_function(ex))
```
And then I work on the processed dataset, which would be orders of magnitude faster than working on the original. I would love to hear whether the problem setup + solution makes sense to people, and whether anyone has suggestions!
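For comparison, here is the built-in route I would expect to use: a sketch assuming `IterableDataset.filter` evaluates lazily while streaming (the dataset name and the filter criterion are placeholders):
```python
from itertools import islice
from datasets import load_dataset

dataset = load_dataset("really-large-dataset", split="train", streaming=True)

def filter_function(example):
    # placeholder criterion: keep only the "good" tables
    return example.get("quality", 0) > 0.5

filtered = dataset.filter(filter_function)  # lazy: nothing is downloaded or decoded yet
for example in islice(filtered, 10_000):
    ...  # work on the filtered stream bit by bit
```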
### Motivation
See description above
### Your contribution
Happy to make PR if this is a new feature
|
https://github.com/huggingface/datasets/issues/7128
|
open
|
[
"enhancement"
] | 2024-08-27T20:31:09Z
| 2024-10-07T23:37:44Z
| 4
|
QiyaoWei
|
huggingface/huggingface_hub
| 2,491
|
How to upload folders into a repo in the most effective way - continue/resume on error, max speed
|
Hello. I have the upload tasks below, however I am not sure if they are the most effective way of doing this.
#### This cell is used to upload a single file into a repo with a certain name
```
from huggingface_hub import HfApi
api = HfApi()
api.upload_file(
path_or_fileobj=r"/home/Ubuntu/apps/stable-diffusion-webui/models/Stable-diffusion/model_name.safetensors",
path_in_repo="model_name.safetensors",
repo_id="YourUserName/reponame",
repo_type="model",
)
```
#### This cell is used to upload a folder into a repo with a single commit
```
from huggingface_hub import HfApi
api = HfApi()
api.upload_folder(
folder_path=r"/home/Ubuntu/apps/stable-diffusion-webui/models/Stable-diffusion",
repo_id="YourUserName/reponame",
repo_type="model",
)
```
This one is especially slow whenever I run it. I think it re-calculates SHA hashes to check whether files have been modified.
#### This cell uploads a folder into a remote repo with multiple commits
#### It supports a continue feature, so if it gets interrupted you can run it again to continue / resume
```
from huggingface_hub import HfApi
from huggingface_hub import get_collection, delete_collection_item
from huggingface_hub import upload_file
from huggingface_hub import (
HfFolder,
ModelCard,
ModelCardData,
create_repo,
hf_hub_download,
upload_folder,
whoami,
)
api = HfApi()
upload_folder(
folder_path=r"/home/Ubuntu/apps/stable-diffusion-webui/models/Stable-diffusion",
repo_id="YourUserName/reponame",
repo_type="model",
multi_commits=True,
multi_commits_verbose=True,
)
```
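For what it's worth, this is the kind of API I was hoping exists. I am not sure whether `upload_large_folder` in recent `huggingface_hub` releases is the recommended replacement for the multi-commit flow above, or whether it resumes cleanly on errors:
```python
from huggingface_hub import HfApi

api = HfApi()
# Sketch: chunked, resumable upload of a whole folder; re-running should continue where it stopped.
api.upload_large_folder(
    repo_id="YourUserName/reponame",
    repo_type="model",
    folder_path=r"/home/Ubuntu/apps/stable-diffusion-webui/models/Stable-diffusion",
)
```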
|
https://github.com/huggingface/huggingface_hub/issues/2491
|
closed
|
[
"bug"
] | 2024-08-27T16:36:04Z
| 2024-08-28T08:24:22Z
| null |
FurkanGozukara
|
huggingface/Google-Cloud-Containers
| 73
|
Download model files from GCS (Instead of HF Hub)
|
When deploying an HF model to Vertex AI, I would like to download a fine-tuned model from GCS, instead of from HF Hub, like so:
```
model = aiplatform.Model.upload(
display_name="my-model",
serving_container_image_uri=os.getenv("CONTAINER_URI"),
serving_container_environment_variables={
"AIP_STORAGE_URI": "gs://path/to/model/files",
},
serving_container_ports=[8080],
)
model.wait()
```
I would expect this to be supported since the entrypoint script logic should handle this: https://github.com/huggingface/Google-Cloud-Containers/blob/main/containers/tei/cpu/1.4.0/entrypoint.sh
Will this be supported when V1.4 is released? When will this be?
|
https://github.com/huggingface/Google-Cloud-Containers/issues/73
|
closed
|
[
"tei",
"question"
] | 2024-08-27T12:14:10Z
| 2024-09-16T07:07:11Z
| null |
rm-jeremyduplessis
|
pytorch/ao
| 750
|
Question RE AO MX formats
|
I noticed the [MX readme](https://github.com/pytorch/ao/tree/main/torchao/prototype/mx_formats) has this line: "we match bitwise to other implementations of the OCP MX spec (code not in this repo), with a couple of edge cases left to resolve." Is there a list of edge cases where AO does not match reference implementations? Also, is https://github.com/microsoft/microxcaling the reference implementation AO is trying to match or something else?
|
https://github.com/pytorch/ao/issues/750
|
closed
|
[
"question",
"mx"
] | 2024-08-26T17:37:40Z
| 2024-08-27T17:15:01Z
| null |
tsengalb99
|
huggingface/chat-ui
| 1,436
|
MODELS=`[ variable problem when I docker run
|
Hello,
I want to use Ollama to use Mistral model and I followed the documentation below : https://huggingface.co/docs/chat-ui/configuration/models/providers/ollama
`deploy.sh` :
```sh
#!/bin/bash
sudo docker compose down
sudo docker rm -f mongodb && sudo docker rm -f chat-ui
# nginx and ollama
sudo docker compose up -d
# mongodb
sudo docker run -d -p 27017:27017 -v mongodb-data:/data/db --name mongodb --network backend mongo:latest
# chat-ui
sudo docker run -d -p 3000:3000 --env-file .env.local -v chat-ui:/data --name chat-ui --network proxy ghcr.io/huggingface/chat-ui-db && sudo docker network connect backend chat-ui
```
`docker-compose.yml`:
```YAML
services:
nginx:
image: nginx:latest
container_name: nginx
ports:
- 80:80
- 443:443
networks:
- proxy
volumes:
- ./nginx:/etc/nginx/conf.d
- ./ssl:/etc/ssl
restart: unless-stopped
ollama:
build:
context: ./ollama
dockerfile: Dockerfile
image: ollama-with-ca
container_name: ollama
ports:
- 11434:11434
networks:
- backend
environment:
- HTTPS_PROXY=http://<username>:<password>@proxy.test.fr:8090
volumes:
- ollama-data:/data
restart: unless-stopped
entrypoint: ["/bin/bash", "start-mistral.sh"]
networks:
backend:
proxy:
external: true
volumes:
ollama-data:
```
`.env.local`:
```
MONGODB_URL=mongodb://mongodb:27017
HF_TOKEN=hf_*****
MODELS=`[
{
"name": "Ollama Mistral",
"chatPromptTemplate": "<s>{{#each messages}}{{#ifUser}}[INST] {{#if @first}}{{#if @root.preprompt}}{{@root.preprompt}}\n{
{/if}
}{
{/if}
} {{content}} [/INST]{{/ifUser}}{{#ifAssistant}}{{content}}</s> {{/ifAssistant}}{{/each}}",
"parameters": {
"temperature": 0.1,
"top_p": 0.95,
"repetition_penalty": 1.2,
"top_k": 50,
"truncate": 3072,
"max_new_tokens": 1024,
"stop": ["</s>"]
},
"endpoints": [
{
"type": "ollama",
"url" : "ollama://ollama:11434",
"ollamaName" : "mistral"
}
]
}
]`
```
When I run my script, at the end of the execution the chat-ui container fails to launch and I get the following error:
```sh
docker: poorly formatted environment: variable '"name": "Ollama Mistral",' contains whitespaces.
See 'docker run --help'.
```
I already tried putting the `chat-ui` and `mongodb` containers in the `docker-compose.yml`, and it doesn't work either, same as in this issue: https://github.com/huggingface/chat-ui/issues/614
Any solutions?
Thanks in advance.
|
https://github.com/huggingface/chat-ui/issues/1436
|
closed
|
[
"support"
] | 2024-08-26T14:00:26Z
| 2024-08-27T11:04:39Z
| 5
|
avirgos
|
huggingface/diffusers
| 9,276
|
How can I manually update some of the checkpoints of UNet2/3DConditionModel objects?
|
### Discussed in https://github.com/huggingface/diffusers/discussions/9273
<div type='discussions-op-text'>
<sup>Originally posted by **justin4ai** August 26, 2024</sup>
Hello, I'm quite new to the diffusers package and I'm trying to implement fine-tuning code that loads saved checkpoints into models initialized with the ```UNet2/3DConditionModel.from_pretrained``` method, as shown below:
```python
reference_unet = UNet2DConditionModel.from_pretrained(  # ReferenceNet only takes a 2D condition (the reference image via CLIP)
cfg.base_model_path,
subfolder="unet",
).to(device="cuda")
denoising_unet = UNet3DConditionModel.from_pretrained_2d(
cfg.base_model_path,
"",
subfolder="unet",
unet_additional_kwargs={
"use_motion_module": False,
"unet_use_temporal_attention": False,
},
).to(device="cuda")
prev = denoising_unet.state_dict()
li = torch.load("./pretrained_weights/denoising_unet.pth")
for key in li:
denoising_unet[key] = li[key] # I know this kind of direct assigning to the object doesn't make sense though.
reference_unet.load_state_dict(torch.load("./pretrained_weights/reference_unet.pth"))
```
The checkpoints I'm trying to load were saved from a previous training run of the ```UNet2/3DConditionModel``` objects with ```state_dict = model.state_dict()``` and ```torch.save(state_dict, save_path)```. But I have no idea how to directly assign certain values to specific layers of those class objects.
I'd be very glad if you could help me out with this! Looking forward to your help. Please also let me know if my description of the situation isn't detailed enough.
Cheers,
Junstin</div>
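For context, a minimal sketch of the usual PyTorch way to push a saved state dict into specific layers of an existing module, assuming the `denoising_unet` from the snippet above and that the saved keys match the model's parameter names (this is not from the original post):
```python
import torch

# Load the previously saved weights: a plain dict mapping parameter names to tensors.
checkpoint = torch.load("./pretrained_weights/denoising_unet.pth", map_location="cpu")

# Merge the checkpoint into the module's current state dict, overwriting only
# the keys that the checkpoint actually provides, then load the merged dict back.
state_dict = denoising_unet.state_dict()
state_dict.update({k: v for k, v in checkpoint.items() if k in state_dict})
denoising_unet.load_state_dict(state_dict)

# Alternatively, strict=False loads whatever matches and reports the rest.
missing, unexpected = denoising_unet.load_state_dict(checkpoint, strict=False)
```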
|
https://github.com/huggingface/diffusers/issues/9276
|
open
|
[
"stale"
] | 2024-08-26T07:49:23Z
| 2024-09-25T15:03:01Z
| 1
|
justin4ai
|
huggingface/transformers
| 33,115
|
How to get the score of each token when using pipeline
|
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1,
output_scores=True
)
The model I use is Qwen2-7B-Instruct. When I try to output the score of each token by modifying the parameters, it doesn't work.
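For reference, a minimal sketch of getting per-token scores with `model.generate` and `compute_transition_scores` instead of the pipeline; the prompt is a placeholder and chat templating is omitted for brevity:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2-7B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto", device_map="auto")

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=32,
    do_sample=False,
    return_dict_in_generate=True,
    output_scores=True,
)

# Per-token log-probabilities of the tokens that were actually generated.
transition_scores = model.compute_transition_scores(
    outputs.sequences, outputs.scores, normalize_logits=True
)
generated_tokens = outputs.sequences[:, inputs["input_ids"].shape[1]:]
for token_id, score in zip(generated_tokens[0], transition_scores[0]):
    print(tokenizer.decode(token_id), float(score.exp()))  # token and its probability
```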
|
https://github.com/huggingface/transformers/issues/33115
|
closed
|
[
"Usage"
] | 2024-08-26T07:00:54Z
| 2025-03-06T08:23:58Z
| null |
xin0623
|
huggingface/diffusers
| 9,271
|
Different output quality between ComfyUI and Diffusers?
|
### Discussed in https://github.com/huggingface/diffusers/discussions/9265
<div type='discussions-op-text'>
<sup>Originally posted by **vuongminh1907** August 25, 2024</sup>
I had a problem using InstantID (https://github.com/instantX-research/InstantID), which uses Diffusers as its base. Additionally, I tried ComfyUI (https://github.com/cubiq/ComfyUI_InstantID), and I think the image quality was better there.
I discussed this with Cubiq, and he mentioned that there are no differences in how they applied the IP Adapter (https://github.com/cubiq/ComfyUI_InstantID/issues/206).

Can you explain this issue to me? Perhaps it’s related to the Sampler in ComfyUI and Diffusers.</div>
|
https://github.com/huggingface/diffusers/issues/9271
|
closed
|
[
"stale"
] | 2024-08-26T02:53:23Z
| 2024-10-15T18:10:42Z
| 3
|
vuongminh1907
|
huggingface/diffusers
| 9,264
|
Could you make an inpainting model for flux?
|
### Model/Pipeline/Scheduler description
The [stable-diffusion-xl-1.0-inpainting-0.1](https://huggingface.co/diffusers/stable-diffusion-xl-1.0-inpainting-0.1) model helps a lot. Could you make a similar inpainting model for flux?
https://huggingface.co/black-forest-labs/FLUX.1-dev
### Open source status
- [ ] The model implementation is available.
- [ ] The model weights are available (Only relevant if addition is not a scheduler).
### Provide useful links for the implementation
https://huggingface.co/diffusers/stable-diffusion-xl-1.0-inpainting-0.1
https://huggingface.co/black-forest-labs/FLUX.1-dev
|
https://github.com/huggingface/diffusers/issues/9264
|
closed
|
[] | 2024-08-24T17:32:32Z
| 2024-08-24T17:37:59Z
| 2
|
snowbedding
|
huggingface/transformers
| 33,106
|
How to fine-tune TrOCR on a specific language (guide request)
|
### Model description
Hello, I just went through the issues and other resources, but none of them cover how to fine-tune TrOCR on a specific language, e.g. how to pick the encoder, the decoder, the model, etc.
Could you, @NielsRogge, write a simple set of instructions / a guide on this topic?
### Open source status
- [ ] The model implementation is available
- [ ] The model weights are available
### Provide useful links for the implementation
_No response_
|
https://github.com/huggingface/transformers/issues/33106
|
closed
|
[] | 2024-08-24T14:33:02Z
| 2025-06-15T08:07:10Z
| null |
MohamedLahmeri01
|
pytorch/xla
| 7,911
|
Documentation: Discoverability of http://pytorch.org/xla
|
## 📚 Documentation: Discoverability of http://pytorch.org/xla
The docs are very hard to find _despite_ being hosted on [pytorch.org](http://pytorch.org/). If I visit [pytorch.org](http://pytorch.org/) I can't find any link that goes to [pytorch.org/xla](http://pytorch.org/xla). The closest I could find is somewhere deep in https://pytorch.org/pytorch-domains and even then it links to version 2.1! I think the discoverability can use some support possibly after we've polished up the landing page.
|
https://github.com/pytorch/xla/issues/7911
|
closed
|
[
"documentation"
] | 2024-08-24T02:58:03Z
| 2024-11-04T17:38:23Z
| 7
|
tengyifei
|
huggingface/datasets
| 7,123
|
Make dataset viewer more flexible in displaying metadata alongside images
|
### Feature request
To display images with their associated metadata in the dataset viewer, a `metadata.csv` file is required. In the case of a dataset with multiple subsets, this would require the CSVs to be contained in the same folder as the images since they all need to be named `metadata.csv`. The request is that this be made more flexible for datasets with multiple subsets to avoid the need to put a `metadata.csv` into each image directory where they are not as easily accessed.
### Motivation
When creating datasets with multiple subsets I can't get the images to display alongside their associated metadata (it's usually one or the other that will show up). Since this requires a file specifically named `metadata.csv`, I then have to place that file within the image directory, which makes it much more difficult to access. Additionally, it still doesn't necessarily display the images alongside their metadata correctly (see, for instance, [this discussion](https://huggingface.co/datasets/imageomics/2018-NEON-beetles/discussions/8)).
It was suggested I bring this discussion to GitHub on another dataset struggling with a similar issue ([discussion](https://huggingface.co/datasets/imageomics/fish-vista/discussions/4)). In that case, it's a mix of data subsets, where some just reference the image URLs, while others actually have the images uploaded. The ones with images uploaded are not displaying images, but renaming that file to just `metadata.csv` would diminish the clarity of the construction of the dataset itself (and I'm not entirely convinced it would solve the issue).
### Your contribution
I can make a suggestion for one approach to address the issue:
For instance, even if it could just end in `_metadata.csv` or `-metadata.csv`, that would be very helpful to allow for more flexibility of dataset structure without impacting clarity. I would think that the functionality on the backend looking for `metadata.csv` could reasonably be adapted to look for such an ending on a filename (maybe also check that it has a `file_name` column?).
Presumably, requiring the `configs` in a setup like on [this dataset](https://huggingface.co/datasets/imageomics/rare-species/blob/main/README.md) could also help in figuring out how it should work?
```
configs:
- config_name: <image subset>
data_files:
- <image-metadata>.csv
- <path/to/images>/*.jpg
```
I'd also be happy to look at whatever solution is decided upon and contribute to the ideation.
Thanks for your time and consideration! The dataset viewer really is fabulous when it works :)
|
https://github.com/huggingface/datasets/issues/7123
|
open
|
[
"enhancement"
] | 2024-08-23T22:56:01Z
| 2024-10-17T09:13:47Z
| 3
|
egrace479
|
pytorch/vision
| 8,608
|
loss_box_reg increasing while training mask rcnn
|
### 🐛 Describe the bug
I am trying to train a Mask R-CNN model using detectron2 on my custom LVO dataset. My dataset has a single class, and some of the images have no annotations in them. The architecture needs to learn negative examples as well for proper training, since the test data contains both positive and negative LVO cases. I have segmentation annotations in COCO format and have registered them using CocoRegistration.
When I train the Mask R-CNN model, the overall loss decreases but loss_box_reg increases, and the predicted bounding boxes have scores below 0.1 for every case (even positive ones). Why is this happening?
How to reproduce this error:
```
cfg = get_cfg()
# cfg.merge_from_file(model_zoo.get_config_file("COCO-Detection/retinanet_R_101_FPN_3x.yaml"))
cfg.merge_from_file(model_zoo.get_config_file("COCO-InstanceSegmentation/mask_rcnn_R_101_FPN_3x.yaml"))
cfg.DATASETS.TRAIN = ("train",)
cfg.DATASETS.TEST = () # no metrics implemented for this dataset
cfg.DATALOADER.NUM_WORKERS = 2
cfg.INPUT.MAX_SIZE_TRAIN = 512 # every training image have size 512
cfg.INPUT.MIN_SIZE_TRAIN = (512,)
cfg.INPUT.MAX_SIZE_TEST = 512
cfg.INPUT.MIN_SIZE_TEST = 512
cfg.INPUT.MASK_FORMAT = "bitmask"
# cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-Detection/retinanet_R_101_FPN_3x.yaml") # initialize from model zoo
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-InstanceSegmentation/mask_rcnn_R_101_FPN_3x.yaml")
cfg.SOLVER.IMS_PER_BATCH = 2
cfg.SOLVER.BASE_LR = 0.00025
cfg.SOLVER.MAX_ITER = 2000
cfg.SOLVER.CHECKPOINT_PERIOD = 200
cfg.SOLVER.STEPS = [] # do not decay learning rate
cfg.MODEL.ROI_HEADS.BATCH_SIZE_PER_IMAGE = 512 # faster, and good enough for this toy dataset
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 1  # only has one class (balloon)
cfg.DATALOADER.FILTER_EMPTY_ANNOTATIONS = False
cfg.OUTPUT_DIR = out_dir
trainer = DefaultTrainer(cfg)
trainer.resume_or_load(resume=False)
trainer.train()
```
My positive and negative dataset sample

Annotation example:
```
{
"id": 80,
"image_id": 180,
"category_id": 1,
"segmentation": {
"counts": [
large list
],
"size": [512, 512]
},
"area": 247.0,
"bbox": [302.0, 227.0, 24.0, 13.0],
"iscrowd": 0,
"attributes": { "occluded": false
}},
```
Issue:
Total loss:

Loss_box_reg:

My prediction score example for positive cases:
scores: tensor([0.0901, 0.0862, 0.0737, 0.0697, 0.0679, 0.0670, 0.0668, 0.0665, 0.0664, ........])
Please help me solve this problem.
### Versions
Versions:
PyTorch version: 2.0.0+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Red Hat Enterprise Linux 9.4 (Plow) (x86_64)
GCC version: (GCC) 11.3.0
Clang version: Could not collect
CMake version: version 3.28.3
Libc version: glibc-2.34
Python version: 3.9.18 (main, May 16 2024, 00:00:00) [GCC 11.4.1 20231218 (Red Hat 11.4.1-3)] (64-bit runtime)
Python platform: Linux-5.14.0-427.18.1.el9_4.x86_64-x86_64-with-glibc2.34
Is CUDA available: False
CUDA runtime version: 11.7.64
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 0-63
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 6326 CPU @ 2.90GHz
CPU family: 6
Model: 106
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 2
Stepping: 6
CPU(s) scaling MHz: 100%
CPU max MHz: 3500.0000
CPU min MHz: 800.0000
BogoMIPS: 5800.00
Flags: -------some gibberish-----------
Virtualization: VT-x
L1d cache: 1.5 MiB (32 instances)
L1i cache: 1 MiB (32 instances)
L2 cache: 40 MiB (32 instances)
L3 cache: 48 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,1
|
https://github.com/pytorch/vision/issues/8608
|
closed
|
[] | 2024-08-23T15:15:27Z
| 2024-08-27T10:13:48Z
| 1
|
ArpanGyawali
|
huggingface/diffusers
| 9,258
|
Kohya SS FLUX LoRA training is way faster on Linux than on Windows; any ideas how to debug? Same settings, libraries, and GPU
|
### Describe the bug
I am using Kohya SS to train a FLUX LoRA.
On Linux, an RTX 3090 gets about 5.5 seconds/it at batch size 1 and 1024x1024 px resolution.
On Windows, an RTX 3090 Ti gets 7.7 seconds/it, and that machine has the most powerful CPU, a 13900K.
This speed discrepancy between Windows and Linux is huge for some reason.
Upgrading Torch from 2.1 to 2.4 on Linux caused a huge speed-up and a reduction in VRAM usage, but on Windows only the VRAM usage dropped; the speed stayed the same.
Any ideas on how to fix this? I am using SDPA cross-attention.
I am sharing the venv pip freeze of both Windows and Linux.
Both have Python 3.10.11.
**Windows pip freeze**
```
Microsoft Windows [Version 10.0.19045.4717]
(c) Microsoft Corporation. All rights reserved.
R:\Kohya_GUI_Flux_Installer\kohya_ss\venv\Scripts>activate
(venv) R:\Kohya_GUI_Flux_Installer\kohya_ss\venv\Scripts>pip freeze
absl-py==2.1.0
accelerate==0.33.0
aiofiles==23.2.1
aiohappyeyeballs==2.4.0
aiohttp==3.10.5
aiosignal==1.3.1
altair==4.2.2
annotated-types==0.7.0
antlr4-python3-runtime==4.9.3
anyio==4.4.0
appdirs==1.4.4
astunparse==1.6.3
async-timeout==4.0.3
attrs==24.2.0
bitsandbytes==0.43.3
certifi==2022.12.7
charset-normalizer==2.1.1
click==8.1.7
colorama==0.4.6
coloredlogs==15.0.1
contourpy==1.2.1
cycler==0.12.1
dadaptation==3.2
diffusers==0.25.0
docker-pycreds==0.4.0
easygui==0.98.3
einops==0.7.0
entrypoints==0.4
exceptiongroup==1.2.2
fairscale==0.4.13
fastapi==0.112.1
ffmpy==0.4.0
filelock==3.13.1
flatbuffers==24.3.25
fonttools==4.53.1
frozenlist==1.4.1
fsspec==2024.2.0
ftfy==6.1.1
gast==0.6.0
gitdb==4.0.11
GitPython==3.1.43
google-pasta==0.2.0
gradio==4.41.0
gradio_client==1.3.0
grpcio==1.65.5
h11==0.14.0
h5py==3.11.0
httpcore==1.0.5
httpx==0.27.0
huggingface-hub==0.24.5
humanfriendly==10.0
idna==3.4
imagesize==1.4.1
importlib_metadata==8.4.0
importlib_resources==6.4.4
invisible-watermark==0.2.0
Jinja2==3.1.3
jsonschema==4.23.0
jsonschema-specifications==2023.12.1
keras==3.5.0
kiwisolver==1.4.5
libclang==18.1.1
-e git+https://github.com/kohya-ss/sd-scripts.git@e1cd19c0c0ef55709e8eb1e5babe25045f65031f#egg=library&subdirectory=..\..\sd-scripts
lightning-utilities==0.11.6
lion-pytorch==0.0.6
lycoris-lora==2.2.0.post3
Markdown==3.7
markdown-it-py==3.0.0
MarkupSafe==2.1.5
matplotlib==3.9.2
mdurl==0.1.2
ml-dtypes==0.4.0
mpmath==1.3.0
multidict==6.0.5
namex==0.0.8
networkx==3.2.1
numpy==1.26.3
nvidia-cublas-cu12==12.4.2.65
nvidia-cuda-cupti-cu12==12.4.99
nvidia-cuda-nvrtc-cu12==12.4.99
nvidia-cuda-runtime-cu12==12.4.99
nvidia-cudnn-cu12==9.1.0.70
nvidia-cufft-cu12==11.2.0.44
nvidia-curand-cu12==10.3.5.119
nvidia-cusolver-cu12==11.6.0.99
nvidia-cusparse-cu12==12.3.0.142
nvidia-nvjitlink-cu12==12.4.99
nvidia-nvtx-cu12==12.4.99
omegaconf==2.3.0
onnx==1.16.1
onnxruntime-gpu==1.17.1
open-clip-torch==2.20.0
opencv-python==4.7.0.68
opt-einsum==3.3.0
optree==0.12.1
orjson==3.10.7
packaging==24.1
pandas==2.2.2
pathtools==0.1.2
pillow==10.2.0
prodigyopt==1.0
protobuf==3.20.3
psutil==6.0.0
pydantic==2.8.2
pydantic_core==2.20.1
pydub==0.25.1
Pygments==2.18.0
pyparsing==3.1.2
pyreadline3==3.4.1
python-dateutil==2.9.0.post0
python-multipart==0.0.9
pytorch-lightning==1.9.0
pytz==2024.1
PyWavelets==1.7.0
PyYAML==6.0.2
referencing==0.35.1
regex==2024.7.24
requests==2.32.3
rich==13.7.1
rpds-py==0.20.0
ruff==0.6.1
safetensors==0.4.4
scipy==1.11.4
semantic-version==2.10.0
sentencepiece==0.2.0
sentry-sdk==2.13.0
setproctitle==1.3.3
shellingham==1.5.4
six==1.16.0
smmap==5.0.1
sniffio==1.3.1
starlette==0.38.2
sympy==1.12
tensorboard==2.17.1
tensorboard-data-server==0.7.2
tensorflow==2.17.0
tensorflow-intel==2.17.0
tensorflow-io-gcs-filesystem==0.31.0
termcolor==2.4.0
timm==0.6.12
tk==0.1.0
tokenizers==0.19.1
toml==0.10.2
tomlkit==0.12.0
toolz==0.12.1
torch==2.4.0+cu124
torchmetrics==1.4.1
torchvision==0.19.0+cu124
tqdm==4.66.5
transformers==4.44.0
typer==0.12.4
typing_extensions==4.9.0
tzdata==2024.1
urllib3==2.2.2
uvicorn==0.30.6
voluptuous==0.13.1
wandb==0.15.11
wcwidth==0.2.13
websockets==12.0
Werkzeug==3.0.4
wrapt==1.16.0
xformers==0.0.27.post2
yarl==1.9.4
zipp==3.20.0
(venv) R:\Kohya_GUI_Flux_Installer\kohya_ss\venv\Scripts>
```
**Ubuntu pip freeze**
```
(venv) Ubuntu@0054-kci-prxmx10136:~/apps/kohya_ss$ pip freeze
absl-py==2.1.0
accelerate==0.33.0
aiofiles==23.2.1
aiohttp==3.9.5
aiosignal==1.3.1
altair==4.2.2
annotated-types==0.7.0
antlr4-python3-runtime==4.9.3
anyio==4.4.0
appdirs==1.4.4
astunparse==1.6.3
async-timeout==4.0.3
attrs==23.2.0
bitsandbytes==0.43.3
cachetools==5.3.3
certifi==2024.2.2
charset-normalizer==3.3.2
click==8.1.7
coloredlogs==15.0.1
contourpy==1.2.1
cycler==0.12.1
dadaptation==3.1
diffusers==0.25.0
dnspython==2.6.1
docker-pycreds==0.4.0
easygui==0.98.3
einops==0.7.0
email_validator==2.1.1
entrypoints==0.4
exceptiongroup==1.2.1
fairscale==0.4.13
fastapi==0.111.0
fastapi-cli==0.0
|
https://github.com/huggingface/diffusers/issues/9258
|
closed
|
[
"bug"
] | 2024-08-23T11:42:53Z
| 2024-08-23T11:55:18Z
| 1
|
FurkanGozukara
|
huggingface/datasets
| 7,122
|
[interleave_dataset] sample batches from a single source at a time
|
### Feature request
interleave_dataset and [RandomlyCyclingMultiSourcesExamplesIterable](https://github.com/huggingface/datasets/blob/3813ce846e52824b38e53895810682f0a496a2e3/src/datasets/iterable_dataset.py#L816) enable us to sample data examples from different sources. But can we also sample batches in a similar manner (each batch only contains data from a single source)?
### Motivation
Some recent research [[1](https://blog.salesforceairesearch.com/sfr-embedded-mistral/), [2](https://arxiv.org/pdf/2310.07554)] shows that source homogenous batching can be helpful for contrastive learning. Can we add a function called `RandomlyCyclingMultiSourcesBatchesIterable` to support this functionality?
### Your contribution
I can contribute a PR. But I wonder what the best way is to test its correctness and robustness.
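For discussion, a rough sketch of the intended behaviour as a plain generator over iterable datasets, where every yielded batch comes from a single randomly chosen source; the dataset names and sampling weights below are only illustrative:
```
import random
from datasets import load_dataset

def homogeneous_batches(sources, batch_size, probabilities=None, seed=0):
    """Yield batches in which every example comes from one randomly chosen source."""
    rng = random.Random(seed)
    iterators = [iter(ds) for ds in sources]
    while iterators:
        idx = rng.choices(range(len(iterators)), weights=probabilities, k=1)[0]
        batch = []
        try:
            for _ in range(batch_size):
                batch.append(next(iterators[idx]))
        except StopIteration:
            # This source is exhausted: drop it (and its weight) and keep going.
            del iterators[idx]
            if probabilities is not None:
                probabilities = [p for i, p in enumerate(probabilities) if i != idx]
        if batch:
            yield batch

ds_a = load_dataset("ag_news", split="train", streaming=True)
ds_b = load_dataset("imdb", split="train", streaming=True)
for batch in homogeneous_batches([ds_a, ds_b], batch_size=8, probabilities=[0.7, 0.3]):
    pass  # each `batch` is a list of examples drawn from a single dataset
```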
|
https://github.com/huggingface/datasets/issues/7122
|
open
|
[
"enhancement"
] | 2024-08-23T07:21:15Z
| 2024-08-23T07:21:15Z
| 0
|
memray
|
huggingface/text-generation-inference
| 2,452
|
How to get the token probabilities via a curl request?
|
### Feature request
curl -v -X POST http://.....srv/generate -H "Content-Type: application/json" -d '{"inputs": "xxxxx:","parameters": {"max_new_tokens": 256}}'
Using this curl request, I get output like
{"generated_text": xxxx}
How can I get the probability of each generated token from the LLM in the TGI service?
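A minimal sketch, assuming the deployed TGI version supports the `details` generation parameter (an assumption to verify against the server's API docs), which returns per-token log-probabilities; the endpoint URL and prompt are placeholders:
```python
import math
import requests

resp = requests.post(
    "http://<your-tgi-host>/generate",
    headers={"Content-Type": "application/json"},
    json={
        "inputs": "xxxxx:",
        "parameters": {"max_new_tokens": 256, "details": True},
    },
    timeout=120,
)
data = resp.json()

# With details enabled, each generated token carries its log-probability.
for token in data["details"]["tokens"]:
    print(token["text"], math.exp(token["logprob"]))
```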
### Motivation
no
### Your contribution
no
|
https://github.com/huggingface/text-generation-inference/issues/2452
|
closed
|
[] | 2024-08-23T03:01:17Z
| 2024-08-27T01:32:44Z
| null |
TWSFar
|
huggingface/speech-to-speech
| 37
|
[Feature request] How about adding an optional speech to viseme model at the end of our chain?
|
Hi there,
Thank you so much for your work on this project. It's truly amazing, and I’m excited to see all the innovative tools that people will build based on it. I can already imagine many will integrate your speech-to-speech pipeline with avatar or robot embodiments, where lip sync will be crucial.
To support this, could you help us add functionality to the current flow? The current process includes 1) speech-to-text, 2) LLM, and 3) text-to-speech. I’d like to add a fourth step: either speech-to-viseme or speech-to-text with `return_timestamp = "word"`, followed by manual mapping of words to phonemes, and then to visemes.
Best regards,
Fabio
|
https://github.com/huggingface/speech-to-speech/issues/37
|
open
|
[] | 2024-08-22T21:32:47Z
| 2024-09-09T17:16:45Z
| null |
fabiocat93
|
pytorch/TensorRT
| 3,115
|
❓ [Question] JetPack 6.0
|
## ❓ Question
<!-- Your question -->
I'd like to use torch_tensorrt with JetPack 6.0, but from `setup.py` it seems like the latest supported version is JetPack 5.0: https://github.com/pytorch/TensorRT/blob/main/setup.py#L147-L164
## What you have already tried
1. added JetPack 6.0 to setup.py, setting `JETPACK_VERSION` to 6.0.
2. downloaded bazelisk, manually added to PATH
3. ran setup.py:
```bash
python setup.py bdist_wheel --jetpack-version 6.0 --use-cxx11-abi
```
4. tried creating a new WORKSPACE under `toolchains/jp_workspaces/WORKSPACE.jp60`, effectively copying and pasting `jp50` - but changing `libtorch` to be from Python 3.10. ran `bazel clean --expunge`, eventually ending with `ValueError: Can't find the directory of package torch_tensorrt: I looked in ./src/torch_tensorrt and ./torch_tensorrt`
I'm potentially missing something obvious here. Thank you!
## Environment
> Build information about Torch-TensorRT can be found by turning on debug messages
- PyTorch Version (e.g., 1.0): 2.4
- CPU Architecture: ARM64
- OS (e.g., Linux): Linux
- How you installed PyTorch (`conda`, `pip`, `libtorch`, source): conda env, pip install
- Build command you used (if compiling from source): setup.py w/ Bazel (through Bazelisk)
- Are you using local sources or building from archives:
- Python version: 3.10
- CUDA version: 12.2
- GPU models and configuration:
- Any other relevant information:
## Additional context
<!-- Add any other context about the problem here. -->
cc @narendasan @zewenli98 (seeing a lot of your commits around setup.py and jp50 :))
|
https://github.com/pytorch/TensorRT/issues/3115
|
closed
|
[
"question"
] | 2024-08-22T17:52:45Z
| 2024-08-26T17:11:33Z
| null |
patrick-botco
|
pytorch/TensorRT
| 3,114
|
❓ [Question] Revisit the argument types of normalization converters
|
## ❓ Question
https://github.com/pytorch/TensorRT/pull/3099#issuecomment-2303600863
## What you have already tried
<!-- A clear and concise description of what you have already done. -->
## Environment
> Build information about Torch-TensorRT can be found by turning on debug messages
- PyTorch Version (e.g., 1.0):
- CPU Architecture:
- OS (e.g., Linux):
- How you installed PyTorch (`conda`, `pip`, `libtorch`, source):
- Build command you used (if compiling from source):
- Are you using local sources or building from archives:
- Python version:
- CUDA version:
- GPU models and configuration:
- Any other relevant information:
## Additional context
<!-- Add any other context about the problem here. -->
|
https://github.com/pytorch/TensorRT/issues/3114
|
open
|
[
"question"
] | 2024-08-22T16:17:01Z
| 2024-08-22T18:04:10Z
| null |
peri044
|
huggingface/huggingface_hub
| 2,480
|
How to use the HF Nvidia NIM API with the HF inference client?
|
### Describe the bug
We recently introduced the [Nvidia NIM API](https://huggingface.co/blog/inference-dgx-cloud) for selected models. The recommended use is via the OAI client like this (with a specific fine-grained token for an enterprise org):
```py
from openai import OpenAI
client = OpenAI(
base_url="https://huggingface.co/api/integrations/dgx/v1",
api_key="YOUR_FINE_GRAINED_TOKEN_HERE"
)
chat_completion = client.chat.completions.create(
model="meta-llama/Meta-Llama-3-8B-Instruct",
messages=[
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "Count to 500"}
],
stream=True,
max_tokens=1024
)
# Iterate and print stream
for message in chat_completion:
print(message.choices[0].delta.content, end='')
```
How can users use this API with the HF inference client directly?
The InferenceClient.chat_completions [docs](https://huggingface.co/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceClient.chat_completion) provide this example snippet for OAI syntax (example 3):
```py
# instead of `from openai import OpenAI`
from huggingface_hub import InferenceClient
# instead of `client = OpenAI(...)`
client = InferenceClient(
base_url=...,
api_key=...,
)
output = client.chat.completions.create(
model="meta-llama/Meta-Llama-3-8B-Instruct",
messages=[
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "Count to 10"},
],
stream=True,
max_tokens=1024,
)
for chunk in output:
print(chunk.choices[0].delta.content)
```
When I transpose the logic from the NIM OAI code snippet to the code above, I get this:
```py
# instead of `from openai import OpenAI`
from huggingface_hub import InferenceClient
# instead of `client = OpenAI(...)`
client = InferenceClient(
api_key="enterprise-org-token",
base_url="https://huggingface.co/api/integrations/dgx/v1",
)
output = client.chat.completions.create(
model="meta-llama/Meta-Llama-3-8B-Instruct",
messages=[
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "Count to 10"},
],
stream=True,
max_tokens=1024,
)
for chunk in output:
print(chunk.choices[0].delta.content)
```
This throws this error:
```py
---------------------------------------------------------------------------
HTTPError Traceback (most recent call last)
File ~/miniconda/lib/python3.9/site-packages/huggingface_hub/utils/_errors.py:304, in hf_raise_for_status(response, endpoint_name)
303 try:
--> 304 response.raise_for_status()
305 except HTTPError as e:
File ~/miniconda/lib/python3.9/site-packages/requests/models.py:1024, in Response.raise_for_status(self)
1023 if http_error_msg:
-> 1024 raise HTTPError(http_error_msg, response=self)
HTTPError: 400 Client Error: Bad Request for url: https://huggingface.co/api/integrations/dgx/v1/chat/completions
The above exception was the direct cause of the following exception:
BadRequestError Traceback (most recent call last)
Cell In[48], line 10
4 # instead of `client = OpenAI(...)`
5 client = InferenceClient(
6 api_key="hf_****",
7 base_url="https://huggingface.co/api/integrations/dgx/v1",
8 )
---> 10 output = client.chat.completions.create(
11 model="meta-llama/Meta-Llama-3-8B-Instruct",
12 messages=[
13 {"role": "system", "content": "You are a helpful assistant."},
14 {"role": "user", "content": "Count to 10"},
15 ],
16 stream=True,
17 max_tokens=1024,
18 )
20 for chunk in output:
21 print(chunk.choices[0].delta.content)
File ~/miniconda/lib/python3.9/site-packages/huggingface_hub/inference/_client.py:837, in InferenceClient.chat_completion(self, messages, model, stream, frequency_penalty, logit_bias, logprobs, max_tokens, n, presence_penalty, response_format, seed, stop, temperature, tool_choice, tool_prompt, tools, top_logprobs, top_p)
833 # `model` is sent in the payload. Not used by the server but can be useful for debugging/routing.
834 # If it's a ID on the Hub => use it. Otherwise, we use a random string.
835 model_id = model if not is_url and model.count("/") == 1 else "tgi"
--> 837 data = self.post(
838 model=model_url,
839 json=dict(
840 model=model_id,
841 messages=messages,
842 frequency_penalty=frequency_penalty,
843 logit_bias=logit_bias,
844 logprobs=logprobs,
845 max_tokens=max_tokens,
846 n=n,
847 presence_penalty=presence_penalty,
848 response_format=response_format,
849 seed
|
https://github.com/huggingface/huggingface_hub/issues/2480
|
closed
|
[
"bug"
] | 2024-08-22T12:32:16Z
| 2024-08-26T12:45:55Z
| null |
MoritzLaurer
|
huggingface/transformers.js
| 896
|
How to use this model: Xenova/bge-reranker-base
|
### Question
I see that it supports transformers.js, but I can't find the instructions for use. Please help me with using it.
|
https://github.com/huggingface/transformers.js/issues/896
|
closed
|
[
"question"
] | 2024-08-22T07:33:42Z
| 2024-08-29T00:12:52Z
| null |
gy9527
|
pytorch/pytorch
| 134,207
|
How to fall back to the CPU backend for operators that are unsupported by XLA?
|
I'm using the XLA backend, and there are some operators that it does not support.
How can I use the backend fallback mechanism to fall back to the CPU backend for these unsupported operators?
Thanks!
cc @bdhirsh
|
https://github.com/pytorch/pytorch/issues/134207
|
closed
|
[
"triaged",
"module: xla"
] | 2024-08-22T06:46:21Z
| 2024-09-05T06:52:20Z
| null |
wwtghx
|