repo
stringclasses 147
values | number
int64 1
172k
| title
stringlengths 2
476
| body
stringlengths 0
5k
| url
stringlengths 39
70
| state
stringclasses 2
values | labels
listlengths 0
9
| created_at
timestamp[ns, tz=UTC]date 2017-01-18 18:50:08
2026-01-06 07:33:18
| updated_at
timestamp[ns, tz=UTC]date 2017-01-18 19:20:07
2026-01-06 08:03:39
| comments
int64 0
58
⌀ | user
stringlengths 2
28
|
|---|---|---|---|---|---|---|---|---|---|---|
huggingface/sentence-transformers
| 2,268
|
How to chop up a long document into chunks of max sequence length?
|
Given a long document, how do I chop it up into chunks so that each chunk is within the [max sequence length](https://www.sbert.net/examples/applications/computing-embeddings/README.html#input-sequence-length) of a model?
|
https://github.com/huggingface/sentence-transformers/issues/2268
|
open
|
[] | 2023-08-02T16:50:09Z
| 2023-08-04T18:47:22Z
| null |
siddhsql
|
huggingface/dataset-viewer
| 1,602
|
Parallel steps update incoherence
|
See the discussion https://huggingface.co/datasets/BelleGroup/multiturn_chat_0.8M/discussions/1#64c9e88a6a26cddbecd9bec6
Before the dataset update, the `split-first-rows-from-parquet` response was a success, and thus the `split-first-rows-from-streaming` response, computed later, is a `ResponseAlreadyComputedError` error.
But after the dataset update, the `split-first-rows-from-parquet` response was an error (due to a disk issue: ` FileSystemError`) and, due to a heavy load on the infra, the `split-first-rows-from-streaming` response has not been processed yet, so: it's still `ResponseAlreadyComputedError`.
Possibilities:
1. remove `ResponseAlreadyComputedError`, and copy the response (doubles storage)
2. change the model for parallel steps, and store only once. Let's say we have M+N parallel steps. If M steps are successful (normally with the same response) and N steps are erroneous, let's store the optional successful response content once, and all the responses, removing the success content for successful responses. It is a lot of complexity.
3. keep the logic, but if a parallel step gives an error whereas it had a successful response before AND the other parallel step is `ResponseAlreadyComputedError`, copy the successful answer to the other step. Seems brittle and overly complex.
4. keep the logic, but if a parallel step gives an error whereas it had a successful response before AND the other parallel step is `ResponseAlreadyComputedError`, delete the other answer
None seems like a good idea. Do you have better ideas @huggingface/datasets-server ?
|
https://github.com/huggingface/dataset-viewer/issues/1602
|
closed
|
[
"bug",
"question",
"P1"
] | 2023-08-02T13:44:35Z
| 2024-02-06T14:52:06Z
| null |
severo
|
huggingface/transformers
| 25,264
|
[Question] How to load AutoFeatureExtractor on GPU?
|
Hi, I am following this guide to learn how to do audio classification with wav2vec2: https://huggingface.co/docs/transformers/main/tasks/audio_classification
I intend to extract features of my data with the following codes
```
feature_extractor = AutoFeatureExtractor.from_pretrained("/workspace/models/wav2vec2-large-robust")
def preprocess_function(examples):
audio_arrays = [x["array"] for x in tqdm(examples["audio"])]
inputs = feature_extractor(
audio_arrays, sampling_rate=feature_extractor.sampling_rate, max_length=16000, truncation=True
)
return inputs
encoded_audio_dataset_train = audio_dataset_train.map(preprocess_function, remove_columns="audio", batched=True)
```
But it seems the extractor is loaded to CPU instead of GPU, and I didn't find in documentation how to set the device for loading feature extractor. I assume the feature extraction is done by the wav2vec2 model itself right? If so how to do this on GPU? Or is it mentioned in any documentation that I didn't notice?
This is my first time to use transformers library in audio processing so please forgive my clumsiness.
Any help is much appreciated.
|
https://github.com/huggingface/transformers/issues/25264
|
closed
|
[] | 2023-08-02T12:26:20Z
| 2023-09-11T08:02:43Z
| null |
treya-lin
|
huggingface/datasets
| 6,111
|
raise FileNotFoundError("Directory {dataset_path} is neither a `Dataset` directory nor a `DatasetDict` directory." )
|
### Describe the bug
For researchers in some countries or regions, it is usually the case that the download ability of `load_dataset` is disabled due to the complex network environment. People in these regions often prefer to use git clone or other programming tricks to manually download the files to the disk (for example, [How to elegantly download hf models, zhihu zhuanlan](https://zhuanlan.zhihu.com/p/475260268) proposed a crawlder based solution, and [Is there any mirror for hf_hub, zhihu answer](https://www.zhihu.com/question/371644077) provided some cloud based solutions, and [How to avoid pitfalls on Hugging face downloading, zhihu zhuanlan] gave some useful suggestions), and then use `load_from_disk` to get the dataset object.
However, when one finally has the local files on the disk, it is still buggy when trying to load the files into objects.
### Steps to reproduce the bug
Steps to reproduce the bug:
1. Found CIFAR dataset in hugging face: https://huggingface.co/datasets/cifar100/tree/main
2. Click ":" button to show "Clone repository" option, and then follow the prompts on the box:
```bash
cd my_directory_absolute
git lfs install
git clone https://huggingface.co/datasets/cifar100
ls my_directory_absolute/cifar100 # confirm that the directory exists and it is OK.
```
3. Write A python file to try to load the dataset
```python
from datasets import load_dataset, load_from_disk
dataset = load_from_disk("my_directory_absolute/cifar100")
```
Notice that according to issue #3700 , it is wrong to use load_dataset("my_directory_absolute/cifar100"), so we must use load_from_disk instead.
4. Then you will see the error reported:
```log
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
Cell In[5], line 9
1 from datasets import load_dataset, load_from_disk
----> 9 dataset = load_from_disk("my_directory_absolute/cifar100")
File [~/miniconda3/envs/ai/lib/python3.10/site-packages/datasets/load.py:2232), in load_from_disk(dataset_path, fs, keep_in_memory, storage_options)
2230 return DatasetDict.load_from_disk(dataset_path, keep_in_memory=keep_in_memory, storage_options=storage_options)
2231 else:
-> 2232 raise FileNotFoundError(
2233 f"Directory {dataset_path} is neither a `Dataset` directory nor a `DatasetDict` directory."
2234 )
FileNotFoundError: Directory my_directory_absolute/cifar100 is neither a `Dataset` directory nor a `DatasetDict` directory.
```
### Expected behavior
The dataset should be load successfully.
### Environment info
```bash
datasets-cli env
```
-> results:
```txt
Copy-and-paste the text below in your GitHub issue.
- `datasets` version: 2.14.2
- Platform: Linux-4.18.0-372.32.1.el8_6.x86_64-x86_64-with-glibc2.28
- Python version: 3.10.12
- Huggingface_hub version: 0.16.4
- PyArrow version: 12.0.1
- Pandas version: 2.0.3
```
|
https://github.com/huggingface/datasets/issues/6111
|
closed
|
[] | 2023-08-02T09:17:29Z
| 2023-08-29T02:00:28Z
| 3
|
2catycm
|
huggingface/transformers
| 25,257
|
how to print out the data loaded by each epoch during trainer.train() training?
|
### Feature request
please tell to me,
how to print out the data loaded by each epoch during trainer.train() training?
### Motivation
how to print out the data loaded by each epoch during trainer.train() training?
### Your contribution
how to print out the data loaded by each epoch during trainer.train() training?
|
https://github.com/huggingface/transformers/issues/25257
|
closed
|
[] | 2023-08-02T09:13:55Z
| 2023-09-11T08:02:47Z
| null |
ahong007007
|
huggingface/tokenizers
| 1,310
|
How to train BPE tokenizer with multiple CPU
|
Hi
I tried to train a BPE tokenizer with about 10GB text, but it seems extremely slow(runs more than 24 hours and not finished yet).
Is there a way to turn on multi CPU training (from htop there only 1 CPU used)?
Here is the code.
```
from tokenizers import Tokenizer, decoders, models, normalizers, pre_tokenizers, trainers, processors
tokenizer = Tokenizer(models.BPE())
tokenizer.normalizer = normalizers.NFC()
tokenizer.pre_tokenizer = pre_tokenizers.ByteLevel(add_prefix_space=False)
tokenizer.post_processor = processors.ByteLevel(trim_offsets=False)
tokenizer.decoder = decoders.ByteLevel()
trainer = trainers.BpeTrainer(
vocab_size = 50000,
min_frequency = 1,
initial_alphabet = pre_tokenizers.ByteLevel.alphabet(),
special_tokens = special_tokens
)
with open("train_bpe.txt") as f
tokenizer.train(f, trainer=trainer)
```
|
https://github.com/huggingface/tokenizers/issues/1310
|
closed
|
[] | 2023-08-02T08:14:07Z
| 2023-08-02T09:10:44Z
| null |
voidmagic
|
pytorch/examples
| 1,179
|
How to load Transformer model once using FSDP
|
## 📚 Documentation
@HamidShojanazeri, I'm following your [FSDP example](https://github.com/pytorch/examples/tree/main/distributed/FSDP) and swapped in a bigger model, `google/flan-t5-xxl`, and am a little unclear on what happens when the script starts up. I'm running on a server with 8 V100s so I run the launch command as listed in the README.md file:
`torchrun --nnodes 1 --nproc_per_node 8 T5_training.py`
Next, I was having trouble downloading the model weights because I think with 8 processes, each one was trying to download the weights and they were removing each others' file locks, so I changed the [`setup_model`](https://github.com/pytorch/examples/blob/741de70c4a20d9c83f811b946c186c4f83abcccb/distributed/FSDP/utils/train_utils.py#L99-L102) function so that only rank 0 downloads the weights and then all other processes will read from the local cache.
Finally, my big question for you is - as the `setup_model` function is currently written, is it fair to say that we're loading a copy of the model weights for every process running (e.g. in my case, 8 processes)? If so, how can we load the model once and broadcast the weights to all other processes? I ask because this will become a blocker at bigger model scales because we'll eventually run out of CPU memory trying to do this.
Here's my modified `setup_model` function for reference:
```
def setup_model(model_name, model_max_length=512, cache_dir=None, rank=None):
# TODO: is this loading the model on all processes?
# 1) this seems time consuming, and 2) it seems like it would use way too much memory
# ensure weights are only downloaded by one process
if rank == 0:
model = T5ForConditionalGeneration.from_pretrained(model_name, cache_dir=cache_dir)
# set model_max_length to avoid warnings
tokenizer = T5Tokenizer.from_pretrained(model_name, model_max_length=model_max_length, cache_dir=cache_dir)
dist.barrier()
if rank != 0:
model = T5ForConditionalGeneration.from_pretrained(model_name, cache_dir=cache_dir)
# set model_max_length to avoid warnings
tokenizer = T5Tokenizer.from_pretrained(model_name, model_max_length=model_max_length, cache_dir=cache_dir)
return model, tokenizer
```
I imagine this all gets easier and more memory efficient once we start saving the model in the formats you've specified in the model_checkpointing directory but we have to get there in the first place.
I should also note, in case it makes a difference, that I'm setting up the distributed process group (within `T5_training.py`) before calling `setup_model`, whereas you call `setup_model` before setting up the distributed process group in your example.
|
https://github.com/pytorch/examples/issues/1179
|
open
|
[] | 2023-08-01T22:01:24Z
| 2023-08-01T22:01:24Z
| null |
ToddMorrill
|
huggingface/chat-ui
| 380
|
Issue with Text Generation in Stream Mode
|
Hi
The text generation in stream mode is not functioning as expected on my development server, which is running behind a reverse proxy with the correct base path defined. I'm only receiving a single response in one go, whereas I expect a continuous stream of text.
Please assist me in resolving this issue. Thank you!
|
https://github.com/huggingface/chat-ui/issues/380
|
closed
|
[
"support"
] | 2023-08-01T19:07:50Z
| 2023-09-10T12:22:16Z
| 10
|
bilal-rachik
|
huggingface/transformers
| 25,245
|
BLIP-2 request: If it's even possible, can you please provide an official example script of how to get the text(caption) features and image features into the same vector space (e.g. for cross-modal retrieval/search using BLIP-2 models, similar to what we can already do with CLIP.) Thanks in advance.
|
### System Info
linux, python 3.8+, pytorch '1.13.0+cu116'
### Who can help?
@sgugger
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
N/A
### Expected behavior
N/A
|
https://github.com/huggingface/transformers/issues/25245
|
closed
|
[] | 2023-08-01T18:21:07Z
| 2023-09-21T08:03:25Z
| null |
wingz1
|
huggingface/dataset-viewer
| 1,591
|
Should we convert the datasets to other formats than parquet?
|
One OP asked for CSV conversion (not explicitly from the Hub itself): https://huggingface.co/datasets/medical_questions_pairs/discussions/3#64c8c2af527d76365563285c
|
https://github.com/huggingface/dataset-viewer/issues/1591
|
closed
|
[
"question",
"feature request",
"P2"
] | 2023-08-01T13:47:12Z
| 2024-06-19T14:19:01Z
| null |
severo
|
pytorch/TensorRT
| 2,159
|
❓ [Question] Could torch-tensorrt support mixed-precision inference?
|
## ❓ Question
<!-- Your question -->
Hello, in my PyTorch inference, I initially set the entire model to fp16 and provided fp16 inputs. Considering the output will become `NAN` (transformer model) , and then I used `.to()` to switch certain weight layers and inference parameters back to fp32.
However, if I export it to ONNX and convert to TensorRT, I would need to make those settings again in TensorRT, which can be quite complicated.
I would like to know if the torch_tensorrt export includes these dependence and if it can automatically perform mixed-precision export to TensorRT based on my settings. Thank you!
|
https://github.com/pytorch/TensorRT/issues/2159
|
closed
|
[
"question"
] | 2023-08-01T10:32:24Z
| 2023-08-16T01:34:08Z
| null |
sanbuphy
|
huggingface/optimum
| 1,243
|
transformers.convert_graph_to_onnx.quantize equivalent with optimum?
|
Historically, I've used the following to quantize a model after training:
```python
import sys
from pathlib import Path
from transformers.convert_graph_to_onnx import quantize
input_file = sys.argv[1]
print("Performing quantization of model '{}'".format(input_file))
quantized_model_path = quantize(Path(input_file))
print("Rename quantized model '{}' to '{}'".format(quantized_model_path.name, input_file))
quantized_model_path.replace(input_file)
```
Is there a way to accomplish the same type of quantization using`optimum-cli? The quantize method from above (that is deprecated) produces a much smaller model than optimum-cli.
```
Original model 448M multilingual-e5-small-onnx/model.onnx
Model after above 112M multilingual-e5-small-onnx/model.onnx
```
I've tried the following export/quantize commands, but the model file size is still above 400MB
```
$ optimum-cli export onnx --task sentence-similarity -m intfloat/multilingual-e5-small --optimize O3 multilingual-e5-small-onnx
$ optimum-cli onnxruntime quantize --onnx_model multilingual-e5-small-onnx --avx2 --output test
```
```
403M Aug 1 09:38 test/model_quantized.onnx
```
Thank you!
|
https://github.com/huggingface/optimum/issues/1243
|
closed
|
[] | 2023-08-01T07:59:03Z
| 2023-08-01T21:45:46Z
| 2
|
jobergum
|
huggingface/sentence-transformers
| 2,266
|
How to measure the quanlity of embeddings?
|
I am using `sentence-transformers` to encode the big texts into input embeddings for a text classification task. However, I'm unsure how to compare the quality of embeddings when evaluating multiple models' performance. Could you please provide some advice?
|
https://github.com/huggingface/sentence-transformers/issues/2266
|
open
|
[] | 2023-08-01T06:59:41Z
| 2023-09-01T06:12:39Z
| null |
sgwhat
|
huggingface/trl
| 597
|
How to run using multi-GPUs?
|
Hi, I'm not so familiar with the training method using multi-GPUs.
I have a machine with 8 A100s, what should I do to full params SFT a llama2-7B model?
How to use the trl tool?
Thanks.
|
https://github.com/huggingface/trl/issues/597
|
closed
|
[] | 2023-08-01T06:36:27Z
| 2023-08-21T03:39:46Z
| null |
jyC23333
|
huggingface/diffusers
| 4,407
|
how to store hub_download on local directory?
|
### Describe the bug
running:
from huggingface_hub import hf_hub_url, hf_hub_download
```
# Generate/show the URL
hf_hub_url(
repo_id="XpucT/Deliberate",
filename="Deliberate-inpainting.safetensors",
)
# Download the file
hf_hub_download(
repo_id="XpucT/Deliberate",
filename="Deliberate-inpainting.safetensors",
)
```
but file is not stored on local directory
### Reproduction
same as above
### Logs
_No response_
### System Info
kaggle notebook
### Who can help?
@sayakpaul @patrickvonplaten @will
|
https://github.com/huggingface/diffusers/issues/4407
|
closed
|
[
"bug"
] | 2023-08-01T05:21:39Z
| 2023-08-01T05:55:46Z
| null |
andysingal
|
huggingface/datasets
| 6,108
|
Loading local datasets got strangely stuck
|
### Describe the bug
I try to use `load_dataset()` to load several local `.jsonl` files as a dataset. Every line of these files is a json structure only containing one key `text` (yeah it is a dataset for NLP model). The code snippet is as:
```python
ds = load_dataset("json", data_files=LIST_OF_FILE_PATHS, num_proc=16)['train']
```
However, I found that the loading process can get stuck -- the progress bar `Generating train split` no more proceed. When I was trying to find the cause and solution, I found a really strange behavior. If I load the dataset in this way:
```python
dlist = list()
for _ in LIST_OF_FILE_PATHS:
dlist.append(load_dataset("json", data_files=_)['train'])
ds = concatenate_datasets(dlist)
```
I can actually successfully load all the files despite its slow speed. But if I load them in batch like above, things go wrong. I did try to use Control-C to trace the stuck point but the program cannot be terminated in this way when `num_proc` is set to `None`. The only thing I can do is use Control-Z to hang it up then kill it. If I use more than 2 cpus, a Control-C would simply cause the following error:
```bash
^C
Process ForkPoolWorker-1:
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/multiprocess/process.py", line 314, in _bootstrap
self.run()
File "/usr/local/lib/python3.10/dist-packages/multiprocess/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/usr/local/lib/python3.10/dist-packages/multiprocess/pool.py", line 114, in worker
task = get()
File "/usr/local/lib/python3.10/dist-packages/multiprocess/queues.py", line 368, in get
res = self._reader.recv_bytes()
File "/usr/local/lib/python3.10/dist-packages/multiprocess/connection.py", line 224, in recv_bytes
buf = self._recv_bytes(maxlength)
File "/usr/local/lib/python3.10/dist-packages/multiprocess/connection.py", line 422, in _recv_bytes
buf = self._recv(4)
File "/usr/local/lib/python3.10/dist-packages/multiprocess/connection.py", line 387, in _recv
chunk = read(handle, remaining)
KeyboardInterrupt
Generating train split: 92431 examples [01:23, 1104.25 examples/s]
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/datasets/utils/py_utils.py", line 1373, in iflatmap_unordered
yield queue.get(timeout=0.05)
File "<string>", line 2, in get
File "/usr/local/lib/python3.10/dist-packages/multiprocess/managers.py", line 818, in _callmethod
kind, result = conn.recv()
File "/usr/local/lib/python3.10/dist-packages/multiprocess/connection.py", line 258, in recv
buf = self._recv_bytes()
File "/usr/local/lib/python3.10/dist-packages/multiprocess/connection.py", line 422, in _recv_bytes
buf = self._recv(4)
File "/usr/local/lib/python3.10/dist-packages/multiprocess/connection.py", line 387, in _recv
chunk = read(handle, remaining)
KeyboardInterrupt
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/mnt/data/liyongyuan/source/batch_load.py", line 11, in <module>
a = load_dataset(
File "/usr/local/lib/python3.10/dist-packages/datasets/load.py", line 2133, in load_dataset
builder_instance.download_and_prepare(
File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 954, in download_and_prepare
self._download_and_prepare(
File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1049, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1842, in _prepare_split
for job_id, done, content in iflatmap_unordered(
File "/usr/local/lib/python3.10/dist-packages/datasets/utils/py_utils.py", line 1387, in iflatmap_unordered
[async_result.get(timeout=0.05) for async_result in async_results]
File "/usr/local/lib/python3.10/dist-packages/datasets/utils/py_utils.py", line 1387, in <listcomp>
[async_result.get(timeout=0.05) for async_result in async_results]
File "/usr/local/lib/python3.10/dist-packages/multiprocess/pool.py", line 770, in get
raise TimeoutError
multiprocess.context.TimeoutError
```
I have validated the basic correctness of these `.jsonl` files. They are correctly formatted (or they cannot be loaded singly by `load_dataset`) though some of the json may contain too long text (more than 1e7 characters). I do not know if this could be the problem. And there should not be any bottleneck in system's resource. The whole dataset is ~300GB, and I am using a cloud server with plenty of storage and 1TB ram.
Thanks for your efforts and patience! Any suggestion or help would be appreciated.
### Steps to reproduce the bug
1. use load_dataset() with `data_files = LIST_OF_FILES`
### Expected behavior
All the files should be smoothly loaded.
### Environment info
- Datasets: A private datas
|
https://github.com/huggingface/datasets/issues/6108
|
open
|
[] | 2023-08-01T02:28:06Z
| 2024-12-31T16:01:00Z
| 7
|
LoveCatc
|
huggingface/chat-ui
| 379
|
Issue with Chat UI when deploying Text Generation API on a remote server
|
I am facing an issue with the Chat UI while using the Text Generation API. Everything works correctly when the Text Generation API is deployed on localhost, but the Chat UI doesn't work when the Text Generation API is deployed on a remote server.
Steps to reproduce the problem:
1. Deploy the Text Generation API on localhost.
2. Use the Chat UI to generate text and verify that it works correctly.
3. Deploy the Text Generation API on a remote server.
4. Use the Chat UI again to generate text and notice that it no longer works.
Expected behavior:
The Chat UI should work properly, whether the Text Generation API is deployed on localhost or on a remote server.
Additional information:
- I am using version 0.4 of the Chat UI and version 0.9.3 of the Text Generation API.
- The remote server hosting the Text Generation API responds correctly to requests.
- Tests have been conducted with the "text generation" client and Postman.
Any assistance in resolving this issue would be highly appreciated. Thank you!

|
https://github.com/huggingface/chat-ui/issues/379
|
open
|
[
"support"
] | 2023-07-31T17:22:49Z
| 2023-09-18T12:55:45Z
| 0
|
bilal-rachik
|
huggingface/chat-ui
| 378
|
Add support for endpoints requiring client authentication using PKI
|
Hi,
Are you open to adding support for endpoints that require client authentication using PKI? I have a requirement to use client authentication with our backend inference server.
Currently authentication config from each endpoint is passed to the headers arg of the fetch command: https://github.com/huggingface/chat-ui/blob/main/src/lib/server/generateFromDefaultEndpoint.ts#L35
My quick googling has yielded this: https://sebtrif.xyz/blog/2019-10-03-client-side-ssl-in-node-js-with-fetch/
tl;dr; they create a `https.Agent(..)` which loads a PKI context from file which is passed to the `agent` arg in the fetch command.
If you're happy for this to be added, how would you like to separate the logic of authentication using headers and client authentication using an SSL context?
Thank you! :)
|
https://github.com/huggingface/chat-ui/issues/378
|
closed
|
[
"question",
"front"
] | 2023-07-31T17:13:53Z
| 2023-08-15T18:51:29Z
| null |
cambriancoder
|
huggingface/chat-ui
| 377
|
Provide a login button, for existing users?
|
I just changed to another laptop, and didn't find a login button to see and work with my account from Huggingface. After I used once the Chat, I got a message to Login. I would suggest making it more traditional to have a username and a login button on the left sidebar.
|
https://github.com/huggingface/chat-ui/issues/377
|
closed
|
[
"enhancement",
"front"
] | 2023-07-31T12:08:52Z
| 2023-08-02T12:19:30Z
| 1
|
tobiashochguertel
|
huggingface/datasets
| 6,104
|
HF Datasets data access is extremely slow even when in memory
|
### Describe the bug
Doing a simple `some_dataset[:10]` can take more than a minute.
Profiling it:
<img width="1280" alt="image" src="https://github.com/huggingface/datasets/assets/36224762/e641fb95-ff02-4072-9016-5416a65f75ab">
`some_dataset` is completely in memory with no disk cache.
This is proving fatal to my usage of HF Datasets. Is there a way I can forgo the arrow format and store the dataset as PyTorch tensors so that `_tensorize` is not needed? And is `_consolidate` supposed to take this long?
It's faster to produce the dataset from scratch than to access it from HF Datasets!
### Steps to reproduce the bug
I have uploaded the dataset that causes this problem [here](https://huggingface.co/datasets/NightMachinery/hf_datasets_bug1).
```python
#!/usr/bin/env python3
import sys
import time
import torch
from datasets import load_dataset
def main(dataset_name):
# Start the timer
start_time = time.time()
# Load the dataset from Hugging Face Hub
dataset = load_dataset(dataset_name)
# Set the dataset format as torch
dataset.set_format(type="torch")
# Perform an identity map
dataset = dataset.map(lambda example: example, batched=True, batch_size=20)
# End the timer
end_time = time.time()
# Print the time taken
print(f"Time taken: {end_time - start_time:.2f} seconds")
if __name__ == "__main__":
dataset_name = "NightMachinery/hf_datasets_bug1"
print(f"dataset_name: {dataset_name}")
main(dataset_name)
```
### Expected behavior
_
### Environment info
- `datasets` version: 2.13.1
- Platform: Linux-5.15.0-76-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.16.4
- PyArrow version: 12.0.1
- Pandas version: 2.0.3
|
https://github.com/huggingface/datasets/issues/6104
|
open
|
[] | 2023-07-31T11:12:19Z
| 2023-08-01T11:22:43Z
| 1
|
NightMachinery
|
huggingface/diffusers
| 4,382
|
HOW TO Overcoming the Influence of Seed and Enhancing the Role of Text Prompts
|
I fine-tuned a text2img model using Lora, based on the v1.5 version of stable diffusion. The results generated are very good.
But they can’t be controlled. It seems that the generated results are more based on the seed. Changing the seed changes the image, And if I don’t change the seed and only change the text prompt, the result doesn’t change, or there are only very slight changes.
1. How should I solve this problem?
2. I would like to request a new feature that helps balance the influence between the seed and the prompt, as some questions are indeed sensitive to the seed.
|
https://github.com/huggingface/diffusers/issues/4382
|
closed
|
[] | 2023-07-31T07:41:03Z
| 2023-08-02T09:23:50Z
| null |
XiaoyuZhuang
|
huggingface/transformers.js
| 230
|
[Question] distiluse-base-multilingual-cased-v2 - wrong vector dimension (768 vs 512) in onnx version?
|
I was just playing around with the model [distiluse-base-multilingual-cased-v2](https://huggingface.co/sentence-transformers/distiluse-base-multilingual-cased-v2) and noticed that your onnx versions both (quantized and normal) produce embeddings with 768-dimensional vectors instead of 512.
Example:
index.html
```html
<!DOCTYPE html>
<html>
<head>
<title>Transformers.js Example</title>
</head>
<body>
<h1>Transformers.js Example</h1>
<script type="module" src="main.js"></script>
</body>
</html>
```
main.js
```javascript
import { pipeline } from 'https://cdn.jsdelivr.net/npm/@xenova/transformers@2.4.4';
async function allocatePipeline() {
let pipe = await pipeline("feature-extraction",
"Xenova/distiluse-base-multilingual-cased-v2");
let out = await await pipe("test", { pooling: 'mean', normalize: true });
console.log(out);
}
allocatePipeline();
```
That gives me
```
Proxy(s) {dims: Array(2), type: 'float32', data: Float32Array(768), size: 768}
```
However, the model page states
> This is a [sentence-transformers](https://www.sbert.net/) model: It maps sentences & paragraphs to a 512 dimensional dense vector space and can be used for tasks like clustering or semantic search.
Also, I used the Python package
```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer('sentence-transformers/distiluse-base-multilingual-cased-v2')
model.encode("test")
```
which gives me a correct 512-dimensional embedding.
Am I missing some option here or overseeing the obvious?
|
https://github.com/huggingface/transformers.js/issues/230
|
closed
|
[
"question"
] | 2023-07-30T16:49:36Z
| 2024-10-18T13:30:12Z
| null |
do-me
|
huggingface/trl
| 592
|
How to load a custom structure model?
|
hello, when I run the following code, I am prompted that only support `AutoModelForCausalLMWithValueHead` and `AutoModelForSeq2SeqLMWithValueHead`. But these two structures seem to only be able to load the specified pre-trained model.
`ppo_trainer = PPOTrainer(config, gen_model, gen_ref_model, tokenizer)`
My model is trained by the T5, and the structure has changed. I would like to know how to load my model? Is it supported?
|
https://github.com/huggingface/trl/issues/592
|
closed
|
[] | 2023-07-30T15:42:18Z
| 2023-08-31T11:00:56Z
| null |
estuday
|
huggingface/datasets
| 6,099
|
How do i get "amazon_us_reviews
|
### Feature request
I have been trying to load 'amazon_us_dataset" but unable to do so.
`amazon_us_reviews = load_dataset('amazon_us_reviews')`
`print(amazon_us_reviews)`
> [ValueError: Config name is missing.
Please pick one among the available configs: ['Wireless_v1_00', 'Watches_v1_00', 'Video_Games_v1_00', 'Video_DVD_v1_00', 'Video_v1_00', 'Toys_v1_00', 'Tools_v1_00', 'Sports_v1_00', 'Software_v1_00', 'Shoes_v1_00', 'Pet_Products_v1_00', 'Personal_Care_Appliances_v1_00', 'PC_v1_00', 'Outdoors_v1_00', 'Office_Products_v1_00', 'Musical_Instruments_v1_00', 'Music_v1_00', 'Mobile_Electronics_v1_00', 'Mobile_Apps_v1_00', 'Major_Appliances_v1_00', 'Luggage_v1_00', 'Lawn_and_Garden_v1_00', 'Kitchen_v1_00', 'Jewelry_v1_00', 'Home_Improvement_v1_00', 'Home_Entertainment_v1_00', 'Home_v1_00', 'Health_Personal_Care_v1_00', 'Grocery_v1_00', 'Gift_Card_v1_00', 'Furniture_v1_00', 'Electronics_v1_00', 'Digital_Video_Games_v1_00', 'Digital_Video_Download_v1_00', 'Digital_Software_v1_00', 'Digital_Music_Purchase_v1_00', 'Digital_Ebook_Purchase_v1_00', 'Camera_v1_00', 'Books_v1_00', 'Beauty_v1_00', 'Baby_v1_00', 'Automotive_v1_00', 'Apparel_v1_00', 'Digital_Ebook_Purchase_v1_01', 'Books_v1_01', 'Books_v1_02']
Example of usage:
`load_dataset('amazon_us_reviews', 'Wireless_v1_00')`]
__________________________________________________________________________
`amazon_us_reviews = load_dataset('amazon_us_reviews', 'Watches_v1_00')
print(amazon_us_reviews)`
**ERROR**
`Generating` train split: 0%
0/960872 [00:00<?, ? examples/s]
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
/usr/local/lib/python3.10/dist-packages/datasets/builder.py in _prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, split_info, check_duplicate_keys, job_id)
1692 )
-> 1693 example = self.info.features.encode_example(record) if self.info.features is not None else record
1694 writer.write(example, key)
11 frames
KeyError: 'marketplace'
The above exception was the direct cause of the following exception:
DatasetGenerationError Traceback (most recent call last)
/usr/local/lib/python3.10/dist-packages/datasets/builder.py in _prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, split_info, check_duplicate_keys, job_id)
1710 if isinstance(e, SchemaInferenceError) and e.__context__ is not None:
1711 e = e.__context__
-> 1712 raise DatasetGenerationError("An error occurred while generating the dataset") from e
1713
1714 yield job_id, True, (total_num_examples, total_num_bytes, writer._features, num_shards, shard_lengths)
DatasetGenerationError: An error occurred while generating the dataset
### Motivation
The dataset I'm using
https://huggingface.co/datasets/amazon_us_reviews
### Your contribution
What is the best way to load this data
|
https://github.com/huggingface/datasets/issues/6099
|
closed
|
[
"enhancement"
] | 2023-07-30T11:02:17Z
| 2023-08-21T05:08:08Z
| 10
|
IqraBaluch
|
huggingface/trl
| 591
|
how to use SFTTrainer for multi turns dialogue?
|
I wanto use SFTTrainer to train a multi turns dialogues. does it apply to llama-2-7b-cha-hf? is it same to llama-2-7b-hf for instruction tune?
my dataset is multi turns dialogues.
the prompt is:
```
<s>[INST] <<SYS>>
{{ system_prompt }}
<</SYS>>
{{ user_msg_1 }} [/INST] {{ model_answer_1 }} </s><s>[INST] {{ user_msg_2 }} [/INST] {{ model_answer_2 }} </s><s>[INST] {{ user_msg_3 }} [/INST]
```
|
https://github.com/huggingface/trl/issues/591
|
closed
|
[] | 2023-07-30T05:47:40Z
| 2023-08-01T06:21:04Z
| null |
moseshu
|
huggingface/transformers.js
| 228
|
[Question] Chaining automatic-speech recognition tasks sometimes produces weird output?
|
Hi! I'm using the automatic-speech recognition task with vanilla nodejs (20) for (almost) live transcription (after the person has stopped talking)
This is the setup I'm using as per the docs:
```
const multilingual = true;
const model = "base";
const modelName = `Xenova/whisper-${model}${multilingual ? "" : ".en"}`;
const transcriber = await pipeline("automatic-speech-recognition", modelName);
const wav = new wavefile.WaveFile();
wav.fromScratch(1, 48000, "32f", audioBuffer.getChannelData(0));
wav.toSampleRate(16000); // Whisper expects audio with a sampling rate of 16000
let audioData = wav.getSamples();
if (Array.isArray(audioData)) {
audioData = audioData[0];
}
let output = await transcriber(audioData);
```
This code almost works perfectly (also verified the wav files by saving them locally)
But every once in a while the model seems to get stuck for a couple of seconds. I can't say if this is because I'm sending multiple requests to the pipe while there's still a task in progress (multiple speakers), or something else entirely. Sadly I don't think there's any documentation if the pipeline has a queue of some sort or if it just mangles the data weirdly.
The output will look like this even though the sound-snippet only contains a single "Ah...":
```
took 7.202248899996281s: Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah...
```
or like this (no music was being played)
```
took 6.9480034999996425s: [Music]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]
```
Generation time is also much, much longer (normally under 1s with whisper-base, this is the main problem I'm facing)
Is this is a bug? I was thinking of working around the problem by canceling the operation if it takes longer than 2-3s if that's possible, but that'd just be the laziest workaround.
(something like `pipe.cancel();` or equivalent)
Or alternatively implementing a queue myself if it actually jumbles data when chaining tasks
Thanks so much in advance for any suggestions!
|
https://github.com/huggingface/transformers.js/issues/228
|
closed
|
[
"question"
] | 2023-07-30T01:32:26Z
| 2024-12-07T14:45:02Z
| null |
funiel
|
huggingface/diffusers
| 4,363
|
how to properly load sd_xl_base_1.0_0.9vae.safetensors
|
### Describe the bug
hi, how should i load sd_xl_base_1.0_0.9vae.safetensors given the namespace is the same as 1.0 one?
### Reproduction
N/A
### Logs
_No response_
### System Info
ec2
### Who can help?
@sayakpaul @patrick
|
https://github.com/huggingface/diffusers/issues/4363
|
closed
|
[
"bug",
"stale"
] | 2023-07-29T21:16:34Z
| 2023-10-18T15:14:58Z
| null |
MaxTran96
|
huggingface/optimum-neuron
| 151
|
any example of how to use with Accelerate?
|
All the examples seem to replace `Trainer` but we are using `Accelerate`. Much appreciated! :)
|
https://github.com/huggingface/optimum-neuron/issues/151
|
closed
|
[
"Stale"
] | 2023-07-29T05:51:20Z
| 2024-12-02T08:05:47Z
| null |
jiangts
|
huggingface/transformers.js
| 226
|
voice recognition
|
@xenova hello bro i wish every things is good on you so i just wanna ask if we can recognize an audio file using his buffer ecxept wav extensions only i mean using mp3 file buffer or flac extension?
```
// Load audio data
let url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/jfk.wav';
let buffer = Buffer.from(await fetch(url).then(x => x.arrayBuffer()))
// Read .wav file and convert it to required format
let wav = new wavefile.WaveFile(buffer);
wav.toBitDepth('32f'); // Pipeline expects input as a Float32Array
wav.toSampleRate(16000); // Whisper expects audio with a sampling rate of 16000
let audioData = wav.getSamples();
if (Array.isArray(audioData)) {
// For this demo, if there are multiple channels for the audio file, we just select the first one.
// In practice, you'd probably want to convert all channels to a single channel (e.g., stereo -> mono).
audioData = audioData[0];
}
```
|
https://github.com/huggingface/transformers.js/issues/226
|
closed
|
[
"question"
] | 2023-07-28T16:14:50Z
| 2023-08-20T23:43:31Z
| null |
jedLahrim
|
huggingface/chat-ui
| 372
|
Can I add i18n support?
|
Would be great to support the standard i18n in frontend, we can contribute with it, do you see that it would be an accepted contribution?
Maybe using this lib [kaisermann/svelte-i18n](https://github.com/kaisermann/svelte-i18n/blob/main/docs/Getting%20Started.md)
|
https://github.com/huggingface/chat-ui/issues/372
|
closed
|
[
"enhancement",
"question",
"front"
] | 2023-07-28T11:56:55Z
| 2024-06-17T18:07:41Z
| null |
juancgalvis
|
huggingface/chat-ui
| 371
|
Improve the UI, to be flexible width?
|
The left sidebar is growing here, and I wished I could make it wider. Same for the middle part, which is centered, and sometimes I have to scroll to the side to see the whole code block because the middle part has a left and right margin, what I can't control.
It would be great when we could set the percent value for the left sidebar and the middle part in users' profile?
|
https://github.com/huggingface/chat-ui/issues/371
|
open
|
[] | 2023-07-28T11:27:27Z
| 2023-07-28T15:16:38Z
| 2
|
tobiashochguertel
|
huggingface/accelerate
| 1,786
|
Problem about how to save memory on 2 GPU at one machine.
|
Why I run my script on one GPU at batch_size 8,nothing happened, I use the accelerate launch my script on 2 GPU at same batch_size, both process terminate because CUDA out of Memory.
Here is my config :
compute_environment: LOCAL_MACHINE
distributed_type: MULTI_GPU
downcast_bf16: 'no'
dynamo_config:
dynamo_backend: INDUCTOR
gpu_ids: all
machine_rank: 0
main_training_function: main
mixed_precision: 'no'
num_machines: 1
num_processes: 2
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false
When normal run my script on one GPU, the memory util is about 23GB/24GB.
Is this config make my process use more memory?
|
https://github.com/huggingface/accelerate/issues/1786
|
closed
|
[] | 2023-07-28T09:42:43Z
| 2023-09-15T15:06:17Z
| null |
Kangkang625
|
huggingface/text-generation-inference
| 720
|
How to make sure the local tgi server's performance is ok
|
### Feature request
Hello, I just deployed the tgi server as docs in docker container on an single A100 and have a load test with bloom-7b1, but the performance has come a long way from other inference servers, like vllm, fastertransformer in the same environment & condition. So, if there is something like an official performance table for a beginner like me to make sure the performance is ok, or there are detailed instructions for me to check and set up some options to improve throughput. Thanks a lot!
### Motivation
None
### Your contribution
None
|
https://github.com/huggingface/text-generation-inference/issues/720
|
closed
|
[
"Stale"
] | 2023-07-28T07:57:18Z
| 2024-04-25T01:58:42Z
| null |
lichangW
|
huggingface/transformers.js
| 224
|
[Question] Merge whisper-base.en main and output_attentions?
|
I can see there is `output_attentions` branch on https://huggingface.co/Xenova/whisper-base.en/tree/main and the difference from `main` seems it can support `return_timestamps: 'word'`.
Is there a plan/schedule to merge these two?
Or these two branches are incompatible to be merged together? In such case, will both receive future updates?
|
https://github.com/huggingface/transformers.js/issues/224
|
closed
|
[
"question"
] | 2023-07-28T07:44:52Z
| 2023-09-04T20:59:21Z
| null |
jozefchutka
|
huggingface/blog
| 1,352
|
How to train the autoformer?
|
Dear authors,
I have read your blog at https://huggingface.co/blog/autoformer, it is great to explain why transformer is better than Dlinear.
However, I am wondering how to train my own Autoformer instead of using a pretrained Autoformer.
Best regards
|
https://github.com/huggingface/blog/issues/1352
|
open
|
[] | 2023-07-28T03:28:33Z
| 2023-12-07T17:40:09Z
| null |
AppleMax1992
|
huggingface/text-generation-inference
| 718
|
How to make sure Flash and PagedAttention are running?
|
### System Info
I am running the following for llamav2, and was wondering how I can make sure pagedattention and flashattention are running? any Flag to be set or they are enabled by default?
```
docker run --gpus all --shm-size 1g -p $PORT:80 \
-v $PWD/data:/data \
-e HUGGING_FACE_HUB_TOKEN=$token \
ghcr.io/huggingface/text-generation-inference:0.9.3 \
--model-id $MODEL \
--sharded false \
--max-input-length 1024 \
--max-total-tokens 2048 \
--max-best-of 5 \
--max-concurrent-requests 5000 \
--max-batch-total-tokens $TOKENS\
--num-shard 4
```
### Information
- [X] Docker
- [ ] The CLI directly
### Tasks
- [ ] An officially supported command
- [ ] My own modifications
### Reproduction
It more of question not a bug.
### Expected behavior
just doc clarification.
|
https://github.com/huggingface/text-generation-inference/issues/718
|
closed
|
[] | 2023-07-27T22:55:26Z
| 2023-07-28T08:19:20Z
| null |
HamidShojanazeri
|
huggingface/text-generation-inference
| 716
|
How to load private model in tgi in docker and difference inference performance when loading from huggingface/loading from locally directory
|
Hi team,
How do we load a private model in tgi in the docker because of the access issue?
One solution I think is to pre-download the model and then mount the model directory and load into tgi. However, I find out there is a big performance inference gap between these two methods and could the team provide some hints on why is it?
Reproduce step:
Model example: bigcode/santacoder
1. inference on 100 tokens via model-id bigcode/santacoder is 180ms
Command: `docker run --gpus all --shm-size 1g -p 8080:80 -v /data:/data ghcr.io/huggingface/text-generation-inference:0.9.4 --model-id bigcode/santacoder --num-shard 1 --max-input-length 1000 --max-total-tokens 2000 --max-batch-total-tokens 4096 --max-concurrent-requests 1 --max-stop-sequences 20 --dtype float16 --trust-remote-code`
total_time="158.787824ms" validation_time="221.404µs" queue_time="48.671µs" inference_time="158.517849ms" time_per_token="7.925892ms"
2.1 first git clone the bigcode/santacoder directory by running `git lfs install && git clone https://huggingface.co/bigcode/santacoder `
2.2 running docker image loading via model-id santacoder directory. inference on 100 tokens is 280ms.
command
`docker run --gpus all -v santacoder_path:/model --shm-size 1g -p 8080:80 -v /data:/data ghcr.io/huggingface/text-generation-inference:0.9.4 --model-id /model --num-shard 1 --max-input-length 1000 --max-total-tokens 2000 --max-batch-total-tokens 4096 --max-concurrent-requests 1 --max-stop-sequences 20 --dtype float16 --trust-remote-code`
total_time="329.15002ms" validation_time="183.883µs" queue_time="52.371µs" inference_time="328.914016ms" time_per_token="16.4457ms" seed="None"}:
For loading with local directory, it takes more time to shard and it has one warning about Model does not support automatic max batch total tokens. Also the output is garbage.
Test Command for query server `curl 127.0.0.1:8080/generate -X POST -d '{"inputs":"What is Deep Learning?","parameters":{"max_new_tokens":20}}' -H 'Content-Type: application/json
`
I think there may be some additional steps to make model better performance but I have not realized it yet. Thanks for the help in advance!
Docker image version: ghcr.io/huggingface/text-generation-inference:0.9.4
|
https://github.com/huggingface/text-generation-inference/issues/716
|
closed
|
[] | 2023-07-27T21:12:38Z
| 2023-07-28T07:12:53Z
| null |
zch-cc
|
huggingface/text-generation-inference
| 711
|
How could I know what is wrong when connect refuse happen?
|
Hi
I try with below command to launch the docker.
```
docker run --rm --name tgi --runtime=nvidia -e NVIDIA_VISIBLE_DEVICES=1 -p 8080:80 ghcr.io/huggingface/text-generation-inference:0.9.3 --model-id decapoda-research/llama-7b-hf
```
At this moment, with netstat, I could see in host, 8080 port is already listened.
tcp 0 0 0.0.0.0:8080 0.0.0.0:* LISTEN
and with
```
curl 127.0.0.1:8080/generate \
-X POST \
-d '{"inputs":"What is Deep Learning?","parameters":{"max_new_tokens":20}}' \
-H 'Content-Type: application/json'
```
But I get connect refuse.
Is there some debugging method to check what goes wrong for this bug?
Thx
|
https://github.com/huggingface/text-generation-inference/issues/711
|
closed
|
[] | 2023-07-27T13:59:48Z
| 2023-07-27T14:10:46Z
| null |
leiwen83
|
huggingface/transformers
| 25,138
|
How to return detected language using whisper with asr pipeline?
|
### System Info
- `transformers` version: 4.31.0
- Platform: Linux-5.15.0-67-generic-x86_64-with-glibc2.31
- Python version: 3.10.12
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.1
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1 (False)
- Tensorflow version (GPU?): 2.11.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@sanchit-gandhi, @Narsil
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Hello,
I'm trying to use asr pipeline with whisper, in other to detect an audio language and transcribe it. I get the transcribed audio successfully, but I have not found a way to return the detected language too.
I search the GitHub issues, and it seems this was added by [#21427](https://github.com/huggingface/transformers/pull/21427), but I don't know how to return the detected language. Here is my code:
```
from transformers import pipeline
import torch
speech_file = "input.mp3"
device = "cuda:0" if torch.cuda.is_available() else "cpu"
whisper = pipeline("automatic-speech-recognition", max_new_tokens=448, model="openai/whisper-small", device=device)
whisper_result = whisper(speech_file)
print(whisper_result)
```
### Expected behavior
Be able to return detected language.
|
https://github.com/huggingface/transformers/issues/25138
|
closed
|
[] | 2023-07-27T10:51:31Z
| 2025-02-11T11:24:49Z
| null |
arso1er
|
huggingface/text-generation-inference
| 703
|
Is there an example how to quantize a model (e.g. meta-llama/Llama-2-7b-chat-hf) using the prebuilt docker image (e.g. ghcr.io/huggingface/text-generation-inference:0.9.3)
|
### System Info
0.9.3
### Information
- [ ] Docker
- [ ] The CLI directly
### Tasks
- [ ] An officially supported command
- [ ] My own modifications
### Reproduction
NA
### Expected behavior
A command to quantize a model (e.g. meta-llama/Llama-2-7b-chat-hf) using the prebuilt docker image (e.g. ghcr.io/huggingface/text-generation-inference:0.9.3)
After quantization, the model should be able to be loaded with `text-generation-inference --quantize gptq`
|
https://github.com/huggingface/text-generation-inference/issues/703
|
closed
|
[] | 2023-07-27T01:08:54Z
| 2023-07-28T21:41:46Z
| null |
taoari
|
huggingface/sentence-transformers
| 2,262
|
How to pass more than sentence pairs to InputExamples for fine-tuning?
|
I have more information about each data point such as language and contextual data that could potentially help (maybe) for our task. The task is to generate sentence similarity embedding and labels.
For the time being, I was able to expand the input examples code to get these features in to expand the input.
```
Train_data = [‘sentence1’,’sentence2’,’textcategory1’,’label’]
Train_examples =[InputExample(texts=[x[0],x[1],x[2]],label=x[3]) for x in Train_data]
```
Since the `textcategory1` gets encoded as well at the end of the input example in the form of `sentence1[0];sentence2[0];textcategory1[0]` separated by ;.
1. How does this impact the overall input for a model since it doesnt just see a sentence pair but more?
2. Does the fine-tuning layer see the two sentences as pairs or it sees as a single input and a label?
3. Even though it works, if this is not the correct way how do I include the sense of tokens for the fine-tuning? I.e. use textcategory1 as <TOKEN1> or feature without messing with the embedding.
|
https://github.com/huggingface/sentence-transformers/issues/2262
|
open
|
[] | 2023-07-26T18:29:54Z
| 2023-07-30T15:39:24Z
| null |
cyriltw
|
huggingface/trl
| 578
|
How to load a trained reward model? Different (random) results each time the model is loaded.
|
I trained a reward model using QLoRA and now I want to load it. I followed the instructions from this example from peft:
https://github.com/huggingface/peft/blob/main/examples/sequence_classification/LoRA.ipynb
This leads me to the following code:
```
import torch
from peft import PeftModel, PeftConfig
from transformers import AutoModelForSequenceClassification, AutoTokenizer
peft_model_id = "vincentmin/llama-2-7b-reward-oasst1"
config = PeftConfig.from_pretrained(peft_model_id)
model = AutoModelForSequenceClassification.from_pretrained(
config.base_model_name_or_path,
num_labels=1,
load_in_8bit=True,
torch_dtype=torch.float16,
)
model = PeftModel.from_pretrained(model, peft_model_id)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path, use_auth_token=True)
model.eval()
with torch.no_grad():
reward = model(**tokenizer("hello world", return_tensors='pt')).logits
reward
```
If I run this code twice in a row, including loading the model again, I get different results for `reward`. The model output should be deterministic. If I just calculate the reward with the same loaded model, the result is deterministic. Hence, I'm concluding that there are randomly initialised weights that are not correctly loaded with `PeftModel.from_pretrained`. If I try to test the model on the test data, I'm getting random (close to 50% accuracy) results, while the model reached accuracies of >70% during training.
I trained the model using an adaptation of https://github.com/lvwerra/trl/blob/main/examples/scripts/reward_trainer.py. The resulting configuration is here https://huggingface.co/vincentmin/llama-2-7b-reward-oasst1/blob/main/adapter_config.json.
How are we advised to push and load our finetuned reward models to get deterministic results? I think the community would benefit from a documented example as a companion to `reward_trainer.py`.
|
https://github.com/huggingface/trl/issues/578
|
closed
|
[] | 2023-07-26T15:02:13Z
| 2023-07-26T19:00:10Z
| null |
vincentmin
|
huggingface/datasets
| 6,078
|
resume_download with streaming=True
|
### Describe the bug
I used:
```
dataset = load_dataset(
"oscar-corpus/OSCAR-2201",
token=True,
language="fr",
streaming=True,
split="train"
)
```
Unfortunately, the server had a problem during the training process. I saved the step my training stopped at.
But how can I resume download from step 1_000_´000 without re-streaming all the first 1 million docs of the dataset?
`download_config=DownloadConfig(resume_download=True)` seems to not work with streaming=True.
### Steps to reproduce the bug
```
from datasets import load_dataset, DownloadConfig
dataset = load_dataset(
"oscar-corpus/OSCAR-2201",
token=True,
language="fr",
streaming=True, # optional
split="train",
download_config=DownloadConfig(resume_download=True)
)
# interupt the run and try to relaunch it => this restart from scratch
```
### Expected behavior
I would expect a parameter to start streaming from a given index in the dataset.
### Environment info
- `datasets` version: 2.14.0
- Platform: Linux-5.19.0-45-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.15.1
- PyArrow version: 12.0.1
- Pandas version: 2.0.0
|
https://github.com/huggingface/datasets/issues/6078
|
closed
|
[] | 2023-07-26T14:08:22Z
| 2023-07-28T11:05:03Z
| 3
|
NicolasMICAUX
|
huggingface/diffusers
| 4,281
|
how o convert trained LoRA bin format file to A111 safetensor format
|
### Describe the bug
I find script convert_lora_safetensor_to_diffusers.py,but it seems like convert safetensors to bin,not bin to safetensors,I try run this script,error like this:
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ C:\Users\fut\Desktop\tinaniu\convert_lora_safetensor_to_diffusers.py:125 in <module> │
│ │
│ 122 │ lora_prefix_text_encoder = args.lora_prefix_text_encoder │
│ 123 │ alpha = args.alpha │
│ 124 │ │
│ ❱ 125 │ pipe = convert(base_model_path, checkpoint_path, lora_prefix_unet, lora_prefix_text_ │
│ 126 │ │
│ 127 │ pipe = pipe.to(args.device) │
│ 128 │ pipe.save_pretrained(args.dump_path, safe_serialization=args.to_safetensors) │
│ │
│ C:\Users\fut\Desktop\tinaniu\convert_lora_safetensor_to_diffusers.py:31 in convert │
│ │
│ 28 │ pipeline = StableDiffusionPipeline.from_pretrained(base_model_path, torch_dtype=torc │
│ 29 │ │
│ 30 │ # load LoRA weight from .safetensors │
│ ❱ 31 │ state_dict = load_file(checkpoint_path) │
│ 32 │ │
│ 33 │ visited = [] │
│ 34 │
│ │
│ D:\anaconda3\lib\site-packages\safetensors\torch.py:259 in load_file │
│ │
│ 256 │ ``` │
│ 257 │ """ │
│ 258 │ result = {} │
│ ❱ 259 │ with safe_open(filename, framework="pt", device=device) as f: │
│ 260 │ │ for k in f.keys(): │
│ 261 │ │ │ result[k] = f.get_tensor(k) │
│ 262 │ return result │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
SafetensorError: Error while deserializing header: HeaderTooLarge
### Reproduction
SafetensorError: Error while deserializing header: HeaderTooLarge
### Logs
_No response_
### System Info
diffusers==0.18.2
### Who can help?
_No response_
|
https://github.com/huggingface/diffusers/issues/4281
|
closed
|
[
"bug",
"stale"
] | 2023-07-26T08:16:48Z
| 2023-09-04T15:03:46Z
| null |
futureflsl
|
huggingface/llm-vscode
| 50
|
the vsix doesn't work?,how to fix it
|
i download the vsix from https://marketplace.visualstudio.com/items?itemName=HuggingFace.huggingface-vscode&ssr=false#version-history,but in vscode when i installed it ,it doesn't work 。could you fix this?
|
https://github.com/huggingface/llm-vscode/issues/50
|
closed
|
[] | 2023-07-26T07:05:17Z
| 2023-10-17T14:34:58Z
| null |
CuteBadEgg
|
huggingface/transformers.js
| 216
|
[Question] Getting a lot of ERR 404s when running in browser.
|
When implementing code that accesses bart-large-mnli in the front-end part of my code, the browser console tells me every attempt to use the pipeline fails with an error 404. (at least that's what I think it's telling me)
So I am trying to use the bart-large-mnli to analyze a bunch of 'post' objects, and only display them if the text in the post relates to a selected 'interest'.
Here is my javascript code to do that (checkRelevance.js):
```
import { pipeline } from "@xenova/transformers";
export default async function checkTweet(text, interest) {
try {
console.log(
`checking tweet...\ntext:${text.substring(
0,
10
)}...\ninterest:${interest}`
);
let pipe = await pipeline(
"zero-shot-classification",
"Xenova/bart-large-mnli",
{ quantized: false }
);
// console.log("await out...");
let out = await pipe(text, interest);
console.log(out);
const relevant = out.scores[0] >= 0.5;
console.log(out.scores[0]);
return relevant;
} catch (error) {
console.log(error);
}
}
```
And here is how it is implemented in the front end Feed.jsx:
```
useEffect(() => {
setFilteredPosts(posts.map(post => {
checkTweet(post.text, selectedInterest).then(result => {
if (result) {
return post
}
}
)
}))
}, [selectedInterest]);
// ...
filteredPosts.map((post) => (
<Post
displayName={post.displayName}
userName={post.userName}
verified={post.verified}
text={post.text}
image={post.image}
avatar={post.avatar}
/>)
```
Now when I run checkRelevance.js on it's own with a small test, it accesses the api just fine, but when it's implemented in the browser I get this:
<img width="467" alt="Screen Shot 2023-07-25 at 5 40 40 PM" src="https://github.com/xenova/transformers.js/assets/77216995/6d693e09-d12d-4cfc-855d-7a764e0faca3">
and then this:
<img width="475" alt="Screen Shot 2023-07-25 at 5 41 06 PM" src="https://github.com/xenova/transformers.js/assets/77216995/50ad64c1-28b3-4469-8171-e652ecdc0a33">
I'm not asking you to debug all my code lol, just wondering if there's something extra that needs doing for running it in the browser. If you need to see more lmk. Thanks!
|
https://github.com/huggingface/transformers.js/issues/216
|
closed
|
[
"question"
] | 2023-07-26T00:42:20Z
| 2023-08-20T23:43:04Z
| null |
eklavyaisabird
|
huggingface/transformers.js
| 215
|
[Question] How to use a sharp buffer as input to "image-classification" pipeline ?
|
hi,
i am looking to use a sharp buffer as an input to "image-classification" pipeline, it seems that only url can be provided as an input, i am using the model in nodejs environment (backend) , can anyone provide a solution to this.
thanks
|
https://github.com/huggingface/transformers.js/issues/215
|
closed
|
[
"question"
] | 2023-07-25T21:10:06Z
| 2023-07-25T21:42:18Z
| null |
geminigeek
|
huggingface/chat-ui
| 368
|
Ability to pass in request headers for model endpoints
|
Hello.
I am trying to add an AWS Sagemaker model endpoint to chat-ui and I am getting stuck on the authorization part because I can't pass in request headers to the endpoint. I am able to pass in the authorization string but then I get the following error:
```
Could not parse last message {"message":"Authorization header requires existence of either a 'X-Amz-Date' or a 'Date' header. Authorization=AWS4-HMAC-SHA256 Credential=<redacted>, Signature=<redacted>"}
SyntaxError: Unexpected end of JSON input
at JSON.parse (<anonymous>)
at parseGeneratedText (/src/routes/conversation/[id]/+server.ts:196:32)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async saveMessage (/src/routes/conversation/[id]/+server.ts:107:26)
```
Is it possible to add the ability to pass in headers to the model endpoints in the `.env.local` file?
|
https://github.com/huggingface/chat-ui/issues/368
|
closed
|
[] | 2023-07-25T20:12:28Z
| 2023-08-18T15:26:41Z
| 3
|
lotif
|
huggingface/autotrain-advanced
| 161
|
How to save every X steps on cli?
|
You could set --save_strategy steps, but how do you specify the number of steps so that the model is saved every X steps?
My command:
```
autotrain llm --train --project_name project --model ./llama/llama_models/7B-hf --data_path . --use_peft --use_int4 --learning_rate 2e-4 --train_batch_size 12 --num_train_epochs 1 --trainer sft --save_strategy steps --save_total_limit 1
```
|
https://github.com/huggingface/autotrain-advanced/issues/161
|
closed
|
[] | 2023-07-25T16:10:22Z
| 2023-12-18T15:29:08Z
| null |
astarostap
|
huggingface/setfit
| 400
|
From which number of training samples does it not make sense anymore to use SetFit?
|
I'm building a classifier that assigns news articles to one of 8 categories, I was wondering if there was a rule of thumb that over a certain number of training samples per class it would make more sense to use a traditional transformer classifier such as roberta-large? Or will SetFit always be more accurate?
|
https://github.com/huggingface/setfit/issues/400
|
open
|
[
"question"
] | 2023-07-25T06:56:04Z
| 2023-08-01T14:13:48Z
| null |
lbelpaire
|
huggingface/diffusers
| 4,234
|
How to train instruct-pix2pix with controlnet and inference
|
Hi guys,
I want to train instruct-pix2pix using controlnet condition. As you know, currently available for [instruct-pix2pix](https://huggingface.co/docs/diffusers/training/instructpix2pix) and [control net](https://huggingface.co/docs/diffusers/training/controlnet) separately.
**Q1)** Have you plan about this problem for implementation?
**Q2)** How I can merge them and add controlnet into instruct-pix2pix?
**Q3)** Suppose this issue is done, I want to do start training, In your opinion, If we use controlnet pretraining network, and freeze that network and I want to train only instruct-pix2pix model, Is it common way to do?
|
https://github.com/huggingface/diffusers/issues/4234
|
closed
|
[
"stale"
] | 2023-07-24T13:47:02Z
| 2023-08-31T15:04:14Z
| null |
mzeynali
|
huggingface/chat-ui
| 366
|
v0.4.0 Not on GitHub
|
The hosted version is already at v0.4.0. This is at least not reflected in the tags or releases here. Is there other non public code?
|
https://github.com/huggingface/chat-ui/issues/366
|
closed
|
[] | 2023-07-24T11:35:38Z
| 2023-07-24T13:19:30Z
| 2
|
claell
|
huggingface/chat-ui
| 364
|
Facing Error 403 after deployment
|
Hi folks!
My Chat-UI setup along with a custom LangChain model works perfect on localhost. I tried to deploy it on an Azure VM with Docker Containers and I have been facing this issue which might be due to MongoDB.

Any help is appreciated. Thank you
|
https://github.com/huggingface/chat-ui/issues/364
|
closed
|
[
"back",
"support"
] | 2023-07-24T10:57:53Z
| 2024-04-25T16:29:38Z
| 13
|
awsum0225
|
huggingface/chat-ui
| 363
|
When starting with build files, it becomes impossible to change the model.
|
When starting with pm2 following the Docker file's instructions, I encounter an issue where I cannot change the model. Specifically, after clicking on "Current Model," a popup to select the model appears, but even after selecting "Apply," no changes are observed. Upon inspecting the developer tools, I noticed a 403 Error for http://localhost:3000/settings. This problem occurs both when hosting the software on a Docker container and when deploying it directly.

Also, I have confirmed that this error does not occur when using `npm run dev` or `npm run preview`. Therefore, I suspect that this issue may be related to pm2. If someone has any hints or insights that could help resolve this problem, I would greatly appreciate comments.
My environment is as follows:
OS: Windows 10 + WSL 2 (Ubuntu 20.04)
Node Version: 18.15.0
Commit ID: 569bde33470b075bf1365af2cb03a1b31b875379
|
https://github.com/huggingface/chat-ui/issues/363
|
closed
|
[
"bug",
"support"
] | 2023-07-24T08:30:03Z
| 2023-10-16T16:07:25Z
| 4
|
suzuki-shm
|
huggingface/diffusers
| 4,222
|
How to train ldm on a low-resolution image dataset (128*128)
|
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.
|
https://github.com/huggingface/diffusers/issues/4222
|
closed
|
[
"stale"
] | 2023-07-24T03:14:20Z
| 2023-08-31T15:04:25Z
| null |
crowningwang
|
huggingface/text-generation-inference
| 679
|
How to load a model from a given path?
|
### System Info
tgi version:0.9.0
### Information
- [X] Docker
- [ ] The CLI directly
### Tasks
- [ ] An officially supported command
- [ ] My own modifications
### Reproduction
I just want to use tgi to run llama-7b model to get the throughput on A100. The model files are preloaded in a given path. I followed the readme and found the following error.
**Is theres any option for load model from a path?** Thanks~
```shell
me@ubuntu20-02:~/zy$ docker run --gpus all --shm-size 1g -p 8080:80 -v ~/w/data:/data ghcr.io/huggingface/text-generation-inference:0.9.2 --model-id /shared/models/huggingface/llama-7B-hf/
2023-07-23T14:17:02.797888Z INFO text_generation_launcher: Args { model_id: "/shared/models/huggingface/LLM/llama-7B-hf/", revision: None, validation_workers: 2, sharded: None, num_shard: None, quantize: None, dtype: None, trust_remote_code: false, max_concurrent_requests: 128, max_best_of: 2, max_stop_sequences: 4, max_input_length: 1024, max_total_tokens: 2048, waiting_served_ratio: 1.2, max_batch_prefill_tokens: 4096, max_batch_total_tokens: 16000, max_waiting_tokens: 20, hostname: "1401cbf60306", port: 80, shard_uds_path: "/tmp/text-generation-server", master_addr: "localhost", master_port: 29500, huggingface_hub_cache: Some("/data"), weights_cache_override: None, disable_custom_kernels: false, json_output: false, otlp_endpoint: None, cors_allow_origin: [], watermark_gamma: None, watermark_delta: None, ngrok: false, ngrok_authtoken: None, ngrok_domain: None, ngrok_username: None, ngrok_password: None, env: false }
2023-07-23T14:17:02.798147Z INFO text_generation_launcher: Starting download process.
2023-07-23T14:17:08.906356Z ERROR text_generation_launcher: Download encountered an error: Traceback (most recent call last):
File "/opt/conda/bin/text-generation-server", line 8, in <module>
sys.exit(app())
File "/opt/conda/lib/python3.9/site-packages/text_generation_server/cli.py", line 109, in download_weights
utils.weight_files(model_id, revision, extension)
File "/opt/conda/lib/python3.9/site-packages/text_generation_server/utils/hub.py", line 96, in weight_files
filenames = weight_hub_files(model_id, revision, extension)
File "/opt/conda/lib/python3.9/site-packages/text_generation_server/utils/hub.py", line 25, in weight_hub_files
info = api.model_info(model_id, revision=revision)
File "/opt/conda/lib/python3.9/site-packages/huggingface_hub/utils/_validators.py", line 112, in _inner_fn
validate_repo_id(arg_value)
File "/opt/conda/lib/python3.9/site-packages/huggingface_hub/utils/_validators.py", line 160, in validate_repo_id
raise HFValidationError(
huggingface_hub.utils._validators.HFValidationError: Repo id must be in the form 'repo_name' or 'namespace/repo_name': '/bigdata/shared/models/huggingface/LLM/llama-7B-hf/'. Use `repo_type` argument if needed.
Error: DownloadError
```
### Expected behavior
output the running log.
|
https://github.com/huggingface/text-generation-inference/issues/679
|
closed
|
[] | 2023-07-23T06:35:16Z
| 2023-07-24T01:34:10Z
| null |
zhaoyang-star
|
huggingface/controlnet_aux
| 67
|
Please I want to know how to install
|
Hello, I am new to this and I want to know how to install this particular package. I have installed other packages, but this one I do not know how. Please help with this.
|
https://github.com/huggingface/controlnet_aux/issues/67
|
open
|
[] | 2023-07-22T18:57:33Z
| 2023-07-26T01:03:21Z
| null |
sohaib19922
|
huggingface/diffusers
| 4,210
|
How to use "attention_mask" in "forward" function of "UNet2DConditionModel" defined in "diffusers/src/diffusers/models /unet_2d_condition.py"?
|
### Describe the bug
How to use the "attention_mask" in UNet2DConditionModel? What should the size of "attention_mask" look like?
And "attention_mask" can not be used when opening "enable_xformers_memory_efficient_attention" in "examples/text_to_image/train_text_to_image.py"?
` File "/usr/local/lib/python3.9/dist-packages/diffusers/models/unet_2d_blocks.py", line 970, in custom_forward
return module(*inputs, return_dict=return_dict)
File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.9/dist-packages/diffusers/models/transformer_2d.py", line 291, in forward
hidden_states = block(
File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.9/dist-packages/diffusers/models/attention.py", line 154, in forward
attn_output = self.attn1(
File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.9/dist-packages/diffusers/models/attention_processor.py", line 321, in forward
return self.processor(
File "/usr/local/lib/python3.9/dist-packages/diffusers/models/attention_processor.py", line 1027, in __call__
attention_mask = attention_mask.expand(-1, query_tokens, -1)
RuntimeError: expand(torch.cuda.HalfTensor{[80, 1, 6144, 6144]}, size=[-1, 6144, -1]): the number of sizes provided (3) must be greater or equal to the number of dimensions in the tensor (4)`
### Reproduction
None
### Logs
_No response_
### System Info
- `diffusers` version: 0.19.0.dev0
- Platform: Linux-5.4.143.bsk.7-amd64-x86_64-with-glibc2.31
- Python version: 3.9.2
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Huggingface_hub version: 0.16.4
- Transformers version: 4.30.2
- Accelerate version: 0.21.0
- xFormers version: 0.0.20
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
_No response_
|
https://github.com/huggingface/diffusers/issues/4210
|
closed
|
[
"bug",
"stale"
] | 2023-07-22T17:28:56Z
| 2024-10-18T16:34:37Z
| null |
ZihaoW123
|
huggingface/accelerate
| 1,758
|
How to use c10 backend for fault tolerance
|
Hi,
I found little to no documentation on how to use c10 backend for fault tolerance with accelerate. PyTorch seems to be having this:
https://pytorch.org/docs/stable/elastic/rendezvous.html
I am looking for fault tolerance in case of crash in few nodes, which also means adjusting batch size dynamically to account for nodes that are down.
Thanks in advance.
|
https://github.com/huggingface/accelerate/issues/1758
|
closed
|
[] | 2023-07-22T08:26:33Z
| 2023-08-29T15:06:00Z
| null |
geekyGoku
|
huggingface/autotrain-advanced
| 155
|
How to do inference via autotrain-advanced?
|
I see an option to do inference autotrain llm --help.
1. Can you share command to do inference on say llama2 model ? How do you pass lora files to do inference?
2. Any option to do merge and unload while saving the model locally?
3. Any option for multi-gpu training with single node - specify local rank?
|
https://github.com/huggingface/autotrain-advanced/issues/155
|
closed
|
[] | 2023-07-22T05:55:25Z
| 2023-12-15T00:14:28Z
| null |
sujithjoseph
|
huggingface/transformers.js
| 206
|
[Question] Output always equal to Input in text-generation
|
I tried a different types of input and always get the output equals the input... What I'm missing?
```
const answerer = await pipeline('text-generation', 'Xenova/LaMini-Cerebras-590M');
let zica = await answerer(`Based on this history:
André de Mattos Ferraz is an engineering manager in Rio de Janeiro, Brazil. He has worked in systems development in the oil sector, working in several areas of the oil/gas life cycle: Exploration, Reservoir, and Production. He also worked on data science projects for predicting failures of water injection pumps, forecasting water filter saturation (SRU), and analyzing vibrations.
What are André tech skills?`);
console.log(zica)
```

|
https://github.com/huggingface/transformers.js/issues/206
|
closed
|
[
"question"
] | 2023-07-21T21:18:02Z
| 2023-07-22T02:21:05Z
| null |
AndreEneva
|
huggingface/transformers.js
| 205
|
[Question] Is transformers.js expected to work with react native?
|
I've naively been trying to run the transformers js library via react native on android.
Note that onnxruntime-react-native explicitly supports react native, however the transformers.js package depends only on onnxruntime-web and onnruntime-node.
Importing the transformers.js works fine, however as I try to load a model, I receive the error `import.meta` is currently unsupported from `transformers.js`.
It would be super convenient to be able to use pipes directly without needing to interface without onnxruntine-react-native directly! If not supported yet, what would need to be done?
|
https://github.com/huggingface/transformers.js/issues/205
|
closed
|
[
"question"
] | 2023-07-21T20:55:44Z
| 2023-07-21T21:35:35Z
| null |
Wehzie
|
huggingface/setfit
| 398
|
hyperparameters to control how to handle long documents
|
It's common that one might want to use setfit for classifying documents that are longer than max_token_len.
There are several strategies for handling long documents, and the efficacy of each is data dependent:
* Break the document up at max_token_length, possibly avoiding breaking word boundaries.
* Optionally using a sliding window.
* Keeping all the windows, or the first k-windows, or something fancier like finding the most "interesting" windows with respect to the overall corpus.
Then after embedding each window, different classification strategies are possible:
* maxpool then predict
* average then predict
* predict then average
It would be great if these could approaches could be hyperparameters for validation + test.
For train, it might be easiest to insist the training max_token_len is in bounds, alternately the above strategies could be used too.
Related:
https://github.com/UKPLab/sentence-transformers/issues/1673
https://github.com/UKPLab/sentence-transformers/issues/1333
https://github.com/UKPLab/sentence-transformers/issues/1166
|
https://github.com/huggingface/setfit/issues/398
|
open
|
[] | 2023-07-21T11:53:13Z
| 2023-07-21T11:53:13Z
| null |
turian
|
huggingface/text-generation-inference
| 672
|
What is optimal max batch size max sequence length (max_total_tokens) for running llama 2 70b chat on 4 A100 80GB?
|
This is what i have in my current config
validation_workers: 2, max_total_tokens: 4096, waiting_served_ratio: 1.2, max_batch_prefill_tokens: 4096, max_batch_total_tokens: None, max_waiting_tokens: 20
What do you recommend I should use to get the most out of inference for this setup?
|
https://github.com/huggingface/text-generation-inference/issues/672
|
closed
|
[] | 2023-07-21T11:17:49Z
| 2023-07-21T12:45:31Z
| null |
yakotoka
|
huggingface/datasets
| 6,057
|
Why is the speed difference of gen example so big?
|
```python
def _generate_examples(self, metadata_path, images_dir, conditioning_images_dir):
with open(metadata_path, 'r') as file:
metadata = json.load(file)
for idx, item in enumerate(metadata):
image_path = item.get('image_path')
text_content = item.get('text_content')
image_data = open(image_path, "rb").read()
yield idx, {
"text": text_content,
"image": {
"path": image_path,
"bytes": image_data,
},
"conditioning_image": {
"path": image_path,
"bytes": image_data,
},
}
```
Hello,
I use the above function to deal with my local data set, but I am very surprised that the speed at which I generate example is very different. When I start a training task, **sometimes 1000examples/s, sometimes only 10examples/s.**

I'm not saying that speed is changing all the time. I mean, the reading speed is different in different training, which will cause me to start training over and over again until the speed of this generation of examples is normal.
|
https://github.com/huggingface/datasets/issues/6057
|
closed
|
[] | 2023-07-21T03:34:49Z
| 2023-10-04T18:06:16Z
| 1
|
pixeli99
|
pytorch/cpuinfo
| 169
|
How to cross-compile arm64 on linux
|
https://github.com/pytorch/cpuinfo/issues/169
|
closed
|
[] | 2023-07-21T03:11:25Z
| 2023-07-21T19:09:06Z
| null |
HongxiaoMa
|
|
huggingface/transformers.js
| 203
|
how to do embeddings?
|
I want to create an AI assistant for my personal website using Node.js. While I can easily create it using OpenAI embeddings, their API costs are prohibitively expensive. Therefore, I am looking for an alternative method and wondering how I can perform embeddings using a CSV file. Can you advise me on how to do this?
```
async function getEmbeddings(tokens) {
console.log("start getEmbeddings");
let response;
try {
console.log("initiating openai api call");
response = await openai.createEmbedding({
model: "text-embedding-ada-002",
input: tokens,
});
} catch (e) {
console.error("Error calling OpenAI API getEmbeddings:", e?.response?.data);
throw new Error("Error calling OpenAI API getEmbeddings");
}
return response.data.data;
}
```
|
https://github.com/huggingface/transformers.js/issues/203
|
closed
|
[
"question"
] | 2023-07-21T02:41:40Z
| 2024-06-26T14:09:51Z
| null |
putuoka
|
huggingface/chat-ui
| 361
|
Configuration for Llama 2
|
I am trying to self host Llama 2 with https://github.com/huggingface/text-generation-inference and https://github.com/huggingface/chat-ui . If I give configuration for chat-ui like this:
```
{
"name": "llama2-7b-chat",
"datasetName": "llama2-7b-chat",
"description": "A good alternative to ChatGPT",
"endpoints": [{"url": "http://127.0.0.1:8081/generate_stream"}],
"userMessageToken": "<|prompter|>",
"assistantMessageToken": "<|assistant|>",
"messageEndToken": "</s>",
"preprompt": "Below are a series of dialogues between various people and an AI assistant. The AI tries to be helpful, polite, honest, sophisticated, emotionally aware, and humble-but-knowledgeable. The assistant is happy to help with almost anything, and will do its best to understand exactly what is needed. It also tries to avoid giving false or misleading information, and it caveats when it isn't entirely sure about the right answer. That said, the assistant is practical and really does its best, and doesn't let caution get too much in the way of being useful.\n-----\n",
"promptExamples": [
{
"title": "Write an email from bullet list",
"prompt": "As a restaurant owner, write a professional email to the supplier to get these products every week: \n\n- Wine (x10)\n- Eggs (x24)\n- Bread (x12)"
}, {
"title": "Code a snake game",
"prompt": "Code a basic snake game in python, give explanations for each step."
}, {
"title": "Assist in a task",
"prompt": "How do I make a delicious lemon cheesecake?"
}
],
"parameters": {
"temperature": 0.8,
"top_p": 0.95,
"repetition_penalty": 1.8,
"top_k": 10,
"truncate": 1000,
"max_new_tokens": 1024
}
}
```
It will not return good response like https://huggingface.co/chat.

|
https://github.com/huggingface/chat-ui/issues/361
|
closed
|
[
"support",
"models"
] | 2023-07-20T14:04:29Z
| 2023-08-22T13:54:46Z
| 3
|
aisensiy
|
huggingface/text-generation-inference
| 658
|
How to use AutoGPTQ model in tgi
|

command:
export GPTQ_BITS=4
export GPTQ_GROUPSIZE=128
text-generation-launcher --model-id Ziya-LLaMA-13B_4bit --disable-custom-kernels --port 6006 --revision gptq-4bit-128g-actorder_True --quantize gptq
result:
Traceback (most recent call last):
File "/root/miniconda3/envs/text-generation-inference/bin/text-generation-server", line 8, in <module>
sys.exit(app())
File "/root/autodl-tmp/text-generation-inference-main/server/text_generation_server/cli.py", line 78, in serve
server.serve(
File "/root/autodl-tmp/text-generation-inference-main/server/text_generation_server/server.py", line 169, in serve
asyncio.run(
File "/root/miniconda3/envs/text-generation-inference/lib/python3.9/asyncio/runners.py", line 44, in run
return loop.run_until_complete(main)
File "/root/miniconda3/envs/text-generation-inference/lib/python3.9/asyncio/base_events.py", line 647, in run_until_complete
return future.result()
File "/root/autodl-tmp/text-generation-inference-main/server/text_generation_server/server.py", line 136, in serve_inner
model = get_model(
File "/root/autodl-tmp/text-generation-inference-main/server/text_generation_server/models/__init__.py", line 195, in get_model
return CausalLM(
File "/root/autodl-tmp/text-generation-inference-main/server/text_generation_server/models/causal_lm.py", line 477, in __init__
model = AutoModelForCausalLM.from_pretrained(
File "/root/miniconda3/envs/text-generation-inference/lib/python3.9/site-packages/transformers/models/auto/auto_factory.py", line 467, in from_pretrained
return model_class.from_pretrained(
File "/root/miniconda3/envs/text-generation-inference/lib/python3.9/site-packages/transformers/modeling_utils.py", line 2387, in from_pretrained
raise EnvironmentError(
OSError: Error no file named pytorch_model.bin, tf_model.h5, model.ckpt.index or flax_model.msgpack found in directory Ziya-LLaMA-13B_4bit.
rank=0
2023-07-20T08:34:02.453608Z ERROR text_generation_launcher: Shard 0 failed to start
2023-07-20T08:34:02.453654Z INFO text_generation_launcher: Shutting down shards
|
https://github.com/huggingface/text-generation-inference/issues/658
|
closed
|
[] | 2023-07-20T08:42:57Z
| 2023-07-31T23:50:55Z
| null |
Minami-su
|
huggingface/chat-ui
| 358
|
Broken encoding for Korean and possibly other languages
|
I was testing the llama2 and noticed there are some encoding errors (Ignore that the output is total nonsense):
<img width="1618" alt="image" src="https://github.com/huggingface/chat-ui/assets/15624271/61868780-efa0-4670-84d9-734410a05451">
I though It could be because of weird mid-unicode tokenization but I also noticed this on a custom demo using huggingchat ui:
It renders correctly & strangely enough breaks and unbreaks randomly.
https://github.com/huggingface/chat-ui/assets/15624271/7b7e97cb-876d-47cc-b89d-aabebb9197cf
|
https://github.com/huggingface/chat-ui/issues/358
|
closed
|
[
"question",
"models"
] | 2023-07-20T05:00:03Z
| 2023-09-11T09:34:12Z
| null |
cceyda
|
huggingface/diffusers
| 4,160
|
How to use diffusers force zeros?
|
it seems that it only has effect if its used on instance of diffusers class before model is loaded,
but i only get instance when i call from_pretrained or from_single_file
|
https://github.com/huggingface/diffusers/issues/4160
|
closed
|
[
"stale",
"SD.Next"
] | 2023-07-19T22:36:38Z
| 2023-09-01T13:09:28Z
| null |
patrickvonplaten
|
huggingface/transformers.js
| 200
|
[Question] Translation models
|
<!-- QUESTION GOES HERE -->
@xenova is there a model that do the text translation that have lighter weight i mean with minimum size?
|
https://github.com/huggingface/transformers.js/issues/200
|
closed
|
[
"question"
] | 2023-07-19T22:07:37Z
| 2023-07-27T00:17:24Z
| null |
jedLahrim
|
huggingface/dataset-viewer
| 1,532
|
provide one "partial" field per entry in aggregated responses
|
For example, https://datasets-server.huggingface.co/size?dataset=c4 only provides a global `partial: true` field and the response does not explicit that the "train" split is partial, while the "test" one is complete.
Every entry in `configs` and `splits` should also include its own `partial` field, to be able to show this information in the viewer (selects)
- currently:
<img width="1528" alt="Capture d’écran 2023-07-19 à 16 00 28" src="https://github.com/huggingface/datasets-server/assets/1676121/92d27982-0fa3-44f2-a73f-a0ae614da40c">
- ideally, something like:
<img width="1529" alt="Capture d’écran 2023-07-19 à 16 01 39" src="https://github.com/huggingface/datasets-server/assets/1676121/c638af93-30de-4ab7-8fdd-389202d41c88">
Endpoints where we want these extra fields:
- /info, dataset-level
- /size, dataset-level
- /size, config-level
|
https://github.com/huggingface/dataset-viewer/issues/1532
|
open
|
[
"question",
"feature request",
"P2"
] | 2023-07-19T20:01:58Z
| 2024-05-16T09:36:20Z
| null |
severo
|
huggingface/datasets
| 6,053
|
Change package name from "datasets" to something less generic
|
### Feature request
I'm repeatedly finding myself in situations where I want to have a package called `datasets.py` or `evaluate.py` in my code and can't because those names are being taken up by Huggingface packages. While I can understand how (even from the user's perspective) it's aesthetically pleasing to have nice terse library names, ultimately a library hogging simple names like this is something I find short-sighted, impractical and at my most irritable, frankly rude.
My preference would be a pattern like what you get with all the other big libraries like numpy or pandas:
```
import huggingface as hf
# hf.transformers, hf.datasets, hf.evaluate
```
or things like
```
import huggingface.transformers as tf
# tf.load_model(), etc
```
If this isn't possible for some technical reason, at least just call the packages something like `hf_transformers` and so on.
I realize this is a very big change that's probably been discussed internally already, but I'm making this issue and sister issues on each huggingface project just to start the conversation and begin tracking community feeling on the matter, since I suspect I'm not the only one who feels like this.
Sorry if this has been requested already on this issue tracker, I couldn't find anything looking for terms like "package name".
Sister issues:
- [transformers](https://github.com/huggingface/transformers/issues/24934)
- **datasets**
- [evaluate](https://github.com/huggingface/evaluate/issues/476)
### Motivation
Not taking up package names the user is likely to want to use.
### Your contribution
No - more a matter of internal discussion among core library authors.
|
https://github.com/huggingface/datasets/issues/6053
|
closed
|
[
"enhancement"
] | 2023-07-19T19:53:28Z
| 2024-11-20T21:22:36Z
| 2
|
jack-jjm
|
huggingface/trl
| 542
|
Supervised Finetuning - How to mask loss for prompts
|
How can I mask the loss in supervised fine-tuning for prompts similar to how it is done in the LLAMA-2 paper?
Specifically, I have a dataset of prompts and ideal answers. When fine-tuning my model with a `SFTTrainer` using a `ConstantLengthDataset` (similar to the StackExchange example), how can I ensure that prompts are not considered in the loss?
|
https://github.com/huggingface/trl/issues/542
|
closed
|
[] | 2023-07-19T14:55:17Z
| 2023-08-16T15:02:50Z
| null |
jvhoffbauer
|
huggingface/chat-ui
| 351
|
Starchat-beta doesn't stop generating text properly
|
Hi, I am deploying starchat-beta and chat-ui locally, it is strange that I found the chat will generate some useful text in the beginning, then it will not stop, then generates some unrelated text, like below

Is it related with .env.local configuration?

|
https://github.com/huggingface/chat-ui/issues/351
|
closed
|
[
"support",
"models"
] | 2023-07-19T14:32:59Z
| 2023-07-20T06:29:09Z
| 3
|
XiaPZ
|
huggingface/trl
| 534
|
How to load a trained model to continue trianing?
|
Dear TRL team,
I face a challenge that I can't finish the training in one go. Thus, I need to load the model that is trained half-way and continue the training process. Could you please guide me how to load the half-way trained model and continue the trianing process?
Best
|
https://github.com/huggingface/trl/issues/534
|
closed
|
[] | 2023-07-19T04:36:15Z
| 2023-08-26T15:04:58Z
| null |
zyzisastudyreallyhardguy
|
huggingface/diffusers
| 4,150
|
How to train text-to-image model based on SDXL?
|
Can I use the train_text_to_image.py code directly?
|
https://github.com/huggingface/diffusers/issues/4150
|
closed
|
[] | 2023-07-19T02:59:00Z
| 2023-07-21T15:23:30Z
| null |
EnzoWuu
|
huggingface/text-generation-inference
| 636
|
How to config vllm gpu_memory_utilization?
|
Hi team, I am trying using codegen2.5 7b model on tgi with A100 40GB and it gives me out of memory error because of vllm. I wonder if there is any way I can config gpu_memory_utilization in the code such that the vllm does not reserve too memory beforehand
|
https://github.com/huggingface/text-generation-inference/issues/636
|
closed
|
[] | 2023-07-18T20:19:28Z
| 2024-07-04T07:32:01Z
| null |
zch-cc
|
huggingface/optimum
| 1,202
|
What is the process for contributing a new backend?
|
### Feature request
In terms of contributing a new backend/optimizer to Optimum as an optional extension, what is the process?
I have been working on an Optimum integration with [DeepSparse](https://github.com/neuralmagic/deepsparse), Neural Magic's inference runtime for sparse execution on CPUs. If it is an open-source contribution that we've already started and will continue to support, is it mostly just a function of creating a `huggingface/optimum-deepsparse` repo to push up the state?
### Motivation
We already have a project hosted by Neural Magic: https://github.com/neuralmagic/optimum-deepsparse
It is already functional for a few simple tasks (image/text/audio/token classification, question answering, masked lm) and is generally going for usability-parity with ORTModel since DeepSparse also takes in ONNX models directly for compilation.
DeepSparse supports x86 and ARM CPUs, and is able to see performance benefits from unstructured sparsity on all platforms.
Having optimum-deepsparse be officially installable through the Optimum base as an extension i.e. `pip install optimum[deepsparse]` would be important for writing clean flows for people to sparsify their models and get the maximal inference performance out of their CPUs.
### Your contribution
https://github.com/neuralmagic/optimum-deepsparse
I'm happy to submit a PR to add it to Optimum's setup.py, write documentation to detail how to use it, and anything else required to make an official request. Thank you!
|
https://github.com/huggingface/optimum/issues/1202
|
closed
|
[
"question",
"Stale"
] | 2023-07-18T18:07:14Z
| 2025-05-13T02:14:09Z
| null |
mgoin
|
huggingface/accelerate
| 1,743
|
what is the possible reason for accelerate running on cuda 12.2 8xA100 with error accelerate multiprocessing.api:failed (exitcode: -9)
|
### System Info
```Shell
ubuntu 22.04
gpu A100 80G
cuda version 12.2
accelerate version 0.21.0
```
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] One of the scripts in the examples/ folder of Accelerate or an officially supported `no_trainer` script in the `examples` folder of the `transformers` repo (such as `run_no_trainer_glue.py`)
- [X] My own task or dataset (give details below)
### Reproduction
running the demo script from diffusers [train_text_to_image.py](https://github.com/huggingface/diffusers/tree/main/examples/text_to_image) for 100k iterations with batch size 8 each gpu, 8 A100 gpus in total
### Expected behavior
successful training without any problem
|
https://github.com/huggingface/accelerate/issues/1743
|
closed
|
[] | 2023-07-18T13:33:35Z
| 2023-08-15T09:18:05Z
| null |
garychan22
|
pytorch/TensorRT
| 2,122
|
❓ Why the speed (time) in PTQ and QAT are different?
|
## ❓ Why the speed (time) in PTQ and QAT are different?
I used your sample notebook.
The link is https://github.com/pytorch/TensorRT/blob/main/notebooks/qat-ptq-workflow.ipynb.
I also performed this approach on some other models. In all cases like your example the PTQ converted model is faster than QAT converted model.
I think they must have the same speed because their process is the same just some weights are different. Speeds must be the same. Is this for your implementation or this is typical?
Can I make QAT converted model faster like PTQ?
|
https://github.com/pytorch/TensorRT/issues/2122
|
closed
|
[
"question",
"No Activity",
"component: quantization"
] | 2023-07-18T13:18:58Z
| 2023-11-02T00:02:20Z
| null |
panahikhas
|
huggingface/datasets
| 6,048
|
when i use datasets.load_dataset, i encounter the http connect error!
|
### Describe the bug
`common_voice_test = load_dataset("audiofolder", data_dir="./dataset/",cache_dir="./cache",split=datasets.Split.TEST)`
when i run the code above, i got the error as below:
--------------------------------------------
ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/2.3.2/datasets/audiofolder/audiofolder.py (ConnectionError(MaxRetryError("HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /huggingface/datasets/2.3.2/datasets/audiofolder/audiofolder.py (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f299ed082e0>: Failed to establish a new connection: [Errno 101] Network is unreachable'))")))
--------------------------------------------------
My all data is on local machine, why does it need to connect the internet? how can i fix it, because my machine cannot connect the internet.
### Steps to reproduce the bug
1
### Expected behavior
no error when i use the load_dataset func
### Environment info
python=3.8.15
|
https://github.com/huggingface/datasets/issues/6048
|
closed
|
[] | 2023-07-18T10:16:34Z
| 2023-07-18T16:18:39Z
| 1
|
yangy1992
|
huggingface/safetensors
| 299
|
Any plan to support Nvidia GPUDirect Storage?
|
### Feature request
Nvidia GPUDirect Storage has better performance to load model from NVMe disk or supported distributed storage. It will do the real `zero copy`.
### Motivation
It will get better performance with Nvidia GDS.
### Your contribution
Not sure.
|
https://github.com/huggingface/safetensors/issues/299
|
closed
|
[
"Stale"
] | 2023-07-17T06:36:51Z
| 2025-11-22T05:21:50Z
| 9
|
carmark
|
pytorch/pytorch.github.io
| 1,410
|
Website front page does not say what PyTorch is
|
## 📚 Documentation
I came across PyTorch because I was installing some software and it appeared in the logs, so I decided to look it up and arrived on https://pytorch.org/. Unfortunately this was not enlightening, as the front page of the website does not clarify what PyTorch is. It does list: membership availability notice; links to featured reads, PyTorch 2.0, upcoming events; feature highlights; installation instructions; featured projects; community discussion channel links; but nowhere does it actually say what PyTorch is, which seems to me like quite important information for the front page of a project.
|
https://github.com/pytorch/pytorch.github.io/issues/1410
|
closed
|
[] | 2023-07-16T23:21:04Z
| 2023-07-21T15:11:52Z
| null |
zopsicle
|
huggingface/optimum
| 1,191
|
ONNX Generation - Support for Donut
|
### Feature request
I have been trying to convert my custom Donut model to ONNX by using this specific command:
!python3 -m optimum.exporters.onnx --model={custom_model_id} --task=vision2seq-lm ./models/onnx --optimize O4 --atol 1e-2 --opset=13
The following exception occurs at the end of the process, by which I understand the vision-encoder-decoder is not supported yet. Are there any plans to integrate vision-encoder-decoder for optimum.exporters.onnx soon?
Error observed:
File "/usr/local/lib/python3.10/dist-packages/optimum/onnxruntime/utils.py", line 162, in check_optimization_supported_model
raise NotImplementedError(
NotImplementedError: ONNX Runtime doesn't support the graph optimization of vision-encoder-decoder yet. Only ['albert', 'bart', 'bert', 'big_bird', 'blenderbot', 'bloom', 'camembert', 'codegen', 'deberta', 'deberta-v2', 'distilbert', 'electra', 'gpt2', 'gpt_neo', 'gpt_neox', 'gptj', 'longt5', 'llama', 'marian', 'mbart', 'mt5', 'm2m_100', 'nystromformer', 'pegasus', 'roberta', 't5', 'vit', 'whisper', 'xlm-roberta'] are supported. If you want to support vision-encoder-decoder please propose a PR or open up an issue in ONNX Runtime: https://github.com/microsoft/onnxruntime.
### Motivation
Use optimum.exporters.onnx to convert custom Donut model to ONNX to improve inference performance.
### Your contribution
Still looking at the links and getting familiar how to proceed with change. will be grateful if someone can point me to resources where I can get started. thanks.
|
https://github.com/huggingface/optimum/issues/1191
|
closed
|
[
"feature-request",
"onnx"
] | 2023-07-16T13:38:38Z
| 2024-10-15T16:14:33Z
| 3
|
ghost
|
huggingface/transformers.js
| 194
|
[Question] Transformers.js bundle size
|
I'm building a small project that runs `transformers.js` in a `Worker` to do client side embedding.
I noticed that including `import { pipeline } from '@xenova/transformers';` immediately increases my bundle size to over **3MB**.

Created using [webpack-bundle-analyzer](https://www.npmjs.com/package/webpack-bundle-analyzer)
Optimizing for this It's probably a large effort, but I was wondering if you have any ideas on how this could be optimized.
|
https://github.com/huggingface/transformers.js/issues/194
|
closed
|
[
"question"
] | 2023-07-16T08:06:28Z
| 2023-07-16T16:28:52Z
| null |
lizozom
|
huggingface/trl
| 520
|
how to change the cache directory when using AutoModelForCausalLMWithValueHead.from_pretrained()
|
I have tried several methods, but it still download to my home directory
|
https://github.com/huggingface/trl/issues/520
|
closed
|
[] | 2023-07-16T04:21:45Z
| 2023-07-17T08:11:02Z
| null |
zyzisastudyreallyhardguy
|
pytorch/TensorRT
| 2,117
|
❓ Unable to freeze tensor of type Int64/Float64 into constant layer, try to compile model with truncate_long_and_double enabled?
|
## ❓ RuntimeError: [Error thrown at core/conversion/converters/converter_util.cpp:251] Unable to freeze tensor of type Int64/Float64 into constant layer, try to compile model with truncate_long_and_double enabled:
1. A pre-trained Torch model like Resnet18 was loaded
2. The model was quantized using `pytorch_quantization.quant_modules.initialize()`
3. The quantized model was calibrated
4. The model was fine-tuned (QAT)
5. I tried to convert the fine-tuned model to TensorRT using
`trt_mod = torch_tensorrt.compile(qat_model,
inputs=[torch_tensorrt.Input([32, 3, 32, 32])],
enabled_precisions={torch.int8})`
but I encountered the error below:
`File "/home/i2027/anaconda3/envs/p/lib/python3.10/site-packages/torch_tensorrt/_compile.py", line 133, in compile`
` return torch_tensorrt.ts.compile(`
`File "/home/i2027/anaconda3/envs/p/lib/python3.10/site-packages/torch_tensorrt/ts/_compiler.py", line 139, in compile`
` compiled_cpp_mod = _C.compile_graph(module._c, _parse_compile_spec(spec))`
`RuntimeError: [Error thrown at core/conversion/converters/converter_util.cpp:251] Unable to freeze tensor of type Int64/Float64 into constant layer, try to compile model with truncate_long_and_double enabled`
I have checked the model parameters and all of them were of type float32. I don't know why TorchTensorRT complains about Int64/Float64! Please note that I have managed to convert a simple CNN to TensorRT using the method described above successfully. However, I failed to convert an existing torchvision model using the steps above. I will be grateful for any hint.
## Environment
- PyTorch Version: 2.0.1+cu118
- CPU Architecture: x86
- OS: Ubuntu 20.04
- How you installed PyTorch: pip
- Python version: 3.10.11
- CUDA version: 12.1
- GPU models and configuration: GeForce GTX 1080 - 12 GB
- All pckages versions:
- - torch==2.0.1+cu118
- - torch_tensorrt==1.4.0
- - torchvision==0.15.2+cu118
- - pytorch_quantization==2.1.2
- - torchvision==0.15.2+cu118
## Additional context
The code is available at https://github.com/panahikhas/TensorRT-QAT/blob/main/torch-tensorrt-QAT.py to reproduce the results.
|
https://github.com/pytorch/TensorRT/issues/2117
|
closed
|
[
"question"
] | 2023-07-15T14:46:23Z
| 2023-07-18T13:00:07Z
| null |
panahikhas
|
huggingface/peft
| 711
|
How to change the location of soft tokens in prompt tuning
|
### Feature request
In fact, when prompt tuning, we will not always add it to the front, it may be in the middle. So I think it's important for us to change the location of soft tokens.
### Motivation
In fact, when prompt tuning, we will not always add it to the front, it may be in the middle. So I think it's important for us to change the location of soft tokens.
### Your contribution
no
|
https://github.com/huggingface/peft/issues/711
|
closed
|
[] | 2023-07-15T13:57:52Z
| 2024-04-09T06:39:55Z
| null |
XueTianci
|
huggingface/datasets
| 6,038
|
File "/home/zhizhou/anaconda3/envs/pytorch/lib/python3.10/site-packages/datasets/builder.py", line 992, in _download_and_prepare if str(split_generator.split_info.name).lower() == "all": AttributeError: 'str' object has no attribute 'split_info'. Did you mean: 'splitlines'?
|
Hi, I use the code below to load local file
```
def _split_generators(self, dl_manager):
# TODO: This method is tasked with downloading/extracting the data and defining the splits depending on the configuration
# If several configurations are possible (listed in BUILDER_CONFIGS), the configuration selected by the user is in self.config.name
# dl_manager is a datasets.download.DownloadManager that can be used to download and extract URLS
# It can accept any type or nested list/dict and will give back the same structure with the url replaced with path to local files.
# By default the archives will be extracted and a path to a cached folder where they are extracted is returned instead of the archive
# urls = _URLS[self.config.name]
data_dir = dl_manager.download_and_extract(_URLs)
print(data_dir)
return [
datasets.SplitGenerator(
name=datasets.Split.TRAIN,
# These kwargs will be passed to _generate_examples
gen_kwargs={
"filepath": os.path.join(data_dir["train"]),
"split": "train",
},
),
datasets.SplitGenerator(
name=datasets.Split.VALIDATION,
# These kwargs will be passed to _generate_examples
gen_kwargs={
"filepath": os.path.join(data_dir["dev"]),
"split": "dev",
},
),
]
```
and error occured
```
Traceback (most recent call last):
File "/home/zhizhou/data1/zhanghao/huggingface/FineTuning_Transformer/load_local_dataset.py", line 2, in <module>
dataset = load_dataset("./QA_script.py",data_files='/home/zhizhou/.cache/huggingface/datasets/conversatiom_corps/part_file.json')
File "/home/zhizhou/anaconda3/envs/pytorch/lib/python3.10/site-packages/datasets/load.py", line 1809, in load_dataset
builder_instance.download_and_prepare(
File "/home/zhizhou/anaconda3/envs/pytorch/lib/python3.10/site-packages/datasets/builder.py", line 909, in download_and_prepare
self._download_and_prepare(
File "/home/zhizhou/anaconda3/envs/pytorch/lib/python3.10/site-packages/datasets/builder.py", line 1670, in _download_and_prepare
super()._download_and_prepare(
File "/home/zhizhou/anaconda3/envs/pytorch/lib/python3.10/site-packages/datasets/builder.py", line 992, in _download_and_prepare
if str(split_generator.split_info.name).lower() == "all":
AttributeError: 'str' object has no attribute 'split_info'. Did you mean: 'splitlines'?
```
Could you help me?
|
https://github.com/huggingface/datasets/issues/6038
|
closed
|
[] | 2023-07-15T07:58:08Z
| 2023-07-24T11:54:15Z
| 1
|
BaiMeiyingxue
|
huggingface/datasets
| 6,033
|
`map` function doesn't fully utilize `input_columns`.
|
### Describe the bug
I wanted to select only some columns of data.
And I thought that's why the argument `input_columns` exists.
What I expected is like this:
If there are ["a", "b", "c", "d"] columns, and if I set `input_columns=["a", "d"]`, the data will have only ["a", "d"] columns.
But it doesn't select columns.
It preserves existing columns.
The main cause is `update` function of `dictionary` type `transformed_batch`.
https://github.com/huggingface/datasets/blob/682d21e94ab1e64c11b583de39dc4c93f0101c5a/src/datasets/iterable_dataset.py#L687-L691
`transformed_batch` gets all the columns by `transformed_batch = dict(batch)`.
Even `function_args` selects `input_columns`, `update` preserves columns other than `input_columns`.
I think it should take a new dictionary with columns in `input_columns` like this:
```
# transformed_batch = dict(batch)
# transformed_batch.update(self.function(*function_args, **self.fn_kwargs)
# This is what I think correct.
transformed_batch = self.function(*function_args, **self.fn_kwargs)
```
Let me know how to use `input_columns`.
### Steps to reproduce the bug
Described all above.
### Expected behavior
Described all above.
### Environment info
datasets: 2.12
python: 3.8
|
https://github.com/huggingface/datasets/issues/6033
|
closed
|
[] | 2023-07-14T08:49:28Z
| 2023-07-14T09:16:04Z
| 0
|
kwonmha
|
huggingface/text-generation-inference
| 614
|
How to make it? How can we extend the 'max_new_tokens' from 1512 to either 4096 or 8192?
|
### System Info
How can we extend the 'max_new_tokens' from 1512 to either 4096 or 8192?
### Information
- [X] Docker
- [ ] The CLI directly
### Tasks
- [X] An officially supported command
- [X] My own modifications
### Reproduction
'max_new_tokens' from 1512 to either 4096 or 8192
### Expected behavior
'max_new_tokens' from 1512 to either 4096 or 8192
|
https://github.com/huggingface/text-generation-inference/issues/614
|
closed
|
[] | 2023-07-14T08:46:29Z
| 2023-07-19T06:04:32Z
| null |
DiamondYuanqi
|
pytorch/xla
| 5,307
|
About deepspeed support for "xla"
|
## ❓ Questions and Help
[distributed support of deepspeed on xla] Hello, does deepspeed support distributed training for xla? If not, can you provide support in this regard?
|
https://github.com/pytorch/xla/issues/5307
|
closed
|
[
"question"
] | 2023-07-14T02:56:46Z
| 2025-04-29T14:03:05Z
| null |
zhuziaaa
|
huggingface/transformers.js
| 193
|
all-MiniLM-L6-v2 vector lengths
|
Hey, is there any way to programmatically set fix the vector embedding array lengths to a certain length? I was using https://huggingface.co/Xenova/all-MiniLM-L6-v2 with nodejs and every input I ran through the pipe gave a different length, and it would be nice to be able to keep it consistent.
|
https://github.com/huggingface/transformers.js/issues/193
|
closed
|
[
"question"
] | 2023-07-13T20:31:06Z
| 2023-07-13T22:32:03Z
| null |
unkn-wn
|
huggingface/chat-ui
| 344
|
404 not found error when exporting data
|
https://github.com/huggingface/chat-ui/blob/1eff97d9fd47d8c486480d4d9a5208437c519cbb/src/routes/admin/export/%2Bserver.ts#L16
I am using the main branch and tried to export the dataset with the curl request given in the code, but the server returns 404 not found.
Its behind an reverse proxy with ssl, do i need to call the localhost or should it be possible even from outside the network ?
|
https://github.com/huggingface/chat-ui/issues/344
|
closed
|
[
"question",
"back"
] | 2023-07-13T08:40:27Z
| 2023-11-10T09:50:22Z
| null |
flozi00
|
pytorch/TensorRT
| 2,108
|
❓ [Question] How can I learn to convert an intermediate format IR to the TensorRT target?
|
## ❓ Question
<!-- Your question -->
Hello, I am also currently working on something similar to PyTorch FX. I would like to convert an intermediate format graph into a target engine (which can be any inference framework, using TensorRT as an example). I wanted to ask how Torch TRT accomplishes this operation. Are there any source code or documentation resources that I can refer to? Thank you very much!
|
https://github.com/pytorch/TensorRT/issues/2108
|
closed
|
[
"question"
] | 2023-07-13T04:58:16Z
| 2023-08-11T08:08:23Z
| null |
sanbuphy
|
huggingface/sentence-transformers
| 2,254
|
How to prepare label for the dataset that has two pairs of text, but not labels?
|
Hi,
Thank you for the great information, I have a question. My data has two column of texts, one as description of a request, the other one like an answer for that request. I want to use the Contrasiveloss to make the pairs of request and answer close and the other answer that are not related far, but I do not know how to provide the label for my positive pairs, and negative one, because the dataset function accept is a triple like this calling InputExample:
(a1,b1,1) (a1,bi,0)
I appreciate your help.
|
https://github.com/huggingface/sentence-transformers/issues/2254
|
open
|
[] | 2023-07-12T21:30:07Z
| 2023-07-30T15:38:09Z
| null |
Yarmohamadshr
|
huggingface/optimum
| 1,183
|
Cannot convert owlvit-base-patch32 model to ONNX and run inference
|
### System Info
```shell
Optimum version: 1.9.1
Python version: 3.11.3
OS: MacOS
```
### Who can help?
@mich
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
When using the CLI command
`optimum-cli export onnx --model google/owlvit-base-patch32 --task zero-shot-object-detection object_detection/owlvit_onnx`
I'm able to get a converted ONNX format. Then, when using the following code to perform inference with the converted model:
`checkpoint = "google/owlvit-base-patch32"`
`processor = AutoProcessor.from_pretrained(checkpoint)`
`image = skimage.data.astronaut()`
`image = Image.fromarray(np.uint8(image)).convert("RGB")`
`text_queries = ["human face", "rocket", "nasa badge", "star-spangled banner", "woman", "smile", "hair", 'human head', 'human eye']`
`np_inputs = processor(text=text_queries, images=image, return_tensors="np")`
`session = ort.InferenceSession("object_detection/owlvit_onnx/model.onnx")`
`out =session.run(['logits', 'pred_boxes', 'text_embeds', 'image_embeds'], np_inputs)`
I get the following error:
`RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Non-zero status code returned while running Reshape node. Name:'/Reshape_3' Status Message: /Users/runner/work/1/s/onnxruntime/core/providers/cpu/tensor/reshape_helper.h:41 onnxruntime::ReshapeHelper::ReshapeHelper(const onnxruntime::TensorShape &, onnxruntime::TensorShapeVector &, bool) gsl::narrow_cast(input_shape.Size()) == size was false. The input tensor cannot be reshaped to the requested shape. Input shape:{9,16}, requested shape:{2,4,16}`
Now it seems to be related to some input being wrong, but I cannot get what is wrong. The pre-processing step is the same as for the HF model, the only difference being instead of returning "pt" tensors I'm returning "np" so it can work with ONNX. Here are my input shapes:
input_ids: (9, 16)
attention_mask: (9, 16)
pixel_values: (1, 3, 768, 768)
Thanks in advance!
### Expected behavior
Inference to run successfully and outputs to be very similar to that of the original torch model.
|
https://github.com/huggingface/optimum/issues/1183
|
closed
|
[
"bug"
] | 2023-07-12T13:20:12Z
| 2024-07-27T14:27:58Z
| 9
|
Pedrohgv
|
pytorch/pytorch
| 105,047
|
I don't how to bulid pytorch in my cpu
|
If you have a question or would like help and support, please ask at our
[forums](https://discuss.pytorch.org/).
If you are submitting a feature request, please preface the title with [feature request].
If you are submitting a bug report, please fill in the following details.
my cpu is ppcle64
- PyTorch or Caffe2: i want to bulid pytorch
- How you installed PyTorch (conda, pip, source): pip
- Build command you used (if compiling from source):
- OS: Contens8
- PyTorch version:
- Python version:
- CUDA/cuDNN version:
- GPU models and configuration:
- GCC version (if compiling from source):
- CMake version:
- Versions of any other relevant libraries:
|
https://github.com/pytorch/pytorch/issues/105047
|
closed
|
[] | 2023-07-12T08:09:31Z
| 2023-07-14T03:15:15Z
| null |
miaowahexiaohuolong
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.