| repo | number | title | body | url | state | labels | created_at | updated_at | comments | user |
|---|---|---|---|---|---|---|---|---|---|---|
pytorch/audio
| 3,796
|
How to use my finetuned version of wav2vec2 for forced alignment as shown in example/
|
### 🐛 Describe the bug
The example script I am following uses the default pretrained model, whereas I want to use my own fine-tuned model.
https://pytorch.org/audio/main/generated/torchaudio.pipelines.Wav2Vec2FABundle.html#torchaudio.pipelines.Wav2Vec2FABundle
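For context, here is a minimal sketch of one possible direction (not the official `Wav2Vec2FABundle` API): compute emissions with the fine-tuned checkpoint and feed them to `torchaudio.functional.forced_align`. The checkpoint name, audio file, and blank-token handling below are assumptions.
```python
# Sketch only: assumes the fine-tuned model is a Hugging Face Wav2Vec2ForCTC
# checkpoint and that the audio is already 16 kHz mono.
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("my-finetuned-wav2vec2")  # hypothetical path
model = Wav2Vec2ForCTC.from_pretrained("my-finetuned-wav2vec2").eval()

waveform, sr = torchaudio.load("sample.wav")
inputs = processor(waveform.squeeze(0), sampling_rate=sr, return_tensors="pt")

with torch.inference_mode():
    logits = model(inputs.input_values).logits        # (1, frames, vocab)
    emission = torch.log_softmax(logits, dim=-1)

transcript = "HELLO WORLD"
targets = processor.tokenizer(transcript, return_tensors="pt").input_ids

# The blank id is assumed to be 0; check the fine-tuned model's vocabulary.
aligned_tokens, scores = torchaudio.functional.forced_align(emission, targets, blank=0)
```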
### Versions
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.24.4
[pip3] onnx==1.15.0
[pip3] onnxruntime==1.16.3
[pip3] torch==2.2.2
[pip3] torchaudio==2.2.2
[pip3] torchvision==0.15.2
[conda] numpy 1.24.4 pypi_0 pypi
[conda] torch 2.2.2 pypi_0 pypi
[conda] torchaudio 2.2.2 pypi_0 pypi
[conda] torchvision 0.15.2 pypi_0 pypi
|
https://github.com/pytorch/audio/issues/3796
|
open
|
[] | 2024-05-19T19:13:25Z
| 2024-05-19T19:13:25Z
| null |
omerarshad
|
huggingface/tokenizers
| 1,534
|
How to allow the merging of consecutive newline tokens \n when training a byte-level bpe tokenizer?
|
Hello, I'm currently working on training a byte-level BPE tokenizer using the Huggingface tokenizers library. I've created a simple training script, a sample corpus, and provided the output produced by this script. My aim is to understand why consecutive newline tokens `\n` are not being merged into a single token `\n\n` during the tokenization process. Below are the details:
```python
from tokenizers import (
    Tokenizer,
    pre_tokenizers,
    models,
    decoders,
    trainers,
    processors,
)

files = ["demo_corpus.txt"]

tokenizer = Tokenizer(models.BPE())
tokenizer.pre_tokenizer = pre_tokenizers.Sequence([
    pre_tokenizers.Digits(individual_digits=True),
    pre_tokenizers.ByteLevel(add_prefix_space=False, use_regex=True)
])
tokenizer.decoder = decoders.ByteLevel()
tokenizer.post_processor = processors.ByteLevel()

trainer = trainers.BpeTrainer(
    initial_alphabet=pre_tokenizers.ByteLevel.alphabet(),
    vocab_size=2000,
    special_tokens=[
        "<pad>", "<|beginoftext|>", "<|endoftext|>"
    ]
)
tokenizer.train(files, trainer)

test_text = "#include <set>\n\n\n\n\n"
print("pre-tokenize spans:", tokenizer.pre_tokenizer.pre_tokenize_str(test_text))
ids = tokenizer.encode(test_text).ids
print(f"tokens: {[tokenizer.decode([tid]) for tid in ids]}")
```
demo_corpus.txt:
```
#include <cstdio>
#include <vector>
#include <set>
using namespace std;
int main(){
int N, A[100000], p = 0;
multiset<int> S;
scanf("%d", &N);
int p0 = 0, q0 = 1, q = N-1;
vector<int> result;
for(int i: result)
printf("%d\n", i);
}
```
output of training script:
```
pre-tokenize spans: [('#', (0, 1)), ('include', (1, 8)), ('Ġ<', (8, 10)), ('set', (10, 13)), ('>', (13, 14)), ('ĊĊĊĊĊ', (14, 19))]
tokens: ['#', 'include', ' <', 'set', '>', '\n', '\n', '\n', '\n', '\n']
```
The following are the tokens produced by the Llama 3 tokenizer:
```python
tokenizer = LlamaTokenizerFast.from_pretrained("my llama3 vocab path")
test_text = "#include <set>\n\n\n\n\n"
print([tokenizer.decode([tid]) for tid in tokenizer(test_text)["input_ids"]])
# output
# ['<|begin_of_text|>', '#include', ' <', 'set', '>\n\n\n\n\n']
```
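One thing worth checking (an observation, not a confirmed diagnosis): BPE can only learn a merge for two newlines if `\n\n` actually occurs in the training data, and demo_corpus.txt contains no blank lines. Continuing the training script above:
```python
# Ċ is the byte-level representation of "\n"; a ĊĊ entry would only exist if the
# corpus contained consecutive newlines for the trainer to merge.
vocab = tokenizer.get_vocab()
print("single newline in vocab:", "Ċ" in vocab)
print("double newline in vocab:", "ĊĊ" in vocab)
```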
|
https://github.com/huggingface/tokenizers/issues/1534
|
open
|
[
"bug"
] | 2024-05-18T03:11:35Z
| 2025-07-07T09:34:16Z
| null |
liuslnlp
|
huggingface/transformers
| 30,886
|
How to get the data seen by the model during training?
|
Hi! I haven't been able to find an answer to my question so opening an issue here. I'm fine-tuning the GPT-2 XL model using the trainer for 10 epochs and I'd like to save the data seen by the model during each epoch. More specifically, I want to save the data seen by the model every 242 steps. For instance, data seen from step 1 to step 242, step 243 to step 484, and so on until the end of the 10th epoch. I'm a bit confused about how to do this since the data is shuffled after each epoch. Is it possible to use `TrainerCallback` here?
These are my training args:
```python
training_args = TrainingArguments(
    f"models/XL",
    evaluation_strategy="steps",
    learning_rate=2e-5,
    weight_decay=0.01,
    push_to_hub=False,
    num_train_epochs=10,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    save_strategy="epoch",
    save_steps=242,
    fp16=True,
    report_to="none",
    logging_strategy="steps",
    logging_steps=100,
)
```
I'd appreciate any directions. Thanks :)
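To help frame the question, here is a rough sketch (an assumption, not an official recipe) of replaying the shuffled order from the train dataloader and slicing it into 242-step windows, assuming `trainer` is the Trainer built with the arguments above and that the seed handling matches the real run:
```python
# Sketch only: the batch key "input_ids" and the file naming are assumptions.
import json

dataloader = trainer.get_train_dataloader()

window, window_id = [], 0
for step, batch in enumerate(dataloader, start=1):
    window.append(batch["input_ids"].tolist())
    if step % 242 == 0:
        with open(f"seen_steps_{window_id}.json", "w") as f:
            json.dump(window, f)
        window, window_id = [], window_id + 1
```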
|
https://github.com/huggingface/transformers/issues/30886
|
closed
|
[] | 2024-05-17T21:32:50Z
| 2024-05-20T17:26:29Z
| null |
jaydeepborkar
|
huggingface/optimum
| 1,859
|
Improve inference time TrOCR
|
I have a fine-tuned TrOCR model, and I'm using
`from optimum.onnxruntime import ORTModelForVision2Seq`
How can I then make inference faster when someone makes a request to an API endpoint? I am already using async to handle multiple requests.
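For context, a minimal sketch of the kind of setup being described (model id and image handling are assumptions):
```python
# Sketch of a per-request inference path: load the fine-tuned TrOCR checkpoint as
# an ONNX Runtime model and run generation on one image.
from optimum.onnxruntime import ORTModelForVision2Seq
from transformers import TrOCRProcessor
from PIL import Image

processor = TrOCRProcessor.from_pretrained("my-finetuned-trocr")                  # hypothetical path
model = ORTModelForVision2Seq.from_pretrained("my-finetuned-trocr", export=True)  # export to ONNX

image = Image.open("line.png").convert("RGB")
pixel_values = processor(images=image, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```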
|
https://github.com/huggingface/optimum/issues/1859
|
closed
|
[
"question",
"inference",
"Stale"
] | 2024-05-16T13:31:53Z
| 2024-12-18T02:06:21Z
| null |
CrasCris
|
huggingface/chat-ui
| 1,148
|
Chat-ui Audit Logs
|
Hello,
Is there a way to log the username, session ID, conversation ID, and the question that was sent, in some type of log in chat-ui? Or just the username and the question?
How can we accomplish this?
Thanks
|
https://github.com/huggingface/chat-ui/issues/1148
|
open
|
[] | 2024-05-16T11:13:30Z
| 2024-05-21T18:48:17Z
| 5
|
Neb2653
|
huggingface/diffusers
| 7,957
|
How to implement `IPAdapterAttnProcessor2_0` with xformers
|
I want to fine-tune an IP-Adapter model with xformers, but I did not find an xformers implementation corresponding to `IPAdapterAttnProcessor2_0`. I want to implement the attention processor with xformers; are the following two lines of code the only difference between the two versions?
In `XFormersAttnProcessor`:
```python
hidden_states = xformers.ops.memory_efficient_attention(
query, key, value, attn_bias=attention_mask, op=self.attention_op, scale=attn.scale
)
```
In `AttnProcessor2_0`:
```python
hidden_states = F.scaled_dot_product_attention(
query, key, value, attn_mask=attention_mask, dropout_p=0.0, is_causal=False
)
```
|
https://github.com/huggingface/diffusers/issues/7957
|
closed
|
[] | 2024-05-16T08:54:07Z
| 2024-05-23T13:03:42Z
| null |
JWargrave
|
pytorch/xla
| 7,070
|
Cannot Import _XLAC
|
## ❓ Questions and Help
When I try to import torch_xla, the following error occurs:
```shell
>>> import torch_xla
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/code/pytorch/torch-xla/torch_xla/__init__.py", line 114, in <module>
import _XLAC
ImportError: /code/pytorch/torch-xla/_XLAC.cpython-310-x86_64-linux-gnu.so: undefined symbol: _ZNK5torch8autograd4Node4nameEv
```
I have followed this guide to make sure my torch version is the same as torch_xla:
https://github.com/Lightning-AI/pytorch-lightning/discussions/8320
```shell
>>> pip list | grep torch
[2]+ Stopped python
(torch_xla) root@0c9ffd606fd3:/code/pytorch/torch-xla# pip list | grep torch
rotary-embedding-torch 0.6.0
torch 2.1.0+cu121 /root/miniconda3/envs/torch_xla/lib/python3.10/site-packages
torch-xla 2.1.0 /code/pytorch/torch-xla
torchaudio 2.1.0+cu121
torchview 0.2.6
torchvision 0.16.0+cu121
torchviz 0.0.2
```
What should I do? Thanks for the help.
|
https://github.com/pytorch/xla/issues/7070
|
open
|
[
"question"
] | 2024-05-16T07:24:08Z
| 2025-04-17T13:38:56Z
| null |
DarkenStar
|
huggingface/OBELICS
| 12
|
How to use LDA for topic modeling
|
Thanks for your work again!
In the paper, the topic modeling of OBELICS is implemented using LDA. I am wondering which specific LDA model was used, what settings were used to train it, and most importantly, how the topics were derived from the keywords and weights (e.g., using LLMs)? Thank you for answering!
|
https://github.com/huggingface/OBELICS/issues/12
|
open
|
[] | 2024-05-16T03:56:29Z
| 2024-06-11T16:27:12Z
| null |
jrryzh
|
huggingface/transformers.js
| 765
|
Can you use all transformers models with transformers.js?
|
### Question
Hi,
can you use [all transformers models ](https://huggingface.co/models?library=transformers&sort=trending)(which seem to be listed under the python library) also in transformers.js? If yes, how so? Just download and provide the local path? I'm working in nodejs right now.
For example I'd like to use something like [Llama 3](https://huggingface.co/meta-llama/Meta-Llama-3-8B) with Transformers.js.
If that doesn't work, what would be the strongest general purpose LLM available for transformers.js right now (text generation, something like chatgpt, gemini, ...)?
Greetings & thanks a lot!
|
https://github.com/huggingface/transformers.js/issues/765
|
open
|
[
"question"
] | 2024-05-15T19:35:28Z
| 2024-05-15T21:21:57Z
| null |
Sir-hennihau
|
huggingface/datasets
| 6,899
|
List of dictionary features get standardized
|
### Describe the bug
Hi, I'm trying to create a HF dataset from a list using `Dataset.from_list`.
Each sample in the list is a dict with the same keys (which will be my features). The values for each feature are a list of dictionaries, and each such dictionary has a different set of keys. However, the datasets library standardizes all dictionaries under a feature and adds all possible keys (with None value) from all the dictionaries under that feature.
How can I keep the same set of keys as in the original list for each dictionary under a feature?
### Steps to reproduce the bug
```python
from datasets import Dataset

# Define a function to generate a sample with "tools" feature
def generate_sample():
    # Generate random sample data
    sample_data = {
        "text": "Sample text",
        "feature_1": []
    }
    # Add feature_1 with random keys for this sample
    feature_1 = [{"key1": "value1"}, {"key2": "value2"}]  # Example feature_1 with random keys
    sample_data["feature_1"].extend(feature_1)
    return sample_data

# Generate multiple samples
num_samples = 10
samples = [generate_sample() for _ in range(num_samples)]

# Create a Hugging Face Dataset
dataset = Dataset.from_list(samples)
dataset[0]
```
```{'text': 'Sample text', 'feature_1': [{'key1': 'value1', 'key2': None}, {'key1': None, 'key2': 'value2'}]}```
### Expected behavior
```{'text': 'Sample text', 'feature_1': [{'key1': 'value1'}, {'key2': 'value2'}]}```
### Environment info
- `datasets` version: 2.19.1
- Platform: Linux-5.15.0-1040-nvidia-x86_64-with-glibc2.35
- Python version: 3.10.13
- `huggingface_hub` version: 0.23.0
- PyArrow version: 15.0.0
- Pandas version: 2.2.0
- `fsspec` version: 2023.10.0
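A possible workaround sketch (an assumption, not the library's intended fix): serialize each heterogeneous dict to a JSON string so that Arrow stores a list of strings instead of unifying the struct keys.
```python
import json
from datasets import Dataset

samples = [
    {
        "text": "Sample text",
        "feature_1": [json.dumps({"key1": "value1"}), json.dumps({"key2": "value2"})],
    }
    for _ in range(10)
]
dataset = Dataset.from_list(samples)
print([json.loads(s) for s in dataset[0]["feature_1"]])
# [{'key1': 'value1'}, {'key2': 'value2'}]
```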
|
https://github.com/huggingface/datasets/issues/6899
|
open
|
[] | 2024-05-15T14:11:35Z
| 2025-04-01T20:48:03Z
| 2
|
sohamparikh
|
huggingface/transformers
| 30,827
|
Using this command(optimum-cli export onnx --model Qwen1.5-0.5B-Chat --task text-generation Qwen1.5-0.5B-Chat_onnx/) to perform onnx transformation, it is found that the tensor type of the model becomes int64. How to solve this problem?
|
### System Info
transformers version : 4.38.1
platform: ubuntu 22.04
python version : 3.10.14
optimum version : 1.19.2
### Who can help?
@ArthurZucker and @younesbelkada
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Reference conversion command link: https://huggingface.co/docs/transformers/v4.40.1/zh/serialization
2. Download the model files offline (https://huggingface.co/Qwen/Qwen1.5-0.5B-Chat/tree/main)
3. Run the conversion command: optimum-cli export onnx --model Qwen1.5-0.5B-Chat --task text-generation Qwen1.5-0.5B-Chat_onnx/
The conversion results are as follows:
```
(mypy3.10_qnn) zhengjr@ubuntu-ThinkStation-P3-Tower:~$ optimum-cli export onnx --model Qwen1.5-0.5B-Chat --task text-generation Qwen1.5-0.5B-Chat_onnx/
2024-05-15 19:42:07.726433: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 AVX_VNNI FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-05-15 19:42:07.916257: I tensorflow/core/util/util.cc:169] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2024-05-15 19:42:07.997974: E tensorflow/stream_executor/cuda/cuda_blas.cc:2981] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-05-15 19:42:08.545959: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory
2024-05-15 19:42:08.546100: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory
2024-05-15 19:42:08.546104: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
Framework not specified. Using pt to export the model.
The task `text-generation` was manually specified, and past key values will not be reused in the decoding. if needed, please pass `--task text-generation-with-past` to export using the past key values.
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Using the export variant default. Available variants are:
- default: The default ONNX variant.
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
***** Exporting submodel 1/1: Qwen2ForCausalLM *****
Using framework PyTorch: 1.13.1
Overriding 1 configuration item(s)
- use_cache -> False
/home/zhengjr/anaconda3/envs/mypy3.10_qnn/lib/python3.10/site-packages/transformers/modeling_attn_mask_utils.py:114: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if (input_shape[-1] > 1 or self.sliding_window is not None) and self.is_causal:
/home/zhengjr/anaconda3/envs/mypy3.10_qnn/lib/python3.10/site-packages/optimum/exporters/onnx/model_patcher.py:300: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if past_key_values_length > 0:
/home/zhengjr/anaconda3/envs/mypy3.10_qnn/lib/python3.10/site-packages/transformers/models/qwen2/modeling_qwen2.py:126: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if seq_len > self.max_seq_len_cached:
/home/zhengjr/anaconda3/envs/mypy3.10_qnn/lib/python3.10/site-packages/transformers/models/qwen2/modeling_qwen2.py:290: TracerWarning: Converting a tensor to a Python boole
```
|
https://github.com/huggingface/transformers/issues/30827
|
closed
|
[] | 2024-05-15T12:45:50Z
| 2024-06-26T08:04:10Z
| null |
JameslaoA
|
pytorch/executorch
| 3,620
|
how to calculate the vocab_size of new model
|
Hi,
When I tried to add the "Blue LLM" model and evaluate its perplexity (ppl), I got the following error:
```
Traceback (most recent call last):
File "/home/ufoe/anaconda3/envs/linchao/bin/lm_eval", line 8, in <module>
sys.exit(cli_evaluate())
File "/home/ufoe/linchao/lm-evaluation-harness/lm_eval/__main__.py", line 341, in cli_evaluate
results = evaluator.simple_evaluate(
File "/home/ufoe/linchao/lm-evaluation-harness/lm_eval/utils.py", line 288, in _wrapper
return fn(*args, **kwargs)
File "/home/ufoe/linchao/lm-evaluation-harness/lm_eval/evaluator.py", line 180, in simple_evaluate
lm = lm_eval.api.registry.get_model(model).create_from_arg_string(
File "/home/ufoe/linchao/lm-evaluation-harness/lm_eval/api/model.py", line 134, in create_from_arg_string
return cls(**args, **args2)
File "/home/ufoe/linchao/lm-evaluation-harness/lm_eval/models/huggingface.py", line 203, in __init__
self._create_model(
File "/home/ufoe/linchao/lm-evaluation-harness/lm_eval/models/huggingface.py", line 544, in _create_model
self._model = self.AUTO_MODEL_CLASS.from_pretrained(
File "/home/ufoe/anaconda3/envs/linchao/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 556, in from_pretrained
return model_class.from_pretrained(
File "/home/ufoe/anaconda3/envs/linchao/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3502, in from_pretrained
) = cls._load_pretrained_model(
File "/home/ufoe/anaconda3/envs/linchao/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3926, in _load_pretrained_model
new_error_msgs, offload_index, state_dict_index = _load_state_dict_into_meta_model(
File "/home/ufoe/anaconda3/envs/linchao/lib/python3.10/site-packages/transformers/modeling_utils.py", line 805, in _load_state_dict_into_meta_model
set_module_tensor_to_device(model, param_name, param_device, **set_module_kwargs)
File "/home/ufoe/anaconda3/envs/linchao/lib/python3.10/site-packages/accelerate/utils/modeling.py", line 358, in set_module_tensor_to_device
raise ValueError(
ValueError: Trying to set a tensor of shape torch.Size([100008, 4096]) in "weight" (which has shape torch.Size([100096, 4096])), this look incorrect.
```
How do I calculate the correct vocab_size?
Thank you.
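One way to start inspecting the mismatch (a sketch with hypothetical paths, not a confirmed fix): the traceback suggests the checkpoint embedding has 100008 rows while the instantiated model expects 100096, so comparing the config, tokenizer, and checkpoint shapes should show where they disagree.
```python
# Hypothetical local path; BlueLM-style models may also need trust_remote_code=True.
from transformers import AutoConfig, AutoTokenizer

config = AutoConfig.from_pretrained("path/to/blue-llm", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("path/to/blue-llm", trust_remote_code=True)
print("config.vocab_size:", config.vocab_size)
print("len(tokenizer):   ", len(tokenizer))
```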
|
https://github.com/pytorch/executorch/issues/3620
|
closed
|
[] | 2024-05-15T12:20:13Z
| 2024-05-16T05:12:15Z
| null |
l2002924700
|
huggingface/chat-ui
| 1,142
|
Feature request, local assistants
|
I experimented with a few assistants on HF.
The problem I am facing is that I don't know how to get the same behaviour I get on HF from a local model (which is the same model).
I tried everything I could think of.
I think HF does some filtering or rephrasing or has an additional prompt before the assistant description.
Please help.
I am available for chat on discord https://discordapp.com/users/Zibri/
|
https://github.com/huggingface/chat-ui/issues/1142
|
open
|
[
"support"
] | 2024-05-15T11:11:29Z
| 2024-05-27T06:53:21Z
| 2
|
Zibri
|
pytorch/extension-cpp
| 93
|
[feature request] Instruction on how to setup compile-env for Windows
|
Hi
I have been working with extensions successfully on Linux (shipping as `whl`)
An end-user has asked me to provide a windows version of an extension, and I have to admit that it was not as simple as the documentation suggested [here](https://pytorch.org/tutorials/advanced/cpp_extension.html).
Can you please provide a minimal explanation or example on how to setup the compile env for this repo?
I don't mind if it is based on `setuptools` or `cmake`, as long as it does not include a non-free tool like VS-pro [here](https://github.com/mszhanyi/VSIXTorch)
--------------------------------
Here is some general environment information that will help:
- OS: >=Win10
- PyTorch version: >=1.6.0
- How you installed PyTorch (conda, pip, source): both conda and pip
- Python version: >=3.8
- CUDA version: >=10.2
|
https://github.com/pytorch/extension-cpp/issues/93
|
open
|
[] | 2024-05-15T06:10:08Z
| 2024-05-15T06:10:08Z
| null |
litaws
|
huggingface/optimum
| 1,855
|
how to change optimum temporary path ?
|
### Feature request
The C drive has too little free space.
### Motivation
It would help solve many issues.
### Your contribution
I don't know.
|
https://github.com/huggingface/optimum/issues/1855
|
closed
|
[] | 2024-05-14T11:17:14Z
| 2024-10-14T12:22:35Z
| null |
neonarc4
|
huggingface/optimum
| 1,854
|
ai21labs/Jamba-tiny-random support
|
### Feature request
The ai21labs/Jamba-tiny-random model is not supported by Optimum export.
ValueError: Trying to export a jamba model, that is a custom or unsupported architecture, but no custom onnx configuration was passed as `custom_onnx_configs`. Please refer to https://huggingface.co/docs/optimum/main/en/exporters/onnx/usage_guides/export_a_model#custom-export-of-transformers-models for an example on how to export custom models. Please open an issue at https://github.com/huggingface/optimum/issues if you would like the model type jamba to be supported natively in the ONNX export.
### Motivation
Jamba is potentially very significant as it has a large context but a small size. This could be used in lots of scenarios if it has good performance.
### Your contribution
Unlikely I could do a PR as ONNX work is not my forte.
|
https://github.com/huggingface/optimum/issues/1854
|
open
|
[
"feature-request",
"onnx"
] | 2024-05-14T10:22:05Z
| 2024-10-09T09:10:58Z
| 0
|
frankia312
|
huggingface/transformers.js
| 763
|
Have you considered using wasm technology to implement this library?
|
### Question
Hello, have you ever considered using WASM technology to implement this library? For example, Rust's wgpu-rs and C++'s Dawn are both implementations of WebGPU. They can be compiled to WASM and can also be accelerated with SIMD.
|
https://github.com/huggingface/transformers.js/issues/763
|
open
|
[
"question"
] | 2024-05-14T09:22:57Z
| 2024-05-14T09:28:38Z
| null |
ghost
|
huggingface/trl
| 1,643
|
How to save and resume a checkpoint from PPOTrainer
|
https://github.com/huggingface/trl/blob/5aeb752053876cce64f2164a178635db08d96158/trl/trainer/ppo_trainer.py#L203
It seems that every time the PPOTrainer is initialized, the accelerator is initialized as well. There's no API provided by PPOTrainer to resume checkpoints. How can we save and resume checkpoints?
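A rough workaround sketch (an assumption, not an official PPOTrainer API): reach into the trainer's underlying Accelerator and use its state-saving helpers, with `ppo_trainer` being an already-constructed PPOTrainer.
```python
# Save model/optimizer/RNG state for the objects the accelerator has prepared.
ppo_trainer.accelerator.save_state("ppo_checkpoint")

# Later, after constructing an identical PPOTrainer in a new run:
ppo_trainer.accelerator.load_state("ppo_checkpoint")
```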
|
https://github.com/huggingface/trl/issues/1643
|
closed
|
[] | 2024-05-14T09:10:40Z
| 2024-08-08T12:44:25Z
| null |
paraGONG
|
huggingface/tokenizers
| 1,531
|
How to Batch-Encode Paired Input Sentences with Tokenizers: Seeking Clarification
|
Hello.
I'm using the tokenizer to encode sentence pairs with TemplateProcessing via encode_batch.
There's a confusing part where the method requires two lists for sentence A and sentence B.
According to the [guide documentation](https://huggingface.co/docs/tokenizers/quicktour): "To process a batch of sentences pairs, pass two lists to the Tokenizer.encode_batch method: the list of sentences A and the list of sentences B."
Since it instructs to input two lists, it seems like [[A1, A2], [B1, B2]] --(encode)-> {A1, B1}, {A2, B2}.
However, the actual input expects individual pairs batched, not splitting the sentence pairs into lists for A and B.
So, it should be [[A1, B1], [A2, B2]] to encode as {A1, B1}, {A2, B2}.
I've also confirmed that the length of the input list for encode_batch keeps increasing with the number of batches.
Since the guide instructs to input sentence A and sentence B, this is where the confusion arises.
If I've misunderstood anything, could you help clarify this point so I can understand it better?
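To make the format concrete, here is a small sketch of the pairing described above (the tokenizer checkpoint is just an example):
```python
# encode_batch takes one list whose items are (sentence_A, sentence_B) pairs.
from tokenizers import Tokenizer

tokenizer = Tokenizer.from_pretrained("bert-base-uncased")
encodings = tokenizer.encode_batch([("A1", "B1"), ("A2", "B2")])
print(encodings[0].tokens)  # tokens of the pair {A1, B1}
```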
|
https://github.com/huggingface/tokenizers/issues/1531
|
closed
|
[
"Stale"
] | 2024-05-14T08:03:52Z
| 2024-06-21T08:20:05Z
| null |
insookim43
|
pytorch/xla
| 7,057
|
Experiencing slow recompilation when manually building XLA
|
## ❓ Questions and Help
Hi, I am interested in contributing to the XLA community, but I have encountered a small challenge. After manually building `torch` and `torch_xla` in a CPU-based (CPU: **Intel(R) Xeon(R) Platinum 8375C CPU @ 2.90GHz**) Docker env, I noticed that the `python setup.py develop` process takes about **1 minute** each time. Could you suggest any Dockerfile configurations or other changes that might speed up the recompilation process? Thanks for your help!
|
https://github.com/pytorch/xla/issues/7057
|
open
|
[
"question"
] | 2024-05-14T03:28:42Z
| 2025-04-17T13:41:57Z
| null |
wenboqian
|
pytorch/xla
| 7,056
|
Export nn.Module.forward with kwargs to StableHLO
|
## ❓ Questions and Help
I see in [_exported_program_to_stablehlo_bundle()](https://github.com/pytorch/xla/blob/6f0b61e5d782913a0fc7743812f2a8e522189111/torch_xla/stablehlo.py#L318) that exporting with kwargs isn't supported _**yet**_.
Do you expect to support this in the near future?
If not, is there another way to lower a torch.nn.Module's `forward` method with kwargs to StableHLO?
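A possible workaround sketch (an assumption, not a confirmed torch_xla feature): wrap the module so its kwargs become positional inputs, export the wrapper with torch.export, and then feed the resulting ExportedProgram to the StableHLO path referenced above. `Inner` is a stand-in for the real module.
```python
import torch

class Inner(torch.nn.Module):
    def forward(self, x, scale=1.0):
        return x * scale

class KwargsToArgs(torch.nn.Module):
    def __init__(self, inner):
        super().__init__()
        self.inner = inner

    def forward(self, x, scale):
        # pass the positional argument on as the kwarg the inner module expects
        return self.inner(x, scale=scale)

exported = torch.export.export(KwargsToArgs(Inner()), (torch.randn(4), torch.tensor(2.0)))
```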
|
https://github.com/pytorch/xla/issues/7056
|
closed
|
[
"question",
"stablehlo"
] | 2024-05-13T21:21:42Z
| 2025-04-17T13:42:55Z
| null |
johnmatter
|
huggingface/transformers.js
| 762
|
Options for the "translation" pipeline when using Xenova/t5-small
|
### Question
The translation pipeline is [documented](https://huggingface.co/docs/transformers.js/api/pipelines#module_pipelines.TranslationPipeline) to use {src_lang and tgt_lang} options to translate from the src language to the tgt language. However, when using Xenova/t5-small none of the options seem to be used. Instead looking at the demo code it appears that you have to change the pipeline.task field to "translation_{fromLanguage}_to_{targetLanguage}" but I can't find a way to normalize the usage of the translation pipeline with different models.
Is this task pattern documented somewhere or am I missing some other option settings when calling the translation pipeline?
|
https://github.com/huggingface/transformers.js/issues/762
|
open
|
[
"question"
] | 2024-05-13T21:09:15Z
| 2024-05-13T21:09:15Z
| null |
lucapivato
|
pytorch/torchchat
| 784
|
Can't use TorchChat with Python-3.9
|
Because of https://github.com/pytorch/torchchat/blob/a276b5fdd12d0dd843fd81543ceffb57065354e3/cli.py#L318-L319
That was added by https://github.com/pytorch/torchchat/pull/746 with a very descriptive title "CLI check"
If this is indeed a product requirement, can we specify it somewhere in README.MD (and perhaps have some discussion about it?)
|
https://github.com/pytorch/torchchat/issues/784
|
closed
|
[
"launch blocker"
] | 2024-05-13T18:50:16Z
| 2024-05-13T19:01:22Z
| 2
|
malfet
|
huggingface/datasets
| 6,894
|
Better document defaults of to_json
|
Better document defaults of `to_json`: the default format is [JSON-Lines](https://jsonlines.org/).
Related to:
- #6891
|
https://github.com/huggingface/datasets/issues/6894
|
closed
|
[
"documentation"
] | 2024-05-13T13:30:54Z
| 2024-05-16T14:31:27Z
| 0
|
albertvillanova
|
pytorch/TensorRT
| 2,830
|
❓ [Question] How to specify that certain aten operators must be run by LibTorch in C++?
|
## ❓ Question
When I compile the SwinTransformer model using Torch-TensorRT, an error appears:
```
terminate called after throwing an instance of 'c10::Error'
what(): 0 INTERNAL ASSERT FAILED at "../torch/csrc/jit/ir/alias_analysis.cpp":615, please report a bug to PyTorch. We don't have an op for aten::floor_divide but it isn't a special case. Argument types: int, int,
Candidates:
aten::floor_divide(Tensor self, Tensor other) -> Tensor
aten::floor_divide.Scalar(Tensor self, Scalar other) -> Tensor
aten::floor_divide.out(Tensor self, Tensor other, *, Tensor(a!) out) -> Tensor(a!)
aten::floor_divide.Scalar_out(Tensor self, Scalar other, *, Tensor(a!) out) -> Tensor(a!)
```
I checked out this [link](https://github.com/facebookresearch/segment-anything/issues/446); this error occurs because torch-trt doesn't support the % op.
Fine, I can choose to run floor_divide using LibTorch.
```C++
torchtrt::ts::CompileSpec compile_settings({ input });
compile_settings.enabled_precisions.insert(build_type);
compile_settings.workspace_size = _1_GB;
compile_settings.truncate_long_and_double = true;
compile_settings.num_avg_timing_iters = 1;
compile_settings.torch_executed_ops.push_back("aten::floor_divide"); // here
torchtrt::ts::compile(model, compile_settings)
```
It's strange that the setting does not take effect. This error still persists.
What can I do about this error?
Furthermore, how do I specify that certain aten operators must be run by LibTorch in C++?
## Environment
> Build information about Torch-TensorRT can be found by turning on debug messages
- PyTorch Version (e.g., 1.0):2.2.1
- CPU Architecture:x86
- OS (e.g., Linux):ubuntu22.04
- How you installed PyTorch (`conda`, `pip`, `libtorch`, source):
- Build command you used (if compiling from source):
- Are you using local sources or building from archives:
- Python version:
- CUDA version:12.2
- GPU models and configuration:
- Any other relevant information:
|
https://github.com/pytorch/TensorRT/issues/2830
|
open
|
[
"question"
] | 2024-05-13T10:10:09Z
| 2024-05-27T01:40:49Z
| null |
demuxin
|
huggingface/chat-ui
| 1,134
|
Websearch failed on retrieving from pdf files
|
On chat-ui I am getting the error shown in the screenshot: on PDF files it always says "Failed to parse webpage". I set USE_LOCAL_WEBSEARCH=True in .env.local. Can anyone help me?

|
https://github.com/huggingface/chat-ui/issues/1134
|
open
|
[
"support",
"websearch"
] | 2024-05-13T06:41:08Z
| 2024-06-01T09:25:59Z
| 2
|
prateekvyas1996
|
pytorch/xla
| 7,049
|
Does SPMD support expert parallelism?
|
Does torch_xla SPMD support expert parallelism?
If the model is an MoE model, how should it be computed in XLA?
## ❓ Questions and Help
|
https://github.com/pytorch/xla/issues/7049
|
open
|
[
"question",
"distributed"
] | 2024-05-13T03:23:20Z
| 2025-09-03T20:34:04Z
| null |
mars1248
|
pytorch/torchchat
| 776
|
[tune/chat integration] component sharing
|
We seem to be doing the same rote stuff like managing checkpoints, downloading them, managing permissions, converting checkpoints and what have you...
Maybe this might be a good opportunity to reduce our joint workload by pooling some of these functions. It would likely also improve the user experience thanks to consistency, and because we can invest the saved person-months elsewhere.
This is still early, and I'm not suggesting doing this at this very moment (or we'll never launch!), but it's something I wanted to raise both for efficiency and consistency.
|
https://github.com/pytorch/torchchat/issues/776
|
closed
|
[] | 2024-05-13T02:44:08Z
| 2024-07-21T21:50:46Z
| 0
|
mikekgfb
|
pytorch/torchchat
| 775
|
[INTEGRATION] torchtune integration for e2e workflow with torchchat
|
Hey, I’m working my way through our documentation and trying to make it run in CI. That aligns pretty well with the user experience we have in mind, where users can just cut & paste commands…
Also, we have so many dependences that unless we test at least the instructions for the users nothing works…
I have a couple of questions:
1 - So you install torchtune, and then you assume that your CWD is where? If we assume we’re in torchchat, which our users will have been conditioned to be in (at least in the first release), are they going to find torchtune? Is that a bona fide package?
2 - You access the config assuming it’s in llama3/8B_lora_single_device — we don’t have that file…. should we? Can we put it somewhere like ~torchchat/tune/config/llama3? Any other things I should know?
3 - what are you fine tuning on?
4 - our users may already have downloaded checkpoints? Can they use those? Or are you loading special versions?
5 - we run tests on-pr for every PR that’s submitted… which doesn’t work with llama3-8B because of time and cost. Is there anything that would prevent us from running stories15M (or some other very small model), not because it will have great output quality, but it will force resolution of names, finding of all the imports, and produce intelligible (if not great output). Is there anything that would prevent that?
6 - what other assumptions does your build have @ https://github.com/pytorch/torchchat/blob/main/docs/torchtune.md. Is it up to date?
7 - can I substitute CPU or MPS, or…. whatever my favorite device is? How much pain should I expect? has anybody done this on a MacBook for example?
8 - do we need any corpora or other such for finetuning?
9 - anything else I forgot to ask, but I should have?
So, the updates instructions are here => https://github.com/pytorch/torchchat/pull/774
I pull the instructions out of the markdown source by marking it up, and then have a script run….
```
python3 scripts/updown.py --file docs/torchtune.md --replace 'llama3:stories15M,-l 3:-l 2,meta-llama/Meta-Llama-3-8B-Instruct:stories15M' --suppress huggingface-cli,HF_TOKEN > ./run-torchtune.sh
```
The pattern replacers for on-pr need to be adapted for this example (another reason why I would actually love to use the downloaded checkpoints… I have it down for those… but you may have intermediate results, and all that should not go in the downloaded files….
Although we could just do
```
cp -r `python3 torchchat.py where llama3`/* ~/wherever-tune-needs-it
```
and it would work
Failures appear pretty benign, just an HF token issue (and the llama3->stories15M substitution not working).
Are there references to the model name and path in the config that would need to be adjusted?
This is the script generated from the markdown instructions…. https://www.internalfb.com/intern/paste/P1360945144/
Do you see any issues with it? This is not a human using it but `bash -x ./tune-script.sh` so it can’t be sorta right and user will figure it out — it needs to be 100% up to snuff
This is the error at the moment; it seems benign, like it just needs an update to the download process?
(base) mikekg@mikekg-mbp torchchat % bash -x ./run-torchtune.sh|& pastry
P1360947478: https://www.internalfb.com/intern/paste/P1360947478/
Here's what happens in detail in CI. https://github.com/pytorch/torchchat/actions/runs/9056119551/job/24878207016?pr=774
(I know, the build bars are TMI lolol)
Here’s the error message in detail:
```
Ignoring files matching the following patterns: *.safetensors
usage: tune download <repo-id> [OPTIONS]
tune download: error: It looks like you are trying to access a gated repository. Please ensure you have access to the repository and have provided the proper Hugging Face API token using the option `--hf-token` or by running `huggingface-cli login`.You can find your token by visiting https://huggingface.co/settings/tokens
```
Thanks for working with us to build a rock-solid end-to-end story from tune to chat. Looking forward to figuring this out and building an amazing experience for our joint users!
|
https://github.com/pytorch/torchchat/issues/775
|
closed
|
[] | 2024-05-13T02:35:21Z
| 2024-07-21T21:46:30Z
| 1
|
mikekgfb
|
pytorch/torchchat
| 773
|
[DOCS] GGUF instructions in docs/ADVANCED-USERS.md
|
the instructions for GGUF in https://github.com/pytorch/torchchat/blob/main/docs/ADVANCED-USERS.md state:
> To use the quantize tool, install the GGML tools at ${GGUF} . Then, you can, for example, convert a quantized model to f16 format:
How do I do that? Can we put this in the doc, including with a definition of the GGUF environment variable, so when we extract the commands and try to run them we have all the pieces?
xref: https://github.com/pytorch/torchchat/pull/772
|
https://github.com/pytorch/torchchat/issues/773
|
closed
|
[] | 2024-05-13T01:26:16Z
| 2024-05-20T12:56:45Z
| 1
|
mikekgfb
|
huggingface/parler-tts
| 47
|
Custom pronunciation for words - any thoughts / recommendations about how best to handle them?
|
Hello! This is a really interesting looking project.
Currently there doesn't seem any way that users can help the model correctly pronounce custom words - for instance **JPEG** is something that speakers just need to know is broken down as "**Jay-Peg**" rather than **Jay-Pea-Ee-Gee**.
I appreciate this project is at an early stage but for practical uses, especially with brands and product names often having quirky ways of saying words or inventing completely new words, it's essential to be able to handle their correct pronunciation on some sort of override basis. It's not just brands - plenty of people's names need custom handling and quite a few novel computer words are non-obvious too.
Examples that cause problems in the current models: **Cillian, Joaquin, Deirdre, Versace, Tag Heuer, Givenchy, gigabytes, RAM, MPEG** etc.
Are there any suggestions on how best to tackle this?
I saw there was #33 which uses a normaliser specifically for numbers. Is there something similar for custom words? I suppose perhaps one could drop in a list of custom words and some sort of mapping to the desired pronunciation, applying that as a stage similar to how it handles abbreviations.
In espeak backed tools, it's sometimes possible to replace words with custom IPA that replaces the default IPA generated but I believe this model doesn't use IPA for controlling pronunciation.
Given the frequently varying pronunciations, I doubt that simply finetuning to include the words would be a viable approach.
Anyway, would be great to hear what others have to recommend.
_Incidentally certain mainstream terms also get completely garbled, it seems impossible to get Instagram, Linux or Wikipedia to be spoken properly, but that's more a training data issue and those are mainstream enough that you wouldn't need to cover them via custom overrides._
|
https://github.com/huggingface/parler-tts/issues/47
|
open
|
[] | 2024-05-12T15:51:05Z
| 2025-01-03T08:39:58Z
| null |
nmstoker
|
pytorch/examples
| 1,257
|
multi-node Tensor Parallel
|
Hello, could you add a new example of tensor parallel + FSDP using a multi-node setup?
Is it possible to do multi-node tensor parallelization with pytorch 2.3? I am trying to use 2 nodes with 4 GPUs each.
05/12/2024 04:32:52 PM Device Mesh created: device_mesh=DeviceMesh([[0, 1, 2, 3], [4, 5, 6, 7]], mesh_dim_names=('dp', 'tp'))
When I try the actual example on multiple nodes I get the following errors.
Thank you.
```
as07r1b31:3011779:3012101 [0] init.cc:871 NCCL WARN Duplicate GPU detected : rank 0 and rank 1 both on CUDA device 1b000
as07r1b31:3011783:3012102 [0] init.cc:871 NCCL WARN Duplicate GPU detected : rank 1 and rank 0 both on CUDA device 1b000
as07r1b31:3011782:3012104 [3] init.cc:871 NCCL WARN Duplicate GPU detected : rank 0 and rank 1 both on CUDA device ad000
as07r1b31:3011786:3012107 [3] init.cc:871 NCCL WARN Duplicate GPU detected : rank 1 and rank 0 both on CUDA device ad000
as07r1b31:3011780:3012106 [1] init.cc:871 NCCL WARN Duplicate GPU detected : rank 0 and rank 1 both on CUDA device 2c000
as07r1b31:3011784:3012108 [1] init.cc:871 NCCL WARN Duplicate GPU detected : rank 1 and rank 0 both on CUDA device 2c000
as07r1b31:3011781:3012110 [2] init.cc:871 NCCL WARN Duplicate GPU detected : rank 0 and rank 1 both on CUDA device 9d000
as07r1b31:3011785:3012111 [2] init.cc:871 NCCL WARN Duplicate GPU detected : rank 1 and rank 0 both on CUDA device 9d000
[rank0]: Traceback (most recent call last):
[rank0]: File "/gpfs/mn4/AE_tp/tests.py", line 91, in <module>
[rank0]: _, output = sharded_model(inp)
[rank0]: ^^^^^^^^^^^^^^^^^^
[rank0]: File "/home/mn4/AE_tp/mdae2.3/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[rank0]: return self._call_impl(*args, **kwargs)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/home/mn4/AE_tp/mdae2.3/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[rank0]: return forward_call(*args, **kwargs)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/home/mn4/AE_tp/mdae2.3/lib/python3.12/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py", line 843, in forward
[rank0]: args, kwargs = _pre_forward(
[rank0]: ^^^^^^^^^^^^^
[rank0]: File "/home/mn4/AE_tp/mdae2.3/lib/python3.12/site-packages/torch/distributed/fsdp/_runtime_utils.py", line 380, in _pre_forward
[rank0]: unshard_fn(state, handle)
[rank0]: File "/home/mn4/AE_tp/mdae2.3/lib/python3.12/site-packages/torch/distributed/fsdp/_runtime_utils.py", line 415, in _pre_forward_unshard
[rank0]: _unshard(state, handle, state._unshard_stream, state._pre_unshard_stream)
[rank0]: File "/home/mn4/AE_tp/mdae2.3/lib/python3.12/site-packages/torch/distributed/fsdp/_runtime_utils.py", line 299, in _unshard
[rank0]: handle.unshard()
[rank0]: File "/home/mn4/AE_tp/mdae2.3/lib/python3.12/site-packages/torch/distributed/fsdp/_flat_param.py", line 1308, in unshard
[rank0]: padded_unsharded_flat_param = self._all_gather_flat_param(unsharded_flat_param)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/home/mn4/AE_tp/mdae2.3/lib/python3.12/site-packages/torch/distributed/fsdp/_flat_param.py", line 1399, in _all_gather_flat_param
[rank0]: dist.all_gather_into_tensor(
[rank0]: File "/home/mn4/AE_tp/mdae2.3/lib/python3.12/site-packages/torch/distributed/c10d_logger.py", line 75, in wrapper
[rank0]: return func(*args, **kwargs)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/home/mn4/AE_tp/mdae2.3/lib/python3.12/site-packages/torch/distributed/distributed_c10d.py", line 2948, in all_gather_into_tensor
[rank0]: work = group._allgather_base(output_tensor, input_tensor, opts)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: torch.distributed.DistBackendError: NCCL error in: /opt/conda/conda-bld/pytorch_1712608847532/work/torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1970, invalid usage (run with NCCL_DEBUG=WARN for details), NCCL version 2.20.5
[rank0]: ncclInvalidUsage: This usually reflects invalid usage of NCCL library.
[rank0]: Last error:
[rank0]: Duplicate GPU detected : rank 0 and rank 1 both on CUDA device 1b000
[same on other ranks]
Traceback (most recent call last):
File "/home/mn4/AE_tp/mdae2.3/bin/torchrun", line 33, in <module>
sys.exit(load_entry_point('torch==2.3.0', 'console_scripts', 'torchrun')())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/mn4/AE_tp/mdae2.3/lib/python3.12/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 347, in wrapper
return f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "/home/mn4/AE_tp/mdae2.3/lib/python3.12/site-packages/torch/distributed/run.py", line 879, in main
run(args)
File "/home/mn4/AE_
|
https://github.com/pytorch/examples/issues/1257
|
open
|
[] | 2024-05-12T15:19:26Z
| 2024-11-05T09:15:28Z
| 1
|
PieterZanders
|
pytorch/torchchat
| 757
|
[LAUNCH DOCS] Add instructions what needs to be installed, and how to README
|
At present, running the instructions in the README will fail for the xcode project. See [#755](https://github.com/pytorch/torchchat/pull/755)
At a minimum we should specify what should be installed and what the minimum xcode version (and any other requirements) are?
Also, I would expect this to fail even then, because this might be GUI based with no fully scriptable set of instructions (plus it's not clear we'd want the scripted instructions when most devs are more likely to start out with the GUI builder?). So, how can/should we test the iOS app build in open source?
As a corollary, how do we automate testing of README for correctness? (and maybe the answer is "it's too involved", and that's OK if that turns out to be the right answer)
cc: @byjlw @shoumikhin
|
https://github.com/pytorch/torchchat/issues/757
|
closed
|
[] | 2024-05-12T04:50:32Z
| 2024-07-27T01:53:39Z
| null |
mikekgfb
|
pytorch/executorch
| 3,585
|
How can I use ExecuTorch to deploy a model to a MicroController,such as Infineon TC3xxx ?
|
"ExecuTorch is an end-to-end solution for enabling on-device inference capabilities across mobile and edge devices including wearables, **embedded devices** and **microcontrollers**"
Hello, the statement above appears in the [ExecuTorch docs](https://pytorch.org/executorch/stable/intro-overview.html).
I want to know:
What types of microcontrollers (mainly bare-metal) are already supported or will be supported?
Is it possible to deploy to an Infineon TC3xxx microcontroller? If yes, any suggestions on how to do it?
|
https://github.com/pytorch/executorch/issues/3585
|
closed
|
[
"module: backend"
] | 2024-05-11T07:13:57Z
| 2025-02-05T17:22:54Z
| null |
AlexLuya
|
pytorch/torchchat
| 740
|
[FEATURE REQUEST] Could not find... Probably missing HF token/login, but if so we might indicate?
|
(base) mikekg@mikekg-mbp torchchat % python3 torchchat.py generate llama3 --device cpu --compile
Downloading meta-llama/Meta-Llama-3-8B-Instruct from HuggingFace...
Converting meta-llama/Meta-Llama-3-8B-Instruct to torchchat format...
known configs: ['13B', '70B', 'CodeLlama-7b-Python-hf', '34B', 'stories42M', '30B', 'stories110M', '7B', 'stories15M', 'Mistral-7B', 'Meta-Llama-3-8B']
Model config {'block_size': 2048, 'vocab_size': 128256, 'n_layers': 32, 'n_heads': 32, 'dim': 4096, 'hidden_dim': 14336, 'n_local_heads': 8, 'head_dim': 128, 'rope_base': 500000.0, 'norm_eps': 1e-05, 'multiple_of': 1024, 'ffn_dim_multiplier': 1.3, 'use_tiktoken': True, 'max_seq_length': 8192}
Traceback (most recent call last):
File "/Users/mikekg/m14/torchchat/torchchat.py", line 143, in <module>
check_args(args, "generate")
File "/Users/mikekg/m14/torchchat/cli.py", line 39, in check_args
download_and_convert(args.model, args.model_directory, args.hf_token)
File "/Users/mikekg/m14/torchchat/download.py", line 91, in download_and_convert
_download_hf_snapshot(model_config, temp_dir, hf_token)
File "/Users/mikekg/m14/torchchat/download.py", line 55, in _download_hf_snapshot
convert_hf_checkpoint(
File "/Users/mikekg/miniconda3/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/Users/mikekg/m14/torchchat/build/convert_hf_checkpoint.py", line 60, in convert_hf_checkpoint
raise RuntimeError(
RuntimeError: Could not find /Users/mikekg/.torchchat/model-cache/downloads/meta-llama/Meta-Llama-3-8B-Instruct/pytorch_model.bin.index.json or /Users/mikekg/.torchchat/model-cache/downloads/meta-llama/Meta-Llama-3-8B-Instruct/original/consolidated.00.pth plus /Users/mikekg/.torchchat/model-cache/downloads/meta-llama/Meta-Llama-3-8B-Instruct/original/tokenizer.model
|
https://github.com/pytorch/torchchat/issues/740
|
closed
|
[] | 2024-05-10T22:18:51Z
| 2024-07-30T17:22:27Z
| 1
|
mikekgfb
|
huggingface/text-generation-inference
| 1,875
|
How to share memory among 2 GPUS for distributed inference?
|
# Environment Setup
Runtime environment:
Target: x86_64-unknown-linux-gnu
Cargo version: 1.75.0
Commit sha: https://github.com/huggingface/text-generation-inference/commit/c38a7d7ddd9c612e368adec1ef94583be602fc7e
Docker label: sha-6c4496a
Kubernetes Cluster deployment
2 A100 GPU with 80GB RAM
12 CPU with 32 GB RAM
TGI version: 2.0.0
TGI Parameters:
MAX_INPUT_LENGTH: "8000"
MAX_TOTAL_TOKENS: "8512"
MAX_CONCURRENT_REQUESTS: "128"
LOG_LEVEL: "INFO"
MAX_BATCH_TOTAL_TOKENS: "4294967295"
WAITING_SERVED_RATIO: "0.3"
MAX_WAITING_TOKENS: "0"
MAX_BATCH_PREFILL_TOKENS: "32768"
# Question
I am curious about how to optimize distributed inference for LLMs. I see that in the docs you mention this:
```
### A note on Shared Memory (shm)
[`NCCL`](https://docs.nvidia.com/deeplearning/nccl/user-guide/docs/index.html) is a communication framework used by `PyTorch` to do distributed training/inference. `text-generation-inference` make use of `NCCL` to enable Tensor Parallelism to dramatically speed up inference for large language models.
In order to share data between the different devices of a `NCCL` group, `NCCL` might fall back to using the host memory if peer-to-peer using NVLink or PCI is not possible.
To allow the container to use 1G of Shared Memory and support SHM sharing, we add `--shm-size 1g` on the above command.
If you are running `text-generation-inference` inside `Kubernetes`. You can also add Shared Memory to the container by creating a volume with:
\- name: shm
emptyDir:
medium: Memory
sizeLimit: 1Gi
and mounting it to `/dev/shm`.
Finally, you can also disable SHM sharing by using the `NCCL_SHM_DISABLE=1` environment variable. However, note that this will impact performance.
```
We currently have this setup with K8s:
```
- name: m
emptyDir:
sizeLimit: 1Gi
medium: Memory
```
However, I feel like I am missing something.
Say GPU memory size is G, model weight in megabytes is M and free available memory for processing requests is F.
Then, when I deploy a model of size M (where M < G) with SHARDED=True over 2 full GPUs (G_1 and G_2), what I expect is the model weights taking M megabytes from GPU 1 (G_1), and then the available/free memory for processing tokens/requests should be F = (G_1 - M) + G_2. Right?
Instead, what I am seeing is that the model is replicated on both GPUs, so F = (G_1 - M) + (G_2 - M). I believe this is not what we want. For example, with Mistral 7B:
| Sharded | GPU 1 | GPU 2 |
| -------- | ----- | ------ |
| False | 66553MiB / 81920MiB 81% used | Does not exist |
| True | 66553MiB / 81920MiB 81% used | 66553MiB / 81920MiB 81% used |
We would like to have the model only on 1 GPU (if it fits) and then use the extra available GPUs just for inference, i.e, increasing our memory budget at processing time by sharing the memory between the left over memory from the GPU where the model weights live and the memory from the GPU without model weights.
This is what makes me think we are not using NCCL correctly, or maybe my assumptions are wrong, and what I am saying is not possible to do?
# Visual description

|
https://github.com/huggingface/text-generation-inference/issues/1875
|
closed
|
[
"Stale"
] | 2024-05-10T08:49:05Z
| 2024-06-21T01:48:05Z
| null |
martinigoyanes
|
pytorch/pytorch
| 125,902
|
How to export onnx with fixed shape output ?
|
### 🐛 Describe the bug
```python
import torch

class TRT_SCA(torch.autograd.Function):
    @staticmethod
    def forward(ctx,
                query,
                key,
                value,
                reference_points,
                spatial_shapes,
                reference_points_cam,
                bev_mask,
                level_start_index):
        out = torch.randn(1, 1600, 256, dtype=torch.float32)
        return out  # I just want to assign the out shape is [1, 1600, 256]

    @staticmethod
    def symbolic(g,
                 query,
                 key,
                 value,
                 reference_points,
                 spatial_shapes,
                 reference_points_cam,
                 bev_mask,
                 level_start_index):
        return g.op("TRT::SCATT",
                    query,
                    key,
                    value,
                    reference_points,
                    spatial_shapes,
                    reference_points_cam,
                    bev_mask,
                    level_start_index)

trt_sca = TRT_SCA.apply

class SpatialCrossAttention(torch.nn.Module):
    def __init__(self):
        super(SpatialCrossAttention, self).__init__()

    def forward(self,
                query,
                key,
                value,
                reference_points=None,
                spatial_shapes=None,
                reference_points_cam=None,
                bev_mask=None,
                level_start_index=None):
        return trt_sca(
            query,
            key,
            value,
            reference_points,
            spatial_shapes,
            reference_points_cam,
            bev_mask,
            level_start_index)

query = torch.randn(1, 1600, 256, dtype=torch.float32)
key = torch.randn(6, 5315, 1, 256, dtype=torch.float32)
value = torch.randn(6, 5315, 1, 256, dtype=torch.float32)
reference_points = torch.randn(1, 4, 1600, 3, dtype=torch.float32)
spatial_shapes = torch.tensor([[40, 100],
                               [20, 50],
                               [10, 25],
                               [5, 13]], dtype=torch.int64)
reference_points_cam = torch.randn(6, 1, 1600, 4, 2, dtype=torch.float32)
bev_mask = torch.where(torch.randn(6, 1, 1600, 4) > 0.2, 1, 0)
level_start_index = torch.tensor([0, 4000, 5000, 5250], dtype=torch.int64)

nn_model = SpatialCrossAttention()
print("------------------------------------")
output_file = 'sca.onnx'
torch.onnx.export(
    nn_model,
    (query,
     key,
     value,
     reference_points,
     spatial_shapes,
     reference_points_cam,
     bev_mask,
     level_start_index),
    output_file,
    export_params=True,
    keep_initializers_as_inputs=True,
    do_constant_folding=True,
    enable_onnx_checker=True,
    verbose=True,
    opset_version=11,
)
print("export done")
```
### Versions
onnx 1.15.0
onnx-graphsurgeon 0.3.21
onnx-simplifier 0.4.36
onnxruntime 1.17.1
torch 1.10.0+cu113
torchaudio 0.10.0+cu113
torchvision 0.11.0+cu113
### Result

|
https://github.com/pytorch/pytorch/issues/125902
|
open
|
[
"module: onnx",
"triaged"
] | 2024-05-10T05:58:23Z
| 2024-05-17T04:35:24Z
| null |
lix19937
|
pytorch/text
| 2,264
|
t5_demo can't retrieve CNNDM from drive.google; how to use local copy?
|
## 🐛 Bug
**Describe the bug** A clear and concise description of what the bug is.
I am following the [t5_demo](https://pytorch.org/text/stable/tutorials/t5_demo.html), but it fails when it tries to access the CNN data at `https://drive.google.com/uc?export=download&id=0BwmD_VLjROrfTHk4NFg2SndKcjQ`.
**To Reproduce** Steps to reproduce the behavior:
1. Get notebook at [t5_demo](https://pytorch.org/text/stable/tutorials/t5_demo.html),
2. Try to run it. It gets as far as `batch = next(iter(cnndm_dataloader))` (https://pytorch.org/text/stable/tutorials/t5_demo.html#generate-summaries) where `cnndm_datapipe = CNNDM(split="test")` (https://pytorch.org/text/stable/tutorials/t5_demo.html#datasets)
3. Get error like:
> RuntimeError: Google drive link
>
> https://drive.google.com/uc?export=download&id=0BwmD_VLjROrfTHk4NFg2SndKcjQ&confirm=t
> internal error: headers don't contain content-disposition. This is
> usually caused by using a sharing/viewing link instead of a download
> link. Click 'Download' on the Google Drive page, which should
> redirect you to a download page, and use the link of that page.
>
> This exception is thrown by __iter__ of
> GDriveReaderDataPipe(skip_on_error=False,
> source_datapipe=OnDiskCacheHolderIterDataPipe, timeout=None)
**Expected behavior**
Looking at others with similar error messages makes it seem like there is some timeout issue retrieving from drive.google? So I went and got the `cnn_stories.tgz` and `dailymail_stories.tgz` and unpacked them:
> .
> ├── CNNDM
> │ ├── cnn
> │ │ └── stories
> │ └── dailymail
> │ └── stories
**How can I modify the calls to retrieve from my local cache?**
**Environment**
> % python collect_env.py
> Collecting environment information...
> PyTorch version: 2.1.0.post100
> Is debug build: False
> CUDA used to build PyTorch: None
> ROCM used to build PyTorch: N/A
>
> OS: macOS 14.4.1 (arm64)
> GCC version: Could not collect
> Clang version: 15.0.0 (clang-1500.1.0.2.5)
> CMake version: Could not collect
> Libc version: N/A
>
> Python version: 3.11.7 | packaged by conda-forge | (main, Dec 23 2023, 14:38:07) [Clang 16.0.6 ] (64-bit runtime)
> Python platform: macOS-14.4.1-arm64-arm-64bit
> Is CUDA available: False
> CUDA runtime version: No CUDA
> CUDA_MODULE_LOADING set to: N/A
> GPU models and configuration: No CUDA
> Nvidia driver version: No CUDA
> cuDNN version: No CUDA
> HIP runtime version: N/A
> MIOpen runtime version: N/A
> Is XNNPACK available: True
>
> CPU:
> Apple M1 Pro
>
> Versions of relevant libraries:
> [pip3] mypy-extensions==1.0.0
> [pip3] numpy==1.26.3
> [pip3] torch==2.1.0.post100
> [pip3] torchaudio==2.1.2
> [pip3] torchdata==0.7.1
> [pip3] torchtext==0.16.1
> [pip3] torchvision==0.16.2
> [conda] captum 0.7.0 0 pytorch
> [conda] numpy 1.26.2 pypi_0 pypi
> [conda] numpy-base 1.26.3 py311hfbfe69c_0
> [conda] pytorch 2.1.0 gpu_mps_py311hf322ab5_100
> [conda] torch 2.1.2 pypi_0 pypi
> [conda] torchaudio 2.1.2 pypi_0 pypi
> [conda] torchdata 0.7.1 pypi_0 pypi
> [conda] torchtext 0.16.1 pypi_0 pypi
> [conda] torchvision 0.16.2 pypi_0 pypi
>
>
**Additional context** Add any other context about the problem here.
|
https://github.com/pytorch/text/issues/2264
|
open
|
[] | 2024-05-10T03:55:13Z
| 2024-05-10T03:55:13Z
| null |
rbelew
|
huggingface/accelerate
| 2,759
|
How to specify the backend of Trainer
|
### System Info
```Shell
accelerate 0.28.0
```
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] One of the scripts in the examples/ folder of Accelerate or an officially supported `no_trainer` script in the `examples` folder of the `transformers` repo (such as `run_no_trainer_glue.py`)
- [X] My own task or dataset (give details below)
### Reproduction
I am running multi-node, multi-GPU training code on two nodes, each with one A100-40GB. I don't have NCCL installed on this cluster, so I am trying to use the `gloo` backend to start training. But I didn't find any documentation on how to specify the backend with `accelerate launch`. Any help would be very appreciated!
Here is my launching script.
```
srun -N 2 -n 2 -w xgpg2,xgpg3 accelerate launch --config_file /tmp/my_dist_config.yaml --gradient_accumulation_steps 8 --gradient_clipping 1.0 --mixed_precision bf16 train.py ...my training arguments..
```
Here is my accelerate config on each node.
```
# `/tmp/my_dist_config.yaml` on xgpg2
compute_environment: LOCAL_MACHINE
debug: false
distributed_type: MULTI_GPU
downcast_bf16: 'no'
gpu_ids: all
machine_rank: 0
main_process_ip: xgpg2
main_process_port: 9999
main_training_function: main
mixed_precision: bf16
num_machines: 2
num_processes: 2
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false
# `/tmp/my_dist_config.yaml` on xgpg3
compute_environment: LOCAL_MACHINE
debug: false
distributed_type: MULTI_GPU
downcast_bf16: 'no'
gpu_ids: all
machine_rank: 1
main_process_ip: xgpg2
main_process_port: 9999
main_training_function: main
mixed_precision: bf16
num_machines: 2
num_processes: 2
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false
```
Here is the main body of my training code
```
...
tokenizer = load_tokenizer(model_args.tokenizer_dir, train_mode=model_args.do_train)
model = load_model(model_args, quant_config, peft_config)
logger.info(f"Model Architecture:\n{model}")
print_trainable_parameters(model)
trainer = Trainer(
model=model,
train_dataset=train_data,
eval_dataset=eval_data,
args=trainer_config,
data_collator=PaddToMaxLenCollator(tokenizer, model_args.max_length),
)
# Training
if model_args.do_train:
train_result = trainer.train(resume_from_checkpoint=model_args.resume_from_checkpoint)
trainer.log_metrics("train", train_result.metrics)
trainer.save_metrics("train", train_result.metrics)
...
```
I tried to run this directly, but it went into some NCCL error like this:
```
torch.distributed.DistBackendError: NCCL error in: /opt/conda/conda-bld/pytorch_1704987394225/work/torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1691, unhandled system error (run with NCCL_DEBUG=INFO for details), NCCL version 2.19.3
```
I think NCCL isn't installed on the system by the system administrator, but there is a `nccl` library in my conda environment, which was probably installed as some other library's dependency. I am not familiar with NCCL, but my understanding is that this won't work because NCCL should be installed at the system level. Am I right?
```
# Name Version Build Channel
nccl 2.21.5.1 h3a97aeb_0 conda-forge
```
### Expected behavior
I'd like to know how to use the 'gloo' backend with Trainer, and also whether I can use Trainer's DeepSpeed integration with the gloo backend.
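A minimal sketch, assuming `transformers.TrainingArguments` exposes the `ddp_backend` option (it accepts values such as "nccl", "gloo", "mpi"); setting it to "gloo" should make Trainer initialize the process group without NCCL. The resulting object can then be passed as `args=trainer_config` to the `Trainer(...)` call above. For the DeepSpeed integration, as far as I know NCCL is the usual backend for GPU collectives, so that part is best confirmed against the DeepSpeed docs.
```python
from transformers import TrainingArguments

# "gloo" replaces the default NCCL process-group backend for multi-GPU/multi-node DDP.
trainer_config = TrainingArguments(
    output_dir="out",
    gradient_accumulation_steps=8,
    ddp_backend="gloo",
)
print(trainer_config.ddp_backend)
```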
|
https://github.com/huggingface/accelerate/issues/2759
|
closed
|
[] | 2024-05-10T03:18:08Z
| 2025-01-16T10:29:19Z
| null |
Orion-Zheng
|
huggingface/lerobot
| 167
|
python3.10 how to install rerun-sdk
|
### System Info
```Shell
ubuntu18.04
python3.10
ERROR: Could not find a version that satisfies the requirement rerun-sdk>=0.15.1 (from lerobot) (from versions: none)
ERROR: No matching distribution found for rerun-sdk>=0.15.1
```
### Information
- [X] One of the scripts in the examples/ folder of LeRobot
- [ ] My own task or dataset (give details below)
### Reproduction
pip install .
ERROR: Could not find a version that satisfies the requirement rerun-sdk>=0.15.1 (from lerobot) (from versions: none)
ERROR: No matching distribution found for rerun-sdk>=0.15.1
### Expected behavior
I want to know how to solve this problem
|
https://github.com/huggingface/lerobot/issues/167
|
closed
|
[
"dependencies"
] | 2024-05-10T03:07:30Z
| 2024-05-13T01:25:09Z
| null |
MountainIntelligent
|
huggingface/safetensors
| 478
|
Can't seem to skip parameter initialization while using the `safetensors.torch.load_model` API!
|
### System Info
- `transformers` version: 4.40.0
- Platform: Linux-5.15.0-105-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.22.2
- Safetensors version: 0.4.1
- Accelerate version: 0.25.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.2.2+cu121 (True)
- Tensorflow version (GPU?): 2.16.1 (True)
- Flax version (CPU?/GPU?/TPU?): 0.8.2 (cpu)
- Jax version: 0.4.26
- JaxLib version: 0.4.21`
### Reproduction
In order to load a serialized model, I use the `safetensors.torch.load_model` API which requires a `torch.nn.Module` type as the first argument.
I create this model while ensuring that the parameters are **not** initialized since they will get overridden anyway. I do this by using the `init_empty_weights` context manager from the `accelerate` package.
```
import safetensors.torch
from transformers import LlamaConfig, LlamaForCausalLM
from accelerate import init_empty_weights

config = LlamaConfig()
with init_empty_weights():
    model = LlamaForCausalLM(config)
safetensors.torch.load_model(model, <path-to-file>)  # throws an error
```
The last line throws the error
```
warnings.warn(f'for {key}: copying from a non-meta parameter in the checkpoint to a meta '
UserWarning: for model.norm.weight: copying from a non-meta parameter in the checkpoint to a meta parameter in the current model, which is a no-op. (Did you mean to pass `assign=True` to assign items in the state dictionary to their corresponding key in the module instead of copying them in place?)
```
It turns out the loading of the state_dict is a no-op, which could be resolved by using the `assign=True` argument; however, the current API doesn't provide a way to set it. Any ideas on how to overcome this issue?
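A sketch of one workaround (not an official `safetensors` API for this): read the raw state dict with `safetensors.torch.load_file` and call `load_state_dict(..., assign=True)` yourself (available in torch >= 2.1), instead of going through `load_model`.
```python
import safetensors.torch
from accelerate import init_empty_weights
from transformers import LlamaConfig, LlamaForCausalLM

config = LlamaConfig()
with init_empty_weights():
    model = LlamaForCausalLM(config)

# load_file returns a plain {name: tensor} dict; assign=True attaches those tensors
# to the module instead of copying them into the meta parameters (which is a no-op).
state_dict = safetensors.torch.load_file("<path-to-file>")  # placeholder path as above
model.load_state_dict(state_dict, assign=True)
```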
### Expected behavior
`load_model` API returns a model object where the state_dict is initialized from the stored checkpoint.
|
https://github.com/huggingface/safetensors/issues/478
|
closed
|
[
"Stale"
] | 2024-05-09T19:12:05Z
| 2024-06-15T01:49:24Z
| 1
|
goelayu
|
pytorch/tutorials
| 2,861
|
Performance Tuning Guide is very out of date
|
### 🚀 Describe the improvement or the new tutorial
The first thing you see when you Google PyTorch performance is this. The recipe is well written, but it's very much out of date today
https://pytorch.org/tutorials/recipes/recipes/tuning_guide.html
Some concrete things we should fix
1. For fusions we should talk about torch.compile instead of jit.script
2. We should mention overhead reduction with cudagraphs
3. We should talk about the *-fast series as places people can learn more
4. For CPU specific optimization the most important one is launcher core pinning so we should either make that a default or explain the point more
5. Instead of the CPU section we can instead go more into the inductor CPU backend
6. AMP section is fine but maybe expand to quantization
7. DDP section needs to be moved somewhere else with some FSDP performance guide
8. GPU sync section is good
9. Mention tensor cores and how to enable them and why they're not enabled by default
cc @sekyondaMeta @svekars @kit1980 @drisspg who first made me aware of this with an internal note that was important enough to make public
### Existing tutorials on this topic
_No response_
### Additional context
_No response_
|
https://github.com/pytorch/tutorials/issues/2861
|
closed
|
[
"medium",
"docathon-h1-2024"
] | 2024-05-09T16:57:35Z
| 2024-06-12T16:11:31Z
| 9
|
msaroufim
|
pytorch/xla
| 7,042
|
model.to(xla_device) increases the number of named_parameters
|
## 🐛 Bug
Copying the model to the XLA device changes the number of the model's named parameters.

## To Reproduce
```bash
python xla/benchmarks/experiment_runner.py --suite-name torchbench --accelerator cuda --dynamo openxla --dynamo None --test train --repeat 30 --iterations-per-run 5 --print-subprocess --no-resume --model-config='{"model_name": "hf_Bart"}' --experiment-config='{"accelerator": "cuda", "xla": "PJRT", "xla_flags": null, "dynamo": "openxla", "test": "train"}'
```
Steps to reproduce the behavior:
1. Run the above command
2. insert pdb hook at `xla/benchmarks/benchmark_model.py`
```python
110 def prepare_for_experiment(self, dynamo_compilation_opts):
111 self.device = self.benchmark_experiment.get_device()
112 self.dtype = self.conversion_dtype()
113
114 if self.dtype is not None:
115 self.module = self.module.to(self.dtype)
116 self.example_inputs = cast_to_dtype(self.example_inputs, self.dtype)
117
118 import pdb
119 pdb.set_trace()
120 self.module = self.module.to(self.device)
121 self.example_inputs = move_to_device(self.example_inputs, self.device)
122
123 if self.benchmark_experiment.test == "eval":
124 self._prepare_for_eval()
125 elif self.benchmark_experiment.test == "train":
126 self._prepare_for_train()
127 else:
128 raise NotImplementedError
129
130 if self.benchmark_experiment.dynamo:
131 compilation_opts = dynamo_compilation_opts.copy()
132 compilation_opts['backend'] = self.benchmark_experiment.dynamo
133
134 logger.info(f"Running torch.compile with opts {compilation_opts}")
135 self.model_iter_fn = torch.compile(self.model_iter_fn, **compilation_opts)
```
3. Print the number of named_parameters of the model before and after the copy to the XLA device, as the picture above shows.
```bash
(Pdb) new_model = copy.deepcopy(self.module).to("cpu").to(self.device)
(Pdb) len([param for param, value in new_model.named_parameters()])
262
(Pdb) len([param for param, value in self.module.named_parameters()])
259
(Pdb) len([param for param, value in self.module.named_buffers()])
1
(Pdb) len([param for param, value in new_model.named_buffers()])
1
```
## Expected behavior
`len([param for param, value in new_model.named_parameters()])` is expected to return 259
## Environment
- Reproducible on XLA backend [CPU/TPU/CUDA]: CUDA
- torch_xla version:
2.3.0-rc12
|
https://github.com/pytorch/xla/issues/7042
|
closed
|
[
"question"
] | 2024-05-09T13:53:03Z
| 2025-04-17T13:51:16Z
| null |
shenh10
|
pytorch/xla
| 7,040
|
[torchbench] The official benchmark for performance and accuracy check
|
## ❓ Questions and Help
Hi I found two available codebases for testing torchbench with pytorch/xla:
1. The one provided by pytorch official: https://github.com/pytorch/pytorch/tree/main/benchmarks/dynamo
2. Another one provided by pytorch/xla team: https://github.com/pytorch/xla/tree/master/benchmarks
However, for the first codebase, it seems that the dynamo + openxla backend support does not actually trigger XLA compilation. Is it no longer maintained?
As for the second one, I found it can test performance, but it has no way to validate accuracy against eager mode, while the first benchmark tool can. Is there any support for this?
Looking forward to your feedback.
|
https://github.com/pytorch/xla/issues/7040
|
closed
|
[
"question",
"benchmarking"
] | 2024-05-09T08:33:21Z
| 2025-04-17T13:53:39Z
| null |
shenh10
|
huggingface/tokenizers
| 1,525
|
How to write custom Wordpiece class?
|
My aim is to get the rwkv5 model's "tokenizer.json", but the model is implemented through a slow tokenizer (class PreTrainedTokenizer).
I want to convert the slow tokenizer to a fast tokenizer, which needs `tokenizer = Tokenizer(Wordpiece())`, but rwkv5 has its own Wordpiece file.
So I want to create a custom Wordpiece.
The code is here:
```python
from tokenizers.models import Model
class MyWordpiece(Model):
    def __init__(self, vocab, unk_token):
        self.vocab = vocab
        self.unk_token = unk_token

test = MyWordpiece('./vocab.txt', "<s>")
```
```
Traceback (most recent call last):
File "test.py", line 78, in <module>
test = MyWordpiece('./vocab.txt',"<s>")
TypeError: Model.__new__() takes 0 positional arguments but 2 were given
```
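For what it's worth, `tokenizers`' `Model` subclasses cannot be subclassed from Python (hence the `Model.__new__` error). If the rwkv5 vocab file is a standard one-token-per-line WordPiece vocabulary, a sketch like the following may work; if it is RWKV's own trie-based format, it cannot be plugged in as a fast-tokenizer model this way.
```python
from tokenizers import Tokenizer
from tokenizers.models import WordPiece

# read_file turns a one-token-per-line vocab.txt into a {token: id} dict.
vocab = WordPiece.read_file("./vocab.txt")
tokenizer = Tokenizer(WordPiece(vocab, unk_token="<s>"))

# Equivalent one-liner:
# tokenizer = Tokenizer(WordPiece.from_file("./vocab.txt", unk_token="<s>"))
```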
|
https://github.com/huggingface/tokenizers/issues/1525
|
closed
|
[
"Stale"
] | 2024-05-09T03:48:27Z
| 2024-07-18T01:53:23Z
| null |
xinyinan9527
|
huggingface/trl
| 1,635
|
How to use trl\trainer\kto_trainer.py
|
If I want to use the KTO trainer, I could set the parameter `loss_type="kto_pair"` in dpo_trainer.py. Then what is kto_trainer.py used for, and how do I use it?
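For what it's worth, a minimal sketch of how `KTOTrainer` is typically used (assuming the trl version in use exposes `KTOTrainer`/`KTOConfig`; the model name and toy dataset are placeholders). Unlike `loss_type="kto_pair"` in `DPOTrainer`, which still needs paired chosen/rejected data, `KTOTrainer` works on unpaired examples with `prompt`, `completion` and a boolean `label` column.
```python
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import KTOConfig, KTOTrainer

model_name = "gpt2"  # placeholder model for illustration
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token

# Unpaired feedback: label=True marks a desirable completion, False an undesirable one.
train_dataset = Dataset.from_dict({
    "prompt": ["What is 2+2?", "What is 2+2?"],
    "completion": ["4", "5"],
    "label": [True, False],
})

args = KTOConfig(output_dir="kto-out", per_device_train_batch_size=2, max_steps=1)
trainer = KTOTrainer(model=model, args=args, train_dataset=train_dataset, tokenizer=tokenizer)
trainer.train()
```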
|
https://github.com/huggingface/trl/issues/1635
|
closed
|
[] | 2024-05-09T02:40:14Z
| 2024-06-11T10:17:51Z
| null |
mazhengyufreedom
|
pytorch/tutorials
| 2,860
|
requires_grad=True for an input datapoint?
|
https://github.com/pytorch/tutorials/blob/f4ebb4d007792f5bc302affa7b360a9710e4a88b/advanced_source/super_resolution_with_onnxruntime.py#L144
It is unclear to me why the flag requires_grad needs to be set to True for the data point "x", which has no parameters to be learned.
Is it required in order to export the model to ONNX?
Thanks.
cc @titaiwangms @xadupre @justinchuby @BowenBao
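For reference, a minimal sketch (a toy model, not the tutorial's SuperResolution network) suggesting that tracing-based export works with the dummy input left at its default `requires_grad=False`; the flag mainly matters for autograd, not for export.
```python
import torch
import torch.nn as nn

model = nn.Linear(3, 2).eval()
x = torch.randn(1, 3)  # requires_grad defaults to False
torch.onnx.export(model, x, "linear.onnx", input_names=["input"], output_names=["output"])
print("exported")
```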
|
https://github.com/pytorch/tutorials/issues/2860
|
closed
|
[
"question",
"onnx"
] | 2024-05-08T15:25:54Z
| 2025-04-16T21:22:11Z
| null |
ggbioing
|
huggingface/datasets
| 6,882
|
Connection Error When Using By-pass Proxies
|
### Describe the bug
I'm currently using Clash for Windows as my proxy tunnel. After exporting HTTP_PROXY and HTTPS_PROXY to the port that Clash provides 🤔, loading runs into a connection error saying "Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/2.19.1/metrics/seqeval/seqeval.py (ConnectionError(MaxRetryError("HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /huggingface/datasets/2.19.1/metrics/seqeval/seqeval.py (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f969d391870>: Failed to establish a new connection: [Errno 111] Connection refused'))")))"
I have already read the documentation provided by Hugging Face, but I didn't see detailed instructions on how to set up proxies for this library.
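A sketch of one possible workaround, assuming the Clash HTTP proxy listens on 127.0.0.1:7890 (adjust the host/port to whatever your client exposes): pass the proxies explicitly via `DownloadConfig`, which both `load_dataset` and `load_metric` accept through the `download_config` argument.
```python
from datasets import DownloadConfig, load_dataset

proxies = {"http": "http://127.0.0.1:7890", "https": "http://127.0.0.1:7890"}
dl_config = DownloadConfig(proxies=proxies)

# The same download_config kwarg can also be passed to load_metric("seqeval", ...).
ds = load_dataset("squad", split="train[:10]", download_config=dl_config)
print(ds)
```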
### Steps to reproduce the bug
1. Turn on any proxy software like Clash / ShadowsocksR etc.
2. Export the system variables to the port provided by your proxy software in WSL (it's fine for other applications to use the proxy, except the datasets library)
3. Load any dataset from Hugging Face online
### Expected behavior
---------------------------------------------------------------------------
ConnectionError                           Traceback (most recent call last)
Cell In[33], line 3
      1 from datasets import load_metric
----> 3 metric = load_metric("seqeval")
File ~/.local/lib/python3.10/site-packages/datasets/utils/deprecation_utils.py:46, in deprecated.<locals>.decorator.<locals>.wrapper(*args, **kwargs)
     44 warnings.warn(warning_msg, category=FutureWarning, stacklevel=2)
     45 _emitted_deprecation_warnings.add(func_hash)
---> 46 return deprecated_function(*args, **kwargs)
File ~/.local/lib/python3.10/site-packages/datasets/load.py:2104, in load_metric(path, config_name, process_id, num_process, cache_dir, experiment_id, keep_in_memory, download_config, download_mode, revision, trust_remote_code, **metric_init_kwargs)
   2101 warnings.filterwarnings("ignore", message=".*https://huggingface.co/docs/evaluate$", category=FutureWarning)
   2103 download_mode = DownloadMode(download_mode or DownloadMode.REUSE_DATASET_IF_EXISTS)
-> 2104 metric_module = metric_module_factory(
   2105     path,
   2106     revision=revision,
   2107     download_config=download_config,
   2108     download_mode=download_mode,
   2109     trust_remote_code=trust_remote_code,
   2110 ).module_path
   2111
|
https://github.com/huggingface/datasets/issues/6882
|
open
|
[] | 2024-05-08T06:40:14Z
| 2024-05-17T06:38:30Z
| 1
|
MRNOBODY-ZST
|
huggingface/datatrove
| 180
|
how to turn log/traceback color off?
|
Trying datatrove for the first time and the program spews a bunch of logs and tracebacks in yellow and cyan which are completely unreadable on the b&w console.
Does the program assume that the user is using a white-on-black (dark) console?
I tried to grep for `color` to see how it controls the colors but found nothing relevant, so it's probably some 3rd party component that does that.
If the coloring logic doesn't bother to check what the console colors are to keep the output readable, any idea how to turn it off completely? I RTFM'ed - didn't find any docs that address that aspect.
Thanks a lot!
|
https://github.com/huggingface/datatrove/issues/180
|
closed
|
[] | 2024-05-08T03:51:11Z
| 2024-05-17T17:53:20Z
| null |
stas00
|
pytorch/TensorRT
| 2,822
|
❓ [Question] Model inference is much slower after updating to TensorRT 9.3
|
## ❓ Question
I have a ViT model for object detection. The model's inference speed in the TensorRT 8.5 environment is 190 ms per frame. However, when I updated to TensorRT 9.3, inference slowed down to 250 ms per frame.
I acquired the C++ dynamic library by compiling the latest Torch-TensorRT source code.
What might be causing this issue?
## Environment
> Build information about Torch-TensorRT can be found by turning on debug messages
- Libtorch Version (e.g., 1.0): 2.2.1
- CPU Architecture:
- OS (e.g., Linux): ubuntu22.04
- How you installed PyTorch (`conda`, `pip`, `libtorch`, source):
- Build command you used (if compiling from source):
- Are you using local sources or building from archives: Yes
- Python version:
- CUDA version: 12.2
- GPU models and configuration:
- Any other relevant information:
|
https://github.com/pytorch/TensorRT/issues/2822
|
open
|
[
"question"
] | 2024-05-08T03:20:18Z
| 2025-09-03T20:08:33Z
| null |
demuxin
|
pytorch/expecttest
| 18
|
How to use it in pytest based testing?
|
The README seems to be written for the unittest TestCase style only.
|
https://github.com/pytorch/expecttest/issues/18
|
closed
|
[] | 2024-05-07T22:27:37Z
| 2024-05-07T23:09:38Z
| null |
youkaichao
|
huggingface/candle
| 2,171
|
How to run LLama-3 or Phi with more then 4096 prompt tokens?
|
Could you please show me an example where a Llama-3 model is used (ideally GGUF quantized) and the initial prompt is more than 4096 tokens long? Or better, 16-64K tokens long (for RAG). Currently everything I do ends with an error:
In this code:
let logits = model.forward(&input, 0); // input is > 4096 tokens
Error:
narrow invalid args start + len > dim_len: [4096, 64], dim: 0, start: 0, len:4240
Model used:
https://huggingface.co/MaziyarPanahi/Llama-3-8B-Instruct-64k-GGUF
Thank you a lot in advance!
|
https://github.com/huggingface/candle/issues/2171
|
open
|
[] | 2024-05-07T20:15:28Z
| 2024-05-07T20:16:13Z
| null |
baleksey
|
pytorch/xla
| 7,033
|
constant folding for AvgPool2d
|
## ❓ Questions and Help
Exporting a simple `AvgPool2d` using `torch_xla` 2.3 results in two different `stablehlo.reduce_window` ops; the second one takes only constants as its args. Is there a way to fold it into a constant in `exported_program_to_stablehlo`? @lsy323 @qihqi
For example, `%4` in the following module.
```python
import torch
import torch.nn as nn
from torch_xla.stablehlo import exported_program_to_stablehlo
m = nn.AvgPool2d(kernel_size=2)
inp_args = (torch.randn(1, 4, 4),)
em = torch.export.export(m, inp_args)
stablehlo_program = exported_program_to_stablehlo(em)
print(stablehlo_program.get_stablehlo_text())
```
```cpp
module @IrToHlo.26 attributes {mhlo.cross_program_prefetches = [], mhlo.is_dynamic = false, mhlo.use_auto_spmd_partitioning = false} {
func.func @main(%arg0: tensor<1x4x4xf32>) -> tensor<1x2x2xf32> {
%0 = stablehlo.constant dense<1.000000e+00> : tensor<4x4xf32>
%1 = stablehlo.constant dense<0.000000e+00> : tensor<f32>
%2 = stablehlo.reshape %arg0 : (tensor<1x4x4xf32>) -> tensor<1x1x4x4xf32>
%3 = "stablehlo.reduce_window"(%2, %1) ({
^bb0(%arg1: tensor<f32>, %arg2: tensor<f32>):
%8 = stablehlo.add %arg1, %arg2 : tensor<f32>
stablehlo.return %8 : tensor<f32>
}) {base_dilations = array<i64: 1, 1, 1, 1>, padding = dense<0> : tensor<4x2xi64>, window_dilations = array<i64: 1, 1, 1, 1>, window_dimensions = array<i64: 1, 1, 2, 2>, window_strides = array<i64: 1, 1, 2, 2>} : (tensor<1x1x4x4xf32>, tensor<f32>) -> tensor<1x1x2x2xf32>
%4 = "stablehlo.reduce_window"(%0, %1) ({
^bb0(%arg1: tensor<f32>, %arg2: tensor<f32>):
%8 = stablehlo.add %arg1, %arg2 : tensor<f32>
stablehlo.return %8 : tensor<f32>
}) {base_dilations = array<i64: 1, 1>, padding = dense<0> : tensor<2x2xi64>, window_dilations = array<i64: 1, 1>, window_dimensions = array<i64: 2, 2>, window_strides = array<i64: 2, 2>} : (tensor<4x4xf32>, tensor<f32>) -> tensor<2x2xf32>
%5 = stablehlo.reshape %4 : (tensor<2x2xf32>) -> tensor<1x1x2x2xf32>
%6 = stablehlo.divide %3, %5 : tensor<1x1x2x2xf32>
%7 = stablehlo.reshape %6 : (tensor<1x1x2x2xf32>) -> tensor<1x2x2xf32>
return %7 : tensor<1x2x2xf32>
}
}
```
|
https://github.com/pytorch/xla/issues/7033
|
closed
|
[
"stablehlo"
] | 2024-05-07T07:34:11Z
| 2024-09-23T21:45:42Z
| 10
|
thong3le
|
huggingface/chat-ui
| 1,115
|
[v0.8.4] IMPORTANT: Talking to PDFs and general Roadmap?
|
Hi @nsarrazin
I have a couple of questions that I could not get answers to in the repo and on the web.
1. Is there a plan to enable file uploads (PDFs, etc) so that users can talk to those files? Similar to ChatGPT, Gemini etc?
2. Is there a feature roadmap available somewhere?
Thanks!
|
https://github.com/huggingface/chat-ui/issues/1115
|
open
|
[] | 2024-05-07T06:10:20Z
| 2024-09-10T15:44:16Z
| 4
|
adhishthite
|
huggingface/candle
| 2,167
|
How to do a Axum's sse function for Candle?
|
fn run(&mut self, prompt: &str, sample_len: usize) -> Result<()> {
use std::io::Write;
self.tokenizer.clear();
let mut tokens = self
.tokenizer
.tokenizer()
.encode(prompt, true)
.map_err(E::msg)?
.get_ids()
.to_vec();
for &t in tokens.iter() {
if let Some(t) = self.tokenizer.next_token(t)? {
print!("{t}")
}
}
std::io::stdout().flush()?;
let mut generated_tokens = 0usize;
let eos_token = match self.tokenizer.get_token("<|endoftext|>") {
Some(token) => token,
None => anyhow::bail!("cannot find the <|endoftext|> token"),
};
let start_gen = std::time::Instant::now();
for index in 0..sample_len {
let context_size = if index > 0 { 1 } else { tokens.len() };
let start_pos = tokens.len().saturating_sub(context_size);
let ctxt = &tokens[start_pos..];
let input = Tensor::new(ctxt, &self.device)?.unsqueeze(0)?;
let logits = self.model.forward(&input, start_pos)?;
let logits = logits.squeeze(0)?.squeeze(0)?.to_dtype(DType::F32)?;
let logits = if self.repeat_penalty == 1. {
logits
} else {
let start_at = tokens.len().saturating_sub(self.repeat_last_n);
candle_transformers::utils::apply_repeat_penalty(
&logits,
self.repeat_penalty,
&tokens[start_at..],
)?
};
let next_token = self.logits_processor.sample(&logits)?;
tokens.push(next_token);
generated_tokens += 1;
if next_token == eos_token {
break;
}
if let Some(t) = self.tokenizer.next_token(next_token)? {
print!("{t}");
std::io::stdout().flush()?;
}
}
let dt = start_gen.elapsed();
if let Some(rest) = self.tokenizer.decode_rest().map_err(E::msg)? {
print!("{rest}");
}
std::io::stdout().flush()?;
println!(
"\n{generated_tokens} tokens generated ({:.2} token/s)",
generated_tokens as f64 / dt.as_secs_f64(),
);
Ok(())
}
How do I rewrite the above function to stream tokens over Axum's SSE?
|
https://github.com/huggingface/candle/issues/2167
|
closed
|
[] | 2024-05-07T02:38:50Z
| 2024-05-08T04:27:14Z
| null |
sunnyregion
|
pytorch/torchchat
| 708
|
--num-samples xxx does not work for getting multiple prompt responses
|
Previously, users could use --num-samples to get reliable benchmarking. With recent updates, --num-samples no longer appears to work.
https://github.com/pytorch/pytorch/pull/125611 shows nice performance gains on gpt-fast, and @helloguo would like to validate on torchchat to ensure this also accelerates our code. Is there another way he can run multiple prompts to avoid cold start effects?
```
(py311) mikekg@mikekg-mbp torchchat % python3 torchchat.py generate stories15M --device fast --num-samples 20
Using device=cpu Apple M1 Max
Loading model...
Time to load model: 0.09 seconds
Hello, my name is Pete the mouse. He was a very curious mouse, and he loved to explore. One day, he saw a big, white sign. He had never seen it before, and he was curious to get a closer look.
He decided to take a look, and he squealed with joy when he reached for the sign. On the sign, there was a big, white, friendly door. He was so excited, he quickly ran over to it and opened the door.
On the other side of the door, he found a room filled with toys, cars and people. He cheered with joy, and he could not wait to explore.
But then, something unexpected happened - the door suddenly closed, and Pete was so scared. He tried to push the door open, but it just wouldn't budge. He looked around and spotted a small, white house.
Pete pushed the door open, and there he was - a friendly
Max Sequence Length Reached. Ending Conversation.
==========
```
|
https://github.com/pytorch/torchchat/issues/708
|
closed
|
[] | 2024-05-06T23:45:52Z
| 2024-05-12T21:23:06Z
| 1
|
mikekgfb
|
huggingface/optimum
| 1,847
|
Static Quantization for Seq2Seq models like T5
|
I'm currently trying to statically quantize T5, but the Optimum docs (last committed 10 months ago) say only dynamic quantization is supported, not static. Has anyone tried this before, or has Optimum added anything related recently? Could someone help me take a look?
|
https://github.com/huggingface/optimum/issues/1847
|
open
|
[
"question",
"quantization"
] | 2024-05-06T19:34:30Z
| 2024-10-14T12:24:28Z
| null |
NQTri00
|
pytorch/torchtitan
| 312
|
Question on Model Init
|
I noticed that there are two parts of implementation that are related to model initialization.
### Instancing the model with meta tensor
https://github.com/pytorch/torchtitan/blob/f72a2a0da0bdfc394faaab9b3c0f35d0b6f5be50/train.py#L177-L181
### Doing explicit model initialization
https://github.com/pytorch/torchtitan/blob/f72a2a0da0bdfc394faaab9b3c0f35d0b6f5be50/train.py#L209-L210
The issue is that if we do any weight initialization when instantiating the module, it will be ineffective because of the meta tensors.
As a result, we have to do ***all*** initialization explicitly in `model.init_weights()`.
My question is: why do we want to instantiate the model with meta tensors?
If efficiency is not an issue, can we simply remove the `with torch.device("meta"):`?
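For context, a minimal sketch of the meta-device pattern in question (a toy module, not torchtitan's model): parameters created under `torch.device("meta")` have no storage, so any init done in `__init__` is indeed discarded, and after materializing with `to_empty()` an explicit `init_weights()` has to (re)initialize everything. The upside is that the full model is never materialized before placement/sharding.
```python
import torch
import torch.nn as nn

class Tiny(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 4)

    def init_weights(self):
        nn.init.trunc_normal_(self.fc.weight, std=0.02)
        nn.init.zeros_(self.fc.bias)

with torch.device("meta"):
    m = Tiny()                  # no memory allocated, init in __init__ has no effect

m = m.to_empty(device="cpu")    # allocate uninitialized storage
m.init_weights()                # real initialization happens explicitly here
print(m.fc.weight.mean())
```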
|
https://github.com/pytorch/torchtitan/issues/312
|
open
|
[
"question"
] | 2024-05-06T17:35:15Z
| 2024-05-13T13:30:51Z
| null |
XinDongol
|
huggingface/optimum
| 1,846
|
Low performance of THUDM/chatglm3-6b onnx model
|
I ran the chatglm3-6b model by exporting it to the ONNX framework using a custom ONNX configuration. Although the functionality is correct, the latency of the model is very high, much higher than that of the PyTorch model.
I have attached minimal reproducible code which exports and runs the model. Can someone take a look and suggest how to rectify the performance degradation?
```
from optimum.exporters.onnx import main_export
from transformers import AutoConfig
from optimum.exporters.onnx.config import TextDecoderOnnxConfig,TextDecoderWithPositionIdsOnnxConfig
from optimum.exporters.onnx.base import ConfigBehavior
from optimum.utils import NormalizedTextConfig, DummyPastKeyValuesGenerator
from typing import Dict
import os
import shutil
import time
class ChatGLM2DummyPastKeyValuesGenerator(DummyPastKeyValuesGenerator):
def generate(self, input_name: str, framework: str = "pt"):
past_key_shape = (
self.batch_size,
self.num_attention_heads,
self.hidden_size // self.num_attention_heads,
self.sequence_length,
)
past_value_shape = (
self.batch_size,
self.num_attention_heads,
self.sequence_length,
self.hidden_size // self.num_attention_heads,
)
return [
(
self.random_float_tensor(past_key_shape, framework=framework),
self.random_float_tensor(past_value_shape, framework=framework),
)
for _ in range(self.num_layers)
]
class CustomChatGLM2OnnxConfig(TextDecoderOnnxConfig):
DUMMY_INPUT_GENERATOR_CLASSES = (
ChatGLM2DummyPastKeyValuesGenerator,
) + TextDecoderOnnxConfig.DUMMY_INPUT_GENERATOR_CLASSES
DUMMY_PKV_GENERATOR_CLASS = ChatGLM2DummyPastKeyValuesGenerator
DEFAULT_ONNX_OPSET = 15 # aten::tril operator requires opset>=14
NORMALIZED_CONFIG_CLASS = NormalizedTextConfig.with_args(
hidden_size="hidden_size",
num_layers="num_layers",
num_attention_heads="num_attention_heads",
)
def add_past_key_values(
self, inputs_or_outputs: Dict[str, Dict[int, str]], direction: str
):
if direction not in ["inputs", "outputs"]:
raise ValueError(
f'direction must either be "inputs" or "outputs", but {direction} was given'
)
if direction == "inputs":
decoder_sequence_name = "past_sequence_length"
name = "past_key_values"
else:
decoder_sequence_name = "past_sequence_length + 1"
name = "present"
for i in range(self._normalized_config.num_layers):
inputs_or_outputs[f"{name}.{i}.key"] = {
0: "batch_size",
3: decoder_sequence_name,
}
inputs_or_outputs[f"{name}.{i}.value"] = {
0: "batch_size",
2: decoder_sequence_name,
}
model_id = "THUDM/chatglm3-6b"
config = AutoConfig.from_pretrained(model_id, trust_remote_code=True)
onnx_config = CustomChatGLM2OnnxConfig(
config=config,
task="text-generation",
use_past_in_inputs=False,
)
onnx_config_with_past = CustomChatGLM2OnnxConfig(
config, task="text-generation", use_past=True
)
custom_onnx_configs = {
"model": onnx_config,
}
main_export(
model_id,
output="chatglm",
task="text-generation-with-past",
trust_remote_code=True,
custom_onnx_configs=custom_onnx_configs,
no_post_process=True,
opset=15
)
### Running
from transformers import AutoTokenizer, AutoModelForCausalLM
from optimum.utils import NormalizedTextConfig, NormalizedConfigManager
NormalizedConfigManager._conf["chatglm"] = NormalizedTextConfig
import torch
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
tokenizer.add_special_tokens({"pad_token": "[PAD]"})
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)
start = time.perf_counter()
inputs = tokenizer("What is the meaning of life?", return_tensors="pt", padding=True)
input_ids = inputs.input_ids
# Generate
generate_ids = model.generate(
input_ids,
max_length=64,
pad_token_id=tokenizer.eos_token_id,
)
# Stop timer
end = time.perf_counter()
generate_time = end - start
# Num of tokens
prompt_tokens = input_ids.shape[1]
num_tokens_out = generate_ids.shape[1]
new_tokens_generated = num_tokens_out - prompt_tokens
time_per_token = (generate_time / new_tokens_generated) * 1e3
print(time_per_token)
```
|
https://github.com/huggingface/optimum/issues/1846
|
open
|
[
"inference",
"onnxruntime",
"onnx"
] | 2024-05-06T17:18:58Z
| 2024-10-14T12:25:29Z
| 0
|
tuhinp-amd
|
pytorch/torchchat
| 692
|
[LAUNCH BLOCKER] TorchChat results seems less connected than they could have been
|
For example generating text from the same prompt using llama.cpp and TorchChat produces following results:
```
Hello, my name is **Marcus**, and I am a 33-year-old software developer from California. I have been using the internet for the past 20 years, and I have seen it evolve into a powerful tool for communication, entertainment, and information. However, I have also seen the darker side of the internet, including cyberbullying, harassment, and the spread of misinformation.
As a software developer, I have a unique perspective on the internet and its potential impact on society. I believe that the internet can be a force for good, but it must be used responsibly and ethically. This is why I am passionate about promoting digital citizenship and raising awareness about the importance of online safety and security.
In my free time, I enjoy writing, hiking, and playing music. I am also a volunteer firefighter, and I have seen firsthand the impact of the
```
vs
```
Hello, my name is _______________ and I'm here to talk about my experience with ______________ (addiction, trauma, mental health issue, etc.).
I understand that you are here to help me and I appreciate your willingness to listen. It takes a lot of courage to share my story, but I hope that by doing so, it will help me heal and move forward.
Can you tell me more about the support groups you offer? How do they work? What kind of people attend them? Are they confidential?
I'm still not sure if this is the right place for me, but I'm willing to give it a try. Can you tell me more about your program and how it can help me?
I've tried other programs before, but they didn't work for me. What makes your program different?
I'm worried that if I share my story, people will judge me or think less of me. Can you guarantee confidentiality?
Thank you for being here for me and supporting me on this journey. I really appreciate it. [end of text]
```
It's very subjective, but the 2nd text (about a person who wants to find more information about mental health/addiction programs) feels more believable/coherent than the story about the 33-year-old SWE who is also a volunteer firefighter. What it looks like is that by the 3rd paragraph TorchChat has lost the context of the two previous ones, which sounds like the context size of stories15M, but not of Llama-2.
|
https://github.com/pytorch/torchchat/issues/692
|
closed
|
[
"launch blocker"
] | 2024-05-06T16:31:38Z
| 2024-07-21T22:00:21Z
| 9
|
malfet
|
pytorch/TensorRT
| 2,813
|
❓ [Question] How to solve this warning: Detected this engine is being instantitated in a multi-GPU system with multi-device safe mode disabled.
|
## ❓ Question
I used Torch-TensorRT to compile a TorchScript model in C++. When compiling or loading the Torch-TensorRT model, it displays many warnings.
```
WARNING: [Torch-TensorRT] - Detected this engine is being instantitated in a multi-GPU system with multi-device safe mode disabled. For more on the implications of this as well as workarounds, see the linked documentation (https://pytorch.org/TensorRT/user_guide/runtime.html#multi-device-safe-mode)
WARNING: [Torch-TensorRT] - Detected this engine is being instantitated in a multi-GPU system with multi-device safe mode disabled. For more on the implications of this as well as workarounds, see the linked documentation (https://pytorch.org/TensorRT/user_guide/runtime.html#multi-device-safe-mode)
WARNING: [Torch-TensorRT] - Detected this engine is being instantitated in a multi-GPU system with multi-device safe mode disabled. For more on the implications of this as well as workarounds, see the linked documentation (https://pytorch.org/TensorRT/user_guide/runtime.html#multi-device-safe-mode)
```
## What you have already tried
I found this [link](https://pytorch.org/TensorRT/user_guide/runtime.html#multi-device-safe-mode) useful, but it only provides the Python API.
I checked the source code, but I still haven't figured out how to set up MULTI_DEVICE_SAFE_MODE in C++.
What can I do to address this warning?
## Environment
> Build information about Torch-TensorRT can be found by turning on debug messages
- PyTorch Version (e.g., 1.0):
- CPU Architecture: x86
- OS (e.g., Linux): ubuntu18
- How you installed PyTorch (`conda`, `pip`, `libtorch`, source): libtorch
- Build command you used (if compiling from source):
- Are you using local sources or building from archives:
- Python version:
- CUDA version: 12.2
- GPU models and configuration: 1080Ti
- Any other relevant information:
|
https://github.com/pytorch/TensorRT/issues/2813
|
closed
|
[
"question"
] | 2024-05-06T09:39:02Z
| 2024-05-21T17:02:12Z
| null |
demuxin
|
huggingface/dataset-viewer
| 2,775
|
Support LeRobot datasets?
|
Currently:
```
Error code: ConfigNamesError
Exception: ValueError
Message: Feature type 'VideoFrame' not found. Available feature types: ['Value', 'ClassLabel', 'Translation', 'TranslationVariableLanguages', 'Sequence', 'Array2D', 'Array3D', 'Array4D', 'Array5D', 'Audio', 'Image']
```
eg on https://huggingface.co/datasets/lerobot/aloha_static_towel
Requires datasets to support `VideoFrame`
|
https://github.com/huggingface/dataset-viewer/issues/2775
|
open
|
[
"question",
"feature request",
"dependencies",
"P2"
] | 2024-05-06T09:16:40Z
| 2025-07-24T03:36:41Z
| null |
severo
|
huggingface/peft
| 1,712
|
how to finetune whisper model with 'initial_prompt'
|
When using 'initial_prompt', the decoding result of the Whisper v2 model fine-tuned on my data is bad; without it, the result is good.
However, when using 'initial_prompt', the decoding result of the base Whisper v2 model is also good. Does this mean that if I want to use 'initial_prompt' during decoding, I must also add it during training?
|
https://github.com/huggingface/peft/issues/1712
|
closed
|
[] | 2024-05-06T06:28:20Z
| 2024-06-13T15:03:43Z
| null |
zyb8543d
|
pytorch/torchchat
| 685
|
[PRE-LAUNCH] Test for quantization.md does not work... is attempt to install et when it has already been installed to blame?
|
https://github.com/pytorch/torchchat/actions/runs/8961642013/job/24609465486?pr=684

As part of the setup for this test, we build and install et. But, et is already installed. Should this pass?
And if not, should it? Are we condemning everybody who re-runs install_et to fail?
```
 -- Detecting CXX compile features - done
 -- Downloading FXdiv to /Users/runner/work/torchchat/torchchat/et-build/src/executorch/pip-out/temp.macosx-10.9-universal2-cpython-310/cmake-out/FXdiv-source (define FXDIV_SOURCE_DIR to avoid it)
 -- Configuring done (0.1s)
 -- Generating done (0.0s)
 -- Build files have been written to: /Users/runner/work/torchchat/torchchat/et-build/src/executorch/pip-out/temp.macosx-10.9-universal2-cpython-310/cmake-out/FXdiv-download
 [ 11%] Creating directories for 'fxdiv'
 [ 22%] Performing download step (git clone) for 'fxdiv'
 Cloning into 'FXdiv-source'...
 Already on 'master'
 Your branch is up to date with 'origin/master'.
 [ 33%] Performing update step for 'fxdiv'
 [ 44%] No patch step for 'fxdiv'
 [ 55%] No configure step for 'fxdiv'
 [ 66%] No build step for 'fxdiv'
 [ 77%] No install step for 'fxdiv'
 [ 88%] No test step for 'fxdiv'
 [100%] Completed 'fxdiv'
 [100%] Built target fxdiv
 -- Performing Test CMAKE_HAVE_LIBC_PTHREAD
 -- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success
 -- Found Threads: TRUE
 -- Using python executable '/Library/Frameworks/Python.framework/Versions/3.10/bin/python'
 -- Resolved buck2 as /Users/runner/work/torchchat/torchchat/et-build/src/executorch/pip-out/temp.macosx-10.9-universal2-cpython-310/cmake-out/buck2-bin/buck2-99e407b49dc432eda0cbddd67ea78346.
 -- Killing buck2 daemon
 -- executorch: Generating source lists
 -- executorch: Generating source file list /Users/runner/work/torchchat/torchchat/et-build/src/executorch/pip-out/temp.macosx-10.9-universal2-cpython-310/cmake-out/executorch_srcs.cmake

 Error while generating /Users/runner/work/torchchat/torchchat/et-build/src/executorch/pip-out/temp.macosx-10.9-universal2-cpython-310/cmake-out/executorch_srcs.cmake.
 Exit code: 1
 Output:

 Error:
 Traceback (most recent call last):
 File "/Users/runner/work/torchchat/torchchat/et-build/src/executorch/build/buck_util.py", line 26, in run
 cp: subprocess.CompletedProcess = subprocess.run(
 File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/subprocess.py", line 526, in run
 raise CalledProcessError(retcode, process.args,
 subprocess.CalledProcessError: Command '['/Users/runner/work/torchchat/torchchat/et-build/src/executorch/pip-out/temp.macosx-10.9-universal2-cpython-310/cmake-out/buck2-bin/buck2-99e407b49dc432eda0cbddd67ea78346', 'cquery', "inputs(deps('//runtime/executor:program'))"]' returned non-zero exit status 2.

 The above exception was the direct cause of the following exception:

 Traceback (most recent call last):
 File "/Users/runner/work/torchchat/torchchat/et-build/src/executorch/build/extract_sources.py", line 218, in <module>
 main()
 File "/Users/runner/work/torchchat/torchchat/et-build/src/executorch/build/extract_sources.py", line 203, in main
 target_to_srcs[name] = sorted(target.get_sources(graph, runner))
 File "/Users/runner/work/torchchat/torchchat/et-build/src/executorch/build/extract_sources.py", line 116, in get_sources
 sources: set[str] = set(runner.run(["cquery", query]))
 File "/Users/runner/work/torchchat/torchchat/et-build/src/executorch/build/buck_util.py", line 31, in run
 raise RuntimeError(ex.stderr.decode("utf-8")) from ex
 RuntimeError: Command failed:
 Error validating working directory

 Caused by:
 0: Failed to stat `/Users/runner/work/torchchat/torchchat/et-build/src/executorch/buck-out/v2`
 1: ENOENT: No such file or directory


 CMake Error at build/Utils.cmake:191 (message):
 executorch: source list generation failed
 Call Stack (most recent call first):
 CMakeLists.txt:311 (extract_sources)
 ```
|
https://github.com/pytorch/torchchat/issues/685
|
closed
|
[
"bug"
] | 2024-05-05T23:01:07Z
| 2024-05-12T20:40:53Z
| 1
|
mikekgfb
|
huggingface/dataspeech
| 17
|
UnboundLocalError: cannot access local variable 't' where it is not associated with a value """
|
### What I did
Hello. I tried to annotate my own dataset and got an error that I don't understand.
I'm a newbie, and I generally can't work out what happened or why.
I am attaching all the materials that I have.
I have this CSV schema:
| audio | text | speeker_id |
| ------------- | ------------- | ------------- |
| ./audio/audio_427.wav | Текст на кириллице | 1111 |
I load the CSV and cast it as described in the documentation, upload it to the Hugging Face Hub, and start dataspeech with the arguments below.
It loaded the data, started doing something, and then simply stopped.
### How I group the dataset
```sh
python group_dataset.py from_audio to_csv
```
Output. It saves `datasets.csv`:
```csv
./audio/audio_427.wav, а затем базальта!. ,1111
./audio/audio_231.wav, razus!. ,1111
```
#### Cast and upload the dataset to the Hugging Face Hub
```sh
python group_dataset.py from_csv cast_audio push_to_hub
```
```py
# In short it does this >
df = Dataset.from_csv("./datasets.csv")
df = df.cast_column("audio", Audio(32000))
df.push_to_hub(repo_id="", token="")
```
### Start dataspeech
```sh
python main.py "Anioji/testra" \
--configuration "default" \
--output_dir /root/dataspeech/tmp_stone_base/ \
--text_column_name "text_original" \
--audio_column_name "audio" \
--cpu_num_workers 4 \
--num_workers_per_gpu 4 \
--rename_column \
```
### Tracelog
```pyhon
/root/dataspeech/venv/lib/python3.11/site-packages/pyannote/audio/core/io.py:43: UserWarning: torchaudio._backend.set_audio_backend has been deprecated. With dispatcher enabled, this function is no-op. You can remove the function call.
torchaudio.set_audio_backend("soundfile")
WARNING - torchvision is not available - cannot save figures
Compute speaking rate
Compute snr and reverb
Map (num_proc=4): 0%| | 0/534 [00:00<?, ? examples/s]/root/dataspeech/venv/lib/python3.11/site-packages/pyannote/audio/core/io.py:43: UserWarning: torchaudio._backend.set_audio_backend has been deprecated. With dispatcher enabled, this function is no-op. You can remove the function call.
torchaudio.set_audio_backend("soundfile")
/root/dataspeech/venv/lib/python3.11/site-packages/pyannote/audio/core/io.py:43: UserWarning: torchaudio._backend.set_audio_backend has been deprecated. With dispatcher enabled, this function is no-op. You can remove the function call.
torchaudio.set_audio_backend("soundfile")
WARNING - torchvision is not available - cannot save figures
WARNING - torchvision is not available - cannot save figures
INFO - Lightning automatically upgraded your loaded checkpoint from v1.6.5 to v2.2.2. To apply the upgrade to your files permanently, run `python -m pytorch_lightning.utilities.upgrade_checkpoint ../.cache/huggingface/hub/models--ylacombe--brouhaha-best/snapshots/99bf97b13fd4dda2434a6f7c50855933076f2937/best.ckpt`
Model was trained with pyannote.audio 0.0.1, yours is 3.1.1. Bad things might happen unless you revert pyannote.audio to 0.x.
Model was trained with torch 1.12.1+cu102, yours is 2.2.2+cu121. Bad things might happen unless you revert torch to 1.x.
Using default parameters optimized on Brouhaha
Map (num_proc=4): 3%|█▏ | 16/534 [00:08<04:39, 1.85 examples/s]Using default parameters optimized on Brouhaha
Map (num_proc=4): 6%|██▍ | 32/534 [00:09<02:00, 4.16 examples/s]Using default parameters optimized on Brouhaha
Map (num_proc=4): 9%|███▋ | 48/534 [00:09<01:10, 6.91 examples/s]Using default parameters optimized on Brouhaha
Map (num_proc=4): 12%|████▉ | 64/534 [00:10<00:46, 10.02 examples/s]Using default parameters optimized on Brouhaha
Map (num_proc=4): 15%|██████▏ | 80/534 [00:10<00:35, 12.97 examples/s]Using default parameters optimized on Brouhaha
Map (num_proc=4): 18%|███████▎ | 96/534 [00:11<00:28, 15.57 examples/s]Using default parameters optimized on Brouhaha
Map (num_proc=4): 18%|███████▎ | 96/534 [00:12<00:57, 7.58 examples/s]
multiprocess.pool.RemoteTraceback:
"""
Traceback (most recent call last):
File "/root/dataspeech/venv/lib/python3.11/site-packages/multiprocess/pool.py", line 125, in worker
result = (True, func(*args, **kwds))
^^^^^^^^^^^^^^^^^^^
File "/root/dataspeech/venv/lib/python3.11/site-packages/datasets/utils/py_utils.py", line 675, in _write_generator_to_queue
for i, result in enumerate(func(**kwargs)):
File "/root/dataspeech/venv/lib/python3.11/site-packages/datasets/arrow_dataset.py", line 3547, in _map_single
batch = apply_function_on_filtered_inputs(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/dataspeech/venv/lib/python3.11/site-packages/datasets/arrow_dataset.py", line 3416, in apply_function_on_filtered_inputs
|
https://github.com/huggingface/dataspeech/issues/17
|
closed
|
[] | 2024-05-05T20:49:26Z
| 2024-05-28T11:31:37Z
| null |
anioji
|
pytorch/vision
| 8,409
|
Mask r-cnn training runs infinitely without output or error
|
### 🐛 Describe the bug
Here’s a brief overview of my process:
1. I generated a dataset using PyTorch by applying the SAM mask from bounding boxes to my images.
2. After creating the dataset, I split it into training and testing sets.
3. I loaded both sets using torch.utils.data.DataLoader.
4. I'm using a pre-trained model with 11 classes.
However, I’m encountering an issue during training. The process seems to take an unusually long time, and I’m not seeing any progress or error messages to troubleshoot from.

What might be going wrong, and how can I improve my training process?
### Versions
--2024-05-05 11:05:17-- https://raw.githubusercontent.com/pytorch/pytorch/main/torch/utils/collect_env.py
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.108.133, 185.199.109.133, 185.199.110.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.108.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 22068 (22K) [text/plain]
Saving to: ‘collect_env.py’
collect_env.py 100%[===================>] 21.55K --.-KB/s in 0.002s
2024-05-05 11:05:18 (12.6 MB/s) - ‘collect_env.py’ saved [22068/22068]
Collecting environment information...
PyTorch version: 2.2.1+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.27.9
Libc version: glibc-2.35
Python version: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.1.58+-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.2.140
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: Tesla T4
Nvidia driver version: 535.104.05
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.6
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 8
On-line CPU(s) list: 0-7
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) CPU @ 2.30GHz
CPU family: 6
Model: 63
Thread(s) per core: 2
Core(s) per socket: 4
Socket(s): 1
Stepping: 0
BogoMIPS: 4599.99
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm invpcid_single ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid xsaveopt arat md_clear arch_capabilities
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 128 KiB (4 instances)
L1i cache: 128 KiB (4 instances)
L2 cache: 1 MiB (4 instances)
L3 cache: 45 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-7
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable; SMT Host state unknown
Vulnerability Meltdown: Vulnerable
Vulnerability Mmio stale data: Vulnerable
Vulnerability Retbleed: Vulnerable
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable, IBPB: disabled, STIBP: disabled, PBRSB-eIBRS: Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.25.2
[pip3] torch==2.2.1+cu121
[pip3] torchaudio==2.2.1+cu121
[pip3] torchdata==0.7
|
https://github.com/pytorch/vision/issues/8409
|
closed
|
[] | 2024-05-05T11:09:04Z
| 2024-05-07T10:48:07Z
| 1
|
MontassarTn
|
pytorch/examples
| 1,253
|
Drawbacks of making the C++ API look like Python
|
Thank you for creating a C++ version of Pytorch. However, I wonder if you could create an example that looks like C++ and not like Python?
The [DCGAN sample project](https://github.com/pytorch/examples/blob/main/cpp/dcgan/dcgan.cpp) makes extensive use of ```auto``` so that it can show how it can be made to look and feel like Python by avoiding standard C++ things like unique_ptr<>, shared_ptr<> etc.
However, I am a C++ programmer, not a Python programmer. I am very happy working with standard C++ things like classes with methods and smart pointers. The noble attempt to make it "feel like Python" with ```auto``` variables isn't helpful for me. For example, it assumes that I will be able to put my entire program into a single method. That's an unfortunate restriction, as I want to build, store and pass objects between a number of different methods.
I have tried unwrapping the ```auto``` using some decltype() statements, but the Pytorch C++ templating makes this quite laborious. Perhaps that is an unavoidable result of the way that the underlying library is built? If so, could you create an C++ example that shows how to unwrap the various templates in one case, splitting the operations across several methods of a class for me?
Would that be straightforward to do? It would be a great help for me to get an idea of how your templating structure works and I can then build up from that.
I've only just started working with the library (that's why I'm looking at the example), so maybe I've missed something in the tutorial? I apologize if that's the case and ask if you would point me at the example that I should be looking at?
Many thanks,
Dan
|
https://github.com/pytorch/examples/issues/1253
|
closed
|
[] | 2024-05-04T15:39:22Z
| 2024-05-11T09:39:36Z
| 10
|
dannypike
|
pytorch/torchchat
| 676
|
[PRE-LAUNCH] On some MacOS/xcode version install fails with an error
|
This happens in our cloud runners. Does not affect most users, but only those that have certain versions of the Apple linker installed. Do we need to cover this in common problems?
Fixing this may not be a launch blocker, but being intentional about it probably is.
|
https://github.com/pytorch/torchchat/issues/676
|
closed
|
[
"documentation"
] | 2024-05-04T15:31:19Z
| 2024-05-12T20:43:17Z
| 4
|
mikekgfb
|
huggingface/parler-tts
| 38
|
how to use common voice mozilla dataset train for Parler-TTS
|
How can I use the Mozilla Common Voice dataset to train Parler-TTS? Can you help me?
|
https://github.com/huggingface/parler-tts/issues/38
|
open
|
[] | 2024-05-04T12:36:30Z
| 2024-05-04T12:36:30Z
| null |
herbiel
|
pytorch/torchchat
| 674
|
[LAUNCH BLOCKER] Build of ET - Commands from README fail
|
#670 adds building on macOS for the entire flow but fails very close to the end of the macOS CI.
However, the status is reported as green/correct execution. Why, and how do we make it red when it fails?
Building ET per the README fails in the logs, with a linker error we have seen before:
https://github.com/pytorch/torchchat/actions/runs/8949063846/job/24582907497?pr=670
```
[ 64%] Building C object backends/xnnpack/third-party/XNNPACK/CMakeFiles/microkernels-all.dir/src/x32-zip/x32-zip-xm-neon.c.o
0 0x10107f648 __assert_rtn + 72
1 0x100fa7c5c ld::Fixup::applyFixup(ld::Atom const*, ld::LayoutLinkedImage const&, unsigned char*) const + 8268
2 0x10103a7d8 ___ZN2ld16LayoutExecutable27writeContentWithoutLinkEditENSt3__14spanIhLm18446744073709551615EEEy_block_invoke + 332
3 0x195836428 _dispatch_client_callout2 + 20
4 0x19584a850 _dispatch_apply_invoke3 + 336
5 0x1958363e8 _dispatch_client_callout + 20
6 0x195837c68 _dispatch_once_callout + 32
7 0x19584aeec _dispatch_apply_invoke_and_wait + 372
8 0x195849e9c _dispatch_apply_with_attr_f + 1212
9 0x19584a08c dispatch_apply + 96
10 0x10103a9e4 void mapReduce<ld::Atom const*, mach_o::Error>(std::__1::span<ld::Atom const*, 18446744073709551615ul>, unsigned long, void (unsigned long, mach_o::Error&, std::__1::span<ld::Atom const*, 18446744073709551615ul>) block_pointer, void (std::__1::span<mach_o::Error, 18446744073709551615ul>) block_pointer) + 336
11 0x10103a594 ld::LayoutExecutable::writeContentWithoutLinkEdit(std::__1::span<unsigned char, 18446744073709551615ul>, unsigned long long) + 1180
12 0x101040020 ld::LayoutExecutable::writeToFile(char const*) + 15248
13 0x100ff22e8 main + 9424
ld: Assertion failed: (extras.otherInstrOffset != 0 && "Kind::arm64_adrp_ldr missing extra info"), function applyFixup, file Fixup.cpp, line 793.
clang: error: linker command failed with exit code 1 (use -v to see invocation)
make[2]: *** [executor_runner] Error 1
make[1]: *** [CMakeFiles/executor_runner.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....
[...]
[100%] Building C object backends/xnnpack/third-party/XNNPACK/CMakeFiles/microkernels-all.dir/src/tables/vlog.c.o
[100%] Built target microkernels-all
make: *** [all] Error 2
error: command '/Users/runner/work/_temp/miniconda/bin/cmake' failed with exit code 2
error: subprocess-exited-with-error
× Building wheel for executorch (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> See above for output.
```
|
https://github.com/pytorch/torchchat/issues/674
|
closed
|
[] | 2024-05-04T10:30:39Z
| 2024-05-05T20:27:32Z
| 2
|
mikekgfb
|
pytorch/torchchat
| 663
|
[PRE-LAUNCH] Why is it necessary to disable int8pack_mm with compilation? Is it not working, or slow?
|
Curious why we're disabling int4pack_mm for CPU compilation: do we think the generated code is more performant? (Then we should document that someplace...) Or does calling this operator from AOTI not work?
Why not? I thought there was an automatic fallback. @desertfire
|
https://github.com/pytorch/torchchat/issues/663
|
closed
|
[] | 2024-05-04T03:34:20Z
| 2024-05-17T13:08:15Z
| 1
|
mikekgfb
|
pytorch/torchchat
| 660
|
[LABEL TBD] torchchat redownloads model when rebased?
|
A few days ago, I played with torchchat as follows (in the context of https://github.com/pytorch/torchchat/issues/621):
`python3 torchchat.py download llama3`
`python3 torchchat.py generate llama3`
Today, I rebased and continued where I left off. In particular, I ran the following command:
`python3 torchchat.py generate llama3 --quantize config/data/desktop.json --prompt "Hello, my name is"`
But interestingly, it redownloads the 16GB llama3 model even though the model already exists in the `.model-artifacts` folder from a few days ago.
Is this a bug or a feature? Please label appropriately.
Internal Task: [T187938966](https://www.internalfb.com/intern/tasks/?t=187938966)
|
https://github.com/pytorch/torchchat/issues/660
|
closed
|
[] | 2024-05-03T22:01:22Z
| 2024-05-06T15:13:30Z
| 2
|
mergennachin
|
huggingface/setfit
| 519
|
How to optimize SetFit inference
|
Hi,
I'm currently investigating what options we have to optimize SetFit inference and have a few questions about it:
- gpu:
- torch compile: https://huggingface.co/docs/transformers/en/perf_torch_compile
Is the following the only way to use SetFit with torch.compile?
```
model.model_body[0].auto_model = torch.compile(model.model_body[0].auto_model)
```
The info above was provided by Tom Aarsen.
Does torch.compile also work for CPU? Edit: it looks like it should work for CPU too...
https://pytorch.org/docs/stable/generated/torch.compile.html
Does torch.compile change anything about the accuracy of model inference?
I see different modes here:
It can be either “default”, “reduce-overhead”, “max-autotune”, or “max-autotune-no-cudagraphs”; so far “reduce-overhead” gives the best results (see the sketch after this list).
- cpu:
What are the options to optimize CPU inference?
- BetterTransformer: https://huggingface.co/docs/transformers/en/perf_infer_cpu
Is BetterTransformer really not available for SetFit? I don't see SetFit in this list: https://huggingface.co/docs/optimum/bettertransformer/overview#supported-models
Are there any other resources to speed up SetFit model inference? Where can you run a SetFit model besides TorchServe?
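To make the compile-mode question above concrete, here is a rough sketch of what I am testing (the checkpoint name is a placeholder, and this is just the workaround from above, not an official SetFit API):
```python
# Rough sketch (placeholder checkpoint): compile only the transformer body,
# where most of the inference time is spent, and pick a torch.compile mode.
import torch
from setfit import SetFitModel

model = SetFitModel.from_pretrained("my-org/my-setfit-checkpoint")  # placeholder id
model.model_body[0].auto_model = torch.compile(
    model.model_body[0].auto_model,
    mode="reduce-overhead",  # alternatives: "default", "max-autotune"
)

preds = model.predict(["an example sentence to classify"])
print(preds)
```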
Thanks,
Gerald
|
https://github.com/huggingface/setfit/issues/519
|
closed
|
[] | 2024-05-03T19:19:21Z
| 2024-06-02T20:30:34Z
| null |
geraldstanje
|
huggingface/chat-ui
| 1,097
|
Katex fails to render math expressions from ChatGPT4.
|
I am using Chat UI version 0.8.3 and ChatGPT version gpt-4-turbo-2024-04-09.
ChatGPT is outputting formula delimiters as `\[`, `\]`, `\(`, `\)`, and KaTeX in the current version of Chat UI does not render them correctly. Based on my experiments, KaTeX renders only formulas with `$` delimiters correctly.
I did a quick test with the following prompts
```echo following text as is: \[ D_i \]``` <- Fail to render
```echo following text as is: $ D_i $``` <- Successful
Thank you in advance.
|
https://github.com/huggingface/chat-ui/issues/1097
|
closed
|
[
"bug",
"help wanted",
"front"
] | 2024-05-03T08:19:40Z
| 2024-11-22T12:18:44Z
| 5
|
haje01
|
huggingface/chat-ui
| 1,096
|
error in login redirect
|
I am running chat-ui on an online VPS with Ubuntu 22.
I am stuck at the login redirect.
I went through the Google authorization page, confirmed my Gmail account, and was then redirected to my main domain.
The problem is that it simply comes back with no action, not logged in, and the URL looks like this:
mydomain.com/login/callback?state=xxxxxxxxx
When I try again, it redirects me to my main domain with a 500 internal error.
Is there something I missed in the .env file?
These are the relevant parts of the .env:
COOKIE_NAME=SP-chat
HF_TOKEN=hf_xxxxxxxxxxxxxxxxxxxxxxxxx
HF_API_ROOT=https://api-inference.huggingface.co/models
OPENID_CONFIG=`{
"PROVIDER_URL": "https://accounts.google.com",
"CLIENT_ID": "xxxxxxxxxxx.apps.googleusercontent.com",
"CLIENT_SECRET": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
"SCOPES": "",
"NAME_CLAIM": ""
}`
USE_CLIENT_CERTIFICATE=false
CERT_PATH=/etc/letsencrypt/live/xxxxxxxxxx/fullchain.pem
KEY_PATH=/etc/letsencrypt/live/xxxxxxxxxx/privkey.pem
CA_PATH=#
CLIENT_KEY_PASSWORD=#
REJECT_UNAUTHORIZED=true
PUBLIC_ORIGIN=https://xxxxxxxxxx.com
PUBLIC_SHARE_PREFIX=https://xxxxxxxxx.com/
PUBLIC_GOOGLE_ANALYTICS_ID=#G-XXXXXXXX / Leave empty to disable
PUBLIC_PLAUSIBLE_SCRIPT_URL=#/js/script.js / Leave empty to disable
|
https://github.com/huggingface/chat-ui/issues/1096
|
open
|
[
"support"
] | 2024-05-02T22:19:13Z
| 2024-05-07T20:50:28Z
| 0
|
abdalladorrah
|
huggingface/trl
| 1,614
|
How to do fp16 training with PPOTrainer?
|
I modified the example from the official website to do PPO training with llama3 using LoRA. When I use fp16, the weights go to NaN after the first update, which does not occur when using fp32.
Here is the code
```python
# 0. imports
import torch
from accelerate import Accelerator
from transformers import AutoTokenizer, AutoModelForCausalLM
from trl import AutoModelForCausalLMWithValueHead, PPOConfig, PPOTrainer
from copy import deepcopy
from peft import LoraConfig, TaskType, get_peft_model
# 1. load a pretrained model
model_name = "meta-llama/Meta-Llama-3-8B-Instruct"
current_device = Accelerator().local_process_index
model = AutoModelForCausalLM.from_pretrained(
model_name,
device_map="auto",
torch_dtype=torch.float16,
trust_remote_code=True,
attn_implementation="flash_attention_2",
)
lora_config = LoraConfig(
task_type=TaskType.CAUSAL_LM,
inference_mode=False,
r=8,
target_modules=["q_proj", "v_proj"],
lora_alpha=16,
lora_dropout=0,
)
model = get_peft_model(model, lora_config)
model = AutoModelForCausalLMWithValueHead.from_pretrained(model)
model_ref = deepcopy(model).eval()
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
# 2. initialize trainer
ppo_config = {"mini_batch_size": 1, "batch_size": 1}
config = PPOConfig(**ppo_config)
ppo_trainer = PPOTrainer(config, model, model_ref, tokenizer)
# 3. encode a query
query_txt = "This morning I went to the "
query_tensor = tokenizer.encode(query_txt, return_tensors="pt").to(
model.pretrained_model.device
)
# 4. generate model response
generation_kwargs = {
"min_length": -1,
"top_k": 0.0,
"top_p": 1.0,
"do_sample": True,
"pad_token_id": tokenizer.eos_token_id,
"max_new_tokens": 20,
}
response_tensor = ppo_trainer.generate(
[item for item in query_tensor], return_prompt=False, **generation_kwargs
)
response_txt = tokenizer.decode(response_tensor[0])
# 5. define a reward for response
# (this could be any reward such as human feedback or output from another model)
reward = [torch.tensor(1.0, device=model.pretrained_model.device)]
# 6. train model with ppo
train_stats = ppo_trainer.step([query_tensor[0]], [response_tensor[0]], reward)
```
What is the correct way to do fp16 PPO training?
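One workaround I am considering, in case it is useful (only a sketch, and it assumes the GPU supports bfloat16):
```python
# Sketch: fp16 easily over/underflows in the PPO value and KL computations and
# produces NaNs after the first update; bfloat16 has the same memory footprint
# but a much wider dynamic range, so loading the policy in bf16 often avoids this.
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    model_name,                      # same model_name as above
    device_map="auto",
    torch_dtype=torch.bfloat16,      # instead of torch.float16
    trust_remote_code=True,
    attn_implementation="flash_attention_2",
)
```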
|
https://github.com/huggingface/trl/issues/1614
|
closed
|
[] | 2024-05-02T17:52:16Z
| 2024-11-18T08:28:08Z
| null |
KwanWaiChung
|
huggingface/optimum
| 1,843
|
Support for speech to text models.
|
### Feature request
Hi, it would be really useful if speech-to-text models could be supported by Optimum, specifically for export to ONNX. I saw a repo that managed to do it, and they claimed they used Optimum to do it.
https://huggingface.co/Xenova/speecht5_tts
Is there a way to do this?
### Motivation
I am finding it very difficult to convert any speech-to-text models to ONNX format, and this would be very useful both for optimizing serving and for possibly running them with transformers.js.
### Your contribution
I don't think I would be able to do this myself unfortunately.
|
https://github.com/huggingface/optimum/issues/1843
|
open
|
[
"feature-request",
"onnx"
] | 2024-05-02T11:43:49Z
| 2024-10-14T12:25:52Z
| 0
|
JamesBowerXanda
|
huggingface/datasets
| 6,854
|
Wrong example of usage when config name is missing for community script-datasets
|
As reported by @Wauplin, when loading a community dataset with a script, there is a bug in the example of usage shown in the error message if the dataset has multiple configs (and no default config) and the user does not pass any config. For example:
```python
>>> ds = load_dataset("google/fleurs")
ValueError: Config name is missing.
Please pick one among the available configs: ['af_za', 'am_et', 'ar_eg', 'as_in', 'ast_es', 'az_az', 'be_by', 'bg_bg', 'bn_in', 'bs_ba', 'ca_es', 'ceb_ph', 'ckb_iq', 'cmn_hans_cn', 'cs_cz', 'cy_gb', 'da_dk', 'de_de', 'el_gr', 'en_us', 'es_419', 'et_ee', 'fa_ir', 'ff_sn', 'fi_fi', 'fil_ph', 'fr_fr', 'ga_ie', 'gl_es', 'gu_in', 'ha_ng', 'he_il', 'hi_in', 'hr_hr', 'hu_hu', 'hy_am', 'id_id', 'ig_ng', 'is_is', 'it_it', 'ja_jp', 'jv_id', 'ka_ge', 'kam_ke', 'kea_cv', 'kk_kz', 'km_kh', 'kn_in', 'ko_kr', 'ky_kg', 'lb_lu', 'lg_ug', 'ln_cd', 'lo_la', 'lt_lt', 'luo_ke', 'lv_lv', 'mi_nz', 'mk_mk', 'ml_in', 'mn_mn', 'mr_in', 'ms_my', 'mt_mt', 'my_mm', 'nb_no', 'ne_np', 'nl_nl', 'nso_za', 'ny_mw', 'oc_fr', 'om_et', 'or_in', 'pa_in', 'pl_pl', 'ps_af', 'pt_br', 'ro_ro', 'ru_ru', 'sd_in', 'sk_sk', 'sl_si', 'sn_zw', 'so_so', 'sr_rs', 'sv_se', 'sw_ke', 'ta_in', 'te_in', 'tg_tj', 'th_th', 'tr_tr', 'uk_ua', 'umb_ao', 'ur_pk', 'uz_uz', 'vi_vn', 'wo_sn', 'xh_za', 'yo_ng', 'yue_hant_hk', 'zu_za', 'all']
Example of usage:
`load_dataset('fleurs', 'af_za')`
```
Note the example of usage in the error message suggests loading "fleurs" instead of "google/fleurs".
|
https://github.com/huggingface/datasets/issues/6854
|
closed
|
[
"bug"
] | 2024-05-02T06:59:39Z
| 2024-05-03T15:51:59Z
| 0
|
albertvillanova
|
pytorch/xla
| 7,014
|
Export debug information to StableHLO
|
## ❓ Questions and Help
Hi team, the debugging information is lost during `exported_program_to_stablehlo`; is there a way to export this information?
For example, `torch.export` generates file and line number for each op,
```python
import torch
import torch.nn as nn
from torch_xla.stablehlo import exported_program_to_stablehlo
class Test(nn.Module):
def forward(self, a, b):
a += 1
b += 2
return a + b
ep = torch.export.export(Test(), (torch.randn(1, 5), torch.randn(1, 5)))
print(ep)
# ExportedProgram:
# class GraphModule(torch.nn.Module):
# def forward(self, arg0_1: "f32[1, 5]", arg1_1: "f32[1, 5]"):
# # File: /home/thonle/ai/data/stablehlo/add/add.py:7 in forward, code: a += 1
# add: "f32[1, 5]" = torch.ops.aten.add.Tensor(arg0_1, 1); arg0_1 = None
# # File: /home/thonle/ai/data/stablehlo/add/add.py:8 in forward, code: b += 2
# add_1: "f32[1, 5]" = torch.ops.aten.add.Tensor(arg1_1, 2); arg1_1 = None
# # File: /home/thonle/ai/data/stablehlo/add/add.py:9 in forward, code: return a + b
# add_2: "f32[1, 5]" = torch.ops.aten.add.Tensor(add, add_1)
# return (add, add_1, add_2)
```
However, when we export to StableHLO, we cannot find this information in `StableHLOModelBundle`.
```python
om = exported_program_to_stablehlo(ep)
print(om._bundle)
# StableHLOModelBundle(state_dict={}, additional_constants=[array(2., dtype=float32)], stablehlo_funcs=[StableHLOFunc(meta=StableHLOFunctionMeta(name='forward', stablehlo_version='0.0.0', input_signature=[VariableSignature(shape=[1, 5], dtype='float32', dynamic_dims=[]), VariableSignature(shape=[], dtype='float32', dynamic_dims=[]), VariableSignature(shape=[1, 5], dtype='float32', dynamic_dims=[])], output_signature=[VariableSignature(shape=[1, 5], dtype='float32', dynamic_dims=[]), VariableSignature(shape=[1, 5], dtype='float32', dynamic_dims=[]), VariableSignature(shape=[1, 5], dtype='float32', dynamic_dims=[])], input_locations=[InputLocation(type_=<VariableType.INPUT_ARG: 'input_arg'>, position=0, name=''), InputLocation(type_=<VariableType.CONSTANT: 'constant'>, position=0, name=''), InputLocation(type_=<VariableType.INPUT_ARG: 'input_arg'>, position=1, name='')], unused_inputs=[], input_pytree_spec='[1, {"type": "builtins.tuple", "context": "null", "children_spec": [{"type": "builtins.tuple", "context": "null", "children_spec": [{"type": null, "context": null, "children_spec": []}, {"type": null, "context": null, "children_spec": []}]}, {"type": "builtins.dict", "context": "[]", "children_spec": []}]}]', output_pytree_spec='[1, {"type": null, "context": null, "children_spec": []}]'), bytecode=b"ML\xefR\rStableHLO_v0.19.1\x00\x01\x1d\x05\x01\x05\r\x01\x03\x0b\x03\x0b\x0f\x13\x17\x1b\x1f\x03S1\x0f\x01%\x07\x0f#\x0b\x0b\x0b\x0b\x0b\x0f\x0b\x0f\x0b\x0f\x0b\x0f\x0b\x0f\x0b\x03\r\x0b\x0b\x0b\x0b\x1f\x0f\x01\x03\x0b\x03\r\x17\x07\x0f'\x13\x07\x02\xb5\x1f\x11\x01\x00\x03\x07\x07\t\x0b\x03\r\x03\x05\x11\x01\x01\x05\x13\x05\x15\x05\x17\x1d\x13\x01\x05\x19\x1d\x17\x01\x05\x1b\x1d\x1b\x01\x05\x1d\x1d\x1f\x01\x05\x1f\x1d#\x01\x05!\x03\x01#\t\x1d#\x1d%\x1f\x03\t\x00\x00\x80?\x1f\x0b\x01\x01\t)\x05\x05\x15\x05\t)\x01\x05\x11\x07\x03\x07\x03\x07\x03\x03\x03)\x03\x01\r\x1d\x04\x91\x05\x01Q\x01\x05\x01\x07\x04\x7f\x03\x01\x05\x05P\x01\x03\x07\x04k\x03\x11\x1b\x07\x05\r\x05\x00\x07B\x11\x05\x03\x03\x03\x06\x15\x03\x03\x05\x01\x07\tF\x19\x07\x03\x03\x03\x03\x03\x06\x1d\x03\x03\x05\x05\x0b\x03\x06!\x03\x03\x05\t\r\x0b\x04\x01\x07\t\r\x0f\x06\x03\x01\x05\x01\x00\xb6\x03'\x03\x0b\x0f\x0f\x1b\r\x19\x17A!=\x15)\x19\x11\x0f\x0f\x0b\x11builtin\x00vhlo\x00module\x00add_v1\x00func_v1\x00constant_v1\x00broadcast_in_dim_v1\x00return_v1\x00mhlo.cross_program_prefetches\x00mhlo.is_dynamic\x00mhlo.use_auto_spmd_partitioning\x00IrToHlo.18\x00broadcast.5\x00add.6\x00broadcast.11\x00add.12\x00add.16\x00main\x00\x00\x08\x1d\t\x05\x1f\x01\x0b%'%)+\x03-\x03/", text='module @IrToHlo.18 attributes {mhlo.cross_program_prefetches = [], mhlo.is_dynamic = false, mhlo.use_auto_spmd_partitioning = false} {\n func.func @main(%arg0: tensor<1x5xf32>, %arg1: tensor<f32>, %arg2: tensor<1x5xf32>) -> (tensor<1x5xf32>, tensor<1x5xf32>, tensor<1x5xf32>) {\n %0 = stablehlo.constant dense<1.000000e+00> : tensor<1x5xf32>\n %1 = stablehlo.add %arg0, %0 : tensor<1x5xf32>\n %2 = stablehlo.broadcast_in_dim %arg1, dims = [] : (tensor<f32>) -> tensor<1x5xf32>\n %3 = stablehlo.add %arg2, %2 : tensor<1x5xf32>\n %4 = stablehlo.add %1, %3 : tensor<1x5xf32>\n return %1, %3, %4 : tensor<1x5xf32>, tensor<1x5xf32>, tensor<1x5xf32>\n }\n}\n')])
```
|
https://github.com/pytorch/xla/issues/7014
|
closed
|
[
"stablehlo"
] | 2024-05-01T21:27:11Z
| 2024-05-14T16:45:17Z
| 11
|
thong3le
|
huggingface/distil-whisper
| 130
|
How to set the target language for examples in README?
|
The code examples in the README do not make it obvious how to set the language of the audio to transcribe.
The default settings produce garbled English text if the audio is in a different language.
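For reference, here is a hedged sketch of how the language is usually forced with the `transformers` ASR pipeline; the checkpoint name, audio file, and language are placeholders, and note that the English-only distil checkpoints cannot transcribe other languages regardless of this setting.
```python
# Sketch: force the transcription language instead of relying on auto-detection.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="openai/whisper-large-v3",  # placeholder multilingual checkpoint
)
result = asr(
    "sample.wav",  # placeholder audio file
    generate_kwargs={"language": "german", "task": "transcribe"},
)
print(result["text"])
```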
|
https://github.com/huggingface/distil-whisper/issues/130
|
open
|
[] | 2024-05-01T11:52:00Z
| 2024-05-22T11:59:09Z
| null |
clstaudt
|
huggingface/transformers
| 30,596
|
AutoModel: how to enable TP for extremely large models?
|
Hi, I have 8 V100s, but a single one cannot fit the InternVL 1.5 model, which has 28B parameters.
So I wonder: can I fit it across the 8 V100s with TP?
I found that DeepSpeed can be used to do tensor parallelism like this:
```
# create the model
if args.pre_load_checkpoint:
model = model_class.from_pretrained(args.model_name_or_path)
else:
model = model_class()
...
import deepspeed
# Initialize the DeepSpeed-Inference engine
ds_engine = deepspeed.init_inference(model,
tensor_parallel={"tp_size": 2},
dtype=torch.half,
checkpoint=None if args.pre_load_checkpoint else args.checkpoint_json,
replace_with_kernel_inject=True)
model = ds_engine.module
output = model('Input String')
```
I didn't succeed, because it only supports built-in models that can be imported; for custom models that have to be loaded with `from_pretrained` it does not work.
But as I mentioned at the start, a single V100 will OOM when loading the model.
Is there any convenient way to load a customized HF model with TP enabled?
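In case it helps, here is a hedged sketch of the non-TP alternative I found: sharding the layers across all 8 GPUs with Accelerate's `device_map="auto"` (the model id and per-GPU memory caps are assumptions).
```python
# Sketch: not tensor parallelism, but layer-wise sharding across the 8 V100s,
# so no single card has to hold the whole 28B-parameter model.
import torch
from transformers import AutoModel

model = AutoModel.from_pretrained(
    "OpenGVLab/InternVL-Chat-V1-5",             # assumed model id
    torch_dtype=torch.float16,
    device_map="auto",                           # let Accelerate place layers on GPUs 0-7
    max_memory={i: "28GiB" for i in range(8)},   # leave headroom on each 32GB V100
    trust_remote_code=True,
)
```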
|
https://github.com/huggingface/transformers/issues/30596
|
closed
|
[] | 2024-05-01T10:06:45Z
| 2024-06-09T08:03:23Z
| null |
MonolithFoundation
|
huggingface/transformers
| 30,595
|
I cannot find the code where the Transformers Trainer wraps model_wrapped with DeepSpeed; the docs describe model_wrapped as DDP(DeepSpeed(transformer model)), but I only find the code that wraps the model in DDP. Where does the DeepSpeed wrapping happen? Thanks ^-^
|
### System Info
I cannot find the code where the Transformers Trainer wraps `model_wrapped` with DeepSpeed. The documentation describes `model_wrapped` as DDP(DeepSpeed(transformer model)), but I can only find the code that wraps the model in DDP. Where does the DeepSpeed wrapping happen? Thanks ^-^
### Who can help?
(Same question as above.)
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
(Same question as above.)
### Expected behavior
(Same question as above.)
|
https://github.com/huggingface/transformers/issues/30595
|
closed
|
[] | 2024-05-01T09:17:58Z
| 2024-05-01T09:31:39Z
| null |
ldh127
|
huggingface/transformers.js
| 732
|
What does "Error: failed to call OrtRun(). error code = 6." mean? I know it is ONNX related, but how do I fix it?
|
### Question
I keep running into the same issue when using the transformers.js automatic speech recognition pipeline. I've tried solving it multiple ways but pretty much hit a wall every time. I've done lots of googling, asked LLMs, and used my prior knowledge of how this stuff works in Python, but I can't seem to get it to work.
I've tried setting up my environment with and without Vite. I've tried React with JavaScript and React with TypeScript. Nothing.
Am I missing a dependency or something? Is there a place where I can find what the error code means? I couldn't find it anywhere.
I've fed it an array and I've fed it a .wav file. Nothing works, no matter what I do; whether it's an array or a wav file, I always get the same error:
```
An error occurred during model execution: "Error: failed to call OrtRun(). error code = 6.".
Inputs given to model: {input_features: Proxy(Tensor)}
Error transcribing audio: Error: failed to call OrtRun(). error code = 6.
at e.run (wasm-core-impl.ts:392:1)
at e.run (proxy-wrapper.ts:212:1)
at e.OnnxruntimeWebAssemblySessionHandler.run (session-handler.ts:99:1)
at InferenceSession.run (inference-session-impl.ts:108:1)
at sessionRun (models.js:207:1)
at encoderForward (models.js:520:1)
at Function.seq2seqForward [as _forward] (models.js:361:1)
at Function.forward (models.js:820:1)
at Function.seq2seqRunBeam [as _runBeam] (models.js:480:1)
at Function.runBeam (models.js:1373:1)
```
It seems to be an ONNX Runtime issue, but I don't know how to fix it. Any guidance would be appreciated.
Note: I'm currently testing with English. Nothing fancy.
|
https://github.com/huggingface/transformers.js/issues/732
|
closed
|
[
"question"
] | 2024-05-01T07:01:06Z
| 2024-05-11T09:18:35Z
| null |
jquintanilla4
|
huggingface/transformers
| 30,591
|
I cannot find the code where the Transformers Trainer wraps model_wrapped with DeepSpeed; the docs describe model_wrapped as DDP(DeepSpeed(transformer model)), but I only find the code that wraps the model in DDP. Where does the DeepSpeed wrapping happen? Thanks ^-^
|
### Feature request
I cannot find the code where the Transformers Trainer wraps `model_wrapped` with DeepSpeed. The documentation describes `model_wrapped` as DDP(DeepSpeed(transformer model)), but I can only find the code that wraps the model in DDP. Where does the DeepSpeed wrapping happen? Thanks ^-^
### Motivation
x
### Your contribution
x
|
https://github.com/huggingface/transformers/issues/30591
|
closed
|
[] | 2024-05-01T04:27:47Z
| 2024-06-08T08:03:17Z
| null |
ldh127
|
huggingface/chat-ui
| 1,093
|
I want to get the HTML of a website https://bit.ly/4bgmLb9 in HuggingChat web search
|
I want to get the HTML of the website https://bit.ly/4bgmLb9 in HuggingChat web search. In Chrome, I can put https://bit.ly/4bgmLb9 in the address bar and get the result, but I do not know how to do that in HuggingChat web search.
I tried it in HuggingChat; here is a screenshot:

How should I write the prompt so that HuggingChat can fulfill this requirement?
|
https://github.com/huggingface/chat-ui/issues/1093
|
closed
|
[] | 2024-05-01T03:00:29Z
| 2024-05-02T14:26:16Z
| 1
|
ghost
|
huggingface/dataset-viewer
| 2,756
|
Upgrade pyarrow to 16?
|
Release notes here: https://arrow.apache.org/blog/2024/04/20/16.0.0-release/
Are we affected by any change? Does it enable something for us?
|
https://github.com/huggingface/dataset-viewer/issues/2756
|
open
|
[
"question",
"dependencies",
"P2"
] | 2024-04-30T10:20:45Z
| 2024-04-30T16:19:31Z
| null |
severo
|
pytorch/TensorRT
| 2,798
|
Convert torchscript model to tensorrt
|
Can I convert a TorchScript model to TensorRT format through torch_tensorrt? Is there a corresponding script you can give me for reference?
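Here is a minimal sketch of what I have in mind; the input shape, dtype, and file names are placeholders, and the exact arguments may differ between Torch-TensorRT versions.
```python
# Sketch: compile an existing TorchScript module with Torch-TensorRT's
# TorchScript frontend; the result is still a TorchScript module.
import torch
import torch_tensorrt

ts_model = torch.jit.load("model.ts").eval().cuda()     # placeholder scripted/traced model

trt_model = torch_tensorrt.compile(
    ts_model,
    ir="ts",                                             # use the TorchScript frontend
    inputs=[torch_tensorrt.Input((1, 3, 224, 224), dtype=torch.float32)],
    enabled_precisions={torch.float32},
)

print(trt_model(torch.randn(1, 3, 224, 224).cuda()).shape)
torch.jit.save(trt_model, "model_trt.ts")                # reusable without recompiling
```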
|
https://github.com/pytorch/TensorRT/issues/2798
|
open
|
[
"question"
] | 2024-04-30T08:11:09Z
| 2024-04-30T20:59:03Z
| null |
pengxin233
|
huggingface/peft
| 1,693
|
How to convert a LoHa safetensor trained with diffusers to webui format
|
Hello, when I fine-tune SDXL (actually InstantID) with PEFT methods, I use LoRA, LoHa, and LoKr in [diffusers](https://github.com/huggingface/diffusers).
I have a question: how do I convert a LoHa safetensor trained with diffusers to webui format?
In the training process, I create and apply the config like this:
`peft_config = LoHaConfig(
r=args.rank,
alpha=args.rank //2,
target_modules=["to_k", "to_q", "to_v", "to_out.0"],
) `
`unet = get_peft_model(unet, peft_config)
`
When the training process is finished, I save it like this:
`unet.save_pretrained(args.output_dir)`
and I get the safetensors file shown below:

But [webui](https://github.com/AUTOMATIC1111/stable-diffusion-webui/) can't recognize it, so I can't use it in webui.
How can I fix this problem?
|
https://github.com/huggingface/peft/issues/1693
|
closed
|
[] | 2024-04-30T07:17:48Z
| 2024-06-08T15:03:44Z
| null |
JIAOJIAYUASD
|
pytorch/torchchat
| 579
|
[User Experience] User does not know what is expected by prompts
|
@ali-khosh user report:
I’m being asked “Do you want to enter a system prompt? Enter y for yes and anything else for no.” not sure what this means. When I hit yes, it asks “what is your system prompt?” still don’t know what that means. I entered “hello my name is” and it’s now asking me for “User:” no clue what that is. I entered some text. And it’s thinking, without doing anything, or telling me I should wait. I gave up after ~10 minutes, killed the process, and tried again this time answering no to that question. It again asked me for “User:”, I typed “ali” and have been waiting for some time with no response from my laptop.
|
https://github.com/pytorch/torchchat/issues/579
|
open
|
[] | 2024-04-30T06:39:23Z
| 2024-04-30T06:39:50Z
| null |
mikekgfb
|
pytorch/torchchat
| 575
|
unimplemented operators - workarounds and long term perspective
|
Today users have to set PYTORCH_ENABLE_MPS_FALLBACK=1 when they call torchchat if they want to use _weight_int4pack_mm. Can we set that automatically, from inside the program? This is a crude workaround; maybe we can get an implementation of _weight_int4pack_mm for MPS? (This would also be a win for mobile.)
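A tiny sketch of the "set it from inside the program" idea; the assumption here is that the variable needs to be set before torch is imported for the fallback to register.
```python
# Sketch: default the MPS fallback on so users never have to export it manually.
import os

os.environ.setdefault("PYTORCH_ENABLE_MPS_FALLBACK", "1")  # before importing torch

import torch  # noqa: E402  (deliberately imported after the env tweak)
```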
|
https://github.com/pytorch/torchchat/issues/575
|
open
|
[] | 2024-04-30T05:58:13Z
| 2024-07-30T20:44:26Z
| 0
|
mikekgfb
|
pytorch/torchchat
| 565
|
[LAUNCH BLOCKER] Llama3 8B Instruct model hangs on chat
|
(.venv) (base) mikekg@mikekg-mbp torchchat % # Llama 3 8B Instruct
python3 torchchat.py chat llama3
zsh: command not found: #
Using device=cpu Apple M1 Max
Loading model...
Time to load model: 10.23 seconds
Entering Chat Mode. Will continue chatting back and forth with the language model until the models max context length of 8192 tokens is hit or until the user says /bye
Do you want to enter a system prompt? Enter y for yes and anything else for no.
y
What is your system prompt?
You are a techer and you treat every interaction as a teachable moment, providing lots of unrequested extra info
User: what are the 7 continents
|
https://github.com/pytorch/torchchat/issues/565
|
closed
|
[] | 2024-04-29T22:15:12Z
| 2024-04-29T22:42:26Z
| 2
|
mikekgfb
|
pytorch/torchchat
| 561
|
[FEATURE REQUEST] A raised ConnectionError fails the download / we don't offer a plan B or a way to resume
|
So, does this have a common-error instruction? Should we tell people to download another model if they can't get Meta approval, or if there's an error like in my case?
Also, as an engineer who has been on the slow end of a pipe before: are there any instructions on how to resume a failed download that's, say, frustratingly 95% complete? Or am I out of luck and need to download the whole thing again?
(If there's no way to restart, OK. Also, if I'm on a slow pipe I would like to retry more often and get a byte at a time per retry, if that's what I need.)
```
File "/Users/mikekg/test/torchchat/.venv/lib/python3.12/site-packages/tqdm/std.py", line 1181, in _iter_
for obj in iterable:
File "/Users/mikekg/miniconda3/lib/python3.12/concurrent/futures/_base.py", line 619, in result_iterator
yield _result_or_cancel(fs.pop())
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/mikekg/miniconda3/lib/python3.12/concurrent/futures/_base.py", line 317, in _result_or_cancel
return fut.result(timeout)
^^^^^^^^^^^^^^^^^^^
File "/Users/mikekg/miniconda3/lib/python3.12/concurrent/futures/_base.py", line 456, in result
return self.__get_result()
^^^^^^^^^^^^^^^^^^^
File "/Users/mikekg/miniconda3/lib/python3.12/concurrent/futures/_base.py", line 401, in __get_result
raise self._exception
File "/Users/mikekg/miniconda3/lib/python3.12/concurrent/futures/thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/mikekg/test/torchchat/.venv/lib/python3.12/site-packages/huggingface_hub/_snapshot_download.py", line 290, in _inner_hf_hub_download
return hf_hub_download(
^^^^^^^^^^^^^^^^
File "/Users/mikekg/test/torchchat/.venv/lib/python3.12/site-packages/huggingface_hub/utils/_validators.py", line 119, in _inner_fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/Users/mikekg/test/torchchat/.venv/lib/python3.12/site-packages/huggingface_hub/file_download.py", line 1492, in hf_hub_download
http_get(
File "/Users/mikekg/test/torchchat/.venv/lib/python3.12/site-packages/huggingface_hub/file_download.py", line 552, in http_get
return http_get(
^^^^^^^^^
File "/Users/mikekg/test/torchchat/.venv/lib/python3.12/site-packages/huggingface_hub/file_download.py", line 552, in http_get
return http_get(
^^^^^^^^^
File "/Users/mikekg/test/torchchat/.venv/lib/python3.12/site-packages/huggingface_hub/file_download.py", line 552, in http_get
return http_get(
^^^^^^^^^
[Previous line repeated 1 more time]
File "/Users/mikekg/test/torchchat/.venv/lib/python3.12/site-packages/huggingface_hub/file_download.py", line 456, in http_get
r = _request_wrapper(
^^^^^^^^^^^^^^^^^
File "/Users/mikekg/test/torchchat/.venv/lib/python3.12/site-packages/huggingface_hub/file_download.py", line 392, in _request_wrapper
response = get_session().request(method=method, url=url, **params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/mikekg/test/torchchat/.venv/lib/python3.12/site-packages/requests/sessions.py", line 589, in request
resp = self.send(prep, **send_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/mikekg/test/torchchat/.venv/lib/python3.12/site-packages/requests/sessions.py", line 703, in send
r = adapter.send(request, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/mikekg/test/torchchat/.venv/lib/python3.12/site-packages/huggingface_hub/utils/_http.py", line 68, in send
return super().send(request, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/mikekg/test/torchchat/.venv/lib/python3.12/site-packages/requests/adapters.py", line 519, in send
raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: (MaxRetryError('HTTPSConnectionPool(host=\'[cdn-lfs-us-1.huggingface.co](http://cdn-lfs-us-1.huggingface.co/)\', port=443): Max retries exceeded with url: /repos/55/ac/55acddbb5c2ac2041b89a858eeba82e6130c6160294d75fe51bfa8bd7a4e4518/be52262c9289304f3e8240e0749bf257bc04264405a86cd4de38efb9068724ee?response-content-disposition=attachment%3B+filename*%3DUTF-8%27%27consolidated.00.pth%3B+filename%3D%22consolidated.00.pth%22%3B&Expires=1714684610&Policy=eyJTdGF0ZW1lbnQiOlt7IkNvbmRpdGlvbiI6eyJEYXRlTGVzc1RoYW4iOnsiQVdTOkVwb2NoVGltZSI6MTcxNDY4NDYxMH19LCJSZXNvdXJjZSI6Imh0dHBzOi8vY2RuLWxmcy11cy0xLmh1Z2dpbmdmYWNlLmNvL3JlcG9zLzU1L2FjLzU1YWNkZGJiNWMyYWMyMDQxYjg5YTg1OGVlYmE4MmU2MTMwYzYxNjAyOTRkNzVmZTUxYmZhOGJkN2E0ZTQ1MTgvYmU1MjI2MmM5Mjg5MzA0ZjNlODI0MGUwNzQ5YmYyNTdiYzA0MjY0NDA1YTg2Y2Q0ZGUzOGVmYjkwNjg3MjRlZT9yZXNwb25zZS1jb250ZW50LWRpc3Bvc2l0aW9uPSoifV19&Signature=IroiN6zXZ5iOHhJDLMhkzINjI11juBcZpCX0B6Q4iBrlcWwJ2oXA6~hKRp0uqo34u3AHE1LPI7sxss3HV8ICqNUtKJ9~5u0bWjoqSh7eqn1xqJ77Drg5BmnCKYSB2sF-5QBC2tMM~PKfaE7AeieeFD73Pz3JQomD7EnFe5veAxHKQxGT8WD2bMMy4lx5r5
|
https://github.com/pytorch/torchchat/issues/561
|
closed
|
[] | 2024-04-29T21:36:59Z
| 2024-05-12T20:45:02Z
| 1
|
mikekgfb
|
huggingface/safetensors
| 474
|
How to fully load checkpointed weights in memory?
|
### System Info
- `transformers` version: 4.40.0
- Platform: Linux-5.15.0-105-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.22.2
- Safetensors version: 0.4.1
- Accelerate version: 0.25.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.2.2+cu121 (True)
- Tensorflow version (GPU?): 2.16.1 (True)
- Flax version (CPU?/GPU?/TPU?): 0.8.2 (cpu)
- Jax version: 0.4.26
- JaxLib version: 0.4.21
### Reproduction
1. Load a checkpointed `.safetensors` file into CPU memory using the `safetensors.torch.load_file` API.
2. Observe only a negligible increase in CPU memory usage.
### Expected behavior
The CPU memory should increase by exactly the size of the file being read.
I think the negligible increase in CPU memory might be the expected behavior due to safetensors' lazy-loading feature. However, if I want to load the entire model into host memory, is there a way to do that? I am running some benchmarks with the safetensors APIs and need to ensure that the model is fully loaded in CPU memory.
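For the benchmark use case, here is a hedged sketch of one way to force a full materialization, assuming the small RSS increase really does come from mmap-backed lazy loading:
```python
# Sketch: load_file memory-maps the file, so resident memory only grows as pages
# are touched; cloning each tensor forces a real copy into host RAM.
from safetensors.torch import load_file

state_dict = load_file("model.safetensors", device="cpu")          # placeholder path
state_dict = {name: t.clone() for name, t in state_dict.items()}   # now fully resident in RAM
```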
|
https://github.com/huggingface/safetensors/issues/474
|
closed
|
[] | 2024-04-29T21:30:37Z
| 2024-04-30T22:12:29Z
| null |
goelayu
|
pytorch/data
| 1,247
|
[StatefulDataLoader] macOS tests are too slow
|
### 🐛 Describe the bug
test_state_dict is very slow on macOS (and slows down CI), likely because of macOS default multiprocessing_context being spawn instead of fork. The StatefulDataLoader tests on macOS take ~1.5 hours, vs 10 minutes on Linux and Windows.
Example of test-runtimes on my local mac:
<img width="870" alt="image" src="https://github.com/pytorch/data/assets/5349063/8f881702-e812-4e2c-b61e-efac8596054b">
We should a) update CI to log test times, b) for macOS, drop some of the tests. Each test_mp* test runs 6x, and if we have coverage from Linux + Win then we probably don't need all of them for mac
### Versions
Nightly
|
https://github.com/meta-pytorch/data/issues/1247
|
closed
|
[
"stateful_dataloader"
] | 2024-04-29T18:10:35Z
| 2024-04-30T19:11:57Z
| 0
|
andrewkho
|
huggingface/dataset-viewer
| 2,754
|
Return partial dataset-hub-cache instead of error?
|
`dataset-hub-cache` depends on multiple previous steps, and any error in one of them makes it fail. It provokes things like https://github.com/huggingface/moon-landing/issues/9799 (internal): in the datasets list, a dataset is not marked as "supporting the dataset viewer", whereas the only issue is that we didn't manage to list the compatible libraries, to create the tags.
https://github.com/huggingface/dataset-viewer/blob/main/services/worker/src/worker/job_runners/dataset/hub_cache.py
In this case, we could return a partial response, or maybe return an empty list of libraries or modalities if we have an error.
What do you think @lhoestq?
|
https://github.com/huggingface/dataset-viewer/issues/2754
|
closed
|
[
"question",
"P2"
] | 2024-04-29T17:10:09Z
| 2024-06-13T13:57:20Z
| null |
severo
|
pytorch/torchchat
| 549
|
[CI] add dtype tests for runner-aoti and runner-et
|
We are reverting #539, which added more dtype tests for runner-aoti + runner-et, because of failures: there's no point in having failing tests. That being said, we should figure out which ones should work, and if they don't today, how to make them work.
|
https://github.com/pytorch/torchchat/issues/549
|
open
|
[] | 2024-04-29T16:42:19Z
| 2024-04-29T18:01:09Z
| 2
|
mikekgfb
|
pytorch/torchchat
| 547
|
Can we make sure native runner binary commands in README work directly as written?
|
It would be great if
```
cmake-out/aoti_run model.so -z tokenizer.model -l 3 -i "Once upon a time"
```
and
```
cmake-out/et_run llama3.pte -z tokenizer.model -l 3 -i "Once upon a time"
```
were changed to include a known location for the model.so and tokenizer.model files. For example, include download and export instructions directly before them, or reuse the artifacts downloaded earlier in the README.
cc @byjlw @mikekgfb
|
https://github.com/pytorch/torchchat/issues/547
|
closed
|
[] | 2024-04-29T15:33:15Z
| 2024-05-12T21:03:08Z
| 1
|
orionr
|
pytorch/torchchat
| 546
|
Move legal disclaimer down to license section?
|
I think we can move
Disclaimer: The torchchat Repository Content is provided without any guarantees about performance or compatibility. In particular, torchchat makes available model architectures written in Python for PyTorch that may not perform in the same manner or meet the same standards as the original versions of those models. When using the torchchat Repository Content, including any model architectures, you are solely responsible for determining the appropriateness of using or redistributing the torchchat Repository Content and assume any risks associated with your use of the torchchat Repository Content or any models, outputs, or results, both alone and in combination with any other technologies. Additionally, you may have other legal obligations that govern your use of other content, such as the terms of service for third-party models, weights, data, or other technologies, and you are solely responsible for complying with all such obligations.
down to the bottom of the license section? Having it so close to the top is likely not required. Check with others, though. Thanks
cc @mikekgfb
|
https://github.com/pytorch/torchchat/issues/546
|
closed
|
[] | 2024-04-29T15:29:37Z
| 2024-05-12T21:06:46Z
| 1
|
orionr
|
huggingface/datasets
| 6,848
|
Can't Download Common Voice 17.0 hy-AM
|
### Describe the bug
I want to download Common Voice 17.0 hy-AM but it returns an error.
```
The version_base parameter is not specified.
Please specify a compatability version level, or None.
Will assume defaults for version 1.1
@hydra.main(config_name='hfds_config', config_path=None)
/usr/local/lib/python3.10/dist-packages/hydra/_internal/hydra.py:119: UserWarning: Future Hydra versions will no longer change working directory at job runtime by default.
See https://hydra.cc/docs/1.2/upgrades/1.1_to_1.2/changes_to_job_working_dir/ for more information.
ret = run_job(
/usr/local/lib/python3.10/dist-packages/datasets/load.py:1429: FutureWarning: The repository for mozilla-foundation/common_voice_17_0 contains custom code which must be executed to correctly load the dataset. You can inspect the repository content at https://hf.co/datasets/mozilla-foundation/common_voice_17_0
You can avoid this message in future by passing the argument `trust_remote_code=True`.
Passing `trust_remote_code=True` will be mandatory to load this dataset from the next major release of `datasets`.
warnings.warn(
Reading metadata...: 6180it [00:00, 133224.37it/s]les/s]
Generating train split: 0 examples [00:00, ? examples/s]
HuggingFace datasets failed due to some reason (stack trace below).
For certain datasets (eg: MCV), it may be necessary to login to the huggingface-cli (via `huggingface-cli login`).
Once logged in, you need to set `use_auth_token=True` when calling this script.
Traceback error for reference :
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1743, in _prepare_split_single
example = self.info.features.encode_example(record) if self.info.features is not None else record
File "/usr/local/lib/python3.10/dist-packages/datasets/features/features.py", line 1878, in encode_example
return encode_nested_example(self, example)
File "/usr/local/lib/python3.10/dist-packages/datasets/features/features.py", line 1243, in encode_nested_example
{
File "/usr/local/lib/python3.10/dist-packages/datasets/features/features.py", line 1243, in <dictcomp>
{
File "/usr/local/lib/python3.10/dist-packages/datasets/utils/py_utils.py", line 326, in zip_dict
yield key, tuple(d[key] for d in dicts)
File "/usr/local/lib/python3.10/dist-packages/datasets/utils/py_utils.py", line 326, in <genexpr>
yield key, tuple(d[key] for d in dicts)
KeyError: 'sentence_id'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/workspace/nemo/scripts/speech_recognition/convert_hf_dataset_to_nemo.py", line 358, in main
dataset = load_dataset(
File "/usr/local/lib/python3.10/dist-packages/datasets/load.py", line 2549, in load_dataset
builder_instance.download_and_prepare(
File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1005, in download_and_prepare
self._download_and_prepare(
File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1767, in _download_and_prepare
super()._download_and_prepare(
File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1100, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1605, in _prepare_split
for job_id, done, content in self._prepare_split_single(
File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1762, in _prepare_split_single
raise DatasetGenerationError("An error occurred while generating the dataset") from e
datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset
```
### Steps to reproduce the bug
```
from datasets import load_dataset
cv_17 = load_dataset("mozilla-foundation/common_voice_17_0", "hy-AM")
```
### Expected behavior
It works fine with common_voice_16_1
### Environment info
- `datasets` version: 2.18.0
- Platform: Linux-5.15.0-1042-nvidia-x86_64-with-glibc2.35
- Python version: 3.11.6
- `huggingface_hub` version: 0.22.2
- PyArrow version: 15.0.2
- Pandas version: 2.2.2
- `fsspec` version: 2024.2.0
|
https://github.com/huggingface/datasets/issues/6848
|
open
|
[] | 2024-04-29T10:06:02Z
| 2025-04-01T20:48:09Z
| 3
|
mheryerznkanyan
|
huggingface/optimum
| 1,839
|
why does ORTModelForCausalLM assume new input length is 1 when past_key_values is passed
|
https://github.com/huggingface/optimum/blob/c55f8824f58db1a2f1cfc7879451b4743b8f206b/optimum/onnxruntime/modeling_decoder.py#L649
``` python
def prepare_inputs_for_generation(self, input_ids, past_key_values=None, **kwargs):
if past_key_values is not None:
past_length = past_key_values[0][0].shape[2]
# Some generation methods already pass only the last input ID
if input_ids.shape[1] > past_length:
remove_prefix_length = past_length
else:
# Default to old behavior: keep only final ID
remove_prefix_length = input_ids.shape[1] - 1
input_ids = input_ids[:, remove_prefix_length:]
```
while the non-ONNX modeling code does not make this assumption:
https://github.com/huggingface/transformers/blob/a98c41798cf6ed99e1ff17e3792d6e06a2ff2ff3/src/transformers/models/mistral/modeling_mistral.py#L1217
```python
# Keep only the unprocessed tokens:
# 1 - If the length of the attention_mask exceeds the length of input_ids, then we are in a setting where
# some of the inputs are exclusively passed as part of the cache (e.g. when passing input_embeds as
# input)
if attention_mask is not None and attention_mask.shape[1] > input_ids.shape[1]:
input_ids = input_ids[:, -(attention_mask.shape[1] - past_length) :]
# 2 - If the past_length is smaller than input_ids', then input_ids holds all input tokens. We can discard
# input_ids based on the past_length.
elif past_length < input_ids.shape[1]:
input_ids = input_ids[:, past_length:]
```
|
https://github.com/huggingface/optimum/issues/1839
|
open
|
[
"question",
"onnxruntime"
] | 2024-04-29T07:06:04Z
| 2024-10-14T12:28:51Z
| null |
cyh-ustc
|