| repo (string) | number (int64) | title (string) | body (string) | url (string) | state (string) | labels (list) | created_at (timestamp, UTC) | updated_at (timestamp, UTC) | comments (int64, nullable) | user (string) |
|---|---|---|---|---|---|---|---|---|---|---|
huggingface/unity-api
| 23
|
I need to specify text or text_target in text classification
|
I try calling the API via huggingfaceapi.textclassification("some string", response => ...) but got the error "you need to specify text or text_target". Where can I specify that in my Unity C# code?
|
https://github.com/huggingface/unity-api/issues/23
|
open
|
[
"question"
] | 2024-01-27T19:24:25Z
| 2024-01-27T19:24:25Z
| null |
helenawsu
|
huggingface/transformers.js
| 543
|
Converting a model to ONNX using the given script is hard (fails most of the time)
|
### Question
I have tried to use the starcoder model by converting it with your ONNX script, but it failed with an exception.
Model: https://huggingface.co/HuggingFaceH4/starchat-beta
or
https://huggingface.co/bigcode/starcoderbase
logs:
```bash
$ python -m scripts.convert --quantize --model_id HuggingFaceH4/starchat-beta
Framework not specified. Using pt to export to ONNX.
model-00001-of-00004.safetensors: 3%|█▏ | 346M/9.96G [03:20<1:33:01, 1.72MB/s]
Downloading shards: 0%| | 0/4 [03:23<?, ?it/s]
Loading TensorFlow model in PyTorch before exporting.
Traceback (most recent call last):
File "/home/username/Desktop/transformers.js/scripts/venv/lib/python3.11/site-packages/urllib3/response.py", line 712, in _error_catcher
yield
File "/home/username/Desktop/transformers.js/scripts/venv/lib/python3.11/site-packages/urllib3/response.py", line 833, in _raw_read
raise IncompleteRead(self._fp_bytes_read, self.length_remaining)
urllib3.exceptions.IncompleteRead: IncompleteRead(351738674 bytes read, 9606258302 more expected)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/username/Desktop/transformers.js/scripts/venv/lib/python3.11/site-packages/requests/models.py", line 816, in generate
yield from self.raw.stream(chunk_size, decode_content=True)
File "/home/username/Desktop/transformers.js/scripts/venv/lib/python3.11/site-packages/urllib3/response.py", line 934, in stream
data = self.read(amt=amt, decode_content=decode_content)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/username/Desktop/transformers.js/scripts/venv/lib/python3.11/site-packages/urllib3/response.py", line 905, in read
data = self._raw_read(amt)
^^^^^^^^^^^^^^^^^^^
File "/home/username/Desktop/transformers.js/scripts/venv/lib/python3.11/site-packages/urllib3/response.py", line 811, in _raw_read
with self._error_catcher():
File "/usr/lib/python3.11/contextlib.py", line 155, in __exit__
self.gen.throw(typ, value, traceback)
File "/home/username/Desktop/transformers.js/scripts/venv/lib/python3.11/site-packages/urllib3/response.py", line 729, in _error_catcher
raise ProtocolError(f"Connection broken: {e!r}", e) from e
urllib3.exceptions.ProtocolError: ('Connection broken: IncompleteRead(351738674 bytes read, 9606258302 more expected)', IncompleteRead(351738674 bytes read, 9606258302 more expected))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/username/Desktop/transformers.js/scripts/venv/lib/python3.11/site-packages/optimum/exporters/tasks.py", line 1708, in get_model_from_task
model = model_class.from_pretrained(model_name_or_path, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/username/Desktop/transformers.js/scripts/venv/lib/python3.11/site-packages/transformers/models/auto/auto_factory.py", line 563, in from_pretrained
return model_class.from_pretrained(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/username/Desktop/transformers.js/scripts/venv/lib/python3.11/site-packages/transformers/modeling_utils.py", line 2876, in from_pretrained
resolved_archive_file, sharded_metadata = get_checkpoint_shard_files(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/username/Desktop/transformers.js/scripts/venv/lib/python3.11/site-packages/transformers/utils/hub.py", line 1040, in get_checkpoint_shard_files
cached_filename = cached_file(
^^^^^^^^^^^^
File "/home/username/Desktop/transformers.js/scripts/venv/lib/python3.11/site-packages/transformers/utils/hub.py", line 429, in cached_file
resolved_file = hf_hub_download(
^^^^^^^^^^^^^^^^
File "/home/username/Desktop/transformers.js/scripts/venv/lib/python3.11/site-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/username/Desktop/transformers.js/scripts/venv/lib/python3.11/site-packages/huggingface_hub/file_download.py", line 1457, in hf_hub_download
http_get(
File "/home/username/Desktop/transformers.js/scripts/venv/lib/python3.11/site-packages/huggingface_hub/file_download.py", line 524, in http_get
for chunk in r.iter_content(chunk_size=DOWNLOAD_CHUNK_SIZE):
File "/home/username/Desktop/transformers.js/scripts/venv/lib/python3.11/site-packages/requests/models.py", line 818, in generate
raise ChunkedEncodingError(e)
requests.exceptions.ChunkedEncodingError: ('Connection broken: IncompleteRead(351738674 bytes read, 9606258302 more expected)', IncompleteRead(351738674 bytes read, 9606258302 more expected))
During handling of the above exception, another except
```
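The traceback above is ultimately a broken shard download rather than an export bug, so one hedged workaround (not from the original report, and assuming access to the repo) is to pre-fetch the checkpoint into the local Hugging Face cache and only then re-run the conversion script, which will pick the files up from the cache:
```python
# Minimal sketch: pre-download the ~10 GB shards so scripts.convert reads them from
# the local cache instead of streaming them during export.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="bigcode/starcoderbase")
print(f"Checkpoint cached at: {local_dir}")
```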
|
https://github.com/huggingface/transformers.js/issues/543
|
open
|
[
"question"
] | 2024-01-27T07:32:42Z
| 2024-01-30T06:48:44Z
| null |
bajrangCoder
|
huggingface/candle
| 1,624
|
How to run the quantized Solar model?
|
I am trying to run the Solar model, but I am constantly failing. Here are my attempts:
1. [quantized] example (modified) with the Quantized Solar model (local)
: Failed. It only outputs nonsense that is unrelated to the question.
2. [llama] example with the Quantized Solar model (local)
: Failed. The process was Killed. Either because of ①a "Quantized" model or ②a low-spec PC (16GB of RAM, etc.).
3. [llama] example with the Solar model
: Failed. The process was Killed. The most likely cause is ①a low-spec PC.
4. oobabooga with the Quantized Solar model (local)
: Success. Confirmed that my PC can run the Quantized Solar model.
5. oobabooga with the Solar model
: Failed. The process was Killed. Confirmed that my PC cannot run the Solar model.
Conclusion: Is there any way to run the Quantized Solar model? I know I only wrote about 5 attempts, but I actually tried several different variations of the code in step 1. I also downloaded the model several times in my poor internet speed.
|
https://github.com/huggingface/candle/issues/1624
|
open
|
[] | 2024-01-27T04:57:50Z
| 2024-01-27T22:41:12Z
| null |
555cider
|
huggingface/peft
| 1,401
|
Where is `self.generation_config` coming from?
|
https://github.com/huggingface/peft/blob/1c1c7fdaa6e6abaa53939b865dee1eded82ad032/src/peft/peft_model.py#L1136
The `self.generation_config` variable is not initialized in the model, and it is also not part of a class further up in the inheritance hierarchy.
So I assume it is retrieved from the base model via the implemented `__getattr__` method.
If that's the case, doesn't this make the code redundant? Also, how could this code work if we have to go down one level deeper? The only reason I can imagine for doing this is if `generation_config` is set after the model was initialized, but why would you need to do that?
Could you help me with this and explain how `generation_config` is supposed to be initialized and used?
Thank you :) Best
Simon
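For illustration, a simplified sketch of the attribute-forwarding pattern in question (not PEFT's actual code): `__getattr__` only fires after normal attribute lookup fails, so an attribute such as `generation_config` can resolve through the wrapped base model even though the wrapper never sets it.
```python
# Illustrative wrapper, not PeftModel itself.
class Wrapper:
    def __init__(self, base_model):
        self.base_model = base_model

    def __getattr__(self, name):
        # Called only when the attribute is not found on the wrapper.
        return getattr(self.base_model, name)


class Base:
    generation_config = {"max_new_tokens": 16}


print(Wrapper(Base()).generation_config)  # resolved via the base model
```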
|
https://github.com/huggingface/peft/issues/1401
|
closed
|
[] | 2024-01-27T02:02:30Z
| 2024-03-11T15:04:29Z
| null |
simon-lund
|
huggingface/transformers.js
| 541
|
Sharp on Linux x86
|
### Question
Hi,
Firstly, many thanks for all your work.
My use case is to generate sentence embeddings for semantic matching. I develop on Mac but deploy to AWS Lambda.
Your package runs fine out the box on my Mac but fails to load Sharp on Lambda. I spent a couple of days trying lots of different things (fetching and building for Linux x86 and moving files around), but I never got it to work. In the end I removed the dependency on Sharp and it worked.
All's well, at present, but I do have a requirement in the future to embed images.
Sorry, I realise this may be more of a NPM issue (or more likely my knowledge of it), but any help would be appreciated.
Thanks
Dave
|
https://github.com/huggingface/transformers.js/issues/541
|
closed
|
[
"question"
] | 2024-01-26T11:36:05Z
| 2024-10-18T13:30:10Z
| null |
Damibu
|
pytorch/pytorch
| 118,357
|
How to modify this framework to support using CUDA unified memory?
|
### 🚀 The feature, motivation and pitch
Hi all,
I am a PyTorch user and use open-sourced GPU-based GNN frameworks based on PyTorch. I want to ask if the latest GPU-based Pytorch support CUDA unified memory allocation for tensors?
I found a PR https://github.com/pytorch/pytorch/pull/106200 has supported this to PyTorch, but it seems that it hasn't been merged.
What should users do to enable this mode?
Would you please suggest some instructions or lines of example code?
Thank you very much, sir!
### Alternatives
_No response_
### Additional context
_No response_
cc @ptrblck @0x804d8000
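There is no officially supported unified-memory mode while the linked PR is unmerged, but as a hedged sketch of what is possible today, PyTorch 2.x exposes a pluggable-allocator hook: a small shared library wrapping cudaMallocManaged/cudaFree can be registered so tensor allocations go through managed memory. The .so path and symbol names below are placeholders for such a library.
```python
import torch

# alloc.so is assumed to export uvm_malloc/uvm_free implemented with
# cudaMallocManaged and cudaFree; it must be built separately.
uvm_allocator = torch.cuda.memory.CUDAPluggableAllocator(
    "./alloc.so", "uvm_malloc", "uvm_free"
)
torch.cuda.memory.change_current_allocator(uvm_allocator)  # call before any CUDA allocation

x = torch.empty(1024, device="cuda")  # now served by the custom allocator
```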
|
https://github.com/pytorch/pytorch/issues/118357
|
closed
|
[
"module: cuda",
"triaged",
"module: CUDACachingAllocator"
] | 2024-01-26T03:41:02Z
| 2024-02-01T03:41:58Z
| null |
zlwu92
|
pytorch/vision
| 8,232
|
Input Norms and Channel Order for EfficientNet
|
### 📚 The doc issue
The documentation for all pretrained models lacks clear details regarding the order of color channels for input images, as well as the specific normalization mean and standard deviation values. I am particularly looking for this information in relation to the EfficientNet model.
### Suggest a potential alternative/fix
_No response_
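As a hedged pointer rather than a documentation fix: torchvision's pretrained weights expect RGB tensors scaled to [0, 1], and the exact normalization constants are recorded on the weights enum itself, so they can be inspected programmatically:
```python
from torchvision.models import EfficientNet_B0_Weights

weights = EfficientNet_B0_Weights.IMAGENET1K_V1
preset = weights.transforms()  # the resize/crop/normalize preset used with these weights
print(preset)                  # prints crop size, interpolation, mean, and std
```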
|
https://github.com/pytorch/vision/issues/8232
|
closed
|
[] | 2024-01-25T22:17:07Z
| 2024-01-26T10:10:49Z
| 2
|
ivanstepanovftw
|
huggingface/text-generation-inference
| 1,487
|
How to run docker on a DPO model
|
### Discussed in https://github.com/huggingface/text-generation-inference/discussions/1481
Originally posted by **tamanna-mostafa** January 24, 2024
1. I fine-tuned mistral 7b model with preference data (32k).
2. Then I ran DPO on the fine tuned model with 12k data.
This is the command I used to run docker:
```
accelerate launch --config_file ./accelerate_configs/ds_zero3.yaml rlhf_dpo.py \
--model_name_or_path="/mnt/efs/data/tammosta/files_t/output_sft_32k" \
--output_dir="/mnt/efs/data/tammosta/files_t/DPO_output_mistral_32k" \
--data_path="/mnt/efs/data/tammosta/files_t/DPO_data_rbs_clean_AIF.json" \
--use_lamma2_peft_config False \
--beta 0.1 \
--optimizer_type adamw_hf \
--learning_rate 1e-6 \
--warmup_steps 50 \
--per_device_train_batch_size 1 \
--per_device_eval_batch_size 1 \
--gradient_accumulation_steps 8 \
--lora_alpha 16 \
--lora_dropout 0.05 \
--lora_r 8 \
--max_prompt_length 2048 \
--max_length 4096 \
--num_train_epochs 4 \
--logging_steps 20 \
--save_steps 100 \
--save_total_limit 8 \
--eval_steps 50 \
--gradient_checkpointing True \
--report_to "wandb"
```
3. Now, I need to run inference on the DPO model.
I ran the following commands for this:
```
model=/data/DPO_output_mistral_32k
volume=/mnt/efs/data/tammosta/files_t:/data
num_shard=8
docker run --gpus all --shm-size 1g -p 172.31.8.218:80:80 -v $volume ghcr.io/huggingface/text-generation-inference:1.1.0 --model-id $model --num-shard $num_shard --max-input-length 4095 --max-total-tokens 12000
```
However, the docker failed to initialize the model with the following error:
`OSError: /data/DPO_output_mistral_32k does not appear to have a file named config.json. Checkout ' https://huggingface.co//data/DPO_output_mistral_32k/None ' for available files.`
Does anyone know how to create/find the config.json file?
I'll highly appreciate any help.
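A hedged sketch, assuming the missing config.json means the DPO run saved only a PEFT/LoRA adapter: merging the adapter into the SFT base model and saving the result produces a directory containing config.json plus full weights that the TGI container can point at. The paths reuse the ones above; the merged directory name is made up.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_path = "/mnt/efs/data/tammosta/files_t/output_sft_32k"
adapter_path = "/mnt/efs/data/tammosta/files_t/DPO_output_mistral_32k"
merged_path = adapter_path + "_merged"  # hypothetical output directory

base = AutoModelForCausalLM.from_pretrained(base_path, torch_dtype=torch.bfloat16)
model = PeftModel.from_pretrained(base, adapter_path)
model = model.merge_and_unload()          # fold the LoRA weights into the base model
model.save_pretrained(merged_path)        # writes config.json and the merged weights
AutoTokenizer.from_pretrained(base_path).save_pretrained(merged_path)
```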
|
https://github.com/huggingface/text-generation-inference/issues/1487
|
closed
|
[] | 2024-01-25T17:11:52Z
| 2024-01-31T16:44:32Z
| null |
tamanna-mostafa
|
huggingface/transformers.js
| 539
|
How can I use this model?
|
### Question
How can I use this model? https://huggingface.co/shibing624/macbert4csc-base-chinese
|
https://github.com/huggingface/transformers.js/issues/539
|
closed
|
[
"question"
] | 2024-01-25T13:12:08Z
| 2025-10-13T04:58:48Z
| null |
wfk007
|
huggingface/text-generation-inference
| 1,483
|
how to pdb text-generation-server
|
### System Info
```
2024-01-25T09:10:08.096040Z INFO text_generation_launcher: Runtime environment:
Target: x86_64-unknown-linux-gnu
Cargo version: 1.70.0
Commit sha: 9f18f4c00627e1a0ad696b6774e5ad7ca8f4261c
Docker label: sha-9f18f4c
nvidia-smi:
Thu Jan 25 09:10:08 2024
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.113.01 Driver Version: 535.113.01 CUDA Version: 12.2 |
|-----------------------------------------+----------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+======================+======================|
| 0 NVIDIA GeForce RTX 3090 Off | 00000000:1A:00.0 Off | N/A |
| 30% 28C P8 24W / 350W | 5MiB / 24576MiB | 0% Default |
| | | N/A |
```
### Information
- [X] Docker
- [ ] The CLI directly
### Tasks
- [X] An officially supported command
- [ ] My own modifications
### Reproduction
When i add `pdb.set_trace()` in .py of text-generation-server, text-generation-launcher repeats the following log and seems to be stuck:
```
2024-01-25T09:07:04.875448Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=0
2024-01-25T09:07:14.894477Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=0
2024-01-25T09:07:24.911704Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=0
2024-01-25T09:07:34.928347Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=0
2024-01-25T09:07:44.947306Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=0
2024-01-25T09:07:54.965355Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=0
2024-01-25T09:08:04.984481Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=0
2024-01-25T09:08:15.004175Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=0
2024-01-25T09:08:25.022317Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=0
2024-01-25T09:08:35.041246Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=0
2024-01-25T09:08:45.059839Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=0
2024-01-25T09:08:55.078293Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=0
2024-01-25T09:09:05.097024Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=0
2024-01-25T09:09:15.117255Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=0
2024-01-25T09:09:25.136635Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=0
2024-01-25T09:09:35.156270Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=0
2024-01-25T09:09:45.175864Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=0
2024-01-25T09:09:55.194405Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=0
2024-01-25T09:10:05.214396Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=0
```
### Expected behavior
I want to know how to debug the .py files of text-generation-server other than with the logger.
|
https://github.com/huggingface/text-generation-inference/issues/1483
|
closed
|
[] | 2024-01-25T09:21:32Z
| 2024-02-19T07:23:14Z
| null |
jessiewiswjc
|
pytorch/serve
| 2,907
|
How to use torchserve metrics
|
### 📚 The doc issue
When I call curl http://127.0.0.1:8082/metrics, it always returns empty results, even when it is called after model inference, yet there is clearly a corresponding log entry in model_metrics.log. I saw a previous issue saying that Prometheus is currently supported as a plug-in; I would like to ask if there is any corresponding documentation.
### Suggest a potential alternative/fix
_No response_
|
https://github.com/pytorch/serve/issues/2907
|
closed
|
[] | 2024-01-25T07:50:39Z
| 2024-03-20T21:53:20Z
| null |
pengxin233
|
pytorch/serve
| 2,905
|
Can i use multiple workers in single GPU?
|
Thanks for your great project.
I'm a newbie and this is my first experience using TorchServe for my project.
I tried to deploy my model using torchserve-gpu.
If I want better performance, I can increase the number of workers.
When processing with a single worker, GPU usage was not high, so I added more workers to get more inference throughput.
My limited experience and knowledge of GPU resource scheduling may be the issue here, but I'm asking because I think there may be more problems than I realize.
- Is it ok to use more workers in single GPU environment?
- Are there any other side effects of more workers settings?
|
https://github.com/pytorch/serve/issues/2905
|
closed
|
[
"question",
"triaged"
] | 2024-01-25T01:40:42Z
| 2024-01-30T06:15:08Z
| null |
Twinparadox
|
huggingface/datasets
| 6,614
|
`datasets/downloads` cleanup tool
|
### Feature request
Splitting off https://github.com/huggingface/huggingface_hub/issues/1997 - currently `huggingface-cli delete-cache` doesn't take care of cleaning `datasets` temp files
e.g. I discovered having millions of files under `datasets/downloads` cache, I had to do:
```
sudo find /data/huggingface/datasets/downloads -type f -mtime +3 -exec rm {} \+
sudo find /data/huggingface/datasets/downloads -type d -empty -delete
```
Could the cleanup be integrated into `huggingface-cli`, or a different tool provided, to keep the folders tidy and not consume inodes and space?
For example, there were tens of thousands of `.lock` files, and I don't know why they never get removed; lock files should be temporary for the duration of the operation requiring the lock and should not remain after the operation finishes, IMHO.
Also I think one should be able to nuke `datasets/downloads` w/o hurting the cache, but I think there are some datasets that rely on files extracted under this dir - or at least they did in the past - which is very difficult to manage since one has no idea what is safe to delete and what not.
Thank you
@Wauplin (requested to be tagged)
|
https://github.com/huggingface/datasets/issues/6614
|
open
|
[
"enhancement"
] | 2024-01-24T18:52:10Z
| 2024-01-24T18:55:09Z
| 0
|
stas00
|
huggingface/transformers
| 28,663
|
How to set stopping criteria in model.generate() when a certain word appear
|
### Feature request
Add a stopping criterion to model.generate() that triggers when a certain word appears.
The word on which I need to stop generation is: [/SENTENCE]
But the model doesn't generate the word itself; instead, it generates the subwords
[ [/, SEN, TE, NC, E] ]
The corresponding ids from the tokenizer are (id => subword):
28792 => [
28748 => /
28759 => SEN
2654 => TE
1197 => NC
28793 => E]
So how can I set up a condition in **StoppingCriteriaList** that stops the generation when [/SENTENCE] is found?
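A hedged sketch of one common approach (not an official transformers recipe): subclass `StoppingCriteria`, compare the tail of the generated ids against the subword ids listed above, and pass it to `model.generate` via `stopping_criteria`.
```python
import torch
from transformers import StoppingCriteria, StoppingCriteriaList

class StopOnTokenSequence(StoppingCriteria):
    """Stop once the generated ids end with the given id sequence."""

    def __init__(self, stop_ids):
        self.stop_ids = torch.tensor(stop_ids)

    def __call__(self, input_ids, scores, **kwargs):
        if input_ids.shape[1] < len(self.stop_ids):
            return False
        tail = input_ids[0, -len(self.stop_ids):].to(self.stop_ids.device)
        return bool(torch.equal(tail, self.stop_ids))

# The ids reported above for "[/SENTENCE]".
stopping = StoppingCriteriaList([StopOnTokenSequence([28792, 28748, 28759, 2654, 1197, 28793])])
# outputs = model.generate(**inputs, stopping_criteria=stopping)
```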
### Motivation
Same as the feature request above.
### Your contribution
Same as the feature request above.
|
https://github.com/huggingface/transformers/issues/28663
|
closed
|
[] | 2024-01-23T15:16:38Z
| 2024-03-02T08:03:44Z
| null |
pradeepdev-1995
|
pytorch/TensorRT
| 2,618
|
❓ [Question] How to compile a model with A16W8?
|
Hi Torch-TensorRT team:
I'm wondering how I can compile a model with 8-bit weights but 16-bit activations?
Thanks a lot!
|
https://github.com/pytorch/TensorRT/issues/2618
|
open
|
[
"question"
] | 2024-01-23T12:53:23Z
| 2024-01-25T20:47:14Z
| null |
jiangwei221
|
huggingface/dataset-viewer
| 2,333
|
Replace TypedDict with dataclass?
|
Do we want to replace the TypedDict objects with dataclasses?
If so, note that orjson should still be able to serialize the objects we return without any change, at the price of a small overhead (15% in their example: https://github.com/ijl/orjson#dataclass).
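For reference, a minimal sketch of the orjson behaviour referred to above, with a made-up dataclass: instances serialize without any custom encoder.
```python
import dataclasses
import orjson

@dataclasses.dataclass
class SplitItem:          # illustrative only
    dataset: str
    config: str
    split: str

print(orjson.dumps(SplitItem("glue", "cola", "train")))
# b'{"dataset":"glue","config":"cola","split":"train"}'
```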
|
https://github.com/huggingface/dataset-viewer/issues/2333
|
closed
|
[
"good first issue",
"question",
"refactoring / architecture",
"P2"
] | 2024-01-23T10:49:52Z
| 2024-06-19T14:30:53Z
| null |
severo
|
huggingface/optimum
| 1,664
|
Bitsandbytes integration in ORTModelForCausalLM.from_pretrained()
|
### System Info
```shell
optimum==1.17.0.dev0
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction (minimal, reproducible, runnable)
The given code
```
from optimum.onnxruntime import ORTModelForCausalLM
from transformers import BitsAndBytesConfig
finetuned_model_name = "path"
import torch
compute_dtype = getattr(torch, "float16")
bnb_config = BitsAndBytesConfig(load_in_4bit=True,
bnb_4bit_quant_type="nf4",
bnb_4bit_compute_dtype=compute_dtype,
bnb_4bit_use_double_quant=False)
ort_model = ORTModelForCausalLM.from_pretrained(
finetuned_model_name,
use_io_binding=True,
quantization_config=bnb_config,
export=True,
use_cache=True,
from_transformers=True
)
```
shows the error
```
TypeError: _from_transformers() got an unexpected keyword argument 'quantization_config'
```
So how can I do quantization while loading with **ORTModelForCausalLM**?
### Expected behavior
The same code as above should apply the BitsAndBytes quantization config while loading with **ORTModelForCausalLM**, instead of raising `TypeError: _from_transformers() got an unexpected keyword argument 'quantization_config'`.
|
https://github.com/huggingface/optimum/issues/1664
|
open
|
[
"bug"
] | 2024-01-23T08:56:45Z
| 2024-01-23T08:56:45Z
| 0
|
pradeepdev-1995
|
pytorch/xla
| 6,362
|
How to do multi-machine spmd training?
|
## ❓ Questions and Help
At present, I have single-machine SPMD training working, but I do not know how to run multi-machine SPMD training. Could you give me a running example?
@vanbasten23
|
https://github.com/pytorch/xla/issues/6362
|
closed
|
[] | 2024-01-23T03:33:52Z
| 2024-03-13T09:21:25Z
| null |
mars1248
|
pytorch/text
| 2,223
|
The Future of torchtext
|
## ❓ Questions and Help
**Description**
<!-- Please send questions or ask for help here. -->
As of September 2023, development efforts on torchtext have been stopped. I am wondering what the future plans are in this regard. Is the plan to opt into Hugging Face libraries such as tokenizers? Currently, without the torchtext library it's really unclear how to work on a simple task like text classification where we don't use an LLM. I can do the preprocessing in spaCy and connect it to PyTorch, but somehow it feels different. I'd prefer to do it all in PyTorch, but so far that doesn't seem possible. I didn't invest time into torchtext since, so far, there is no future for this library and the tutorials just don't work. Perhaps an update/pointers would be nice.
Thanks in advance
|
https://github.com/pytorch/text/issues/2223
|
closed
|
[] | 2024-01-22T20:40:10Z
| 2024-03-15T16:18:22Z
| 1
|
lordsoffallen
|
huggingface/peft
| 1,382
|
How to set a predefined weight for LoRA and the linear layer
|
Hi,
Thanks for your great job!
I have a question: when adding LoRA on a linear layer, how can I set a predefined weight for the LoRA branch and the linear layer, instead of just 0.5 : 0.5?
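As a hedged clarification: PEFT does not expose an explicit base-vs-LoRA mixing weight, but the LoRA branch is added to the frozen linear output scaled by lora_alpha / r, so that ratio is the usual knob (target modules below are illustrative).
```python
from peft import LoraConfig

config = LoraConfig(
    r=16,
    lora_alpha=8,                         # scaling = lora_alpha / r = 0.5 for the LoRA branch
    target_modules=["q_proj", "v_proj"],  # illustrative
    lora_dropout=0.05,
)
```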
|
https://github.com/huggingface/peft/issues/1382
|
closed
|
[] | 2024-01-22T13:24:31Z
| 2024-02-06T08:37:49Z
| null |
quqxui
|
huggingface/accelerate
| 2,367
|
how to prevent accelerate from concatenating tensors in batch?
|
My `collate_fn` in dataloader returns a list of image tensors with different height and width. After using `accelerator.prepare(model, optimizer, dataloader)`, I noticed that accelerate seems to automatically concatenate the tensors during `for step, batch in enumerate(train_dataloader)` iteration, and the size-mismatch leads to Exceptions.
Is there any parameter to prevent the auto-concatenating?
Or, should I remove `dataloader` from `accelerator.prepare` params?
|
https://github.com/huggingface/accelerate/issues/2367
|
closed
|
[] | 2024-01-22T11:26:06Z
| 2024-01-23T03:24:08Z
| null |
feiyangsuo
|
pytorch/serve
| 2,899
|
How torchserve uses grpc in java
|
### 📚 The doc issue
I want to use gRPC from my Java service to call TorchServe's model, but I don't seem to have found any relevant documentation.
### Suggest a potential alternative/fix
_No response_
|
https://github.com/pytorch/serve/issues/2899
|
closed
|
[] | 2024-01-22T08:54:02Z
| 2024-03-20T21:53:35Z
| 2
|
pengxin233
|
huggingface/trl
| 1,264
|
How to train the model and ref_model on multiple GPUs with averaging?
|
For example, I have two RTX 3090 GPUs, and both the model and ref_model are 14-billion-parameter models. I need to distribute these two models evenly across the two cards for training.
This is my code, but I get an error:
```
"""
CUDA_VISIBLE_DEVICES=0 python Sakura_DPO.py \
--base_model Qwen-14B-Chat \
--ref_model Qwen-14B-Chat \
--data-path distilabel-intel-orca-dpo-pairs.json \
--output_dir distilabel-intel-orca-dpo-pairs \
--num_epochs 1 \
--batch_size 16 \
--micro_batch_size 1 \
--learning_rate 1e-6 \
--lora_r 32 \
--lora_alpha 32 \
--lora_dropout 0.05 \
--lr_scheduler 'cosine' \
--warmup_ratio 0.1 \
--cutoff_len 768
##########################
transformers
bitsandbytes
evaluate
peft
transformers_stream_generator
tiktoken
fire
trl
accelerate
deepspeed
"""
import os
import sys
from typing import List
import fire
import torch
import transformers
#import kosy_transformers
from datasets import load_dataset, Dataset
from transformers import TrainerCallback, TrainingArguments, TrainerState, TrainerControl
from transformers.trainer_utils import PREFIX_CHECKPOINT_DIR
from torch.nn import functional as F
from peft import (
LoraConfig,
get_peft_model,
prepare_model_for_kbit_training,
set_peft_model_state_dict
)
from transformers import LlamaForCausalLM, LlamaTokenizer
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from trl import DPOTrainer
import bitsandbytes as bnb
#torch.autograd.set_detect_anomaly(True)
def find_all_linear_names(model):
#cls = bnb.nn.Linear8bitLt
cls = bnb.nn.Linear4bit
lora_module_names = set()
for name, module in model.named_modules():
if isinstance(module, cls):
names = name.split('.')
lora_module_names.add(names[0] if len(names) == 1 else names[-1])
if 'lm_head' in lora_module_names: # needed for 16-bit
lora_module_names.remove('lm_head')
return list(lora_module_names)
#os.environ["TOKENIZERS_PARALLELISM"] = "false"
from accelerate import Accelerator
from accelerate import PartialState
def train(
# model/data params
base_model: str = "",
ref_model: str = "None",
data_path: str = "",
output_dir: str = "",
# training hyperparams
batch_size: int = 128,
micro_batch_size: int = 8,
num_epochs: int = 1,
learning_rate: float = 3e-4,
cutoff_len: int = 4096,
val_set_size: int = 0,
lr_scheduler: str = "cosine",
warmup_ratio: float = 0.1,
# lora hyperparams
lora_r: int = 16,
lora_alpha: int = 16,
lora_dropout: float = 0.05,
# from peft docs: ["q_proj", "k_proj", "v_proj", "o_proj", "fc_in", "fc_out", "wte", "gate_proj", "down_proj", "up_proj"]
lora_target_modules: List[str] = ["gate_proj", "down_proj", "up_proj"],
# llm hyperparams
train_on_inputs: bool = False, # if False, masks out inputs in loss
add_eos_token: bool = False,
group_by_length: bool = False, # faster, but produces an odd training loss curve
gradient_checkpointing: bool = True,
# wandb params
#wandb_project: str = "",
#wandb_run_name: str = "",
#wandb_watch: str = "", # options: false | gradients | all
#wandb_log_model: str = "", # options: false | true
resume_from_checkpoint: str = None, # either training checkpoint or final adapter
prompt_template_name: str = "alpaca",
# NEFTune params
noise_alpha: int = 5
):
if int(os.environ.get("LOCAL_RANK", 0)) == 0:
print(
f"Params using prompt template {prompt_template_name}:\n"
f"base_model: {base_model}\n"
f"ref_model: {ref_model}\n"
f"data_path: {data_path}\n"
f"output_dir: {output_dir}\n"
f"batch_size: {batch_size}\n"
f"micro_batch_size: {micro_batch_size}\n"
f"num_epochs: {num_epochs}\n"
f"learning_rate: {learning_rate}\n"
f"cutoff_len: {cutoff_len}\n"
f"val_set_size: {val_set_size}\n"
f"lr_scheduler: {lr_scheduler}\n"
f"warmup_ratio: {warmup_ratio}\n"
f"lora_r: {lora_r}\n"
f"lora_alpha: {lora_alpha}\n"
f"lora_dropout: {lora_dropout}\n"
f"lora_target_modules: {lora_target_modules}\n"
f"train_on_inputs: {train_on_inputs}\n"
f"add_eos_token: {add_eos_token}\n"
f"group_by_length: {group_by_length}\n"
f"gradient_checkpointing: {gradient_checkpointing}\n"
#f"wandb_project: {wandb_project}\n"
#f"wandb_run_name: {wandb_run_name}\n"
#f"wandb_watch: {wandb_watch}\n"
#f"wandb_log_model: {wandb_log_model}\n"
f"resume_from_checkpoint: {resume_from_checkpoint or False}\n"
)
assert (
base_model
), "Please spe
|
https://github.com/huggingface/trl/issues/1264
|
closed
|
[] | 2024-01-22T07:54:18Z
| 2024-08-27T16:08:49Z
| null |
Minami-su
|
huggingface/transformers.js
| 528
|
Preloading / Lazy loading model before generate requested
|
### Question
Hi @xenova
I've been looking around for this type of functionality for ages and didn't realize you had this type of front-end inferencing locked down in such awesome fashion on browsers. Brilliant!!!
In the demo at https://xenova.github.io/transformers.js/, the model is loaded one-time when sending the first request/inference.
I want to pre-load a model in the background when a user opens the page, but I'm not sure whether there is a method for this in your API at https://cdn.jsdelivr.net/npm/@xenova/transformers@2.14.0, or whether model loading is purely contingent on a first inference.
I've checked your API link: https://huggingface.co/docs/transformers.js/api/env, and nothing there that I can see so I'm assuming it requires a first run.
If it requires a first-run I can think of a couple workarounds, but wanted to check with you before heading down that rabbit hole.
Cheers
|
https://github.com/huggingface/transformers.js/issues/528
|
closed
|
[
"question"
] | 2024-01-20T23:09:13Z
| 2024-01-29T23:23:44Z
| null |
gidzr
|
huggingface/sentence-transformers
| 2,429
|
How to add additional special tokens when using CrossEncoder?
|
I am using cross encoder.
I would like to add a new special token (e.g., '[EOT]') on top of the pre-trained model & tokenizer (e.g., 'bert-base-uncased').
I am wondering what is the best way to do it?
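A hedged sketch of one common approach (not an official sentence-transformers recipe): CrossEncoder keeps its Hugging Face tokenizer and model as attributes, so the token can be registered on both and the embedding matrix resized before fine-tuning.
```python
from sentence_transformers import CrossEncoder

ce = CrossEncoder("bert-base-uncased", num_labels=1)
ce.tokenizer.add_special_tokens({"additional_special_tokens": ["[EOT]"]})
ce.model.resize_token_embeddings(len(ce.tokenizer))  # grow the embedding table for the new token
```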
|
https://github.com/huggingface/sentence-transformers/issues/2429
|
open
|
[] | 2024-01-20T15:52:39Z
| 2024-01-20T16:25:00Z
| null |
mucun1988
|
pytorch/serve
| 2,898
|
Low GPU utilization due to CPU-bound preprocessing
|
I am running torchserve with batch size = 32 and delay = 30ms
My preprocessing is CPU bound and my inference is GPU bound.
The GPU cannot start until the batch is ready on the CPU.
Currently, this leads to a serialized workflow where each stage blocks on the previous one:
* Wait for batch to accumulate in the "front end"
* preprocessing - CPU bound
* inference - GPU bound
Problem
======
I am getting rather low GPU utilization
This is because GPU is idle while batch is being prepared on the CPU.
What I tried
=========
Running multiple workers - Helps, but limited by # of cores and GPU memory.
Using threadpool for preprocessing - helps, but requires having at least 2-3X cores than workers to avoid contention
Question
=======
How can I increase GPU utilization given that I need to wait for the pre-processing on the CPU?
Any best practice or rules of thumb for this case?
Idea
====
Starting processing the batch as it's being built up on the frontend vs. idle until the entire batch is ready on the frontend:
* Start accumulating a new batch
* Immediately call handle() with a *generator* rather than wait for the batch to accumulate
* Start preprocessing on the CPU from the generator (block as long as payloads are not yet available)
* When generator is exhausted, pass the entire batch of tensors to the GPU and infer.
I don't know if this idea is possible without major changes in the core, but putting it out there..
|
https://github.com/pytorch/serve/issues/2898
|
open
|
[] | 2024-01-20T14:01:30Z
| 2024-01-24T05:15:05Z
| 2
|
assapin
|
huggingface/optimum
| 1,658
|
TextStreamer not supported for ORTCausalLM?
|
### System Info
```shell
System: IBM Power10
`5.14.0-362.13.1.el9_3.ppc64le`
OS: RHEL 9.3
Framework versions:
optimum==1.16.2
transformers==4.36.2
torch==2.0.1
onnx==1.13.1
onnxruntime==1.15.1
```
### Who can help?
@JingyaHuang @echarlaix
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction (minimal, reproducible, runnable)
This is a minimal repoducable example based on the official huggingface streamer example:
https://huggingface.co/docs/transformers/internal/generation_utils#transformers.TextStreamer.example
I exported the model before using `optimum-cli`:
`optimum-cli export onnx --model TinyLlama/TinyLlama-1.1B-Chat-v1.0 /data/LLMs/onnx/tinyllama_onnx/`
```python
from transformers import AutoTokenizer, TextStreamer
from optimum.onnxruntime import ORTModelForCausalLM
model_id = "/data/LLMs/onnx/tinyllama_onnx"
tokenizer = AutoTokenizer.from_pretrained(model_id, padding_side="left")
tokenizer.pad_token = tokenizer.eos_token
model = ORTModelForCausalLM.from_pretrained(model_id, use_cache=True, use_merged=False, use_io_binding=False)
text = "My name is William and I live in"
inp = tokenizer(text, return_tensors="pt", padding=True)
streamer = TextStreamer(inp)
_ = model.generate(**inp, streamer=streamer, max_new_tokens=256)
```
Error Message:
```python
Setting `pad_token_id` to `eos_token_id`:2 for open-end generation.
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
File ~/micromamba/envs/gen-ai/lib/python3.10/site-packages/transformers/tokenization_utils_base.py:266, in BatchEncoding.__getattr__(self, item)
265 try:
--> 266 return self.data[item]
267 except KeyError:
KeyError: 'decode'
During handling of the above exception, another exception occurred:
AttributeError Traceback (most recent call last)
Cell In[7], line 5
3 inp = tokenizer(text, return_tensors="pt", padding=True)
4 streamer = TextStreamer(inp)
----> 5 _ = model.generate(**inp, streamer=streamer, max_new_tokens=256)
File ~/micromamba/envs/gen-ai/lib/python3.10/site-packages/torch/utils/_contextlib.py:115, in context_decorator.<locals>.decorate_context(*args, **kwargs)
112 @functools.wraps(func)
113 def decorate_context(*args, **kwargs):
114 with ctx_factory():
--> 115 return func(*args, **kwargs)
File ~/micromamba/envs/gen-ai/lib/python3.10/site-packages/transformers/generation/utils.py:1611, in GenerationMixin.generate(self, inputs, generation_config, logits_processor, stopping_criteria, prefix_allowed_tokens_fn, synced_gpus, assistant_model, streamer, negative_prompt_ids, negative_prompt_attention_mask, **kwargs)
1608 input_ids = inputs_tensor if model_input_name == "input_ids" else model_kwargs.pop("input_ids")
1610 if streamer is not None:
-> 1611 streamer.put(input_ids.cpu())
1613 # 6. Prepare `max_length` depending on other stopping criteria.
1614 input_ids_length = input_ids.shape[-1]
File ~/micromamba/envs/gen-ai/lib/python3.10/site-packages/transformers/generation/streamers.py:97, in TextStreamer.put(self, value)
95 # Add the new token to the cache and decodes the entire thing.
96 self.token_cache.extend(value.tolist())
---> 97 text = self.tokenizer.decode(self.token_cache, **self.decode_kwargs)
99 # After the symbol for a new line, we flush the cache.
100 if text.endswith("\n"):
File ~/micromamba/envs/gen-ai/lib/python3.10/site-packages/transformers/tokenization_utils_base.py:268, in BatchEncoding.__getattr__(self, item)
266 return self.data[item]
267 except KeyError:
--> 268 raise AttributeError
AttributeError:
```
### Expected behavior
I would expect a streaming of tokens instead of waiting for the whole text to be processed/generated upfront :)
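For comparison, a hedged sketch of the documented TextStreamer usage: the streamer is constructed from the tokenizer rather than from the encoded inputs (the missing `decode` attribute in the traceback comes from passing a `BatchEncoding`); whether ORTModelForCausalLM then streams as expected is the open question here.
```python
from transformers import AutoTokenizer, TextStreamer
from optimum.onnxruntime import ORTModelForCausalLM

model_id = "/data/LLMs/onnx/tinyllama_onnx"
tokenizer = AutoTokenizer.from_pretrained(model_id, padding_side="left")
tokenizer.pad_token = tokenizer.eos_token
model = ORTModelForCausalLM.from_pretrained(model_id, use_cache=True, use_io_binding=False)

inp = tokenizer("My name is William and I live in", return_tensors="pt", padding=True)
streamer = TextStreamer(tokenizer, skip_prompt=True)  # built from the tokenizer
_ = model.generate(**inp, streamer=streamer, max_new_tokens=256)
```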
|
https://github.com/huggingface/optimum/issues/1658
|
closed
|
[
"bug"
] | 2024-01-20T11:50:11Z
| 2024-01-29T12:28:40Z
| 1
|
mgiessing
|
huggingface/optimum
| 1,657
|
Clarity on the convert.py for a model to ONNX.py.. documentation issue
|
### Feature request
I need some help understanding how this script is supposed to be run / implemented?
https://github.com/huggingface/optimum/blob/main/optimum/exporters/onnx/convert.py
Questions:
1. Is this already included when I pip install optimum, which is installed using the instructions at:
https://huggingface.co/docs/optimum/onnxruntime/usage_guides/quantization#quantizing-a-model-to-be-used-with-optimums-cli
2. Or is it the script that's called from model.save when inferencing/calling the ONNX model?
3. Or is this a separate script that can be called independently, like the convert.py that xenova has?
Also, in order to run the optimum/exporters/onnx/convert.py script, do I need to download the full exporters folder, just the onnx folder, or can I just copy-paste the script and run it independently?
Much appreciated
### Motivation
Deeper understanding to use the resources in this github
### Your contribution
None
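A hedged note on question 1: optimum/exporters/onnx/convert.py ships inside the installed optimum package rather than as a standalone script, and the usual entry points are `optimum-cli export onnx ...` or the `main_export` helper sketched below (model id and task are illustrative).
```python
from optimum.exporters.onnx import main_export

main_export(
    model_name_or_path="distilbert-base-uncased",  # illustrative model id
    output="onnx_out",
    task="text-classification",
)
```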
|
https://github.com/huggingface/optimum/issues/1657
|
closed
|
[] | 2024-01-20T04:59:10Z
| 2024-02-07T04:13:20Z
| 2
|
gidzr
|
huggingface/candle
| 1,608
|
How to keep the model loaded in memory?
|
Hi guys,
I'm trying to setup a local instance of Phi-2 to use it as an autocomplete provider for my text editor.
The problem that I have is that each time I call the command to complete a text, the files have to be retrieved and the model loaded - which is a lot of time wasted for real time autocompletion.
`/.../candle/target/release/examples$ ./phi --model 2 --quantized --sample-len 12 --prompt "$(cat text-to-complete.md)"`
avx: false, neon: true, simd128: false, f16c: false
temp: 0.00 repeat-penalty: 1.10 repeat-last-n: 64
retrieved the files in 455.042µs
loaded the model in 2.127639167s
starting the inference loop
# The World History
Have you ever wondered how people lived in the past? ...
Do you know how to keep the model loaded in memory?
Like... Is there a possibility to start a server accepting post requests with prompts to complete - or something like this?
Thanks
|
https://github.com/huggingface/candle/issues/1608
|
open
|
[] | 2024-01-19T19:16:54Z
| 2024-01-20T00:27:22Z
| null |
tdkbzh
|
huggingface/peft
| 1,374
|
How to activate, and keep frozen, multiple adapters?
|
Hello all,
I have been working on multiple adapters and part of my project requires that I activate all the loaded adapters. However, they must be frozen. I am running this code:
```python
adapters_items = iter(tqdm.tqdm(adapters.items()))
first_item = next(adapters_items)
model_peft = PeftModel.from_pretrained(model, first_item[1], first_item[0], is_trainable=False)
for adapter_name, model_id in adapters_items:
model_peft.load_adapter(model_id, adapter_name, is_trainable=False)
model_peft.base_model.set_adapter(list(adapters.keys()))
```
After some debugging, I see that the adapters are frozen (requires_grad=False) until the last line where I set the active adapters. After they are set to be active, requires_grad=True.
I see that `set_adapter` calls this function on all the LoraLayers, and how it sets the adapters to trainable.
> https://github.com/huggingface/peft/blob/ebbff4023ad276cbcb2466fd7e99be7d3ae0ae11/src/peft/tuners/tuners_utils.py#L464-L484
How can I set the active adapter(s) while keeping them frozen?
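A hedged workaround sketch, continuing the snippet above: since `set_adapter` marks the active adapters as trainable, `requires_grad` can simply be reset on the adapter parameters afterwards (PEFT's LoRA parameter names contain "lora_").
```python
model_peft.base_model.set_adapter(list(adapters.keys()))
# Re-freeze the adapter weights that set_adapter just made trainable.
for name, param in model_peft.named_parameters():
    if "lora_" in name:
        param.requires_grad = False
```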
|
https://github.com/huggingface/peft/issues/1374
|
closed
|
[] | 2024-01-19T11:28:15Z
| 2024-02-07T11:13:24Z
| null |
EricLBuehler
|
pytorch/kineto
| 857
|
Why PyTorch TensorBoard Profiler (Deprecated)
|
What is the reason for deprecating the PyTorch TensorBoard Profiler?
https://github.com/pytorch/kineto#pytorch-tensorboard-profiler-deprecated
|
https://github.com/pytorch/kineto/issues/857
|
closed
|
[
"question"
] | 2024-01-19T11:26:43Z
| 2024-04-11T08:51:34Z
| null |
GuWei007
|
huggingface/text-generation-inference
| 1,457
|
How to use a finetuned model from my local directory
|
### System Info
text-generation 0.6.1
### Information
- [ ] Docker
- [X] The CLI directly
### Tasks
- [X] An officially supported command
- [ ] My own modifications
### Reproduction
```
from text_generation import InferenceAPIClient
client = InferenceAPIClient( "/mylocalpath/finetunedmodel")
test_prompt = """sample prompt"""
text = client.generate(test_prompt).generated_text
print(text)
```
it shows the
```
NotFoundError: Model "/mylocalpath/finetunedmodel" does not exist
```
This finetuned model is tuned in the base model - Mistral
### Expected behavior
Expect to load the finetuned model from the local path
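A hedged sketch of the distinction the error points at: `InferenceAPIClient` resolves model ids on the hosted Hugging Face API, so a local checkpoint is normally served by launching the server on that directory and querying it with the plain `Client` (port and flags below are illustrative).
```python
from text_generation import Client

# Assumes the server was started first, e.g.:
#   text-generation-launcher --model-id /mylocalpath/finetunedmodel --port 8080
client = Client("http://127.0.0.1:8080")
print(client.generate("sample prompt", max_new_tokens=64).generated_text)
```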
|
https://github.com/huggingface/text-generation-inference/issues/1457
|
closed
|
[
"Stale"
] | 2024-01-19T06:18:41Z
| 2024-03-10T01:45:51Z
| null |
pradeepdev-1995
|
huggingface/transformers
| 28,598
|
what is the correct format of input when fine-tuning GPT2 for text generation with batch input?
|
### System Info
- `transformers` version: 4.33.0
- Platform: Windows-10-10.0.19045-SP0
- Python version: 3.10.12
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.3
- Accelerate version: 0.22.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1+cpu (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@ArthurZucker
@younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I want to fine-tune GPT2 for text generation with batch input. And I use follow code to format batch input:
```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer
tokenizer = GPT2Tokenizer.from_pretrained(r'E:\pythonWork\models\gpt2')
max_length = 8
datas = [
"The dog.",
"The cute dog.",
]
model_input = tokenizer(datas)
print('original input:\n', model_input)
# prepare for batch input
# I add bos token at the start and eos token at the end, and add pad token at the right to pad the sentences to the
# same length. bos_token_id=eos_token_id=50256, and there is not a pad token, so i also use 50256 as pad token.
labels_list = []
for i in range(len(datas)):
input_ids = [tokenizer.bos_token_id] + model_input['input_ids'][i] + [tokenizer.eos_token_id] # add bos and eos token
input_ids = input_ids + max(0, max_length-len(input_ids))*[tokenizer.eos_token_id] # add padding token
attention_mask = [1] + model_input['attention_mask'][i] + [1] # atten bos and eos token
attention_mask = attention_mask + max(0, max_length - len(attention_mask)) * [0] # dose't atten padding token
labels = [tokenizer.bos_token_id] + model_input['input_ids'][i] + [tokenizer.eos_token_id] # take loss for bos and eos
labels = labels + max(0, max_length - len(labels)) * [-100] # padding dose't take loss
model_input['input_ids'][i] = input_ids
model_input['attention_mask'][i] = attention_mask
labels_list.append(labels)
model_input['labels'] = labels_list
print('batch input:\n', model_input)
```
print message
```
original input:
{'input_ids': [[464, 3290, 13], [464, 13779, 3290, 13]],
'attention_mask': [[1, 1, 1], [1, 1, 1, 1]]}
batch input:
{'input_ids': [[50256, 464, 3290, 13, 50256, 50256, 50256, 50256], [50256, 464, 13779, 3290, 13, 50256, 50256, 50256]],
'attention_mask': [[1, 1, 1, 1, 1, 0, 0, 0], [1, 1, 1, 1, 1, 1, 0, 0]],
'labels': [[50256, 464, 3290, 13, 50256, -100, -100, -100], [50256, 464, 13779, 3290, 13, 50256, -100, -100]]}
```
### Expected behavior
My questions:
1. Is the method I use to format the batch input correct?
2. Why can't the GPT2 tokenizer automatically format batch input the way the BERT tokenizer does?
3. In this pre-training [demo](https://huggingface.co/learn/nlp-course/en/chapter7/6?fw=pt#preparing-the-dataset),
I found that it doesn't add bos and eos tokens, and adds the pad token only at the end of the sequence.
So I think that at pre-training time one only needs to add pad tokens to keep the sequence lengths consistent.
But when it comes to fine-tuning, additional eos tokens need to be added, and the eos token needs to take loss because the model needs to learn when to stop generating.
Am I right?
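On question 2, a hedged sketch: the GPT2 tokenizer can batch-pad just like the BERT one once a pad token is assigned (it simply ships without one); masking padded positions to -100 in the labels still has to be done separately, as in the script above.
```python
from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default

batch = tokenizer(
    ["The dog.", "The cute dog."],
    padding="max_length",
    max_length=8,
    return_tensors="pt",
)
print(batch["input_ids"])
print(batch["attention_mask"])
```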
|
https://github.com/huggingface/transformers/issues/28598
|
closed
|
[] | 2024-01-19T06:17:29Z
| 2024-01-22T01:49:43Z
| null |
minmie
|
pytorch/xla
| 6,331
|
How to choose XRT runtime when using Torch/XLA 2.1.0?
|
The PJRT docs say that setting `XRT_TPU_CONFIG` would choose the XRT runtime, but even when I set it I see the following warnings in the logs, and PJRT gets enabled. My model trains faster on XRT but I'd like to upgrade to 2.1.0. Thanks!
```
WARNING:root:PJRT is now the default runtime. For more information, see https://github.com/pytorch/xla/blob/master/docs/pjrt.md
WARNING:root:libtpu.so and TPU device found. Setting PJRT_DEVICE=TPU.
```
|
https://github.com/pytorch/xla/issues/6331
|
closed
|
[] | 2024-01-19T02:59:11Z
| 2024-01-19T23:56:54Z
| null |
andrey-klochkov-liftoff
|
huggingface/transformers
| 28,597
|
How to find or create the `model_state_dict.bin` file for the `convert_llava_weights_to_hf.py` script
|
Hi @younesbelkada,
Following up on the [fix to the LLaVA convert script](https://github.com/huggingface/transformers/pull/28570) and thanks for all the help with the PR!
I encountered some issue with the convert script and wanted to ask about the recommended way to create the `model_state_dict.bin` file specified here: https://github.com/huggingface/transformers/blob/772307be7649e1333a933cfaa229dc0dec2fd331/src/transformers/models/llava/convert_llava_weights_to_hf.py#L74
In order to create the `model_state_dict.bin` I tried something like the following with the original https://github.com/haotian-liu/LLaVA code:
```python
import torch
from llava.model.language_model.llava_llama import LlavaLlamaForCausalLM
# load model
kwargs = {"device_map": "auto", "torch_dtype": torch.float16}
model = LlavaLlamaForCausalLM.from_pretrained("liuhaotian/llava-v1.5-7b", low_cpu_mem_usage=True, **kwargs)
# load vision tower
model.get_vision_tower().load_model()
# Save state dict
torch.save(model.state_dict(), "tmp/hf_models/llava-v1.5-7b/model_state_dict.bin")
```
It works but when I used the convert script I had to make the following changes:
* Remove keys that ended with `.inv_freq` (e.g. `language_model.model.layers.0.self_attn.rotary_emb.inv_freq`)
* Comment out the update to the `model.config.vocab_size` and `model.config.text_config.vocab_size` with the `pad_shape` here: https://github.com/huggingface/transformers/blob/772307be7649e1333a933cfaa229dc0dec2fd331/src/transformers/models/llava/convert_llava_weights_to_hf.py#L96-L97 otherwise, when I would try to load the converted model, it will error with the following:
```python
from transformers import AutoProcessor, LlavaForConditionalGeneration
model_id = "Shopify/llava-1.5-7b"
model = LlavaForConditionalGeneration.from_pretrained(
model_id,
torch_dtype=torch.float16,
low_cpu_mem_usage=True,
).to(0)
```
```console
ValueError: Trying to set a tensor of shape torch.Size([32064, 5120]) in "weight" (which has shape torch.Size([32128, 5120])), this look incorrect.
```
Am I doing something wrong when I create the `model_state_dict.bin` file or am I missing something else?
Thanks again in advance.
|
https://github.com/huggingface/transformers/issues/28597
|
closed
|
[] | 2024-01-19T02:38:31Z
| 2024-01-22T14:28:20Z
| null |
isaac-vidas
|
huggingface/chat-ui
| 708
|
Add support for other API endpoints
|
It would be nice if HuggingChat could be used locally, but calling other remote LLM endpoints other than OpenAI.
For instance, this could be mistral.ai 's API endpoints (same as OpenAI - only difference is model name), or a custom server configured for it.
Perhaps just adding a variable in the .env file defining the server? This seems like an easy feature, I could try implementing it myself if I get the time to look a bit more into the code (for instance, figuring out where the model name can be change)
https://github.com/huggingface/chat-ui/blob/ee47ff37fddb70f78d1ef8a293d8ed3fbcd24ff9/src/lib/server/endpoints/openai/endpointOai.ts#L13C1-L13C65
|
https://github.com/huggingface/chat-ui/issues/708
|
open
|
[
"support",
"models"
] | 2024-01-18T18:27:27Z
| 2024-01-25T17:28:28Z
| 4
|
fbarbe00
|
pytorch/TensorRT
| 2,606
|
❓ [Question] mlp running with torch_tensorrt slower than with inductor?
|
## ❓ Question
I am working inside the nvcr.io/nvidia/pytorch:23.12-py3 container. The performance of torch_tensorrt is worse than inductor.
Details:
example code
```python
import torch
import torch_tensorrt
import torch.nn as nn
class MLPBlocks(nn.Module):
def __init__(self, window_dim, hidden_dim):
super().__init__()
self.mlp_1 = nn.Sequential(
nn.Linear(window_dim, window_dim * 4),
nn.ReLU(),
nn.Linear(window_dim * 4, window_dim),
)
self.mlp_2 = nn.Sequential(
nn.Linear(hidden_dim, hidden_dim),
nn.ReLU(),
nn.Linear(hidden_dim, hidden_dim),
)
def forward(self, x):
x = self.mlp_1(x.transpose(1, 2)).transpose(1, 2)
x = self.mlp_2(x)
return x
class MLP(nn.Module):
def __init__(self, *_args):
super(MLP, self).__init__()
self.hidden_dim = 256
self.window_dim = 50
self.n_feature = 800
self.fc_first = nn.Linear(self.n_feature, self.hidden_dim)
self.fc_last = nn.Linear(self.hidden_dim, 1)
self.blocks = nn.ModuleList([MLPBlocks(window_dim=self.window_dim, hidden_dim=self.hidden_dim) for _ in range(8)])
def forward(self, input_x):
net_x = self.fc_first(input_x.transpose(0, 1))
for mlp_block in self.blocks:
net_x = mlp_block(net_x)
net_x = self.fc_last(torch.mean(net_x, dim=1))
return net_x
def run_model(x, model):
for _ in range(10):
with torch.no_grad():
res = model(x)
torch.cuda.synchronize()
start = torch.cuda.Event(enable_timing=True)
end = torch.cuda.Event(enable_timing=True)
start.record()
for i in range(50):
with torch.no_grad():
res = model(x)
end.record()
torch.cuda.synchronize()
return start.elapsed_time(end)/50
def test_inductor(data, model):
x = data.float().cuda()
m = model.float().cuda()
torch._dynamo.reset()
opt_model = torch.compile(m)
print(f"inductor fp32 time: {run_model(x, opt_model)}")
x = x.half()
m = m.half()
torch._dynamo.reset()
opt_model = torch.compile(m)
print(f"inductor fp16 time: {run_model(x, opt_model)}")
def test_trt_script(data, model):
x = data.float().cuda()
m = model.float().cuda()
script_model = torch.jit.trace(m, x)
trt_ts_model = torch_tensorrt.compile(script_model, ir="torchscript", inputs=[x], enabled_precisions={torch.float})
print(f"trt_script fp32 time: {run_model(x, trt_ts_model)}")
x = x.half()
m = m.half()
script_model = torch.jit.trace(m, x)
trt_ts_model = torch_tensorrt.compile(script_model, ir="torchscript", inputs=[x], enabled_precisions={torch.half})
print(f"trt script fp16 time: {run_model(x, trt_ts_model)}")
def test_trt_dynamo(data, model):
x = data.float().cuda()
m = model.float().cuda()
torch._dynamo.reset()
opt_model = torch_tensorrt.compile(m, ir="torch_compile", inputs=[x], enabled_precisions={torch.float})
print(f"trt_dynamo fp32 time: {run_model(x, opt_model)}")
x = data.half().cuda()
m = model.half().cuda()
torch._dynamo.reset()
opt_model = torch_tensorrt.compile(m, ir="torch_compile", inputs=[x], enabled_precisions={torch.half})
print(f"trt_dynamo fp16 time: {run_model(x, opt_model)}")
if __name__ == "__main__":
model = MLP()
x = torch.randn(50, 5000, 800)
test_inductor(x, model)
test_trt_script(x, model)
test_trt_dynamo(x, model)
```
Result: (screenshot of the measured timings; image not preserved)
## What you have already tried
## Environment
> Build information about Torch-TensorRT can be found by turning on debug messages
- PyTorch Version (e.g., 1.0): 2.2.0a0
- CPU Architecture:
- OS (e.g., Linux): Linux
- How you installed PyTorch (`conda`, `pip`, `libtorch`, source):
- Build command you used (if compiling from source):
- Are you using local sources or building from archives:
- Python version: 3.10
- CUDA version: 12.3
- GPU models and configuration: A100
- Any other relevant information:
## Additional context
|
https://github.com/pytorch/TensorRT/issues/2606
|
open
|
[
"question"
] | 2024-01-18T11:29:42Z
| 2024-01-19T19:27:17Z
| null |
johnzlli
|
huggingface/text-generation-inference
| 1,451
|
How to run text generation inference locally
|
### System Info
I completed the steps for local installation of Text Generation Inference as in here: https://github.com/huggingface/text-generation-inference#local-install
I did all the installation on my local Linux (WSL). The model endpoint that I want to draw inference from is on my EC2. (I trained Mistral 7b model).
When I run `text-generation-launcher --env` , I get the following:
```
(text-generation-inference) tammosta@SEA-1801247735:~/text-generation-inference$ text-generation-launcher --env
error: invalid value 'True' for '--disable-custom-kernels'
[possible values: true, false]
tip: a similar value exists: 'true'
For more information, try '--help'.
(text-generation-inference) tammosta@SEA-1801247735:~/text-generation-inference$ export DISABLE_CUSTOM_KERNELS=true
(text-generation-inference) tammosta@SEA-1801247735:~/text-generation-inference$ text-generation-launcher --env
2024-01-17T19:54:02.802338Z INFO text_generation_launcher: Runtime environment:
Target: x86_64-unknown-linux-gnu
Cargo version: 1.70.0
Commit sha: 0eabc83541225979209ff7183b4b4442e47adf92
Docker label: N/A
nvidia-smi:
N/A
2024-01-17T19:54:02.802403Z INFO text_generation_launcher: Args { model_id: "bigscience/bloom-560m", revision: None, validation_workers: 2, sharded: None, num_shard: None, quantize: None, speculate: None, dtype: None, trust_remote_code: false, max_concurrent_requests: 128, max_best_of: 2, max_stop_sequences: 4, max_top_n_tokens: 5, max_input_length: 1024, max_total_tokens: 2048, waiting_served_ratio: 1.2, max_batch_prefill_tokens: 4096, max_batch_total_tokens: None, max_waiting_tokens: 20, hostname: "0.0.0.0", port: 3000, shard_uds_path: "/tmp/text-generation-server", master_addr: "localhost", master_port: 29500, huggingface_hub_cache: None, weights_cache_override: None, disable_custom_kernels: true, cuda_memory_fraction: 1.0, rope_scaling: None, rope_factor: None, json_output: false, otlp_endpoint: None, cors_allow_origin: [], watermark_gamma: None, watermark_delta: None, ngrok: false, ngrok_authtoken: None, ngrok_edge: None, env: true }
2024-01-17T19:54:02.802591Z INFO download: text_generation_launcher: Starting download process.
2024-01-17T19:54:09.019117Z INFO text_generation_launcher: Download file: model.safetensors
2024-01-17T19:54:51.649553Z INFO text_generation_launcher: Downloaded /home/tammosta/.cache/huggingface/hub/models--bigscience--bloom-560m/snapshots/ac2ae5fab2ce3f9f40dc79b5ca9f637430d24971/model.safetensors in 0:00:42.
2024-01-17T19:54:51.649696Z INFO text_generation_launcher: Download: [1/1] -- ETA: 0
2024-01-17T19:54:52.249742Z INFO download: text_generation_launcher: Successfully downloaded weights.
2024-01-17T19:54:52.250108Z INFO shard-manager: text_generation_launcher: Starting shard rank=0
2024-01-17T19:54:56.525795Z WARN text_generation_launcher: We're not using custom kernels.
2024-01-17T19:54:56.534344Z WARN text_generation_launcher: Could not import Flash Attention enabled models: No module named 'vllm'
2024-01-17T19:55:01.117291Z INFO text_generation_launcher: Server started at unix:///tmp/text-generation-server-0
2024-01-17T19:55:01.167200Z INFO shard-manager: text_generation_launcher: Shard ready in 8.916023926s rank=0
2024-01-17T19:55:01.265832Z INFO text_generation_launcher: Starting Webserver
2024-01-17T19:55:01.366710Z INFO text_generation_router: router/src/main.rs:178: Using the Hugging Face API
2024-01-17T19:55:01.366788Z INFO hf_hub: /home/tammosta/.cargo/registry/src/index.crates.io-6f17d22bba15001f/hf-hub-0.3.2/src/lib.rs:55: Token file not found "/home/tammosta/.cache/huggingface/token"
2024-01-17T19:55:02.294337Z INFO text_generation_router: router/src/main.rs:416: Serving revision ac2ae5fab2ce3f9f40dc79b5ca9f637430d24971 of model bigscience/bloom-560m
2024-01-17T19:55:02.294415Z INFO text_generation_router: router/src/main.rs:234: Using the Hugging Face API to retrieve tokenizer config
2024-01-17T19:55:02.315279Z INFO text_generation_router: router/src/main.rs:277: Warming up model
2024-01-17T19:55:46.211550Z ERROR shard-manager: text_generation_launcher: Shard complete standard error output:
/home/tammosta/anaconda3/envs/text-generation-inference/lib/python3.9/site-packages/bitsandbytes/cextension.py:34: UserWarning: The installed version of bitsandbytes was compiled without GPU support. 8-bit optimizers, 8-bit multiplication, and GPU quantization are unavailable.
warn("The installed version of bitsandbytes was compiled without GPU support. "
config.json: 100%|██████████| 693/693 [00:00<00:00, 189kB/s]
tokenizer_config.json: 100%|██████████| 222/222 [00:00<00:00, 106kB/s]
tokenizer.json: 100%|██████████| 14.5M/14.5M [00:00<00:00, 23.4MB/s]
special_tokens_map.json: 100%|██████████| 85.0/85.0 [00:00<00:00, 34.9kB/s]
/home/tammosta/text-generation-inference/server/text_generation_server/models/custom_modeling/bloom_modeling.py:882: FutureWarning: `position_ids` have no functionalit
|
https://github.com/huggingface/text-generation-inference/issues/1451
|
closed
|
[
"Stale"
] | 2024-01-17T20:12:35Z
| 2024-02-22T01:44:26Z
| null |
tamanna-mostafa
|
huggingface/diffusers
| 6,614
|
How to train text_to_image with images at a resolution of 512x768?
|
I want to finetune SD 1.5 with 50k images, all of which have a resolution of 512x768. But I got an error like this:
`train_text_to_image.py: error: argument --resolution: invalid int value: '[512,768]'`
So, how can I train text_to_image with images at a resolution of 512x768?
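For reference, a minimal sketch of the kind of change this would need: `--resolution` only accepts a single int, so non-square training generally means editing the preprocessing transforms in the script to use an explicit (height, width) pair. The pipeline below is illustrative, not the script's actual code.
```python
from torchvision import transforms

# Illustrative only: resize to (height, width) = (768, 512) instead of a single square size.
train_transforms = transforms.Compose(
    [
        transforms.Resize((768, 512), interpolation=transforms.InterpolationMode.BILINEAR),
        transforms.ToTensor(),
        transforms.Normalize([0.5], [0.5]),
    ]
)
```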
|
https://github.com/huggingface/diffusers/issues/6614
|
closed
|
[] | 2024-01-17T13:51:16Z
| 2024-01-25T14:28:01Z
| null |
lingxuan630
|
huggingface/accelerate
| 2,347
|
How to load model to specified GPU devices?
|
I'm trying a large model LLaVA1.5.
I know that if I set the parameter `device_map='auto'` in `LlavaMPTForCausalLM.from_pretrained`, the model will be loaded on all visible GPUs (FSDP).
Now I hope to load LLaVA1.5 on some of the visible GPUs, still in the FSDP mode, and automatically decide device_map like `device_map='auto'`. Note that the GPUs can be **arbitrarily assigned**, i.e. GPU 2, 3, 4, but not starting with GPU 0.
I try to achieve this by passing a `max_memory`, like
`model = LlavaMPTForCausalLM.from_pretrained(model_path,device_map='auto', max_memory={2: 33271054336, 3: 33271054336, 4: 33271054336})`
However, an error occurred:

I think the loop should be modified?
Or are there any simpler ways to achieve my goal?
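One common workaround, sketched below under the assumption that restricting visibility is acceptable: hide the unwanted GPUs with `CUDA_VISIBLE_DEVICES` before anything touches CUDA, then let `device_map='auto'` spread the model over the remaining devices (which PyTorch renumbers starting from 0). The model class here is a placeholder.
```python
import os

# Must be set before torch/transformers initialize CUDA.
os.environ["CUDA_VISIBLE_DEVICES"] = "2,3,4"

from transformers import AutoModelForCausalLM

# Inside this process the three GPUs now appear as cuda:0, cuda:1, cuda:2.
model = AutoModelForCausalLM.from_pretrained("some-model-id", device_map="auto")  # placeholder model id
```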
|
https://github.com/huggingface/accelerate/issues/2347
|
closed
|
[] | 2024-01-17T09:23:04Z
| 2024-02-26T15:06:36Z
| null |
davidluciolu
|
huggingface/transformers
| 28,546
|
How to use fp32 and qLora to fine-tune models
|
### System Info
I'm using transformers version 4.32.0 and I want to fine-tune the Qwen/Qwen-VL-Chat-Int4 model, but my 1080 Ti GPU doesn't support fp16. When I try to use `training_args.fp16 = False` to modify the parameters, I get the error `dataclasses.FrozenInstanceError: cannot assign to field fp16`. I guess this parameter cannot be changed manually. What can I do, short of changing the GPU, to train in fp32 instead?
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I am using the fine-tuning code given by Qwen:
```python
parser = transformers.HfArgumentParser(
    (ModelArguments, DataArguments, TrainingArguments, LoraArguments)
)
(
    model_args,
    data_args,
    training_args,
    lora_args,
) = parser.parse_args_into_dataclasses()

if getattr(training_args, 'deepspeed', None) and getattr(lora_args, 'q_lora', False):
    training_args.distributed_state.distributed_type = DistributedType.DEEPSPEED

training_args.fp16 = False
compute_dtype = (
    torch.float16
    if training_args.fp16
    else (torch.bfloat16 if training_args.bf16 else torch.float32)
)

local_rank = training_args.local_rank

device_map = None
world_size = int(os.environ.get("WORLD_SIZE", 1))
ddp = world_size != 1
if lora_args.q_lora:
    device_map = {"": int(os.environ.get("LOCAL_RANK") or 0)} if ddp else None
    if len(training_args.fsdp) > 0 or deepspeed.is_deepspeed_zero3_enabled():
        logging.warning(
            "FSDP or ZeRO3 are not incompatible with QLoRA."
        )

# Set RoPE scaling factor
config = transformers.AutoConfig.from_pretrained(
    model_args.model_name_or_path,
    cache_dir=training_args.cache_dir,
    trust_remote_code=True,
)
config.use_cache = False

# Load model and tokenizer
model = transformers.AutoModelForCausalLM.from_pretrained(
    model_args.model_name_or_path,
    config=config,
    cache_dir=training_args.cache_dir,
    device_map=device_map,
    trust_remote_code=True,
    quantization_config=GPTQConfig(
        bits=4, disable_exllama=True
    )
    if training_args.use_lora and lora_args.q_lora
    else None,
)
```
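For reference, a minimal sketch (continuing the snippet above and assuming TrainingArguments behaves as a regular dataclass here) of one way around the frozen-field error: build a modified copy with `dataclasses.replace` instead of assigning to the field, or simply avoid passing `--fp16` at launch so the parser never enables it.
```python
import dataclasses

# Instead of `training_args.fp16 = False` (which raises FrozenInstanceError),
# create a copy of the parsed arguments with the field overridden.
training_args = dataclasses.replace(training_args, fp16=False)
```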
### Expected behavior
I want a solution
|
https://github.com/huggingface/transformers/issues/28546
|
closed
|
[] | 2024-01-17T07:16:11Z
| 2024-02-26T08:04:39Z
| null |
guoyunqingyue
|
pytorch/pytorch
| 117,602
|
If I use torch.compile to compile the whole graph in my own compiler, how do I manage memory in my own compiler?
|
### 🐛 Describe the bug
If I use torch.compile to compile the whole graph in my own compiler, then in the forward stage:
1. If I enable memory reuse in the forward pass, how does the backward pass get the activations to compute the gradients? Is there an example of this in PyTorch?
2. If I disable memory reuse but enable some op fusion (fusing op A + op B into one op), the output of A may live in SRAM, local memory, or global memory, so torch can't get the activation. How are gradients computed in the backward pass then? Is there an example of this in PyTorch?
3. How should I manage memory in my own compiler so that torch.compile can speed up training?
4. How does the backward pass (autograd) get the activations from my own compiler? Must the memory of every op in the graph live in DDR?
### Error logs
none
### Minified repro
None
### Versions
None
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @zou3519 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov @ColinPeppler
|
https://github.com/pytorch/pytorch/issues/117602
|
closed
|
[
"oncall: pt2"
] | 2024-01-17T02:23:18Z
| 2024-01-19T17:55:06Z
| null |
mollon650
|
huggingface/sentence-transformers
| 2,416
|
How to specify class weights in model training?
|
I have a very imbalanced training dataset. Is there a way I could specify class weights (e.g., class 0: 0.1, class 1: 1) for cross-encoder training?
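A minimal sketch, assuming your sentence-transformers version exposes the `loss_fct` argument on `CrossEncoder.fit`: pass a weighted `BCEWithLogitsLoss` so the rare class contributes more to the loss (the weight value and training samples below are placeholders).
```python
import torch
from torch.utils.data import DataLoader
from sentence_transformers import CrossEncoder, InputExample

model = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2", num_labels=1)

train_samples = [
    InputExample(texts=["a sentence", "a similar sentence"], label=1),
    InputExample(texts=["a sentence", "an unrelated sentence"], label=0),
]
train_dataloader = DataLoader(train_samples, shuffle=True, batch_size=2)

# pos_weight up-weights the rare positive class; 10.0 is a placeholder, tune it to your imbalance.
weighted_loss = torch.nn.BCEWithLogitsLoss(pos_weight=torch.tensor([10.0]))
model.fit(train_dataloader=train_dataloader, epochs=1, loss_fct=weighted_loss)
```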
|
https://github.com/huggingface/sentence-transformers/issues/2416
|
closed
|
[] | 2024-01-16T21:00:27Z
| 2024-01-20T15:49:54Z
| null |
mucun1988
|
huggingface/chat-ui
| 697
|
Add streaming support for SageMaker endpoints
|
Would be nice to have support for streaming tokens from SageMaker. Here are some resources from my conversation with @philschmid
### Code sample (Python Code)
```
body = {"inputs": "what is life", "parameters": {"max_new_tokens":400}}
resp = smr.invoke_endpoint_with_response_stream(EndpointName=endpoint_name, Body=json.dumps(body), ContentType="application/json")
event_stream = resp['Body']
for line in LineIterator(event_stream):
    resp = json.loads(line)
    print(resp.get("outputs")[0], end='')
```
### Docs (JS)
https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/client/sagemaker-runtime/command/InvokeEndpointWithResponseStreamCommand/
|
https://github.com/huggingface/chat-ui/issues/697
|
open
|
[
"enhancement",
"back"
] | 2024-01-16T10:59:47Z
| 2024-01-16T11:00:32Z
| 0
|
nsarrazin
|
huggingface/transformers.js
| 522
|
Is it possible to fine-tune the hosted pretrained models?
|
### Question
Hello,
If we have a large dataset in our domain, can we use it to fine-tune the hosted pretrained models (for example, Xenova/nllb-200-distilled-600M) with optimum? Or is it possible to convert our own translation PyTorch model to ONNX in a way that is compatible with Transformers.js?
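Regarding the second part, a minimal sketch using optimum's documented export API (the local paths are placeholders); the resulting ONNX files would still need the folder layout and quantization that the Transformers.js conversion script normally produces.
```python
from optimum.onnxruntime import ORTModelForSeq2SeqLM
from transformers import AutoTokenizer

# Export a fine-tuned translation checkpoint to ONNX; "my-finetuned-nllb" is a placeholder path.
model = ORTModelForSeq2SeqLM.from_pretrained("my-finetuned-nllb", export=True)
tokenizer = AutoTokenizer.from_pretrained("my-finetuned-nllb")

model.save_pretrained("my-finetuned-nllb-onnx")
tokenizer.save_pretrained("my-finetuned-nllb-onnx")
```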
|
https://github.com/huggingface/transformers.js/issues/522
|
open
|
[
"question"
] | 2024-01-16T03:55:39Z
| 2024-01-16T12:54:53Z
| null |
lhohoz
|
huggingface/datasets
| 6,594
|
IterableDataset sharding logic needs improvement
|
### Describe the bug
The sharding of IterableDatasets with respect to distributed and dataloader worker processes appears problematic with significant performance traps and inconsistencies wrt to distributed train processes vs worker processes.
Splitting across num_workers (per train process loader processes) and world_size (distributed training processes) appears inconsistent.
* worker split: https://github.com/huggingface/datasets/blob/9d6d16117a30ba345b0236407975f701c5b288d4/src/datasets/iterable_dataset.py#L1266-L1283
* distributed split: https://github.com/huggingface/datasets/blob/9d6d16117a30ba345b0236407975f701c5b288d4/src/datasets/iterable_dataset.py#L1335-L1356
In the case of the distributed split, there is a modulus check that flips between two very different behaviours, why is this different than splitting across the data loader workers? For IterableDatasets the DataLoaders worker processes are independent, so whether it's workers within one train process or across a distributed world the shards should be distributed the same, across `world_size * num_worker` independent workers in either case...
Further, the fallback case when the `n_shards % world_size == 0` check fails is a rather extreme change. I argue it is not desirable to do that implicitly, it should be an explicit case for specific scenarios (ie reliable validation). A train scenario would likely be much better handled with improved wrapping / stopping behaviour to eg also fix #6437. Changing from stepping shards to stepping samples means that every single process reads ALL of the shards. This was never an intended default for sharded training, shards gain their performance advantage in large scale distributed training by explicitly avoiding the need to have every process overlapping in the data they read, by default, only the data allocated to each process via their assigned shards should be read in each pass of the dataset.
Using a large scale CLIP example, some of the larger datasets have 10-20k shards across 100+TB of data. Training with 1000 GPUs we are switching between reading 100 terabytes per epoch to 100 petabytes if say change 20k % 1000 and drop one gpu-node to 20k % 992.
The 'step over samples' case might be worth the overhead in specific validation scenarios where guarantees of at-least/at-most-once samples seen are more important and do not make up a significant portion of train time, or are done in smaller world sizes outside of train.
### Steps to reproduce the bug
N/A
### Expected behavior
We have an iterable dataset with N shards; to split across workers (see the sketch after this list):
* shuffle shards (same seed across all train processes)
* step shard iterator across distributed processes
* step shard iterator across dataloader worker processes
* shuffle samples in every worker via shuffle buffer (different seed in each worker, but ideally controllable (based on base seed + worker id + epoch).
* end up with (possibly uneven) number of shards per worker but each shard only ever accessed by 1 worker per pass (epoch)
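A minimal sketch of the round-robin shard assignment described above (not the datasets implementation):
```python
def shards_for_worker(shard_ids, rank, world_size, worker_id, num_workers):
    """Assign each shard to exactly one of the world_size * num_workers independent workers."""
    global_worker = rank * num_workers + worker_id
    total_workers = world_size * num_workers
    return shard_ids[global_worker::total_workers]
```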
### Environment info
N/A
|
https://github.com/huggingface/datasets/issues/6594
|
open
|
[] | 2024-01-15T22:22:36Z
| 2025-11-10T14:55:20Z
| 7
|
rwightman
|
pytorch/pytorch
| 117,490
|
What is the next plan of FP8 support in PyTorch?
|
### 🚀 The feature, motivation and pitch
Now PyTorch only supports FP8 data type conversion without scaling. The accuracy is not that good.
What is the plan of FP8 support in PyTorch? Will FP8 DelayedScaling from TransformerEngine be taken into account? Thanks!
### Alternatives
_No response_
### Additional context
_No response_
cc @svekars @brycebortree @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @jgong5 @Xia-Weiwen @leslie-fang-intel @albanD @kadeng
|
https://github.com/pytorch/pytorch/issues/117490
|
closed
|
[
"module: docs",
"oncall: quantization",
"triaged",
"actionable",
"module: floatx (formerly float8)"
] | 2024-01-15T10:02:37Z
| 2024-01-26T01:48:45Z
| null |
yanbing-j
|
huggingface/alignment-handbook
| 103
|
Does QLora DPO Training support reference model?
|
Hello! Thanks for your awesome work!
I ran into an issue when running DPO with QLoRA. I notice there is a setting:
```
if model_args.use_peft is True:
    ref_model = None
    ref_model_kwargs = None
```
I also notice that `use_peft` is set to true only in config_qlora.yaml. This means that if we use QLoRA for DPO training, we do not use a reference model at all.
I wonder if this code supports QLoRA training with a reference model? Thanks!
|
https://github.com/huggingface/alignment-handbook/issues/103
|
open
|
[] | 2024-01-15T09:22:32Z
| 2024-01-15T09:27:08Z
| 0
|
Harry-mic
|
huggingface/swift-coreml-diffusers
| 91
|
How to import new .SAFETENSORS model?
|
How can I import a safetensors-formatted model into the diffusers app?
I tried copying the safetensors file to the folder loaded by the dropdown menu, but when I relaunch the app, it doesn't show the new model in the menu.
|
https://github.com/huggingface/swift-coreml-diffusers/issues/91
|
open
|
[] | 2024-01-15T08:24:53Z
| 2024-07-07T09:03:27Z
| null |
mcandre
|
huggingface/candle
| 1,585
|
Extension request: How to construct Tensor for n-dimensional Vec
|
How do I best create a Tensor from a `&Vec<Vec<u8>>`? Everything above 1D is quite hard to manage with index-based value setting.
|
https://github.com/huggingface/candle/issues/1585
|
closed
|
[] | 2024-01-14T17:46:57Z
| 2025-11-23T20:22:09Z
| null |
BDUG
|
huggingface/nanotron
| 21
|
Save checkpoint before terminating the training run
|
Why don't we save a model checkpoint before terminating the training run? [[link]](https://github.com/huggingface/nanotron/blob/fd99571e3769cb1876d5c9d698b512e85a6e4896/src/nanotron/trainer.py#L429)
<img width="769" alt="image" src="https://github.com/huggingface/nanotron/assets/22252984/9eb78431-4df9-4795-8ac7-6947f71f6bae">
|
https://github.com/huggingface/nanotron/issues/21
|
closed
|
[
"question"
] | 2024-01-13T11:28:20Z
| 2024-01-13T11:28:54Z
| null |
xrsrke
|
huggingface/accelerate
| 2,331
|
How to share non-tensor data between processes?
|
I am running a training on 2 GPUs on the same machine. I need a way to share some float values and maybe dicts between the two processes. I saw that there is a `gather` method, but this only works for tensors.
Is there any way to do inter-process communication that is not directly related to the training?
EDIT: What I want to do is log the AVERAGE training error of my model after each epoch. The problem is that the process I am logging from only sees the training error that was computed in this process
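A minimal sketch of one way to do this, assuming the value can be expressed as a number: wrap it in a tensor, gather it, and average on the main process.
```python
import torch
from accelerate import Accelerator

accelerator = Accelerator()
epoch_error = 0.123  # placeholder for the per-process training error

local_error = torch.tensor([epoch_error], device=accelerator.device)
all_errors = accelerator.gather(local_error)  # shape: (num_processes,)
if accelerator.is_main_process:
    print("average training error:", all_errors.mean().item())
```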
|
https://github.com/huggingface/accelerate/issues/2331
|
closed
|
[] | 2024-01-12T19:13:27Z
| 2024-01-16T11:36:34Z
| null |
simonhessner
|
huggingface/transformers
| 28,476
|
How to avoid the peak RAM memory usage of a model when I want to load to GPU
|
### System Info
- `transformers` version: 4.36.2
- Platform: Linux-5.10.201-191.748.amzn2.x86_64-x86_64-with-glibc2.31
- Python version: 3.10.13
- Huggingface_hub version: 0.20.2
- Safetensors version: 0.4.1
- Accelerate version: 0.26.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
I am using transformers to load a model onto the GPU, and I observed that before the model is moved to the GPU there is a peak of RAM usage that is later freed. I assume the model is loaded into CPU memory before being moved to the GPU.
On the GPU the model takes around 4Gi, and to load it I need more than 7Gi of RAM, which seems odd.
Is there a way to load it directly to the GPU without spending so much RAM?
I have tried the `low_cpu_mem_usage` parameter and setting `device_map` to `cuda` and `auto`, but no luck.
```python
from transformers import AutoModel; m = AutoModel.from_pretrained("jinaai/jina-embeddings-v2-base-en", trust_remote_code=True, low_cpu_mem_usage=True, device_map="auto")
```
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
from transformers import AutoModel; m = AutoModel.from_pretrained("jinaai/jina-embeddings-v2-base-en", trust_remote_code=True, low_cpu_mem_usage=True, device_map="auto")
```
### Expected behavior
Not having such a memory peak
|
https://github.com/huggingface/transformers/issues/28476
|
closed
|
[] | 2024-01-12T11:39:52Z
| 2024-02-12T08:08:17Z
| null |
JoanFM
|
huggingface/datasets
| 6,584
|
np.fromfile not supported
|
How do I make np.fromfile usable in the same way as np.load?
```python
def xnumpy_fromfile(filepath_or_buffer, *args, download_config: Optional[DownloadConfig] = None, **kwargs):
    import numpy as np

    if hasattr(filepath_or_buffer, "read"):
        return np.fromfile(filepath_or_buffer, *args, **kwargs)
    else:
        filepath_or_buffer = str(filepath_or_buffer)
        return np.fromfile(xopen(filepath_or_buffer, "rb", download_config=download_config).read(), *args, **kwargs)
```
This does not work.
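For what it's worth, a minimal sketch of a possible workaround: `np.fromfile` wants a real file or path, so for a remote or streamed file one option is to read the bytes and decode them with `np.frombuffer` instead (the dtype is an assumption you would have to supply).
```python
import numpy as np

def numpy_fromfile_like(fileobj, dtype=np.float32, count=-1):
    """Decode raw binary data from an already-open (possibly remote) file object."""
    return np.frombuffer(fileobj.read(), dtype=dtype, count=count)
```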
|
https://github.com/huggingface/datasets/issues/6584
|
open
|
[] | 2024-01-12T09:46:17Z
| 2024-01-15T05:20:50Z
| 6
|
d710055071
|
pytorch/audio
| 3,725
|
Resampling at arbitrary time steps
|
### 🚀 The feature
Currently, `torchaudio.functional.resample` can only resample at regular time points and the period is determined by `orig_freq` and `new_freq`.
Is it possible to resample at arbitrary time steps?
So rather than specifying a resampling ratio, we would specify an array of time steps.
### Motivation, pitch
I would like to be able to model jitter in an ADC which can be modelled by a slightly varying sample rate. If you integrate a sample rate curve (which isn't constant), you get irregular time steps. A function such as the one suggested above would allow me to resample using these time steps and model a jittery ADC.
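For illustration, a minimal sketch of the idea (not torchaudio code): evaluate the signal at arbitrary, possibly jittered, time stamps with linear interpolation; a proper implementation would presumably use windowed-sinc interpolation instead.
```python
import torch

def sample_at_times(signal: torch.Tensor, sample_rate: float, times: torch.Tensor) -> torch.Tensor:
    """Linearly interpolate a 1-D signal at arbitrary time stamps (in seconds)."""
    positions = times * sample_rate                        # fractional sample indices
    idx0 = positions.floor().long().clamp(0, signal.numel() - 2)
    frac = (positions - idx0).clamp(0.0, 1.0)
    return (1 - frac) * signal[idx0] + frac * signal[idx0 + 1]
```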
### Alternatives
I've rolled out my own function but it's not super efficient. Some experts might do a better job.
### Additional context
_No response_
|
https://github.com/pytorch/audio/issues/3725
|
open
|
[] | 2024-01-12T09:20:10Z
| 2024-01-16T18:52:40Z
| 5
|
pfeatherstone
|
huggingface/distil-whisper
| 73
|
I want to confirm how the knowledge distillation is implemented
|
I don't quite understand how knowledge distillation is implemented here.
Whisper is trained autoregressively on 680,000 hours of unlabeled data. According to the fourth section of the paper, our model is trained on 21,170 hours of data with pseudo-labels generated by Whisper, with the first and 32nd layer parameters frozen based on Whisper. **This means that our model only needs 21,170 hours of pseudo-labeled data and a model structure similar to Whisper (freezing the first and 32nd layers, using weighted KL divergence and label cross-entropy) to achieve good results?**
If this is the case, it is indeed a significant discovery, indicating that we can always reduce the model's parameters and inference time after pre-training using similar methods, without significant loss of accuracy.
Thank you in advance
|
https://github.com/huggingface/distil-whisper/issues/73
|
open
|
[] | 2024-01-12T07:43:21Z
| 2024-01-17T16:57:31Z
| null |
hxypqr
|
huggingface/transformers.js
| 516
|
How to access attentions matrix for MarianMT?
|
### Question
Hey, I've been trying to access the attentions output by the MarianMT like so (please excuse the unorthodox config argument, tidying up is next on my todo list):
```
const model_name = "Xenova/opus-mt-en-fr";
const tokenizer = await MarianTokenizer.from_pretrained(model_name, {
config: {
output_hidden_states: true,
output_attentions: true
}
})
const tokens = (await tokenizer(text)).input_ids;
const model = await MarianMTModel.from_pretrained(model_name, {
config: {
model_type: 'marian',
is_encoder_decoder: true,
_name_or_path: 'Helsinki-NLP/opus-mt-en-fr',
_num_labels: 3,
activation_dropout: 0,
activation_function: 'swish',
add_bias_logits: false,
add_final_layer_norm: false,
architectures: ['MarianMTModel'],
attention_dropout: 0,
bad_words_ids: [[Array]],
bos_token_id: 0,
classif_dropout: 0,
classifier_dropout: 0,
d_model: 512,
decoder_attention_heads: 8,
decoder_ffn_dim: 2048,
decoder_layerdrop: 0,
decoder_layers: 6,
decoder_start_token_id: 59513,
decoder_vocab_size: 59514,
dropout: 0.1,
encoder_attention_heads: 8,
encoder_ffn_dim: 2048,
encoder_layerdrop: 0,
encoder_layers: 6,
eos_token_id: 0,
forced_eos_token_id: 0,
gradient_checkpointing: false,
id2label: { '0': 'LABEL_0', '1': 'LABEL_1', '2': 'LABEL_2' },
init_std: 0.02,
label2id: { LABEL_0: 0, LABEL_1: 1, LABEL_2: 2 },
max_length: 512,
max_position_embeddings: 512,
normalize_before: false,
normalize_embedding: false,
num_beams: 4,
num_hidden_layers: 6,
pad_token_id: 59513,
scale_embedding: true,
share_encoder_decoder_embeddings: true,
static_position_embeddings: true,
transformers_version: '4.34.0.dev0',
use_cache: true,
vocab_size: 59514,
output_hidden_states: true,
output_cross_attentions: true,
output_attentions: true
}
})
const translated = await model.generate(tokens)
const result = tokenizer.decode(translated[0], { skip_special_tokens: true })
console.log((await model.getAttentions(translated)))
```
I'm then getting the following error when I run the code:
```
Error: `output_attentions` is true, but the model did not produce cross-attentions. This is most likely because the model was not exported with `output_attentions=True`.
```
I've looked around but haven't been able to find out what is meant by the reference to exporting the model. How would I go about fixing this?
|
https://github.com/huggingface/transformers.js/issues/516
|
open
|
[
"question"
] | 2024-01-11T20:16:42Z
| 2024-01-15T08:21:17Z
| null |
DaveTJones
|
huggingface/text-generation-inference
| 1,437
|
How to run text-generation-benchmark without the graph and get the output data into a csv file or a json file?
|
### Feature request
text-generation-benchmark has been an amazing tool for understanding model deployments better. Is there a way to run it without generating the graph and get the results in CSV format?
### Motivation
Motivation is that we want to use this tool with another program which gets the results from the binary.
### Your contribution
I'm not sure. Looks like an addition to the TGI-benchmark parameter and it can be a potential PR
|
https://github.com/huggingface/text-generation-inference/issues/1437
|
closed
|
[
"Stale"
] | 2024-01-11T15:33:37Z
| 2024-02-17T01:44:18Z
| null |
pranavthombare
|
huggingface/transformers.js
| 515
|
ONNX optimisations for edge deployment
|
### Question
Hello, I'm exploring whether I can extract any more performance from my deployment of transformers.js. I appreciate that the answer to this is nuanced and best answered by profiling, but I would value the opinions of experts that have walked this path before using this lib.
In my specific use case I know that I will always be deploying to the latest Chrome running on Windows systems that live in VMs and do not have a dedicated GPU (i.e. vanilla corporate desktops).
In the current util, no optimization flag is passed during the export, so by default the models aren't optimized. https://github.com/xenova/transformers.js/blob/main/scripts/convert.py#L426
The main export takes an AutoOptimization level as a string, and given no GPUs I would be restricted to O3.
https://github.com/huggingface/optimum/blob/main/optimum/exporters/onnx/__main__.py#L567
## Questions:
1. Is there any reason I wouldn't want to optimize a model used with transformers.js?
2. Auto-optimize seems to detect BERT automatically.
https://github.com/microsoft/onnxruntime/blob/main/onnxruntime/python/tools/transformers/fusion_options.py#L56
Is there any reason that I should modify transformers.js convert.py to manually call ORTOptimizer with an OptimizationConfig between steps 1 & 2 (a sketch of this follows below) instead of passing a level string in step 1?
https://github.com/xenova/transformers.js/blob/main/scripts/convert.py#L429
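For context, a hedged sketch of the manual route mentioned in question 2, using optimum's documented classes (paths are placeholders):
```python
from optimum.onnxruntime import ORTModelForFeatureExtraction, ORTOptimizer
from optimum.onnxruntime.configuration import OptimizationConfig

# Load an already-exported ONNX model, then run the optimizer explicitly.
model = ORTModelForFeatureExtraction.from_pretrained("my-exported-onnx-model")
optimizer = ORTOptimizer.from_pretrained(model)

# CPU-only deployment, so stay below the GPU-specific optimization levels.
config = OptimizationConfig(optimization_level=2)
optimizer.optimize(save_dir="my-optimized-onnx-model", optimization_config=config)
```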
|
https://github.com/huggingface/transformers.js/issues/515
|
closed
|
[
"question"
] | 2024-01-11T13:49:59Z
| 2025-10-13T04:59:32Z
| null |
georgedavies019
|
pytorch/serve
| 2,894
|
How can I implement batch inference in my model?
|
### 📚 The doc issue
I read the docs, and I see this sentence:
> The frontend then tries to aggregate the batch-size number of requests and send it to the backend.
How does it work?
In my case, my batch_size is 4 and max_batch_delay is 5000. I sent 2 requests to torchserve simultaneously, but my handler log shows that torchserve ran preprocess, inference and postprocess twice. This is not what I expected. How can I achieve batch inference with my model?
My model has 3 input tensors, with shapes [12568, 20, 4], [12568], and [12568, 4]. When the batch size is 2, the shapes are [12568 x 2, 20, 4], [12568 x 2], and [12568 x 2, 4].
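A hedged sketch of how a batched handler usually looks (the request field names "boxes"/"scores"/"anchors" are made up for illustration): with batching enabled, the handler receives a list of requests, and a `BaseHandler` subclass can concatenate them along dim 0 itself.
```python
import torch
from ts.torch_handler.base_handler import BaseHandler

class MyBatchedHandler(BaseHandler):
    def preprocess(self, requests):
        boxes, scores, anchors = [], [], []
        for req in requests:
            payload = req.get("body") or req.get("data")
            boxes.append(torch.as_tensor(payload["boxes"]))      # [12568, 20, 4] per request
            scores.append(torch.as_tensor(payload["scores"]))    # [12568] per request
            anchors.append(torch.as_tensor(payload["anchors"]))  # [12568, 4] per request
        # Stack all requests in the batch along dim 0 before inference.
        return torch.cat(boxes), torch.cat(scores), torch.cat(anchors)
```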
### Suggest a potential alternative/fix
_No response_
|
https://github.com/pytorch/serve/issues/2894
|
closed
|
[] | 2024-01-11T10:39:58Z
| 2024-01-12T05:28:13Z
| 5
|
steelONIONknight
|
huggingface/alignment-handbook
| 98
|
Is QLoRA better than finetuning?
|
The results reported in https://github.com/huggingface/alignment-handbook/pull/88 suggest that QLoRA is better for both SFT and DPO. Is this accurate, and have people seen this happen in any other settings?
|
https://github.com/huggingface/alignment-handbook/issues/98
|
open
|
[] | 2024-01-10T21:04:11Z
| 2024-01-10T21:04:11Z
| 0
|
normster
|
huggingface/transformers.js
| 514
|
Is it possible to use adapters from the hub?
|
### Question
Hi, would it be possible to use adapters on top of a model using the js library?
|
https://github.com/huggingface/transformers.js/issues/514
|
open
|
[
"question"
] | 2024-01-10T20:57:03Z
| 2024-01-11T16:01:11Z
| null |
vabatta
|
huggingface/setfit
| 468
|
How effective is it to use your own pre-trained ST model based on an NLI dataset?
|
Hi !
I'm interested in using SetFit to classify text extracted from hotel reviews (Booking, TripAdvisor, etc.), but I would like to add domain knowledge to my Sentence Transformers body.
For example, this [paper](https://arxiv.org/abs/2202.01924) uses a Sentence Transformers model trained on a custom NLI dataset (RNLI, Review Natural Language Inference) to extract product features without training on labeled data. The results show that training on a domain-based NLI dataset is better than MNLI for zero-shot aspect extraction.
So, is training my own Sentence Transformers model (or fine-tuning a pre-trained one) on a domain-based NLI dataset a good approach to improve the performance of SetFit?
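For what it's worth, SetFit lets you point it at any Sentence Transformers checkpoint, so a domain-adapted body can be tried directly; a minimal sketch, with a placeholder model path:
```python
from setfit import SetFitModel

# Use a domain-adapted Sentence Transformers checkpoint as the SetFit body.
model = SetFitModel.from_pretrained("my-domain-adapted-st-model")
```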
Thank you in advance
|
https://github.com/huggingface/setfit/issues/468
|
closed
|
[] | 2024-01-10T19:25:09Z
| 2024-02-09T14:55:46Z
| null |
azaismarc
|
huggingface/transformers.js
| 512
|
What do you all think about having a "Transformers.js Community" in Hugging Face?
|
### Question
After checking how [MLX Community on Hugging Face](https://huggingface.co/mlx-community) is working, I thought it could be a good idea to have one for Transformers.js.
One of the key benefits of a community is "multiple curators": anyone in the community would have the ability to edit the repositories, which makes it easier to maintain the converted models and ensure that they have more detailed Readmes.
Also, having multiple curators allows for quicker resolution of issues with the model configuration. Members of the community won't need to create a pull request to request changes or wait for someone to approve the PR, which is especially important for urgent fixes.
Another good move the MLX community made was releasing a script that automatically uploads models to the organization in Hugging Face, which makes it easy for anyone to convert and share their favorite models.
I would love to hear the opinions of others.
|
https://github.com/huggingface/transformers.js/issues/512
|
closed
|
[
"question"
] | 2024-01-10T16:03:51Z
| 2025-05-10T21:06:54Z
| null |
felladrin
|
huggingface/candle
| 1,552
|
How to pass the attention_mask to Bert model in examples?
|
I am trying to run `shibing624/text2vec-base-chinese` with candle. The tokenizer returns `input_ids`, `attention_mask`, and `token_type_ids`, but the BertModel in candle only takes two parameters.
https://github.com/huggingface/candle/blob/main/candle-examples/examples/bert/main.rs#L170
```python
from transformers import BertTokenizer, BertModel
import torch
# Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Load model from HuggingFace Hub
tokenizer = BertTokenizer.from_pretrained('shibing624/text2vec-base-chinese')
model = BertModel.from_pretrained('shibing624/text2vec-base-chinese')
sentences = ['如何更换花呗绑定银行卡', '花呗更改绑定银行卡']
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
|
https://github.com/huggingface/candle/issues/1552
|
closed
|
[] | 2024-01-10T11:57:55Z
| 2024-01-10T12:38:54Z
| null |
lz1998
|
huggingface/sentence-transformers
| 2,400
|
New release of library?
|
I was wondering when you will be releasing a new version of the library that includes the latest changes in the main branch? We are eagerly awaiting one inorder to consume the fix for this issue https://github.com/UKPLab/sentence-transformers/issues/1800
|
https://github.com/huggingface/sentence-transformers/issues/2400
|
closed
|
[
"question"
] | 2024-01-09T20:42:53Z
| 2024-01-29T10:00:33Z
| null |
vineetsajuTR
|
pytorch/serve
| 2,892
|
Setting log level of handler
|
### 📚 The doc issue
I need to set the logging level of the handler to debug; I want to see all the logs (including torch's). The docs don't mention much other than setting the log level for TorchServe itself (the log4j ones).
I tried setting the config inside the handler, but it didn't work:
```python
logging.basicConfig(level=logging.DEBUG)
```
### Suggest a potential alternative/fix
_No response_
|
https://github.com/pytorch/serve/issues/2892
|
closed
|
[
"question",
"triaged"
] | 2024-01-09T18:04:49Z
| 2024-06-07T21:39:33Z
| null |
hariom-qure
|
huggingface/peft
| 1,334
|
When we use the inject_adapter_in_model method to inject the adapters directly into a PyTorch model, how do we merge the LoRA weights with the base model at inference time?
|
https://github.com/huggingface/peft/issues/1334
|
closed
|
[] | 2024-01-09T12:30:52Z
| 2024-02-17T15:03:59Z
| null |
mikiyukio
|
|
huggingface/datasets
| 6,570
|
No online docs for 2.16 release
|
We do not have the online docs for the latest minor release 2.16 (2.16.0 nor 2.16.1).
In the online docs, the latest version appearing is 2.15.0: https://huggingface.co/docs/datasets/index

|
https://github.com/huggingface/datasets/issues/6570
|
closed
|
[
"bug",
"documentation"
] | 2024-01-09T07:43:30Z
| 2024-01-09T16:45:50Z
| 7
|
albertvillanova
|
pytorch/xla
| 6,274
|
Inconsistent behaviour with `xm.xrt_world_size()` and/or `xm.get_xla_supported_devices()`
|
## 🐛 Bug
I noticed that when I execute some code (see further below) on a TPU VM v3-8 (inside a Python venv 3.10.12 + torch 2.1.2+cu121 + torch_xla 2.1.0) uncommenting each time either the `xm.xrt_world_size()` part (**Output 1**) or `xm.get_xla_supported_devices()` (**Output 2**) or none of them - both commented - (**Output 3**) I get different outputs, warnings and sometimes errors.
**P.S.: I've been trying for a few days to work out how to perform multicore processing on a single TPU v3-8 device, with the ultimate goal of performing distributed training across 5 TPU v3-8 devices, but unfortunately I'm still stuck on the most basic operations and struggling to understand how things work in practice. Any help is really appreciated.**
## To Reproduce
1. Created 5 TPU VMs as queued resources using the following script (i=1,2,3,4,5):
```
gcloud alpha compute tpus queued-resources create queued-resource-v3-8-$i \
--node-id=my-tpu-vm-v3-8-$i \
--project=my-tpu-project \
--zone=europe-west4-a \
--accelerator-type=v3-8 \
--runtime-version=tpu-ubuntu2204-base \
--service-account=my-service account
```
2. Then, once I got access to them I checked their status and once ready I access each one using e.g. (VM1):
`gcloud compute tpus tpu-vm ssh my-tpu-vm-v3-8-1 --zone=europe-west4-a`
3. I connected to each Cloud TPU VM and run the following startup script (some env variables are the same for all VMs such as `MASTER_ADDR`, `MASTER_PORT` while others are specific to each VM such as `TPU_IP_ADDRESS` and `RANK`):
```
#!/bin/bash
# Check if both TPU IP and TPU NAME arguments are provided
if [ "$#" -ne 3 ]; then
echo "Usage: setup_tpu.sh <TPU-IP-ADDRESS> <TPU-NAME> <ENV_PATH>"
exit 1
fi
# Read TPU IP address, TPU NAME from the arguments and path to the virtual environment
TPU_IP=$1
TPU_NAME=$2
ENV_PATH=$3
# Install python3-venv for creating virtual environments
sudo apt-get update
sudo apt-get install -y python3.10-venv
# Check if the virtual environment already exists
if [ -d "$ENV_PATH" ]; then
    echo "Virtual environment '$ENV_PATH' already exists. Deleting it."
    sudo rm -rf $ENV_PATH
fi
echo "Creating a new virtual environment '$ENV_PATH'."
# Create a Python virtual environment
python3 -m venv $ENV_PATH
source $ENV_PATH/bin/activate
# Upgrade pip
pip install --upgrade pip
# Install PyTorch and Torch XLA
pip install torch~=2.1.0 torch_xla[tpu]~=2.1.0 torchvision -f https://storage.googleapis.com/libtpu-releases/index.html
# Install other dependencies
pip install numpy pandas notebook tensorboard tqdm altair datasets tokenizers torchmetrics jupyter ipywidgets google-cloud-storage
# The script clones the PyTorch/XLA repository. We clone the branch r2.1. If you need a different version, adjust the branch name accordingly.
git clone -b r2.1 https://github.com/pytorch/xla.git
# empty .bash_profile Before Adding New Variables
> ~/.bash_profile
# Set TPU and GCS related environment variables
**echo "export PJRT_DEVICE=TPU" >> ~/.bash_profile**
echo "export TPU_NAME=$TPU_NAME" >> ~/.bash_profile
echo "export TPU_IP_ADDRESS=$TPU_IP" >> ~/.bash_profile
echo "export XRT_TPU_CONFIG='tpu_worker;0;$TPU_IP:8470'" >> ~/.bash_profile
echo "export BUCKET_NAME='my-bucket'" >> ~/.bash_profile
echo "export GCS_MOUNTED_BUCKET=\"/mnt/buckets/$BUCKET_NAME\"" >> ~/.bash_profile
echo "export HF_DATASETS_CACHE=\"\$GCS_MOUNTED_BUCKET/huggingface_datasets_cache\"" >> ~/.bash_profile
GCSFUSE_REPO=$(lsb_release -c -s)
echo "export GCSFUSE_REPO='gcsfuse-$GCSFUSE_REPO'" >> ~/.bash_profile
# Environment variables for distributed training
echo "export MASTER_ADDR='10.164.0.4'" >> ~/.bash_profile # Replace with the IP address of the master VM (in our case it is VM1)
echo "export MASTER_PORT=9230" >> ~/.bash_profile # Replace with your chosen port (see GC Console > VPC network > Firewall > Protocols/ports)
echo "export WORLD_SIZE=40" >> ~/.bash_profile # Total number of TPU cores across all VMs
echo "export RANK=0" >> ~/.bash_profile # Unique rank for this VM (0, 1, 2, ..., num_vms - 1)
echo "export LOCAL_RANK=0" >> ~/.bash_profile # Local rank (0 for a single TPU VM)
echo "export XLA_IR_DEBUG=1" # enables verbose logging in PyTorch XLA to get more detailed logs, which might help in diagnosing any issues
# Apply the environment variables
source ~/.bash_profile
echo "TPU VM setup is complete."
```
4. I run the most basic test on each VM to make sure everything works as it should ([see here](https://cloud.google.com/tpu/docs/run-calculation-pytorch#perform_a_simple_calculation)):
```
import torch
import torch_xla.core.xla_model as xm
dev = xm.xla_device()
t1 = torch.randn(3,3,device=dev)
t2 = torch.randn(3,3,device=dev)
print(t1 + t2)
```
**Note: the above code works as expected.**
5. I also went through the [Troubleshooting](https://github.com/pytorch/xla/blob/master/TROUBLESHOO
|
https://github.com/pytorch/xla/issues/6274
|
closed
|
[
"question",
"distributed"
] | 2024-01-09T05:45:42Z
| 2025-04-23T14:42:27Z
| null |
h-sellak
|
huggingface/text-generation-inference
| 1,415
|
How to use local Medusa head?
|
It is said that Medusa can significantly accelerate inference. While trying to use it, I observed that it does not support using a local Medusa config and head. The code fragment I found that pertains to this functionality is below, which I have modified. However, I do not understand the meaning of `medusa_sf`: the Medusa training process does not generate new safetensors, so what is this file?
```python
medusa_config = f"{model_id}/config_medusa.json"
# medusa_config = hf_hub_download(
# use_medusa, revision=revision, filename="config.json"
# )
with open(medusa_config, "r") as f:
    config = json.load(f)
medusa_head = f"{model_id}/medusa_lm_head.pt"
# medusa_head = hf_hub_download(
# use_medusa, revision=revision, filename="medusa_lm_head.pt"
# )
medusa_sf = medusa_head[: -len(".pt")] + ".safetensors"
weights = Weights(
    [medusa_sf], device, dtype, process_group=self.process_group
)
lm_head = model.lm_head
model.lm_head = MedusaModel(config, weights, lm_head)
```
How should I use TGI to access the local Medusa? A huge thanks for your work!
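For reference, a hedged sketch of how the missing `.safetensors` file could be produced from the trained head, assuming `medusa_lm_head.pt` is a plain state dict of tensors:
```python
import torch
from safetensors.torch import save_file

state_dict = torch.load("medusa_lm_head.pt", map_location="cpu")
save_file(state_dict, "medusa_lm_head.safetensors")
```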
|
https://github.com/huggingface/text-generation-inference/issues/1415
|
closed
|
[] | 2024-01-09T03:22:47Z
| 2024-01-10T17:36:23Z
| null |
eurus-ch
|
huggingface/transformers
| 28,388
|
How to use an efficient encoder as shared EncoderDecoderModel?
|
### Feature request
Efficient encoders like DistilBERT, ALBERT, or ELECTRA aren't supported as the decoder of the EncoderDecoderModel, so they can't be shared as encoder and decoder.
### Motivation
Warm-starting shared models is a powerful way to build transformer models, yet the efficient models can't be used.
### Your contribution
We could implement support for DistilBERT, ALBERT, or ELECTRA. They shouldn't be that different from other encoders.
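For context, the warm-start pattern that works today with BERT-style checkpoints; swapping a checkpoint such as `distilbert-base-uncased` into the decoder slot is what is currently unsupported:
```python
from transformers import EncoderDecoderModel

# Shared warm start: encoder and decoder initialized from the same checkpoint with tied weights.
model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "bert-base-uncased", "bert-base-uncased", tie_encoder_decoder=True
)
```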
|
https://github.com/huggingface/transformers/issues/28388
|
open
|
[
"Feature request"
] | 2024-01-08T11:43:05Z
| 2024-01-08T12:35:24Z
| null |
Bachstelze
|
pytorch/kineto
| 854
|
Is Kineto planning to support backend extensions?
|
Hello, there is 'PrivateUse1' in pytorch to support backend integration. Will Kineto provide similar features?
|
https://github.com/pytorch/kineto/issues/854
|
closed
|
[
"question"
] | 2024-01-08T03:19:53Z
| 2024-04-23T15:21:34Z
| null |
fwenguang
|
huggingface/alignment-handbook
| 92
|
Is there any way to use learning rate warm-up during training?
|
I am using this repo to:
1. Continual Pre-training
2. SFT
3. DPO
For stage 1, I want to use a learning rate warm-up.
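For reference, warm-up is exposed through the standard `TrainingArguments` fields, so adding `warmup_ratio` (or `warmup_steps`) to the stage-1 run configuration should apply it; a minimal Python sketch with placeholder values:
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",
    learning_rate=2e-5,
    warmup_ratio=0.1,  # or warmup_steps=500
)
```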
|
https://github.com/huggingface/alignment-handbook/issues/92
|
closed
|
[] | 2024-01-07T21:07:25Z
| 2024-01-10T06:48:52Z
| 1
|
shamanez
|
huggingface/alignment-handbook
| 91
|
how to use dpo without flash-attention
|
Is there a flash-attention-free version?
|
https://github.com/huggingface/alignment-handbook/issues/91
|
open
|
[] | 2024-01-07T16:27:08Z
| 2024-02-06T19:51:38Z
| null |
Fu-Dayuan
|
huggingface/accelerate
| 2,312
|
Seeking help: how to make DeepSpeed ZeRO stage 3 work with a quantized model?
|
Hi, I would like to run DPO training on my 2 A6000 (48GB) GPUs based on this project (https://github.com/allenai/open-instruct). Specifically, the model is QLoRA-based and the reference model is a quantized one. I would like to use DeepSpeed ZeRO stage 3 to speed up training.
During training, I encountered errors related to integrating the model and reference model with DeepSpeed. Below is the relevant code snippet and the encountered error:
The model and reference model both were loaded with
```python
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
)
device_index = accelerator.local_process_index
device_map = {"": device_index}  # force data-parallel training.
model = AutoModelForCausalLM.from_pretrained(
    model_name_or_path,
    from_tf=bool(".ckpt" in model_name_or_path),
    config=config,
    load_in_8bit=True,
    quantization_config=bnb_config,
    torch_dtype=torch.bfloat16,
    use_flash_attention_2=True if args.use_flash_attn else False,
)
reference_model = model

# some code about converting the model to a LoRA model...

def prepare_deepspeed(accelerator, model):
    deepspeed_plugin = accelerator.state.deepspeed_plugin
    config_kwargs = deepcopy(deepspeed_plugin.deepspeed_config)
    if model is not None:
        if hasattr(model, "config"):
            hidden_size = (
                max(model.config.hidden_sizes)
                if getattr(model.config, "hidden_sizes", None)
                else getattr(model.config, "hidden_size", None)
            )
            if hidden_size is not None and config_kwargs["zero_optimization"]["stage"] == 3:
                # Note that `stage3_prefetch_bucket_size` can produce DeepSpeed messages like: `Invalidate trace cache @ step 0: expected module 1, but got module 0`
                # This is expected and is not an error, see: https://github.com/microsoft/DeepSpeed/discussions/4081
                config_kwargs.update(
                    {
                        "zero_optimization.reduce_bucket_size": hidden_size * hidden_size,
                        "zero_optimization.stage3_param_persistence_threshold": 10 * hidden_size,
                        "zero_optimization.stage3_prefetch_bucket_size": 0.9 * hidden_size * hidden_size,
                    }
                )
    # If ZeRO-3 is used, we shard both the active and reference model.
    # Otherwise, we assume the reference model fits in memory and is initialized on each device with ZeRO disabled (stage 0)
    if config_kwargs["zero_optimization"]["stage"] != 3:
        config_kwargs["zero_optimization"]["stage"] = 0
    model, *_ = deepspeed.initialize(model=model, config=config_kwargs)
    model.eval()
    return model

reference_model = prepare_deepspeed(accelerator, reference_model)
```
```
File "/root/data1/tulu2/open-instruct/open-instruct-main/open_instruct/dpo_tune.py", line 692, in main
reference_model = prepare_deepspeed(accelerator, reference_model)
File "/root/data1/tulu2/open-instruct/open-instruct-main/open_instruct/dpo_tune.py", line 396, in prepare_deepspeed
model, *_ = deepspeed.initialize(model=model, config=config_kwargs)
File "/conda/envs/tulu_dpo_env/lib/python3.10/site-packages/deepspeed/__init__.py", line 171, in initialize
engine = DeepSpeedEngine(args=args,
File "/conda/envs/tulu_dpo_env/lib/python3.10/site-packages/deepspeed/runtime/engine.py", line 259, in __init__
self._configure_distributed_model(model)
File "/conda/envs/tulu_dpo_env/lib/python3.10/site-
|
https://github.com/huggingface/accelerate/issues/2312
|
closed
|
[] | 2024-01-07T09:44:28Z
| 2024-01-11T11:01:31Z
| null |
grayground
|
huggingface/datasets
| 6,565
|
`drop_last_batch=True` for IterableDataset map function is ignored with multiprocessing DataLoader
|
### Describe the bug
Scenario:
- Interleaving two iterable datasets of unequal lengths (`all_exhausted`), followed by a batch mapping with batch size 2 to effectively merge the two datasets and get a sample from each dataset in a single batch, with `drop_last_batch=True` to skip the last batch in case it doesn't have two samples.
What works:
- Using DataLoader with `num_workers=0`
What does not work:
- Using DataLoader with `num_workers=1`, errors in the last batch.
Basically, `drop_last_batch=True` is ignored when using multiple dataloading workers.
Please take a look at the minimal repro script below.
### Steps to reproduce the bug
```python
from datasets import Dataset, interleave_datasets
from torch.utils.data import DataLoader
def merge_samples(batch):
    assert len(batch['a']) == 2, "Batch size must be 2"
    batch['c'] = [batch['a'][0]]
    batch['d'] = [batch['a'][1]]
    return batch

def gen1():
    for ii in range(1, 8385):
        yield {"a": ii}

def gen2():
    for ii in range(1, 5302):
        yield {"a": ii}

if __name__ == '__main__':
    dataset1 = Dataset.from_generator(gen1).to_iterable_dataset(num_shards=1024)
    dataset2 = Dataset.from_generator(gen2).to_iterable_dataset(num_shards=1024)
    interleaved = interleave_datasets([dataset1, dataset2], stopping_strategy="all_exhausted")
    mapped = interleaved.map(merge_samples, batched=True, batch_size=2, remove_columns=interleaved.column_names,
                             drop_last_batch=True)

    # Works
    loader = DataLoader(mapped, batch_size=32, num_workers=0)
    i = 0
    for b in loader:
        print(i, b['c'].shape, b['d'].shape)
        i += 1
    print("DataLoader with num_workers=0 works")

    # Doesn't work
    loader = DataLoader(mapped, batch_size=32, num_workers=1)
    i = 0
    for b in loader:
        print(i, b['c'].shape, b['d'].shape)
        i += 1
```
### Expected behavior
`drop_last_batch=True` should have same behaviour for `num_workers=0` and `num_workers>=1`
### Environment info
- `datasets` version: 2.16.1
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.10.12
- `huggingface_hub` version: 0.20.2
- PyArrow version: 12.0.1
- Pandas version: 2.0.3
- `fsspec` version: 2023.6.0
I have also tested on Linux and got the same behavior.
|
https://github.com/huggingface/datasets/issues/6565
|
closed
|
[] | 2024-01-07T02:46:50Z
| 2025-03-08T09:46:05Z
| 2
|
naba89
|
huggingface/transformers.js
| 505
|
How do I use WebGL as executionProvider?
|
### Question
```js
export const executionProviders = [
    // 'webgpu',
    'wasm'
];
```
I looked at src/backends/onnx.js and noticed that there was no webgl in the executionProviders.
Is there a way to use WebGL as executionProvider?
|
https://github.com/huggingface/transformers.js/issues/505
|
closed
|
[
"question"
] | 2024-01-06T19:16:36Z
| 2024-10-18T13:30:09Z
| null |
kwaroran
|
pytorch/executorch
| 1,548
|
How to implement the "aten.mul.Scalar" for Qualcomm backend
|
The second arg of "aten.mul.Scalar" is a const scalar value, such as a float: 0.5f.
The NodeVisitor functions define_tensor/define_scalar/define_value take the arg "node" as input, but how can I define a node (like a torch.fx.Node) for a const scalar value?
|
https://github.com/pytorch/executorch/issues/1548
|
closed
|
[
"partner: qualcomm",
"triaged"
] | 2024-01-06T09:12:19Z
| 2024-01-09T02:18:37Z
| null |
czy2014hust
|
pytorch/pytorch
| 116,922
|
How to adapt to `at::scaled_dot_product_attention`'s routing logic for a third-party cuda-like device?
|
https://github.com/pytorch/pytorch/blob/f24bba1624a8bb5c920833b18fc6162db084ca09/aten/src/ATen/native/transformers/attention.cpp#L635-L642
Now, I am adapting `at::scaled_dot_product_attention` to a specific type of CUDA-like device and have encountered a problem.
In `at::scaled_dot_product_attention`, the routing code chooses a path between `at::_scaled_dot_product_flash_attention`, `at::_scaled_dot_product_efficient_attention` and `at::_scaled_dot_product_attention_math` for `cpu`, `cuda` and `rocm`.
But for another CUDA-like device, the routing code will always go into the `at::_scaled_dot_product_attention_math` path.
If I have implemented all three paths for my CUDA-like device, how can I change this routing code to fully support `at::scaled_dot_product_attention`?
Should I write a new `at::scaled_dot_product_attention` only for my CUDA-like device, or just change some code in the current torch repository?
I need some suggestions, thank you!
cc @jbschlosser @bhosmer @cpuhrsch @erichan1 @drisspg @mikaylagawarecki
|
https://github.com/pytorch/pytorch/issues/116922
|
closed
|
[] | 2024-01-06T07:28:43Z
| 2024-01-15T02:13:30Z
| null |
drslark
|
huggingface/diffusers
| 6,474
|
how to use xformers
|
Maybe this is a relatively low-level question, but something that always bothers me: how does xformers get used when running SD? Is it enabled by default after installing the library? Thank you all in advance.
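For reference, in diffusers the xformers attention path is opt-in rather than automatic; it is enabled per pipeline, e.g.:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.enable_xformers_memory_efficient_attention()
```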
|
https://github.com/huggingface/diffusers/issues/6474
|
closed
|
[] | 2024-01-06T03:34:16Z
| 2024-01-11T03:38:19Z
| null |
babyta
|
pytorch/serve
| 2,890
|
Difference between `Custom handler with module level entry point` and `Custom handler with class level entry point`
|
### 📚 The doc issue
# Not an issue
What is the difference between `Custom handler with module level entry point` and `Custom handler with class level entry point`?
Can you give me any examples?
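A hedged sketch of the two styles (file and class names are made up): a module-level entry point is a top-level `handle(data, context)` function in the handler file, while a class-level entry point is a class, typically extending `BaseHandler`, whose methods TorchServe calls.
```python
# --- module-level entry point (e.g. my_module_handler.py) ---
def handle(data, context):
    if data is None:
        return None
    return [{"echo": str(row)} for row in data]


# --- class-level entry point (e.g. my_class_handler.py) ---
from ts.torch_handler.base_handler import BaseHandler

class MyClassHandler(BaseHandler):
    def postprocess(self, inference_output):
        # Convert the model output tensor into a JSON-serializable list.
        return inference_output.tolist()
```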
Thanks for help
### Suggest a potential alternative/fix
_No response_
|
https://github.com/pytorch/serve/issues/2890
|
closed
|
[
"question",
"triaged"
] | 2024-01-05T20:45:25Z
| 2024-01-25T05:07:51Z
| null |
IonBoleac
|
huggingface/datasets
| 6,561
|
Document YAML configuration with "data_dir"
|
See https://huggingface.co/datasets/uonlp/CulturaX/discussions/15#6597e83f185db94370d6bf50 for reference
|
https://github.com/huggingface/datasets/issues/6561
|
open
|
[
"documentation"
] | 2024-01-05T14:03:33Z
| 2025-08-07T14:57:58Z
| 6
|
severo
|
pytorch/TensorRT
| 2,579
|
❓ [Question] Support for layers with Custom C++ and CUDA Extensions
|
## ❓ Question
Support for layers with Custom C++ and CUDA Extensions
## What you have already tried
Can I convert the LLTM class in directory `cuda` of https://github.com/pytorch/extension-cpp (below) into a tensorrt engine through Torch-TensorRT?
I tried the code below:
```lltm.py
import math
from torch import nn
from torch.autograd import Function
import torch
import lltm_cuda
torch.manual_seed(42)
class LLTMFunction(Function):
    @staticmethod
    def forward(ctx, input, weights, bias, old_h, old_cell):
        outputs = lltm_cuda.forward(input, weights, bias, old_h, old_cell)
        new_h, new_cell = outputs[:2]
        variables = outputs[1:] + [weights]
        ctx.save_for_backward(*variables)
        return new_h, new_cell

    @staticmethod
    def backward(ctx, grad_h, grad_cell):
        outputs = lltm_cuda.backward(
            grad_h.contiguous(), grad_cell.contiguous(), *ctx.saved_variables)
        d_old_h, d_input, d_weights, d_bias, d_old_cell, d_gates = outputs
        return d_input, d_weights, d_bias, d_old_h, d_old_cell


class LLTM(nn.Module):
    def __init__(self, input_features, state_size):
        super(LLTM, self).__init__()
        self.input_features = input_features
        self.state_size = state_size
        self.weights = nn.Parameter(
            torch.Tensor(3 * state_size, input_features + state_size))
        self.bias = nn.Parameter(torch.Tensor(1, 3 * state_size))
        self.reset_parameters()

    def reset_parameters(self):
        stdv = 1.0 / math.sqrt(self.state_size)
        for weight in self.parameters():
            weight.data.uniform_(-stdv, +stdv)

    def forward(self, input, state):
        return LLTMFunction.apply(input, self.weights, self.bias, *state)


import torch_tensorrt

model = LLTM(64, 32).cuda()
print(
    model(
        torch.randn(2, 64).cuda(),
        torch.randn(2, 32).cuda(),
        torch.randn(2, 32).cuda(),
    )
)
traced_model = torch.jit.trace(
    model,
    [
        torch.randn(2, 64).cuda(),
        torch.randn(2, 32).cuda(),
        torch.randn(2, 32).cuda(),
    ],
)
import torch_tensorrt
trt_model = torch_tensorrt.compile(
    traced_model,
    inputs=[
        torch_tensorrt.Input((2, 64), dtype=torch.float32),
        torch_tensorrt.Input((2, 32), dtype=torch.float32),
        torch_tensorrt.Input((2, 32), dtype=torch.float32),
    ],
    enabled_precisions={torch.float32},
)
```
output:
```
[1] 895656 segmentation fault (core dumped) python lltm.py
```
I read the relevant materials, but I have no idea how to proceed at all.
|
https://github.com/pytorch/TensorRT/issues/2579
|
closed
|
[
"question"
] | 2024-01-05T07:25:23Z
| 2024-01-15T06:22:05Z
| null |
Siyeong-Lee
|
pytorch/TensorRT
| 2,577
|
Can somebody please give a clear explanation of how to install torch-tensorrt on Windows?
|
## ❓ Question
Hello,
I've encountered problems installing torch-tensorrt on Windows 10
No matter how I try and however many sources I look up, there is no clear explanation of how to do everything. The documentation is vague, and because I am used to working with Python code, which does everything for you (pip install ..., python code.py, and nothing more is required), I do not have much experience with cmake, building libraries, and C++, which makes it very difficult to follow the installation process.
So far I have tried to follow the instructions from the [main page](https://github.com/pytorch/TensorRT):
- pip install torch-tensorrt doesn't work
- downloaded a zip file of this repository; python setup.py install also doesn't work
- installed bazel
- modified the workspace, still nothing
- tried to directly import py/torch-tensorrt into my code - nothing
Then, inside the py folder, I opened a command prompt and typed in:
`bazel build //:libtorchtrt --compilation_mode=dbg`
and received this error:
```
Starting local Bazel server and connecting to it...
INFO: Repository libtorch instantiated at:
D:/pyth/tensorrt-main/WORKSPACE:53:13: in <toplevel>
Repository rule http_archive defined at:
C:/users/tomas/_bazel_tomas/r4zfvyvs/external/bazel_tools/tools/build_defs/repo/http.bzl:372:31: in <toplevel>
WARNING: Download from https://download.pytorch.org/libtorch/nightly/cu121/libtorch-cxx11-abi-shared-with-deps-latest.zip failed: class com.google.devtools.build.lib.bazel.repository.downloader.ContentLengthMismatchException Bytes read 2210658461 but wanted 2501377827
ERROR: An error occurred during the fetch of repository 'libtorch':
Traceback (most recent call last):
File "C:/users/tomas/_bazel_tomas/r4zfvyvs/external/bazel_tools/tools/build_defs/repo/http.bzl", line 132, column 45, in _http_archive_impl
download_info = ctx.download_and_extract(
Error in download_and_extract: java.io.IOException: Error downloading [https://download.pytorch.org/libtorch/nightly/cu121/libtorch-cxx11-abi-shared-with-deps-latest.zip] to C:/users/tomas/_bazel_tomas/r4zfvyvs/external/libtorch/temp7217651597570855917/libtorch-cxx11-abi-shared-with-deps-latest.zip: Bytes read 2210658461 but wanted 2501377827
ERROR: D:/pyth/tensorrt-main/WORKSPACE:53:13: fetching http_archive rule //external:libtorch: Traceback (most recent call last):
File "C:/users/tomas/_bazel_tomas/r4zfvyvs/external/bazel_tools/tools/build_defs/repo/http.bzl", line 132, column 45, in _http_archive_impl
download_info = ctx.download_and_extract(
Error in download_and_extract: java.io.IOException: Error downloading [https://download.pytorch.org/libtorch/nightly/cu121/libtorch-cxx11-abi-shared-with-deps-latest.zip] to C:/users/tomas/_bazel_tomas/r4zfvyvs/external/libtorch/temp7217651597570855917/libtorch-cxx11-abi-shared-with-deps-latest.zip: Bytes read 2210658461 but wanted 2501377827
ERROR: D:/pyth/tensorrt-main/core/util/logging/BUILD:13:11: //core/util/logging:logging depends on @libtorch//:libtorch in repository @libtorch which failed to fetch. no such package '@libtorch//': java.io.IOException: Error downloading [https://download.pytorch.org/libtorch/nightly/cu121/libtorch-cxx11-abi-shared-with-deps-latest.zip] to C:/users/tomas/_bazel_tomas/r4zfvyvs/external/libtorch/temp7217651597570855917/libtorch-cxx11-abi-shared-with-deps-latest.zip: Bytes read 2210658461 but wanted 2501377827
ERROR: Analysis of target '//:libtorchtrt' failed; build aborted:
INFO: Elapsed time: 458.697s
INFO: 0 processes.
FAILED: Build did NOT complete successfully (64 packages loaded, 413 targets configured)
Fetching https://download.pytorch.org/...orch-cxx11-abi-shared-with-deps-latest.zip; 2.1 GiB (2,210,121,825B) 446s
I also tried some other things that I cannot remember now, but without success.
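For what it's worth, the failure above is a truncated libtorch download (2,210,658,461 of 2,501,377,827 bytes), so one possible avenue is to download libtorch manually and point the WORKSPACE at the local copy instead of the nightly URL. Below is a rough sketch of such an edit in Bazel's WORKSPACE (Starlark) syntax; the path and the `build_file` location are assumptions, not this repo's verified layout:
```
# WORKSPACE fragment (Starlark): replaces the failing http_archive for libtorch
# with a locally extracted copy. Path and build_file are illustrative; adjust
# them to where libtorch was unpacked and to the BUILD file the repo ships.
new_local_repository(
    name = "libtorch",
    path = "D:/libs/libtorch",
    build_file = "third_party/libtorch/BUILD",
)
```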
THANK YOU FOR YOUR HELP IN ADVANCE
## Environment
> Build information about Torch-TensorRT can be found by turning on debug messages
- PyTorch Version: 2.1.2+cu121
- OS : Windows 10
- I am running python and pytorch straight from Windows, without any environment
- Python version: 3.10.13
- CUDA version: 12.1 update 1
- GPU models and configuration: GTX 1660 TI
|
https://github.com/pytorch/TensorRT/issues/2577
|
closed
|
[
"question"
] | 2024-01-05T02:52:01Z
| 2025-12-02T18:12:43Z
| null |
ninono12345
|
huggingface/sentence-transformers
| 2,397
|
Does finetuning a cross-encoder yield prediction labels and not similarity scores?
|
Hi,
This is less of a coding issue and more of a conceptual question. I have binary labels for similarity and dissimilarity while training a cross-encoder, so it's a binary classification task. The pretrained cross-encoder outputs a float score, most of the time around 0.5. After finetuning, the model only predicts values really close to 0 or 1, which makes sense since it is being trained for a binary classification task. But is it supposed to be a label prediction or a similarity score? Or is it limited by the type of data you have for training?
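For context, a minimal sketch of the two readings of the same output, assuming a fine-tuned `CrossEncoder` checkpoint at an illustrative path:
```python
import numpy as np
from sentence_transformers import CrossEncoder

# Hypothetical path to a fine-tuned binary cross-encoder checkpoint.
model = CrossEncoder("path/to/finetuned-cross-encoder", max_length=512)

pairs = [
    ("How many people live in Berlin?", "Berlin has about 3.5 million inhabitants."),
    ("How many people live in Berlin?", "Berlin is well known for its museums."),
]

raw = np.asarray(model.predict(pairs))  # one value per pair

# If the values are unbounded logits, squash them to [0, 1]; if predict()
# already applied a sigmoid in your version, this step can be skipped.
scores = 1 / (1 + np.exp(-raw)) if (raw.min() < 0 or raw.max() > 1) else raw

# Same numbers, two readings: continuous similarity scores vs. hard label predictions.
labels = (scores >= 0.5).astype(int)
print(scores, labels)
```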
|
https://github.com/huggingface/sentence-transformers/issues/2397
|
closed
|
[
"question"
] | 2024-01-04T21:01:44Z
| 2024-01-09T17:53:17Z
| null |
FDSRashid
|
huggingface/text-generation-inference
| 1,403
|
How to load Llama-2 through the Client
|
### System Info
Hi there, text_generation.__version__ = 0.6.0
### Information
- [ ] Docker
- [X] The CLI directly
### Tasks
- [ ] An officially supported command
- [ ] My own modifications
### Reproduction
I am trying to load the Llama-2 model through the Client:
```
from text_generation import Client
model_endpoint = "https://api-inference.huggingface.co/models/meta-llama/Llama-2-7b-hf"
# model_endpoint = "https://api-inference.huggingface.co/models/tiiuae/falcon-7b-instruct"
# model_endpoint = "https://api-inference.huggingface.co/models/lmsys/vicuna-7b-v1.5"
client = Client(model_endpoint, timeout=60, headers={"Authorization": f"Bearer {token_auth}"})
generation: str = client.generate(
prompt="What is the capital city of British Columbia, Canada",
temperature=1,
top_p=0.9,
max_new_tokens=384,
stop_sequences=None,
).generated_text
```
### Expected behavior
However, I get this error:
> BadRequestError: Model requires a Pro subscription; check out hf.co/pricing to learn more. Make sure to include your HF token in your query.

Could you kindly suggest any solutions?
Thanks.
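For comparison, a minimal sketch of pointing the `Client` at a self-hosted text-generation-inference endpoint instead of the gated Inference API; the local URL is an assumption and presumes a TGI server is already running (e.g. via its Docker image):
```python
from text_generation import Client

# Hypothetical local endpoint; a self-hosted server does not go through the
# gated Inference API, so no Authorization header or Pro subscription is needed.
client = Client("http://127.0.0.1:8080", timeout=60)

response = client.generate(
    prompt="What is the capital city of British Columbia, Canada",
    temperature=1,
    top_p=0.9,
    max_new_tokens=384,
)
print(response.generated_text)
```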
|
https://github.com/huggingface/text-generation-inference/issues/1403
|
closed
|
[] | 2024-01-04T17:25:59Z
| 2024-01-05T16:01:56Z
| null |
yanan1116
|
huggingface/transformers
| 28,343
|
How to log a custom value?
|
I want to add some custom info to the logged dict `{'loss': 2.5234, 'learning_rate': 1.0344827586206896e-06, 'epoch': 0.0}`.
How can I do that?
For example: `{'loss': 2.5234, 'learning_rate': 1.0344827586206896e-06, 'epoch': 0.0, 'version': 'v1'}`
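One possible approach, sketched under the assumption that subclassing `Trainer` and overriding its `log()` method is acceptable (the exact `log()` signature can vary across transformers versions):
```python
from transformers import Trainer

class VersionedTrainer(Trainer):
    """Trainer that injects extra keys into every logged dict."""

    def log(self, logs, *args, **kwargs):
        logs["version"] = "v1"  # shows up next to 'loss', 'learning_rate', 'epoch'
        super().log(logs, *args, **kwargs)

# Used exactly like a regular Trainer (model/args/dataset are whatever you already have):
# trainer = VersionedTrainer(model=model, args=training_args, train_dataset=train_dataset)
# trainer.train()
```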
|
https://github.com/huggingface/transformers/issues/28343
|
closed
|
[] | 2024-01-04T12:28:43Z
| 2024-01-07T13:07:22Z
| null |
xmy0916
|
huggingface/transformers.js
| 499
|
An error occurred during model execution: "RangeError: offset is out of bounds".
|
### Question
Hello - I'm having an issue getting this code to run in the browser, using `Xenova/TinyLlama-1.1B-Chat-v1.0` on `"@xenova/transformers": "^2.13.2"`.
It runs perfectly in Node.
```ts
import { pipeline } from '@xenova/transformers';
console.log('Loading model...');
const generator = await pipeline('text-generation', 'Xenova/TinyLlama-1.1B-Chat-v1.0');
console.log('Model loaded!');
const messages = [
{ role: 'system', content: 'You are a friendly Assistant' },
{ role: 'user', content: 'Explain JavaScript Scopes in simple terms' },
];
const prompt = generator.tokenizer.apply_chat_template(messages, {
tokenize: false,
add_generation_prompt: true,
});
console.log('Generating...');
const result = await generator(prompt, {
max_new_tokens: 256,
temperature: 0.5,
do_sample: true,
top_k: 50,
});
console.dir(result);
```
In Node it runs:
<img width="951" alt="Screenshot 2024-01-03 at 2 53 39 PM" src="https://github.com/xenova/transformers.js/assets/176013/4dfb556c-4605-4a19-b560-a52c07a28e5f">
But in the browser I see this:
<img width="1264" alt="Screenshot 2024-01-03 at 2 54 28 PM" src="https://github.com/xenova/transformers.js/assets/176013/899c803f-d311-4661-b3f9-ccd3e9c714d0">
Same issue in Firefox.
This issue seems to say it's memory: https://github.com/xenova/transformers.js/issues/8
Is this one too large to run in the browser?
|
https://github.com/huggingface/transformers.js/issues/499
|
closed
|
[
"question"
] | 2024-01-03T19:55:45Z
| 2024-10-18T13:30:09Z
| null |
wesbos
|
huggingface/transformers.js
| 497
|
Cross Encoder
|
### Question
I'm trying to run this pre-trained Cross Encoder model ([MS Marco TinyBERT](https://huggingface.co/cross-encoder/ms-marco-TinyBERT-L-2-v2)), which is not available in Transformers.js.
I've managed to convert it using the handy script, and I'm successfully running it with the "feature-extraction" task:
```js
const pairs = [
["How many people live in Berlin?", "Berlin had a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers."],
[ "How many people live in Berlin?", "Berlin is well known for its museums."]
];
const model = await pipeline("feature-extraction", modelName);
const out = await model(pairs[0]);
console.log(Array.from(out.data)) // [-8.387903213500977, -9.811422348022461]
```
But I'm trying to run it as a Cross Encoder model as it's intended to, like the Python [example code](https://www.sbert.net/docs/pretrained-models/ce-msmarco.html?highlight=cross%20encoder):
```python
from sentence_transformers import CrossEncoder
model_name = 'cross-encoder/ms-marco-TinyBERT-L-2-v2'
model = CrossEncoder(model_name, max_length=512)
scores = model.predict([
('How many people live in Berlin?', 'Berlin had a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.'),
('How many people live in Berlin?', 'Berlin is well known for its museums.')
])
print(scores)  # [ 7.1523685 -6.2870455]
```
How can I infer a similarity score from two sentences?
PS: if there are existing models/techniques for sentence similarity I'll take it!
|
https://github.com/huggingface/transformers.js/issues/497
|
closed
|
[
"question"
] | 2024-01-03T16:24:37Z
| 2024-03-01T00:11:31Z
| null |
achrafash
|
huggingface/autotrain-advanced
| 448
|
What is the difference between autotrain and kohya_ss?
|
What is the difference between autotrain and kohya_ss?
|
https://github.com/huggingface/autotrain-advanced/issues/448
|
closed
|
[
"stale"
] | 2024-01-03T16:18:58Z
| 2024-01-22T15:01:45Z
| null |
loboere
|
pytorch/executorch
| 1,527
|
How to build qnn_executor_runner for linux-gcc9.3?
|
My requirement is to compile the model on an x86 host and run inference on a Linux device using the Qualcomm AI Engine, e.g. SA8295. So how do I build `qnn_executor_runner` for linux-gcc9.3 rather than Android? Thanks~
The libQnnHtp.so also differs between targets in the QNN SDK:
```
$ find . -name libQnnHtp.so
./lib/aarch64-oe-linux-gcc9.3/libQnnHtp.so
./lib/aarch64-android/libQnnHtp.so
```
|
https://github.com/pytorch/executorch/issues/1527
|
closed
|
[
"partner: qualcomm",
"triaged"
] | 2024-01-03T09:04:08Z
| 2024-01-29T07:49:12Z
| null |
huangzhiyuan
|
huggingface/optimum
| 1,622
|
device set bug
|
### System Info
```shell
optimum 1.16.1
```
### Who can help?
@philschmid
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction (minimal, reproducible, runnable)
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig

model_id = "facebook/opt-125m"
tokenizer = AutoTokenizer.from_pretrained(model_id)
quantization_config = GPTQConfig(bits=4, dataset=["c4", "c4", "c4"], tokenizer=tokenizer)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="cuda:5", quantization_config=quantization_config)
print()
```
### Expected behavior
In `optimum/gptq/quantizer.py`, line 429:
`data[k] = v.to(0)`
Why is the device fixed at 0? When setting `device_map` for the model to a different GPU, an error occurs saying the input and the model are not on the same device.
Is this a bug?
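As a possible workaround (not a fix for the hard-coded `.to(0)` itself), the visible devices can be restricted so that the desired GPU becomes device 0 — a minimal sketch, assuming GPU 5 is the target:
```python
import os

# Must be set before torch/CUDA is initialized; GPU 5 then appears as cuda:0.
os.environ["CUDA_VISIBLE_DEVICES"] = "5"

from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig

model_id = "facebook/opt-125m"
tokenizer = AutoTokenizer.from_pretrained(model_id)
quantization_config = GPTQConfig(bits=4, dataset=["c4"], tokenizer=tokenizer)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda:0",  # matches the device hard-coded inside the quantizer
    quantization_config=quantization_config,
)
```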
|
https://github.com/huggingface/optimum/issues/1622
|
open
|
[
"bug"
] | 2024-01-03T09:01:16Z
| 2024-01-09T10:17:45Z
| 1
|
Yuang-Deng
|
pytorch/pytorch
| 116,687
|
How to install pytorch on
|
https://github.com/pytorch/pytorch/issues/116687
|
closed
|
[] | 2024-01-03T08:12:33Z
| 2024-01-03T08:42:38Z
| null |
Joseph513shen
|
|
huggingface/transformers.js
| 494
|
Is in-browser inference expected to be slower than Node inference?
|
### Question
I noticed that I get much higher performance when I run inference in Node vs in the browser (latest Chrome, M2 Mac). Is that generally to be expected? For context, I'm creating embeddings for chunks of text using the gte-small model.
Thank you!
|
https://github.com/huggingface/transformers.js/issues/494
|
closed
|
[
"question"
] | 2024-01-03T04:26:47Z
| 2024-08-27T23:53:36Z
| null |
carlojoerges
|
huggingface/optimum
| 1,621
|
Cannot convert sentence transformer model properly
|
### System Info
```shell
Optimum Version = 1.16.1
```
### Who can help?
@michaelbenayoun
@fxmarty
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction (minimal, reproducible, runnable)
When running:
`optimum-cli export onnx -m sentence-transformers/distiluse-base-multilingual-cased-v2 --task feature-extraction ./models/distiluse-base-multilingual-cased-v2`
I get:
```
...
The ONNX export succeeded with the warning: The exported ONNX model does not have the exact same outputs as what is provided in SentenceTransformersTransformerOnnxConfig. Difference: onnx::Shape_530, onnx::Shape_233, onnx::Shape_332, onnx::Shape_431, onnx::Shape_629, 764.
...
```
Afterwards, when I try running an inference session with the generated .onnx model, I get:
```
onnxruntime.capi.onnxruntime_pybind11_state.InvalidArgument: [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Non-zero status code returned while running Expand node. Name:'/1/Expand' Status Message: invalid expand shape
```
It seems like the model is not being properly converted. I'm currently trying to figure out why exactly.
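For reference, a minimal self-contained sketch of the kind of onnxruntime call used to hit the error above; the model path, tokenizer handling, and input names are illustrative and assume the exported graph takes `input_ids` and `attention_mask`:
```python
import numpy as np
import onnxruntime as ort
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/distiluse-base-multilingual-cased-v2")
session = ort.InferenceSession("./models/distiluse-base-multilingual-cased-v2/model.onnx")

# Tokenize a sample sentence and feed it to the exported graph as int64 tensors.
encoded = tokenizer(["How many people live in Berlin?"], padding=True, return_tensors="np")
inputs = {
    "input_ids": encoded["input_ids"].astype(np.int64),
    "attention_mask": encoded["attention_mask"].astype(np.int64),
}
outputs = session.run(None, inputs)
print(outputs[0].shape)
```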
### Extra context:
- This PR seems to have added support for sentence-transformers models; maybe something is missing: https://github.com/huggingface/optimum/pull/1589
- To generate the runtime session error I used this script and changed the model names and exported model path: https://github.com/huggingface/optimum/issues/1519#issuecomment-1854780869
- The same error occurs using node.js onnx runtime, so I assume the model is not exported properly.
### Expected behavior
The model is exported properly and generates the same results as using Sentence transformers directly.
|
https://github.com/huggingface/optimum/issues/1621
|
closed
|
[
"bug"
] | 2024-01-02T12:08:07Z
| 2024-01-12T15:26:21Z
| 4
|
leodalcin
|
huggingface/alignment-handbook
| 87
|
How can I configure `loss_type`?
|
I want to change the **loss_type** to KTO or something else to test, but I can't figure out how. Please show me the way. Thank you.
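One possible direction, sketched under the assumption that the handbook's DPO script ultimately instantiates trl's `DPOTrainer`, which (in trl versions from that period) accepts a `loss_type` argument directly; newer releases move it into `DPOConfig`:
```python
from trl import DPOTrainer

def build_trainer(model, ref_model, training_args, train_dataset, eval_dataset, tokenizer):
    """Illustrative wrapper: assumes the handbook's DPO run builds the trainer roughly like this."""
    return DPOTrainer(
        model,
        ref_model,
        args=training_args,
        beta=0.1,
        loss_type="kto_pair",  # instead of the default "sigmoid" DPO loss
        train_dataset=train_dataset,
        eval_dataset=eval_dataset,
        tokenizer=tokenizer,
    )
```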
|
https://github.com/huggingface/alignment-handbook/issues/87
|
closed
|
[] | 2024-01-02T11:54:34Z
| 2024-01-10T13:41:19Z
| 2
|
hahuyhoang411
|
pytorch/examples
| 1,208
|
Add a triplet loss example to examples/siamese_network
|
## Is your feature request related to a problem? Please describe.
Can you please provide an example of Siamese network training / testing with triplet loss such that it can be used with more complex image datasets?
## Describe the solution
Either add an args flag to set triplet loss as the method in the existing example, or provide a separate example for triplet loss.
## Describe alternative solutions
I tried to do this on my own.
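For reference, a minimal sketch of what such an example could look like, built around `torch.nn.TripletMarginLoss` with a small embedding network; the architecture and data below are placeholders, not the repo's actual example:
```python
import torch
import torch.nn as nn

class EmbeddingNet(nn.Module):
    """Tiny CNN that maps an image to a 128-d embedding (placeholder architecture)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, 128),
        )

    def forward(self, x):
        return nn.functional.normalize(self.features(x), dim=1)

model = EmbeddingNet()
criterion = nn.TripletMarginLoss(margin=1.0)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One training step on a dummy (anchor, positive, negative) batch.
anchor, positive, negative = (torch.randn(8, 3, 64, 64) for _ in range(3))
loss = criterion(model(anchor), model(positive), model(negative))
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(loss.item())
```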
|
https://github.com/pytorch/examples/issues/1208
|
open
|
[] | 2024-01-01T19:19:35Z
| 2024-01-01T19:19:35Z
| 0
|
pax7
|
huggingface/datasets
| 6,548
|
Skip if a dataset has issues
|
### Describe the bug
Hello everyone,
I'm using **load_dataset** from **huggingface** to download datasets and I'm facing an issue: the download starts, reaches some point, and then fails with the following error:
Couldn't reach https://huggingface.co/datasets/wikimedia/wikipedia/resolve/4cb9b0d719291f1a10f96f67d609c5d442980dc9/20231101.ext/train-00000-of-00001.parquet
Failed to resolve 'huggingface.co' ([Errno -3] Temporary failure in name resolution)

So I was wondering: is there a parameter that can be passed to load_dataset() to skip files that can't be downloaded?
### Steps to reproduce the bug
Look for a parameter to pass to load_dataset() that skips files that can't be downloaded.
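As far as I know there is no documented skip-on-failure flag, but retries for flaky downloads can be configured — a minimal sketch, where the dataset config name is illustrative (the original uses a truncated name):
```python
from datasets import DownloadConfig, load_dataset

# Retry each file a few times before giving up instead of failing immediately.
download_config = DownloadConfig(max_retries=5)

dataset = load_dataset(
    "wikimedia/wikipedia",
    "20231101.en",            # illustrative config name
    download_config=download_config,
)
```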
### Expected behavior
load_dataset() finishes without error
### Environment info
None
|
https://github.com/huggingface/datasets/issues/6548
|
open
|
[] | 2023-12-31T12:41:26Z
| 2024-01-02T10:33:17Z
| 1
|
hadianasliwa
|
huggingface/transformers.js
| 491
|
Running tests locally fails
|
### Question
When I git clone the repo to my Mac and run the tests, I get a lot of errors:
```
● Models › Loading different architecture types › gpt2 (GPT2Model)
Could not locate file: "https://huggingface.co/gpt2/resolve/main/tokenizer_config.json".
239 |
240 | const message = ERROR_MAPPING[status] ?? `Error (${status}) occurred while trying to load file`;
> 241 | throw Error(`${message}: "${remoteURL}".`);
| ^
242 | }
243 |
244 | class FileCache {
at handleError (src/utils/hub.js:241:11)
at getModelFile (src/utils/hub.js:474:24)
at getModelJSON (src/utils/hub.js:575:18)
at async Promise.all (index 1)
at loadTokenizer (src/tokenizers.js:61:16)
at Function.from_pretrained (src/tokenizers.js:2465:20)
at Object.<anonymous> (tests/models.test.js:61:37)
```
And indeed, a lot of files don't actually exist, like in this case:
https://huggingface.co/gpt2/resolve/main/tokenizer_config.json
But I don't see this in the logs for your GitHub Actions, so I am confused.
|
https://github.com/huggingface/transformers.js/issues/491
|
closed
|
[
"question"
] | 2023-12-30T02:12:35Z
| 2024-10-18T13:30:11Z
| null |
sroussey
|