| repo | number | title | body | url | state | labels | created_at | updated_at | comments | user |
|---|---|---|---|---|---|---|---|---|---|---|
huggingface/transformers.js
| 490
|
Is it possible to implement sentence splitting?
|
### Question
Can this library be used to implement sentence splitting, possibly with tokenizers?
|
https://github.com/huggingface/transformers.js/issues/490
|
closed
|
[
"question"
] | 2023-12-30T01:17:55Z
| 2024-02-01T01:51:52Z
| null |
devfacet
|
huggingface/transformers.js
| 486
|
Output different from sentence transformers
|
### Question
Hello, I'm not sure if I'm doing something wrong, but the pooled outputs from sentence-transformers and this library seem to be different.
The results are the same if I use `pooling: 'none'` in JS and `output_value='token_embeddings'` in Python.
I've seen some other similar issues, but this seems to be a different problem.
```js
const fs = require('fs');
class MyClassificationPipeline {
static task = 'feature-extraction';
static model = 'Xenova/distiluse-base-multilingual-cased-v2';
static instance = null;
static async getInstance(progress_callback = null) {
if (this.instance === null) {
// Dynamically import the Transformers.js library
let { pipeline, env } = await import('@xenova/transformers');
// NOTE: Uncomment this to change the cache directory
// env.cacheDir = './.cache';
this.instance = pipeline(this.task, this.model, { progress_callback, quantized: false });
}
return this.instance;
}
}
// Comment out this line if you don't want to start loading the model as soon as the server starts.
// If commented out, the model will be loaded when the first request is received (i.e,. lazily).
MyClassificationPipeline.getInstance();
async function main() {
const classifier = await MyClassificationPipeline.getInstance();
const res = await classifier('This is an example sentence', { pooling: 'mean', normalize:false });
fs.writeFileSync('./xenova-embedding.json', JSON.stringify(res.data, null, 2), 'utf-8');
}
main();
```
```python
import json
from sentence_transformers import SentenceTransformer
model = SentenceTransformer('sentence-transformers/distiluse-base-multilingual-cased-v2')
embedding = model.encode("This is an example sentence")
with open('embeddings.json', 'w') as f:
json.dump(embedding.tolist(), f)
```
Am I missing something?
|
https://github.com/huggingface/transformers.js/issues/486
|
closed
|
[
"question"
] | 2023-12-29T10:15:07Z
| 2024-01-02T12:20:17Z
| null |
leodalcin
|
huggingface/trl
| 1,155
|
What is the best way for the inference process in LORA in PEFT approach
|
Here is the SFTTrainer setup I used for fine-tuning Mistral:
```
trainer = SFTTrainer(
model=peft_model,
train_dataset=data,
peft_config=peft_config,
dataset_text_field=" column name",
max_seq_length=3000,
tokenizer=tokenizer,
args=training_arguments,
packing=packing,
)
trainer.train()
```
I found different mechanisms for running inference with the fine-tuned model after PEFT-based LoRA fine-tuning.
Method - 1
Save the adapter after training completes, then merge it with the base model and use the result for inference:
```
trainer.model.save_pretrained("new_adapter_path")
from peft import PeftModel
finetuned_model = PeftModel.from_pretrained(base_model,
new_adapter_path,
torch_dtype=torch.float16,
is_trainable=False,
device_map="auto"
)
finetuned_model = finetuned_model.merge_and_unload()
```
Method - 2
save checkpoints during training and then use the checkpoint with the least loss
```
from peft import PeftModel
finetuned_model = PeftModel.from_pretrained(base_model,
"least loss checkpoint path",
torch_dtype=torch.float16,
is_trainable=False,
device_map="auto"
)
finetuned_model = finetuned_model.merge_and_unload()
```
Method - 3
same method with AutoPeftModelForCausalLM class
```
model = AutoPeftModelForCausalLM.from_pretrained(
"output directory checkpoint path",
low_cpu_mem_usage=True,
return_dict=True,
torch_dtype=torch.float16,
device_map="cuda")
finetuned_model = model.merge_and_unload()
```
Method-4
AutoPeftModelForCausalLM class specifies the output folder without specifying a specific checkpoint
```
instruction_tuned_model = AutoPeftModelForCausalLM.from_pretrained(
training_args.output_dir,
torch_dtype=torch.bfloat16,
device_map = 'auto',
trust_remote_code=True,
)
finetuned_model = instruction_tuned_model.merge_and_unload()
```
Method-5
All the above methods without merging
```
#finetuned_model = finetuned_model.merge_and_unload()
```
Which method should I actually follow for inference, and when should I use one method over another?
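For reference, here is a minimal sketch (not from the issue) of what running generation looks like once one of the methods above has produced `finetuned_model`; the prompt and generation settings are placeholders:
```python
import torch

# Hypothetical prompt; `finetuned_model` and `tokenizer` come from any of the methods above.
prompt = "### Instruction: Summarize the text below.\n\n..."
inputs = tokenizer(prompt, return_tensors="pt").to(finetuned_model.device)

with torch.no_grad():
    output_ids = finetuned_model.generate(**inputs, max_new_tokens=256)

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```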
|
https://github.com/huggingface/trl/issues/1155
|
closed
|
[] | 2023-12-29T09:51:23Z
| 2024-02-10T15:05:12Z
| null |
pradeepdev-1995
|
huggingface/peft
| 1,310
|
What is the best way for the inference process in LORA in PEFT approach
|
### Feature request
What is the best way for the inference process in LORA in PEFT approach
### Motivation
What is the best way for the inference process in LORA in PEFT approach
### Your contribution
Here is the SFTTrainer setup I used for fine-tuning Mistral:
```
trainer = SFTTrainer(
model=peft_model,
train_dataset=data,
peft_config=peft_config,
dataset_text_field=" column name",
max_seq_length=3000,
tokenizer=tokenizer,
args=training_arguments,
packing=packing,
)
trainer.train()
```
I found different mechanisms for running inference with the fine-tuned model after PEFT-based LoRA fine-tuning.
Method - 1
Save the adapter after training completes, then merge it with the base model and use the result for inference:
```
trainer.model.save_pretrained("new_adapter_path")
from peft import PeftModel
finetuned_model = PeftModel.from_pretrained(base_model,
new_adapter_path,
torch_dtype=torch.float16,
is_trainable=False,
device_map="auto"
)
finetuned_model = finetuned_model.merge_and_unload()
```
Method - 2
save checkpoints during training and then use the checkpoint with the least loss
```
from peft import PeftModel
finetuned_model = PeftModel.from_pretrained(base_model,
"least loss checkpoint path",
torch_dtype=torch.float16,
is_trainable=False,
device_map="auto"
)
finetuned_model = finetuned_model.merge_and_unload()
```
Method - 3
same method with AutoPeftModelForCausalLM class
```
model = AutoPeftModelForCausalLM.from_pretrained(
"output directory checkpoint path",
low_cpu_mem_usage=True,
return_dict=True,
torch_dtype=torch.float16,
device_map="cuda")
finetuned_model = model.merge_and_unload()
```
Method-4
AutoPeftModelForCausalLM class specifies the output folder without specifying a specific checkpoint
```
instruction_tuned_model = AutoPeftModelForCausalLM.from_pretrained(
training_args.output_dir,
torch_dtype=torch.bfloat16,
device_map = 'auto',
trust_remote_code=True,
)
finetuned_model = instruction_tuned_model.merge_and_unload()
```
Method-5
All the above methods without merging
```
#finetuned_model = finetuned_model.merge_and_unload()
```
Which method should I actually follow for inference, and when should I use one method over another?
|
https://github.com/huggingface/peft/issues/1310
|
closed
|
[] | 2023-12-29T09:49:55Z
| 2024-01-02T15:31:23Z
| null |
pradeepdev-1995
|
huggingface/datasets
| 6,542
|
Datasets : wikipedia 20220301.en error
|
### Describe the bug
When I used load_dataset to download this data set, the following error occurred. The main problem was that the target data did not exist.
### Steps to reproduce the bug
1. I tried downloading directly.
```python
wiki_dataset = load_dataset("wikipedia", "20220301.en")
```
An exception occurred
```
MissingBeamOptions: Trying to generate a dataset using Apache Beam, yet no Beam Runner or PipelineOptions() has been provided in `load_dataset` or in the builder arguments. For big datasets it has to run on large-scale data processing tools like Dataflow, Spark, etc. More information about Apache Beam runners at https://beam.apache.org/documentation/runners/capability-matrix/
If you really want to run it locally because you feel like the Dataset is small enough, you can use the local beam runner called `DirectRunner` (you may run out of memory).
Example of usage:
`load_dataset('wikipedia', '20220301.en', beam_runner='DirectRunner')`
```
2. I modified the code as prompted.
```python
wiki_dataset = load_dataset('wikipedia', '20220301.en', beam_runner='DirectRunner')
```
An exception occurred:
```
FileNotFoundError: Couldn't find file at https://dumps.wikimedia.org/enwiki/20220301/dumpstatus.json
```
### Expected behavior
I searched the parent directory of the corresponding URL, but there was no "20220301" directory.
I really need this dataset and hope a download method can be provided.
### Environment info
python 3.8
datasets 2.16.0
apache-beam 2.52.0
dill 0.3.7
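A possible workaround (not from the issue thread) is to load one of the pre-processed dumps hosted on the Hub instead of building from the Wikimedia dumps; the config name below is an example of the dumps published under `wikimedia/wikipedia`:
```python
from datasets import load_dataset

# Pre-processed dump, no Apache Beam required; "20231101.en" is one of the available configs.
wiki_dataset = load_dataset("wikimedia/wikipedia", "20231101.en")
```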
|
https://github.com/huggingface/datasets/issues/6542
|
closed
|
[] | 2023-12-29T08:34:51Z
| 2024-01-02T13:21:06Z
| 2
|
ppx666
|
huggingface/diffusers
| 6,384
|
How to map A1111 reference_only parameters into diffusers?
|
Thanks to the community for implementing the reference_only functionality in A1111, but how do the parameters correspond to each other? I have tried to reproduce the effect of the webui in the diffusers library, but I can't seem to do it. I'm using the StableDiffusionReferencePipeline community pipeline.
My questions are:
1. Is reference_only in A1111 equivalent to reference_attn=True, reference_adain=False?

2. Some parameters in A1111, such as starting control step, seem to have no corresponding parameters in the pipeline.

3. The style_fidelity in the diffusers pipeline seems to behave quite differently from style_fidelity in A1111.
|
https://github.com/huggingface/diffusers/issues/6384
|
closed
|
[
"stale"
] | 2023-12-29T08:16:15Z
| 2024-01-28T15:29:43Z
| null |
Logos23333
|
huggingface/peft
| 1,308
|
How to check the gradients of lora layers when training a peft model
|
### Feature request
when I trained a lora model like this
```python
model = get_peft_model(model, lora_config)
training(model,data)
```
How can I check the gradients of the LoRA layers of a `peft` model?
### Motivation
check gradients of lora layers from peft model during training
### Your contribution
ni
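A minimal sketch (not from the issue) of one way to inspect LoRA gradients after a backward pass, assuming a standard `get_peft_model` setup and a hypothetical training loop:
```python
def log_lora_grads(model):
    # LoRA parameters typically have names like "...lora_A.default.weight" / "...lora_B.default.weight".
    for name, param in model.named_parameters():
        if "lora_" in name and param.grad is not None:
            print(f"{name}: grad norm = {param.grad.norm().item():.6f}")

# Hypothetical usage inside a training step:
# loss.backward()
# log_lora_grads(model)
# optimizer.step(); optimizer.zero_grad()
```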
|
https://github.com/huggingface/peft/issues/1308
|
closed
|
[] | 2023-12-29T04:26:10Z
| 2024-01-05T04:55:41Z
| null |
stardusts-hj
|
pytorch/tutorials
| 2,724
|
💡 Request - Tutorials for Holistic Trace Analysis
|
### 🚀 Describe the improvement or the new tutorial
Add tutorials explaining how to use features in Holistic Trace Analysis.
### Existing tutorials on this topic
None
### Additional context
HTA eases profiling of distributed jobs in PyTorch. In order to introduce HTA to the PyTorch community, it would be beneficial to add some tutorials.
|
https://github.com/pytorch/tutorials/issues/2724
|
closed
|
[] | 2023-12-28T21:56:27Z
| 2024-01-02T23:03:08Z
| 0
|
anupambhatnagar
|
huggingface/transformers.js
| 484
|
TypeScript Pipeline Types for different models?
|
### Question
Is there a suggested way to get types for the different models? Right now, after I create a pipeline, like one of the following:
```
const segmenter = await pipeline('image-segmentation', 'Xenova/face-parsing');
// or
const extractor = await pipeline(`feature-extraction`, `Xenova/UAE-Large-V1`, {
quantized: true, // Set this to false to use the full (unquantized) model
});
```
All the methods and returned values are typed as `(...args: any[]) => any`, which makes it hard to work with them.
I realize each model returns different outputs, and I'm fairly new to the whole conversion process, but are these types kept somewhere in the Python code or in the JSON files shipped with the model, so that they could be used as TypeScript types?
Ideally `pipeline` would infer the types, but I'm also ok with importing (or generating the types myself) and using it as a generic:
```
const whateve = pipeline<ReturnType>(`task`, `model`)
```
|
https://github.com/huggingface/transformers.js/issues/484
|
closed
|
[
"question"
] | 2023-12-28T21:16:05Z
| 2024-01-02T15:08:47Z
| null |
wesbos
|
huggingface/optimum-neuron
| 395
|
How to use generate() with inputs_embeds
|
I hope this is the right place to ask this question. Let me know if I need to move to another repo.
Currently I'm using `NeuronModelForCausalLM`.
I have a use case where I need to be able to do the following:
1. Generate embedding tokens
2. Modify embedding tokens
3. Run inference from modified embedding tokens
I am able to do steps 1 & 2 currently using the following:
```
from optimum.neuron import NeuronModelForCausalLM
llama_model = NeuronModelForCausalLM.from_pretrained('aws-neuron/Llama-2-7b-chat-hf-seqlen-2048-bs-1')
embedded_tokens = llama_model.model.chkpt_model.model.embed_tokens(token_ids)
### Code to modify embedded_tokens
```
However, as far as I can tell, generation with these modified tokens is not possible with `llama_model.generate()`
When I use the `inputs_embeds` keyword argument and set `input_ids=None`, I get the following:
```
ValueError: The following `model_kwargs` are not used by the model: ['inputs_embeds']
```
If this is not possible with the NeuronModelForCausalLM.generate() currently, is there a way to work around this manually? If so, could you provide an example?
Thanks very much for your help!
|
https://github.com/huggingface/optimum-neuron/issues/395
|
closed
|
[
"Stale"
] | 2023-12-28T18:28:28Z
| 2024-10-31T08:04:57Z
| null |
liechtym
|
huggingface/transformers.js
| 483
|
Unrecognized token '<' when running
|
### Question
I downloaded the react translation example. When I start the app everything seems to render fine, but as soon as I press translate, nothing happens and I get this error in the console on the browser:
`Unhandled Promise Rejection: SyntaxError: JSON Parse error: Unrecognized token '<'`
I've gotten this same issue trying to run other models keeping things very basic as found here: https://huggingface.co/docs/transformers.js/pipelines
UPDATE: This error only happens in Safari, but it works fine in Chrome.
If I try to build the simplest example with React, like in the tutorial link, it fails in both Chrome and Safari.
|
https://github.com/huggingface/transformers.js/issues/483
|
closed
|
[
"question"
] | 2023-12-28T14:44:50Z
| 2023-12-28T20:35:02Z
| null |
philg-204
|
huggingface/transformers.js
| 482
|
How to get the same output as the Python library for the ResNet model?
|
### Question
Hi,
I am trying to translate a Python script so I can use it in my Node server. Currently, I spawn a process to execute the Python code, but I would like to improve the response time by using the transformers.js version.
My problem is that I don't get the same output from the two versions of the code.
The Python output is a vector of dimension 2048.
The JS output is a vector of dimension 1000.
It seems that my code goes wrong as early as the ImageProcessor step, because the `inputs` are not equal.
Python code:
```python
import torch
from transformers import logging
logging.set_verbosity_error()
from PIL import Image
class ImgToVec:
def __init__(self, pretrained_model="microsoft/resnet-50"):
from transformers import AutoImageProcessor, ResNetModel
self.pretrained_model = pretrained_model
self.device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
self.image_processor = AutoImageProcessor.from_pretrained(pretrained_model)
self.model = ResNetModel.from_pretrained(pretrained_model).to(self.device)
def get_embedding(self, file):
im = Image.open(file)
inputs = self.image_processor(im, return_tensors="pt").to(self.device)
print(f"inputs : {inputs} dimensiosn : {inputs['pixel_values'].size()}")
with torch.no_grad():
outputs = self.model(**inputs)
return outputs.pooler_output[0, :, 0, 0].tolist()
# https://cdn-lfs.huggingface.co/repos/cf/db/cfdbeec4acf4145f96e47e07a9e161cade4dbce7cfad3ba24765bf1713d53ef3/d65b6f72943d5e2d4f7e5e4dedfb93aea0fbbda140ae7c3ee772124b579e07c4?response-content-disposition=inline%3B+filename*%3DUTF-8%27%27football-match.jpg%3B+filename%3D%22football-match.jpg%22%3B&response-content-type=image%2Fjpeg&Expires=1704020059&Policy=eyJTdGF0ZW1lbnQiOlt7IkNvbmRpdGlvbiI6eyJEYXRlTGVzc1RoYW4iOnsiQVdTOkVwb2NoVGltZSI6MTcwNDAyMDA1OX19LCJSZXNvdXJjZSI6Imh0dHBzOi8vY2RuLWxmcy5odWdnaW5nZmFjZS5jby9yZXBvcy9jZi9kYi9jZmRiZWVjNGFjZjQxNDVmOTZlNDdlMDdhOWUxNjFjYWRlNGRiY2U3Y2ZhZDNiYTI0NzY1YmYxNzEzZDUzZWYzL2Q2NWI2ZjcyOTQzZDVlMmQ0ZjdlNWU0ZGVkZmI5M2FlYTBmYmJkYTE0MGFlN2MzZWU3NzIxMjRiNTc5ZTA3YzQ%7EcmVzcG9uc2UtY29udGVudC1kaXNwb3NpdGlvbj0qJnJlc3BvbnNlLWNvbnRlbnQtdHlwZT0qIn1dfQ__&Signature=kWwcSkWcf8K62Tgr57HYD5VObZuozl3Jf%7EHV5alcyRA-gvbREfzgjMKU9rVOc84r0uwo9d3f-si-PoJ3GdyB8WObJFJWF0nE9SX5C-f3Nookj4SWevcJkLNgF27KqUPMhWWZ8B3KjEDvcxPirjHfc4fv87-uM%7EQIuazixgu0i8lXpzeSyKdZGNIc3zUG-hDzU3EKCGBWbwnGG9Yq%7Evz%7Eit-vvYc7i1AoYTAteZUP1ngDdywjwNf6VvvGqmyBdMcwVDiA0ShwAhW9Z3mqt%7EVz6HaYipWejY0mWmyVhyCWFtJOe9yrk%7ETJKr5cOV3yq6sM0jSheh3GuSd%7E2qYzjBsDVQ__&Key-Pair-Id=KVTP0A1DKRTAX
result = ImgToVec("microsoft/resnet-50").get_embedding("./football-match.jpg")
```
My JS code:
```ts
class ImgToVec {
public async getEmbedding(
file: string,
pretrainedModel = 'Xenova/resnet-50',
): Promise<number[]> {
const { ResNetForImageClassification, AutoProcessor, RawImage } =
await import('@xenova/transformers');
const model = await ResNetForImageClassification.from_pretrained(
pretrainedModel,
);
const imageProcessor = await AutoProcessor.from_pretrained(pretrainedModel);
const image = await RawImage.read(file);
const inputs = await imageProcessor(image);
const outputs = await model(inputs, { config: { embeddingSize: 2048 } });
console.log('inputs', inputs);
const embedding: number[] = outputs.data;
return embedding;
}
}
const imgToVec = new ImgToVec();
// https://cdn-lfs.huggingface.co/repos/cf/db/cfdbeec4acf4145f96e47e07a9e161cade4dbce7cfad3ba24765bf1713d53ef3/d65b6f72943d5e2d4f7e5e4dedfb93aea0fbbda140ae7c3ee772124b579e07c4?response-content-disposition=inline%3B+filename*%3DUTF-8%27%27football-match.jpg%3B+filename%3D%22football-match.jpg%22%3B&response-content-type=image%2Fjpeg&Expires=1704020059&Policy=eyJTdGF0ZW1lbnQiOlt7IkNvbmRpdGlvbiI6eyJEYXRlTGVzc1RoYW4iOnsiQVdTOkVwb2NoVGltZSI6MTcwNDAyMDA1OX19LCJSZXNvdXJjZSI6Imh0dHBzOi8vY2RuLWxmcy5odWdnaW5nZmFjZS5jby9yZXBvcy9jZi9kYi9jZmRiZWVjNGFjZjQxNDVmOTZlNDdlMDdhOWUxNjFjYWRlNGRiY2U3Y2ZhZDNiYTI0NzY1YmYxNzEzZDUzZWYzL2Q2NWI2ZjcyOTQzZDVlMmQ0ZjdlNWU0ZGVkZmI5M2FlYTBmYmJkYTE0MGFlN2MzZWU3NzIxMjRiNTc5ZTA3YzQ%7EcmVzcG9uc2UtY29udGVudC1kaXNwb3NpdGlvbj0qJnJlc3BvbnNlLWNvbnRlbnQtdHlwZT0qIn1dfQ__&Signature=kWwcSkWcf8K62Tgr57HYD5VObZuozl3Jf%7EHV5alcyRA-gvbREfzgjMKU9rVOc84r0uwo9d3f-si-PoJ3GdyB8WObJFJWF0nE9SX5C-f3Nookj4SWevcJkLNgF27KqUPMhWWZ8B3KjEDvcxPirjHfc4fv87-uM%7EQIuazixgu0i8lXpzeSyKdZGNIc3zUG-hDzU3EKCGBWbwnGG9Yq%7Evz%7Eit-vvYc7i1AoYTAteZUP1ngDdywjwNf6VvvGqmyBdMcwVDiA0ShwAhW9Z3mqt%7EVz6HaYipWejY0mWmyVhyCWFtJOe9yrk%7ETJKr5cOV3yq6sM0jSheh3GuSd%7E2qYzjBsDVQ__&Key-Pair-Id=KVTP0A1DKRTAX
imgToVec.getEmbedding('./football-match.jpg').then((embedding) => {
console.log(embedding);
});
```
Any ideas how to solve my problem, please?
|
https://github.com/huggingface/transformers.js/issues/482
|
closed
|
[
"question"
] | 2023-12-28T11:38:20Z
| 2024-01-10T15:04:22Z
| null |
Spoutnik97
|
huggingface/diffusers
| 6,370
|
How to use diffusers lora in the AUTOMATIC1111
|
Thanks for your great work. I used train_text_to_image_lora_sdxl.py to train on my custom dataset, got these outputs, and the results are good. But I want to use the LoRA weights in AUTOMATIC1111. I moved pytorch_lora_weights into the AUTOMATIC1111 lora folder but get this error: `AssertionError: conversion failed: lora_unet_input_blocks_4_1_transformer_blocks_0_attn1_to_k_lora_A_weight. the model may not be trained by sd-scripts`

What can I do to convert the LoRA model weights to a format that AUTOMATIC1111 can accept?
|
https://github.com/huggingface/diffusers/issues/6370
|
closed
|
[] | 2023-12-28T06:17:19Z
| 2024-01-02T13:38:26Z
| null |
chongxian
|
huggingface/computer-vision-course
| 163
|
How to include "What you'll learn" section for this course?
|
Hello everyone,
Our PR for Fundamentals of Computer Vision was merged a few days back. After that, one thing we still need to acknowledge, based on your [feedback](https://github.com/johko/computer-vision-course/issues/38#issuecomment-1764502604) on our chapter outline, is building a demo using Gradio to give learners a taste of what they'll learn. One of our teammates, @aman06012003, created a simple [Cat vs Dog classifier and deployed it on Hugging Face Spaces](https://ak0601-cat-dog-classifier.hf.space/), which we would like you to take a look at and give feedback on.
Once the demo is finalized, there are two ways to include it, referring to the [Hugging Face Audio Course](https://huggingface.co/learn/audio-course/chapter0/introduction). One is to create a new .mdx file in our fundamentals folder. The other is to create a new chapter - Welcome to the course, where we add what you'll learn, community notes, etc. We are still determining the optimal path, so please guide us.
Team members - @seshu-pavan , @bellabf , @aman06012003
bcc - @MKhalusova @johko @merveenoyan @lunarflu
Best,
Fundamentals team
|
https://github.com/huggingface/computer-vision-course/issues/163
|
closed
|
[] | 2023-12-27T12:41:26Z
| 2024-04-26T13:36:59Z
| null |
seshupavan
|
huggingface/transformers
| 28,260
|
How to set pad_token of Llava for batched generation and training?
|
Hello, @younesbelkada I'm trying to use Llava for batched generation, using the default pad_token. here is the script:
```python
import json
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration,AutoTokenizer
from torch.utils.data import Dataset,DataLoader
import torch
import os
from tqdm import tqdm
DATA_ROOT = "/mnt/gozhang/code/LLaVA/playground/data/eval/mm-vet"
processor = AutoProcessor.from_pretrained("/mnt/gozhang/ckpts/llava-1.5-7b-hf")
tokenizer = AutoTokenizer.from_pretrained("/mnt/gozhang/ckpts/llava-1.5-7b-hf")
class MMVetDataset(Dataset):
def __init__(self,data_root) -> None:
super().__init__()
self.data_root = data_root
with open(os.path.join(data_root, "mm-vet.json"), "r") as f:
data = json.load(f)
self.data = [(k,v) for k,v in data.items()]
def __len__(self):
return len(self.data)
def __getitem__(self, index):
return {'id':self.data[index][0],
'image':os.path.join(self.data_root,'images',self.data[index][1]['imagename']),
'question':"USER: <image>\n"+self.data[index][1]['question']+" ASSISTANT:"}
def collator(batch):
ids = [b['id'] for b in batch]
questions = [b['question'] for b in batch]
images = [Image.open(b['image']) for b in batch]
inputs = processor(text=questions,images=images,return_tensors="pt",padding=True)
return ids,inputs
model = LlavaForConditionalGeneration.from_pretrained("/mnt/gozhang/ckpts/llava-1.5-7b-hf",torch_dtype=torch.float16)
model.to('cuda')
#model.to(torch.float16)
dataset = MMVetDataset(DATA_ROOT)
dataloader = DataLoader(dataset,batch_size=16,collate_fn=collator)
results = {}
bar = tqdm(total=len(dataset))
model.eval()
with torch.inference_mode():
for ids, inputs in dataloader:
inputs.to('cuda')
inputs['pixel_values'] = inputs['pixel_values'].half()
outputs = model.generate(**inputs,temperature=0.2,do_sample=True,max_new_tokens=1024,use_cache=True)
input_token_len = inputs['input_ids'].shape[1]
responses=tokenizer.batch_decode(outputs[:, input_token_len:], skip_special_tokens=True, clean_up_tokenization_spaces=False)
for id,res in zip(ids,responses):
results[id]=res
bar.update(len(responses))
with open('mmvet_result.json','w') as f:
json.dump(results,f,indent=4)
```
But when generating the fifth batch, it reports `RuntimeError: probability tensor contains either inf, nan or element < 0`. I then tried different pad_tokens: `processor.tokenizer.pad_token = processor.tokenizer.unk_token` (following the raw llava codebase), `processor.tokenizer.pad_token = processor.tokenizer.eos_token` (following the common setting), and `processor.tokenizer.pad_token = processor.tokenizer.bos_token` (following this [issue](https://discuss.huggingface.co/t/llama2-pad-token-for-batched-inference/48020)). I found that only setting pad_token to eos_token avoids the error.
I wonder what the effect of different pad_tokens is during batched generation, what the root cause of this error is, and how to set the correct pad_token for training the model?
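For context, here is a hedged sketch of the padding setup commonly used for batched generation with decoder-only models (an assumption, not confirmed as the official Llava recipe); it reuses the `processor` and `model` from the script above:
```python
# Pick a pad token, propagate its id to the configs, and left-pad for generation.
processor.tokenizer.pad_token = processor.tokenizer.eos_token      # or tokenizer.unk_token
processor.tokenizer.padding_side = "left"                          # pad on the left for generation
model.config.pad_token_id = processor.tokenizer.pad_token_id
model.generation_config.pad_token_id = processor.tokenizer.pad_token_id
```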
|
https://github.com/huggingface/transformers/issues/28260
|
closed
|
[] | 2023-12-27T12:17:02Z
| 2024-02-05T02:43:32Z
| null |
TideDra
|
huggingface/transformers
| 28,259
|
How to add new merge rules in AutoTokenizer
|
### Model description
I'm training a new tokenizer from LLaMA-2; however, it seems that BPE tokenizer training clears the original "vocab" and "merges" dicts, and the training result is highly biased toward my own dataset (about 6M C functions), with some ugly tokens.
I wonder whether it is possible to train a tokenizer from LLaMA-2 with the original "vocab" and "merges" dicts unchanged, only adding some new vocab and merge rules from our dataset to support my requirement?
### Open source status
- [ ] The model implementation is available
- [ ] The model weights are available
### Provide useful links for the implementation
_No response_
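As far as I know there is no public API for appending merge rules to an existing BPE tokenizer; a commonly used approximation (an assumption, not an official recipe) is to add frequent strings from the corpus as whole tokens and resize the embeddings, which leaves the original vocab and merges untouched:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

# Hypothetical new tokens mined from the C-function corpus.
new_tokens = ["uint32_t", "->next", "sizeof("]
num_added = tokenizer.add_tokens(new_tokens)
model.resize_token_embeddings(len(tokenizer))
print(f"added {num_added} tokens")
```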
|
https://github.com/huggingface/transformers/issues/28259
|
open
|
[
"New model"
] | 2023-12-27T12:15:26Z
| 2023-12-27T12:15:26Z
| null |
Sandspeare
|
huggingface/accelerate
| 2,289
|
[QUESTION] Why is stage3_gather_16bit_weights_on_model_save set to false no matter what value it has in the DeepSpeed config?
|
[`accelerator._prepare_deepspeed()`](https://github.com/huggingface/accelerate/blob/d08c23c20975f39393b431143237c193733e7bb8/src/accelerate/accelerator.py#L1464C13-L1464C82) looks to force `stage3_gather_16bit_weights_on_model_save` to `false`, which should raise an exception in [`accelerator.get_state_dict()`](https://github.com/huggingface/accelerate/blob/d08c23c20975f39393b431143237c193733e7bb8/src/accelerate/accelerator.py#L2985C17-L2985C68). Additionally, [`trainer.save_model()`](https://github.com/huggingface/transformers/blob/c48787f347bd604f656c2cfff730e029c8f8c1fe/src/transformers/trainer.py#L2827C17-L2827C77) invokes the above function, then catches this exception and raises another exception. Yet the log seems totally fine. I'm confused... why does this happen?
|
https://github.com/huggingface/accelerate/issues/2289
|
closed
|
[] | 2023-12-27T10:04:28Z
| 2024-01-05T06:59:16Z
| null |
LaniakeaS
|
huggingface/diffusers
| 6,352
|
how to choose save precision for lora file in training
|
I'm confused about my LoRA precision (fp16, bf16, float) and whether I can choose the precision of my LoRA weights. I looked for parameters of the **StableDiffusionXLPipeline.save_lora_weights** function used to save LoRA in the SDXL text2img training script and didn't find anything like 'save_precision'.
Can anyone help? Thanks!
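There is no `save_precision` argument that I'm aware of; a hedged workaround (an assumption, not a documented API) is to cast the LoRA state dict before saving, e.g.:
```python
import torch
from diffusers import StableDiffusionXLPipeline

# `unet_lora_state_dict` and `output_dir` are assumed to be whatever the training script already uses;
# casting the tensors first controls the precision of the saved file.
unet_lora_state_dict = {k: v.to(torch.float16) for k, v in unet_lora_state_dict.items()}
StableDiffusionXLPipeline.save_lora_weights(
    save_directory=output_dir,
    unet_lora_layers=unet_lora_state_dict,
)
```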
|
https://github.com/huggingface/diffusers/issues/6352
|
closed
|
[] | 2023-12-27T09:02:47Z
| 2023-12-28T08:21:29Z
| null |
DoctorTar
|
huggingface/transformers.js
| 481
|
Why do certain models not load?
|
### Question
I was keen to try:
https://huggingface.co/upstage/SOLAR-10.7B-Instruct-v1.0
I tried:
```ts
import {
AutoModelForCausalLM,
AutoTokenizer,
} from '@xenova/transformers';
const autoTokenizer = await AutoTokenizer.from_pretrained(
'Upstage/SOLAR-10.7B-Instruct-v1.0',
);
const model = await AutoModelForCausalLM.from_pretrained(
'Upstage/SOLAR-10.7B-Instruct-v1.0',
);
```
But it fails with an error:
```ts
Error: Could not locate file: "https://huggingface.co/Upstage/SOLAR-10.7B-Instruct-v1.0/resolve/main/onnx/decoder_model_merged_quantized.onnx".
```
Is this an error on my side, is the model incompatible, ... ?
|
https://github.com/huggingface/transformers.js/issues/481
|
open
|
[
"question"
] | 2023-12-27T01:44:52Z
| 2024-05-10T18:21:57Z
| null |
adaboese
|
pytorch/TensorRT
| 2,558
|
How to set the input when compiling model for non-image input?
|
Hi, I have trained a model whose input is a set of 3D points with shape `Nx3`, where N is not a fixed number. In this case, how do I set the input when compiling my model?
For an image, the input shape is set like this:
```
inputs = [torch.randn((1, 3, 224, 224)).to("cuda").half()]
```
What should it be for my case? Thank you!
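A hedged sketch of how a dynamic first dimension is usually expressed with `torch_tensorrt.Input` (the min/opt/max values below are assumptions):
```python
import torch
import torch_tensorrt

# N is dynamic, so give a range for the first dimension instead of a fixed shape.
inputs = [
    torch_tensorrt.Input(
        min_shape=(1, 3),
        opt_shape=(4096, 3),
        max_shape=(65536, 3),
        dtype=torch.half,
    )
]
# trt_model = torch_tensorrt.compile(model, inputs=inputs, enabled_precisions={torch.half})
```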
|
https://github.com/pytorch/TensorRT/issues/2558
|
open
|
[
"question"
] | 2023-12-26T12:34:22Z
| 2023-12-27T18:20:31Z
| null |
DeepDuke
|
huggingface/peft
| 1,298
|
[Question] What is the main difference between "modules_to_save" and "target_modules"?
|
Hi, in my work I need to add some special tokens to LLaMA, so I need to train the parameters of ["embed_tokens", "lm_head"] for both of these layers. What confuses me is whether I should add them to LoraConfig's "modules_to_save" or "target_modules". Looking forward to your reply!
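For illustration, here is a sketch (an assumption, not an authoritative answer) of how the two options sit side by side in `LoraConfig`: `target_modules` get low-rank LoRA adapters injected, while `modules_to_save` are trained in full and saved alongside the adapter:
```python
from peft import LoraConfig

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],           # these layers get LoRA adapters
    modules_to_save=["embed_tokens", "lm_head"],   # these layers are fully trained and saved
    task_type="CAUSAL_LM",
)
```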
|
https://github.com/huggingface/peft/issues/1298
|
closed
|
[] | 2023-12-26T07:37:05Z
| 2024-02-03T15:03:27Z
| null |
SatireY
|
huggingface/datasets
| 6,534
|
How to configure multiple folders in the same zip package
|
How should I write the "configs" section in the README when all the data, such as the train and test splits, is in a single zip file?
The train folder and test folder are both inside data.zip.
|
https://github.com/huggingface/datasets/issues/6534
|
open
|
[] | 2023-12-26T03:56:20Z
| 2023-12-26T06:31:16Z
| null |
d710055071
|
pytorch/xla
| 6,234
|
How to determine which input parameters in an HLO graph are the model's weights
|
## ❓ Questions and Help
How can I determine which input parameters in an HLO graph are the model's weights (that is, the parameters saved by the model and the parameters used for model training)? Is there a good way to do this in the C++ torch_xla source code?
For example (a model with only a linear op):
I want to find the linear bias and weight. Here, %arg0 is the bias and %arg1 is the weight.
```
func.func @main(%arg0: tensor<5xf32>, %arg1: tensor<5x10xf32>, %arg2: tensor<1x10xf32>, %arg3: tensor<1x5xf32>) -> tuple<tensor<1x5xf32>, tensor<f32>> {
%0 = mhlo.reshape %arg0 : (tensor<5xf32>) -> tensor<1x5xf32>
%1 = "mhlo.transpose"(%arg1) {permutation = dense<[1, 0]> : tensor<2xi64>, xla_shape = "f32[10,5]{0,1}"} : (tensor<5x10xf32>) -> tensor<10x5xf32>
%2 = "mhlo.fusion"(%0, %arg2, %1) ({
^bb0(%arg4: tensor<1x5xf32>, %arg5: tensor<1x10xf32>, %arg6: tensor<10x5xf32>):
%10 = "mhlo.dot"(%arg5, %arg6) {precision_config = [#mhlo<precision DEFAULT>, #mhlo<precision DEFAULT>]} : (tensor<1x10xf32>, tensor<10x5xf32>) -> tensor<1x5xf32>
%11 = mhlo.add %10, %arg4 : tensor<1x5xf32>
mhlo.return %11 : tensor<1x5xf32>
}) {fusion_kind = #mhlo<fusion_kind kLoop>} : (tensor<1x5xf32>, tensor<1x10xf32>, tensor<10x5xf32>) -> tensor<1x5xf32>
%3 = mhlo.subtract %2, %arg3 : tensor<1x5xf32>
%4 = mhlo.multiply %3, %3 : tensor<1x5xf32>
%5 = mhlo.constant dense<0.000000e+00> : tensor<f32>
%6 = mhlo.reduce(%4 init: %5) across dimensions = [0, 1] : (tensor<1x5xf32>, tensor<f32>) -> tensor<f32>
reducer(%arg4: tensor<f32>, %arg5: tensor<f32>) {
%10 = mhlo.add %arg4, %arg5 : tensor<f32>
mhlo.return %10 : tensor<f32>
}
%7 = mhlo.constant dense<2.000000e-01> : tensor<f32>
%8 = mhlo.multiply %6, %7 : tensor<f32>
%9 = "mhlo.tuple"(%2, %8) {xla_shape = "(f32[1,5]{1,0}, f32[])"} : (tensor<1x5xf32>, tensor<f32>) -> tuple<tensor<1x5xf32>, tensor<f32>>
return %9 : tuple<tensor<1x5xf32>, tensor<f32>>
}
```
|
https://github.com/pytorch/xla/issues/6234
|
closed
|
[] | 2023-12-25T09:22:28Z
| 2024-01-24T06:22:24Z
| null |
ckfgihub
|
pytorch/TensorRT
| 2,557
|
❓ [Question] a10 performance drop significantly
|
## ❓ Question
<!-- Your question -->
I converted the gfpgan model (https://github.com/TencentARC/GFPGAN) with torch_tensorrt, and I found torch_tensorrt is twice as fast as torch on a 3070. But on one a10 server, torch_tensorrt and torch are about the same; on another a10 server, torch_tensorrt is even twice as slow as torch. Statistics are shown below (the two types of a10 are from two different cloud providers).
| GPU | CPU | CPU core | CPU freq | memory | inference framework | CPU usage | memory usage | GPU usage | inference time |
|------------|---------|----------|---------|----------|------|-----------|-----------|----------|----------|
| 3070 | AMD Ryzen 7 5800X 8-Core Processor | 16 | 2200-3800MHz | 32G | pytorch | 30-35% | 160-170% | 13.5g 987.7m | 33.889511s |
| 3070 | | | | | torch_tensorrt | 15-20% | 180-200% | 11.7g 1.1g | 16.259879s |
| a10(v1) | Intel (R) Xeon (R) Platinum 8350C CPU @ 2.60GHz | 28 | 2593MHz | 112G | pytorch | 25-30% | 190-200% | 15.1g 1.2g | 33.933190s |
| a10(v1) | | | | | torch_tensorrt | 15-20% | 190-200% | 13.0g 1.2g | 31.899047s |
| a10(v2)| Intel(R) Xeon(R) Platinum 8336C CPU @ 2.30GHz | 28 | 2300-4600MHz | 112G | pytorch | 20-30% | 180-200% | 15.1g 1.0g | 34.027398s |
| a10(v2)| | | | | torch_tensorrt | 10-15% | 160-170% | 13.1g 1.1g | 66.498723s |
I also tried torch2trt (https://github.com/NVIDIA-AI-IOT/torch2trt) and fixed some op errors, finding it twice as fast as torch_tensorrt on the 3070. Its performance also didn't drop so strangely on the a10 servers.
<!-- A clear and concise description of what you have already done. -->
## Environment
> Build information about Torch-TensorRT can be found by turning on debug messages
- PyTorch Version (e.g., 1.0): nvcr.io/nvidia/pytorch:23.08-py3
- CPU Architecture: as above
- OS (e.g., Linux): linux
- How you installed PyTorch (`conda`, `pip`, `libtorch`, source): docker
- Build command you used (if compiling from source):
- Are you using local sources or building from archives:
- Python version:
- CUDA version:
- GPU models and configuration: as above
- Any other relevant information:
## Additional context
<!-- Add any other context about the problem here. -->
|
https://github.com/pytorch/TensorRT/issues/2557
|
open
|
[
"question"
] | 2023-12-25T08:54:43Z
| 2024-01-05T02:12:17Z
| null |
ArtemisZGL
|
huggingface/trl
| 1,140
|
How to do additional fine-tuning with new data from a previous adapter?
|
Hi all, I have a question about fine-tuning. Currently I use SFTTrainer for fine-tuning the Llama2-7b-chat model and save the result in adapter format. The question is: if I want to do additional fine-tuning on new data starting from the previous adapter, how should I do that? Normally I do additional fine-tuning by merging the adapter with the base model before fine-tuning again. I'm not sure whether my method is correct, or whether there is another, easier way.
Thanks
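A minimal sketch (an assumption about one possible workflow, not an official recommendation) of continuing training directly from the saved adapter without merging first; the model name and adapter path are placeholders:
```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

base_model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-chat-hf")
# Load the previously trained adapter with trainable weights, then keep training on the new data.
model = PeftModel.from_pretrained(base_model, "path/to/previous_adapter", is_trainable=True)
# ... pass `model` to SFTTrainer again with the new dataset ...
```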
|
https://github.com/huggingface/trl/issues/1140
|
closed
|
[] | 2023-12-25T04:19:34Z
| 2024-02-01T15:05:24Z
| null |
SiraHaruethaipree
|
huggingface/optimum
| 1,613
|
Convert opus translation to onnx and run inference from it
|
To convert, I use this snippet:
```
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
from transformers.models.marian import MarianOnnxConfig
import onnxruntime as ort
model_ckpt = "Helsinki-NLP/opus-mt-en-zh"
tokenizer = AutoTokenizer.from_pretrained(model_ckpt)
ref_model = AutoModelForSeq2SeqLM.from_pretrained(model_ckpt)
feature = "seq2seq-lm"
onnx_path = f"onnx/{model_ckpt}-{feature}/"
!python -m transformers.onnx --model={model_ckpt} --atol=1e-4 --feature={feature} {onnx_path}
```
To run inference (which does not work), I use this snippet:
```
import torch
from transformers import AutoTokenizer, pipeline
from optimum.onnxruntime import ORTModelForSeq2SeqLM
model = ORTModelForSeq2SeqLM.from_pretrained("./onnx/Helsinki-NLP/opus-mt-en-zh-seq2seq-lm")
```
The error is
```
FileNotFoundError: Could not find any ONNX model file for the regex ['(.*)?decoder(.*)?with_past(.*)?\\.onnx']
```
Maybe it tries to find model.onnx, but in the folder there are two ONNX files: decoder_model.onnx and encoder_model.onnx.
I think the snippet is from 2022. Has anything changed since then?
Thanks
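A hedged alternative that sidesteps the legacy `transformers.onnx` file layout is to let Optimum export the model itself; treat the exact snippet below as a sketch rather than a verified recipe:
```python
from transformers import AutoTokenizer
from optimum.onnxruntime import ORTModelForSeq2SeqLM

model_ckpt = "Helsinki-NLP/opus-mt-en-zh"
tokenizer = AutoTokenizer.from_pretrained(model_ckpt)
# export=True converts the model to ONNX on the fly and saves encoder/decoder in the expected layout.
model = ORTModelForSeq2SeqLM.from_pretrained(model_ckpt, export=True)
model.save_pretrained("onnx/opus-mt-en-zh")

inputs = tokenizer("Hello world", return_tensors="pt")
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```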
|
https://github.com/huggingface/optimum/issues/1613
|
closed
|
[] | 2023-12-25T04:04:47Z
| 2025-04-29T01:45:20Z
| 5
|
x4080
|
huggingface/chat-ui
| 658
|
chat-ui does not support TGI http URLs when deployed publicly
|
hi @nsarrazin, the chat-ui works well locally
~~~
# .env.local
endpoints: [{"type":"tgi","url":"http://127.0.0.1:8080/generate_stream"}]
~~~
but if I deploy it publicly and chat from an external browser, I get a 403 error:
~~~
403
You don't have access to this conversation. If someone gave you this link, ask them to use the 'share' feature instead.
~~~
This issue may be related to https://github.com/huggingface/chat-ui/issues/364.
It seems that chat-ui only supports https URLs, but TGI only serves http URLs, which conflicts. How can this be fixed?
|
https://github.com/huggingface/chat-ui/issues/658
|
closed
|
[] | 2023-12-25T03:08:10Z
| 2024-04-25T16:27:52Z
| 1
|
walkacross
|
huggingface/transformers.js
| 475
|
How to use your own models
|
### Question
Hey I really appreciate your work here!
I'm very interested in setting up a perfect RAG pipeline / flow and therefore I need a good document extraction with table-transformers and layout detection.
Example :
https://github.com/deepdoctection/deepdoctection
Where I'd use
https://huggingface.co/microsoft/layoutlmv3-base
https://huggingface.co/microsoft/table-transformer-detection
I could ask you to add one of these, but I want to try it myself.
As I understand it, I can use your conversion script and deploy the converted model on huggingface.co so I can consume it. Is that right?
|
https://github.com/huggingface/transformers.js/issues/475
|
closed
|
[
"question"
] | 2023-12-24T21:38:02Z
| 2024-05-15T09:32:26Z
| null |
DomEscobar
|
huggingface/datasets
| 6,530
|
Impossible to save a mapped dataset to disk
|
### Describe the bug
I want to play around with different hyperparameters when training but don't want to re-map my dataset with 3 million samples each time for tens of hours when I [fully fine-tune SDXL](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image_sdxl.py).
After I do the mapping like this:
```
train_dataset = train_dataset.map(compute_embeddings_fn, batched=True)
train_dataset = train_dataset.map(
compute_vae_encodings_fn,
batched=True,
batch_size=16,
)
```
and try to save it like this:
`train_dataset.save_to_disk("test")`
i get this error ([full traceback](https://pastebin.com/kq3vt739)):
```
TypeError: Object of type function is not JSON serializable
The format kwargs must be JSON serializable, but key 'transform' isn't.
```
But what is interesting is that pushing to hub works like that:
`train_dataset.push_to_hub("kopyl/mapped-833-icons-sdxl-1024-dataset", token=True)`
Here is the link of the pushed dataset: https://huggingface.co/datasets/kopyl/mapped-833-icons-sdxl-1024-dataset
### Steps to reproduce the bug
Here is the self-contained notebook:
https://colab.research.google.com/drive/1RtCsEMVcwWcMwlWURk_cj_9xUBHz065M?usp=sharing
### Expected behavior
It should be easily saved to disk
### Environment info
NVIDIA A100, Linux (NC24ads A100 v4 from Azure), CUDA 12.2.
[pip freeze](https://pastebin.com/QTNb6iru)
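The traceback points at a non-serializable `transform` format kwarg; a hedged workaround (an assumption based on that message, not a verified fix) is to drop the on-the-fly format transform before saving:
```python
# Reset the format (including any set_transform/with_transform) so the dataset state is JSON serializable.
train_dataset = train_dataset.with_format(None)
train_dataset.save_to_disk("test")
```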
|
https://github.com/huggingface/datasets/issues/6530
|
open
|
[] | 2023-12-23T15:18:27Z
| 2023-12-24T09:40:30Z
| 1
|
kopyl
|
huggingface/sentence-transformers
| 2,392
|
util.paraphrase_mining returning scores only above 0.98
|
Hey,
I'm using util.paraphrase_mining (sentence-transformers v2.2.2) to get similarity scores (cosine) in a corpus of ~20k texts with the encoder model being all-MiniLM-L6-v2 and with the parameters query_chunk_size=500, corpus_chunk_size=1000, top_k=500000, max_pairs=5000000.
The returned list of triplets contains scores only above 0.98. I was wondering why the lower scores don't appear.
Thanks in advance for your answer!
|
https://github.com/huggingface/sentence-transformers/issues/2392
|
closed
|
[
"question"
] | 2023-12-23T13:00:27Z
| 2024-01-29T14:20:33Z
| null |
sinangokce
|
huggingface/chat-ui
| 656
|
Web Search failed with "Invalid URL"
|

Why is this happening? It seems to happen regardless of whether I have USE_LOCAL_WEBSEARCH set to true or false.
```
SERPAPI_KEY=<my key>
USE_LOCAL_WEBSEARCH=true
MODELS=`[
{
"name": "mistralai/Mixtral-8x7b-Instruct-v0.1",
"displayName": "mistralai/Mixtral-8x7b-Instruct-v0.1",
"description": "Mixtral-8x7b-Instruct-v0.1 is a state of the art language model, based on a mixture of experts, that outperforms ChatGPT.",
"websiteUrl": "https://www.aaprintsupplyco.com",
"preprompt": "",
"chatPromptTemplate" : "<s>{{#each messages}}{{#ifUser}}[INST] {{#if @first}}{{#if @root.preprompt}}{{@root.preprompt}}\n{{/if}}{{/if}}{{content}} [/INST]{{/ifUser}}{{#ifAssistant}}{{content}}</s>{{/ifAssistant}}{{/each}}",
"parameters": {
"temperature": 0.4,
"top_p": 0.95,
"top_k": 50,
"truncate": 31768,
"max_new_tokens": 2048,
"stop": ["[INST]","</s>"]
},
"endpoints" : [{
"type": "openai",
"baseURL": "https://api.together.xyz/v1"
}],
"promptExamples": [
{
"title": "Write a blog post",
"prompt": "Your goal is to help me create a compelling blog post about a topic.\nYou will follow the following process:\n\n1. Ask me for the topic of the blog post.\n2. After I provide my answer you will need to collect some additional information by going through the next steps:\na) Questions (ask any relevant questions pertaining to what additional information is needed from me to write a good blog post).\n\nOnce you have enough information, or once I say I am done, you will write the blog post."
}, {
"title": "Improve my English",
"prompt": "I want you to act as an English grammar and spelling corrector and improver. I will speak to you and you will answer in the corrected and improved version of my text, in English. I want you to replace my simplified A0-level words and sentences with improved, higher level English words and sentences. Keep the meaning same, but make them sound better. I want you to only reply the correction, the improvements and nothing else, do not write explanations. If there is nothing to improve, just reply with the original text."
}, {
"title": "Assist in a task",
"prompt": "I want you to be my Prompt engineer. Your goal is to help me craft the best possible instruction prompt for my needs. The prompt will be used by you, an AI model. You will follow the following process:\n\n1. Your first response will be to simply ask me what the task I want to accomplish. \n2. After I provide my answer and you will generate a first iteration of the prompt, but we will need to improve it through continual iterations by going through the next steps. You will generate two sections:\na) Revised prompt (provide your rewritten prompt, it should be clear, concise, and easily understood by you),\nb) Questions (ask any relevant questions pertaining to what additional information is needed from me to improve the prompt).\n3. We will continue this iterative process with me providing additional information to you and you updating the prompt in the Revised prompt section until I say we are done.\n\nOnly after I say I am done, will you provide a response to the revised prompt."
}
]
},
{
"name": "openchat/openchat-3.5-1210",
"displayName": "openchat/openchat-3.5-1210",
"description": "OpenChat 3.5 is the #1 model on MT-Bench, with only 7B parameters. Small and fast.",
"websiteUrl": "https://www.aaprintsupplyco.com",
"preprompt": "",
"chatPromptTemplate" : "<s>{{#each messages}}{{#ifUser}}GPT4 Correct User: {{#if @first}}{{#if @root.preprompt}}{{@root.preprompt}}\n{{/if}}{{/if}}{{content}}<|end_of_turn|>GPT4 Correct Assistant:{{/ifUser}}{{#ifAssistant}}{{content}}<|end_of_turn|>{{/ifAssistant}}{{/each}}",
"parameters": {
"temperature": 0.4,
"top_p": 0.95,
"top_k": 50,
"truncate": 8192,
"max_new_tokens": 1024,
"stop": ["<|end_of_turn|>","</s>"]
},
"endpoints" : [{
"type": "openai",
"baseURL": "https://api.together.xyz/v1"
}],
"promptExamples": [
{
"title": "Write a blog post",
"prompt": "Your goal is to help me create a compelling blog post about a topic.\nYou will follow the following process:\n\n1. Ask me for the topic of the blog post.\n2. After I provide my answer you will need to collect some additional information by going through the next steps:\na) Questions (ask any relevant questions pertaining to what additional information is needed from me to write a good blog post).\n\nOnce you have enough information, or once I say I am done, you will write the blog post."
}, {
"titl
|
https://github.com/huggingface/chat-ui/issues/656
|
closed
|
[] | 2023-12-22T19:19:34Z
| 2024-01-09T05:45:13Z
| 5
|
gururise
|
huggingface/chat-ui
| 655
|
Generation failed (Module.summarize) when using TogetherAI openai compatible endpoint
|
TogetherAI offers an [OpenAI compatible endpoint](https://docs.together.ai/docs/openai-api-compatibility). When using this endpoint with the model setup as follows:
```
MODELS=`[
{
"name": "mistralai/Mixtral-8x7b-Instruct-v0.1",
"displayName": "Mixtral-8x7b",
"endpoints" : [{
"type": "openai",
"baseURL": "https://api.together.xyz/v1"
}],
"promptExamples": [
{
"title": "Write an email from bullet list",
"prompt": "As a restaurant owner, write a professional email to the supplier to get these products every week: \n\n- Wine (x10)\n- Eggs (x24)\n- Bread (x12)"
}, {
"title": "Code a snake game",
"prompt": "Code a basic snake game in python, give explanations for each step."
}, {
"title": "Assist in a task",
"prompt": "How do I make a delicious lemon cheesecake?"
}
]
}
]`
TASK_MODEL=`{
"name": "openchat/openchat-3.5-1210",
"chatPromptTemplate" : "<s>{{#each messages}}{{#ifUser}}GPT4 Correct User: {{#if @first}}{{#if @root.preprompt}}{{@root.preprompt}}\n{{/if}}{{/if}}{{content}}<|end_of_turn|>GPT4 Correct Assistant:{{/ifUser}}{{#ifAssistant}}{{content}}<|end_of_turn|>{{/ifAssistant}}{{/each}}",
"parameters": {
"temperature": 0.1,
"top_p": 0.95,
"repetition_penalty": 1.2,
"top_k": 50,
"truncate": 3072,
"max_new_tokens": 1024,
"stop": ["<|end_of_turn|>","</s>"]
},
"endpoints" : [{
"type": "openai",
"baseURL": "https://api.together.xyz/v1"
}]
}`
```
Inference and streaming work just fine with the output displayed in the chat window; however, in the console, the **following error always appears** after every interaction, and the conversation titles are never summarized.
```
Error: Generation failed
at Module.generateFromDefaultEndpoint (/home/gene/Downloads/chat-ui/src/lib/server/generateFromDefaultEndpoint.ts:22:9)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async Module.summarize (/home/gene/Downloads/chat-ui/src/lib/server/summarize.ts:28:10)
at async eval (/home/gene/Downloads/chat-ui/src/routes/conversation/[id]/+server.ts:167:26)
```
Even if I try setting TASK_MODEL='mistralai/Mixtral-8x7b-Instruct-v0.1', I still get this error.
|
https://github.com/huggingface/chat-ui/issues/655
|
open
|
[] | 2023-12-22T17:34:59Z
| 2024-01-23T05:14:26Z
| 1
|
gururise
|
huggingface/datasets
| 6,529
|
Impossible to only download a test split
|
I've spent a significant amount of time trying to locate the split object inside my custom `_split_generators()` function.
Then, after diving [into the code](https://github.com/huggingface/datasets/blob/5ff3670c18ed34fa8ddfa70a9aa403ae6cc9ad54/src/datasets/load.py#L2558), I realized that `download_and_prepare` is executed before the split is passed to the dataset builder in `as_dataset`.
If I'm not missing something, this seems like bad design for the following use case:
> Imagine there is a huge dataset that has an evaluation test set and you want to download just that split to compare your method against it.
Is there a current workaround that can help me achieve the same result?
Thank you,
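One hedged workaround (not from the issue thread) is to stream the split, so that nothing beyond what you iterate over is downloaded; the dataset name below is a placeholder:
```python
from datasets import load_dataset

# Streaming skips download_and_prepare entirely; only the requested split is read, lazily.
test_ds = load_dataset("some/huge-dataset", split="test", streaming=True)
for example in test_ds.take(100):
    print(example)
```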
|
https://github.com/huggingface/datasets/issues/6529
|
open
|
[] | 2023-12-22T16:56:32Z
| 2024-02-02T00:05:04Z
| 2
|
ysig
|
huggingface/transformers.js
| 470
|
How to convert a model with .pt tail
|
### Question
I'm new to this area. I'm wondering how to convert a model with a .pt extension? Thanks a lot.
|
https://github.com/huggingface/transformers.js/issues/470
|
open
|
[
"question"
] | 2023-12-22T10:20:16Z
| 2023-12-23T20:46:37Z
| null |
Bzayyz
|
huggingface/transformers.js
| 469
|
How to convert a model with .pt tail
|
### Question
I'm new to this area. I'm wondering how to convert a model with a .pt extension? Thanks a lot.
|
https://github.com/huggingface/transformers.js/issues/469
|
closed
|
[
"question"
] | 2023-12-22T10:20:05Z
| 2023-12-22T10:20:54Z
| null |
Bzayyz
|
pytorch/tutorials
| 2,721
|
[BUG] - RuntimeError: CUDA error: an illegal memory access was encountered when using vmap and model ensembling that calls into CUDA
|
### Add Link
https://pytorch.org/tutorials/intermediate/ensembling.html
https://pytorch.org/docs/stable/notes/extending.func.html#defining-the-vmap-staticmethod
### Describe the bug
### 🐛 Describe the bug
I want to use **vmap** to vectorize **ensembled models** that inherit from torch.autograd.Function, where the torch.autograd.Function's forward/backward calls into functions from **CUDA**.
Firstly, I set **generate_vmap_rule=True**, which means calling the system's vmap function directly.
**error: RuntimeError: Cannot access data pointer of Tensor that doesn't have storage**
Because the model calls into CUDA, I need to write my own vmap:
```python
def vmap(info, in_dims, input):
    if in_dims[0] is not None:
        input_B = input.shape[0]
        input = einops.rearrange(input, 'B N C -> (B N) C')
    outputs, _, _ = model.apply(input)
    if in_dims[0] is not None:
        outputs = einops.rearrange(input, '(B N) C -> B N C', B=input_B)
    return outputs, (0)
```
**error: RuntimeError: CUDA error: an illegal memory access was encountered. CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.**
**How can I write vmap so that multiple models process multiple batches of data while the models call into CUDA to process the data?**
The code follows; I have simplified the model class.
```python
class model(torch.autograd.Function):
    def forward():
        ...  # calls into the CUDA forward
    def backward():
        ...  # calls into the CUDA backward
    def setup_context():
        ...
    @staticmethod
    def vmap():
        ...

from torch.func import stack_module_state
b_p = torch.randn([10, 100, 3]).cuda()
objs = [model() for i in range(10)]
pe_models = []
for obj in objs:
    pe_models.append(obj.pe)
pe_param, pe_buffer = stack_module_state(pe_models)
base_model = copy.deepcopy(pe_models[0])
def fmodel(params, buffers, x):
    return functional_call(base_model, (params, buffers), x)
out = vmap(fmodel)(pe_param, pe_buffer, b_p)
```
### Describe your environment
### Versions
pytorch2.0
cuda11.7
python 3.8
ubuntu20.4
collect_env.py error update later
cc @albanD
|
https://github.com/pytorch/tutorials/issues/2721
|
open
|
[
"bug",
"core"
] | 2023-12-22T09:26:03Z
| 2024-01-04T08:27:38Z
| 2
|
wuyingxiong
|
huggingface/chat-ui
| 650
|
chat-ui docker image failed to connect to the mongo docker container
|
step 1: build the chat-ui image
~~~
docker build -t chat-ui -f ./Dockerfile.local .
~~~
step 2:
~~~
# bind the 27016
docker run -d -p 27016:27017 --name mongo-chatui mongo:latest
~~~
step 3: run a container
~~~
# add a .env.local config
MONGODB_URL=mongodb://localhost:27016
HF_TOKEN=<your access token>
~~~
~~~
docker run --rm --mount type=bind,source="$(pwd)/.env.local",target=/app/.env.local -p 3000:3000 chat-ui
~~~
## results: when load localhost:3000
~~~
MongoServerSelectionError: connect ECONNREFUSED 127.0.0.1:27016
at Timeout._onTimeout (/app/node_modules/mongodb/lib/sdam/topology.js:278:38)
at listOnTimeout (node:internal/timers:573:17)
at process.processTimers (node:internal/timers:514:7) {
reason: TopologyDescription {
type: 'Unknown',
servers: Map(1) { 'localhost:27016' => [ServerDescription] },
stale: false,
compatible: true,
heartbeatFrequencyMS: 10000,
localThresholdMS: 15,
setName: null,
maxElectionId: null,
maxSetVersion: null,
commonWireVersion: 0,
logicalSessionTimeoutMinutes: null
},
code: undefined,
[Symbol(errorLabels)]: Set(0) {}
}
MongoTopologyClosedError: Topology is closed
at /app/node_modules/mongodb/lib/sdam/topology.js:218:46 {
[Symbol(errorLabels)]: Set(0) {}
}
MongoTopologyClosedError: Topology is closed
at processWaitQueue (/app/node_modules/mongodb/lib/sdam/topology.js:514:46)
at Topology.selectServer (/app/node_modules/mongodb/lib/sdam/topology.js:283:9)
at Topology.<anonymous> (/app/node_modules/mongodb/lib/sdam/topology.js:42:94)
at node:internal/util:442:7
at new Promise (<anonymous>)
at Topology.selectServerAsync (node:internal/util:428:12)
at executeOperationAsync (/app/node_modules/mongodb/lib/operations/execute_operation.js:74:35)
at /app/node_modules/mongodb/lib/operations/execute_operation.js:12:45
at maybeCallback (/app/node_modules/mongodb/lib/utils.js:293:21)
at executeOperation (/app/node_modules/mongodb/lib/operations/execute_operation.js:12:38) {
[Symbol(errorLabels)]: Set(0) {}
}
~~~
@nsarrazin
|
https://github.com/huggingface/chat-ui/issues/650
|
open
|
[
"support",
"docker"
] | 2023-12-22T08:34:52Z
| 2025-05-25T20:37:17Z
| 6
|
walkacross
|
huggingface/chat-ui
| 649
|
Formatting is incorrect when using LiteLLM (Together.ai)
|
I'm using Mixtral-7b-Instruct-v0.1 via [LiteLLM](https://github.com/BerriAI/litellm) to provide an OpenAI-compatible API for together.ai, where the model is hosted.
Everything works fine, including streaming; however, the formatting is messed up as shown. Any ideas why?

|
https://github.com/huggingface/chat-ui/issues/649
|
closed
|
[
"bug",
"question",
"front",
"models"
] | 2023-12-22T05:46:37Z
| 2023-12-22T17:11:09Z
| null |
gururise
|
huggingface/distil-whisper
| 67
|
I can only use its encoder to extract audio features, right? How should I use it? Could you provide an example
|
I can only use its encoder to extract audio features, right? How should I use it? Could you provide an example?
|
https://github.com/huggingface/distil-whisper/issues/67
|
open
|
[] | 2023-12-22T03:50:32Z
| 2024-01-15T18:07:34Z
| null |
wvinzh
|
pytorch/serve
| 2,866
|
Do the workers work in parallel?
|
### 📚 The doc issue
This is not an issue. How do the workers work in parallel, and do they actually run in parallel?
### Suggest a potential alternative/fix
_No response_
|
https://github.com/pytorch/serve/issues/2866
|
closed
|
[
"triaged"
] | 2023-12-21T23:33:39Z
| 2024-01-05T20:54:53Z
| 7
|
IonBoleac
|
huggingface/transformers.js
| 468
|
Node.js
|
### Question
Will this library work with Node.js?
|
https://github.com/huggingface/transformers.js/issues/468
|
closed
|
[
"question"
] | 2023-12-21T23:03:36Z
| 2023-12-21T23:06:53Z
| null |
Julianbullmagic
|
huggingface/gsplat.js
| 47
|
I don't need the loading progress and onProgress. When data is loaded, how can I render it on the interface immediately?
|
I don't need the loading progress. When the data is loaded, how can I render it on the interface immediately? I looked at the Loader class, but nothing seems to be done there.
|
https://github.com/huggingface/gsplat.js/issues/47
|
closed
|
[] | 2023-12-21T20:13:52Z
| 2024-01-29T20:15:01Z
| null |
did66
|
huggingface/candle
| 1,463
|
How to introduce openai triton in candle?
|
Handwritten CUDA operators are very complicated. How can we use OpenAI Triton in candle to simplify this process? :)
|
https://github.com/huggingface/candle/issues/1463
|
open
|
[] | 2023-12-21T18:42:38Z
| 2024-01-01T11:56:29Z
| null |
tyfeng1997
|
pytorch/audio
| 3,720
|
Can't install some of the libraries
|
Hello, I have a problem installing some of the libraries because I can't install the fcntl module. Is there any solution? It works on one Windows PC but not on my main one. That module is Linux-only.
|
https://github.com/pytorch/audio/issues/3720
|
open
|
[] | 2023-12-21T13:58:55Z
| 2023-12-21T13:58:55Z
| 0
|
Toplica001
|
pytorch/audio
| 3,719
|
streamreader add_video_stream doesn't seem to accept any filter_desc options
|
### 🐛 Describe the bug
I'm using the following options in my streamreader:
```
vr.add_video_stream(
frames_per_chunk=decode_size,
decoder=codec,
decoder_option={"threads": "0", "gpu": "0"},
hw_accel='cuda',
filter_desc=f"format=pix_fmts=rgb24"
)
```
Unfortunately I get the error `RuntimeError: Failed to configure the graph: Function not implemented`.
If I remove the filter_desc option the code runs normally. For me the streamreader is not very useful if the output is not in rgb24 but in yuv444p instead. Is there a way to fix this (without moving to the nightly build), or are there any alternatives?
### Versions
PyTorch version: 2.1.2+cu118
Is CUDA available: True
[pip3] numpy==1.24.1
[pip3] torch==2.1.2+cu118
[pip3] torchaudio==2.1.2+cu118
[pip3] torchvision==0.16.2+cu118
[pip3] triton==2.1.0
|
https://github.com/pytorch/audio/issues/3719
|
open
|
[] | 2023-12-21T09:58:03Z
| 2023-12-28T07:46:49Z
| 1
|
caspersmit-sa
|
huggingface/transformers
| 28,179
|
How to fine tune facebook/esm2_t33_650M_UR50D
|
### System Info
How to fine-tune facebook/esm2_t33_650M_UR50D? It's too big and model.half() doesn't work. Besides, I always get the error: CUDA error: CUBLAS_STATUS_INTERNAL_ERROR when calling `cublasSgemm(handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)`. Is it possible that the model on Hugging Face is wrong?
The following is the script:
```python
from os.path import join
import os
import pandas as pd
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import torch.utils.data as data
import transformers
from transformers import AutoTokenizer, AutoModelForSequenceClassification, Trainer
from datasets import Dataset, load_metric
from sklearn.model_selection import train_test_split

# os.environ['CUDA_VISIBLE_DEVICES'] = '1'
CURRENT_DIR = os.getcwd()
check_point = join(CURRENT_DIR, "esm1b_t33_650M_UR50S")

# Data processing
def process_tsv(file):
    sequences = list()
    labels = list()
    df = pd.read_csv(file, sep="\t")
    for ind in df.index:
        sequences.append(df["sequence"][ind])
        labels.append(df["label"][ind])
    return sequences, labels

def tokenize_add_label(sequences, labels, tokenizer):
    """This function takes sequences and labels, creates a Dataset containing tokenized sequences and adds labels to it
    args:
        sequences (str): a list of sequences
        labels (int): a list of labels
        tokenizer : a pre-trained tokenizer
    return:
        Dataset: tokenized sequences and associated labels"""
    sequences_tokenized = tokenizer(sequences, padding=True, truncation=True)
    sequences_tokenized = torch.float16(sequences_tokenized)
    labels = torch.tensor(labels)
    labels = labels.long()
    sequences_dataset = Dataset.from_dict(sequences_tokenized)
    sequences_dataset = sequences_dataset.add_column("labels", labels)
    return sequences_dataset

sequences, labels = process_tsv(join(CURRENT_DIR, "example.tsv"))
tokenizer = AutoTokenizer.from_pretrained(check_point)
sequences_dataset = tokenize_add_label(sequences, labels, tokenizer)
num_labels = max(labels) + 1
model = AutoModelForSequenceClassification.from_pretrained(check_point, num_labels=num_labels)
# device = "cuda" if torch.cuda.is_available() else "cpu"
# model.to(device)
model.cuda()
# model = model.half()
# model.enable_input_require_grads()
model_name = check_point.split("/")[-1]
trainer_dir = f"{model_name}-finetuned-model_esm-1b_on_7beta"
if not os.path.exists(trainer_dir):
    os.mkdir(trainer_dir)
batch_size = 1
training_args = transformers.TrainingArguments(
    output_dir=trainer_dir,                      # output directory
    overwrite_output_dir=True,
    num_train_epochs=3,                          # total number of training epochs
    per_device_train_batch_size=batch_size,      # batch size per device during training
    per_device_eval_batch_size=batch_size,       # batch size for evaluation
    learning_rate=2e-5,
    warmup_steps=500,                            # number of warmup steps for learning rate scheduler
    weight_decay=0.01,                           # strength of weight decay
    logging_dir=trainer_dir,                     # directory for storing logs
    logging_steps=10,
    load_best_model_at_end=True,
    evaluation_strategy="epoch",
    save_strategy="epoch",
    save_total_limit=1,
    metric_for_best_model="accuracy",
    greater_is_better=True,
    disable_tqdm=True,
    gradient_accumulation_steps=2,
    gradient_checkpointing=True,
)
metric = load_metric(join(CURRENT_DIR, "metrics", "accuracy/accuracy.py"))

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    print("logits", logits)
    print("labels", labels)
    predictions = np.argmax(logits, axis=-1)
    print("predictions", predictions)
    return metric.compute(predictions=predictions, references=labels)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=sequences_dataset,
    eval_dataset=sequences_dataset,
    tokenizer=tokenizer,
    compute_metrics=compute_metrics,
)
model.config.problem_type
trainer.train()
trainer.state.log_history
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Asking to truncate to max_length but no maximum length is provided and the model has no predefined maximum length. Default to no truncation.
Some weights of EsmForSequenceClassification were not initialized from the model checkpoint at /home/wangmuqiang/fine_tune_esm2/esm1b_t33_650M_UR50S and are newly initialized: ['classifier.dense.bias', 'classifier.out_proj.bias', 'classifier.out_proj.weight', 'classifier.dense.weight']
You should probably TRAIN this model on a down-stream task to be able to use it fo
|
https://github.com/huggingface/transformers/issues/28179
|
closed
|
[] | 2023-12-21T09:50:27Z
| 2024-01-30T08:03:39Z
| null |
Admire7494
|
huggingface/alignment-handbook
| 81
|
Why do we use a lower batch size when comparing SFT LoRA with SFT full fine-tuning?
|
https://github.com/huggingface/alignment-handbook/blob/main/recipes/zephyr-7b-beta/sft/config_lora.yaml
|
https://github.com/huggingface/alignment-handbook/issues/81
|
closed
|
[] | 2023-12-20T21:09:33Z
| 2024-01-07T21:03:14Z
| 2
|
shamanez
|
huggingface/trl
| 1,115
|
How to prepare multi-turn dialogue dataset for dpo?
|
the single-turn dialogue dataset is like:
```python
dpo_dataset_dict = {
    "prompt": [
        "hello",
        "how are you",
        "What is your name?",
        "What is your name?",
        "Which is the best programming language?",
        "Which is the best programming language?",
        "Which is the best programming language?",
    ],
    "chosen": [
        "hi nice to meet you",
        "I am fine",
        "My name is Mary",
        "My name is Mary",
        "Python",
        "Python",
        "Java",
    ],
    "rejected": [
        "leave me alone",
        "I am not fine",
        "Whats it to you?",
        "I dont have a name",
        "Javascript",
        "C++",
        "C++",
    ],
}
```
So, how to prepare a multi-turn dialogue dataset? Can you provide an example? Thank you!
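A minimal sketch of what I have in mind (assuming earlier turns can simply be concatenated into the prompt with some chat-template-like formatting; the role tags are my own assumption, not an official trl format):
```python
# Hypothetical multi-turn layout: the history is folded into the prompt,
# and only the final assistant reply differs between chosen and rejected.
dpo_multiturn_dict = {
    "prompt": [
        "User: hello\nAssistant: hi nice to meet you\nUser: how are you\nAssistant:",
    ],
    "chosen": [
        " I am fine, thanks for asking!",
    ],
    "rejected": [
        " I am not fine",
    ],
}
```
Is something like this the intended way, or does DPOTrainer expect a different structure for multi-turn data?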
|
https://github.com/huggingface/trl/issues/1115
|
closed
|
[
"🏋 DPO"
] | 2023-12-20T09:14:45Z
| 2024-10-03T14:12:48Z
| null |
chloefresh
|
huggingface/transformers
| 28,155
|
What is the minimum video card memory required to run the Mixtral-8x7B model?
|
I mean the model that just came out: mistralai/Mixtral-8x7B-Instruct-v0.1. It looks like a lot of parameter files; what is the minimum NVIDIA graphics card video memory required?
|
https://github.com/huggingface/transformers/issues/28155
|
closed
|
[] | 2023-12-20T01:54:45Z
| 2024-01-28T08:04:44Z
| null |
zysNLP
|
huggingface/dataset-viewer
| 2,218
|
JobManagerCrashedError jobs are never retried
|
Currently, we have 7768 jobs with error_code `JobManagerCrashedError`. Some of them are caused by zombie killer set crashes.
```
Atlas atlas-x5jgb3-shard-0 [primary] datasets_server_cache> db.cachedResponsesBlue.aggregate([{$match:{error_code:"JobManagerCrashedError","details.copied_from_artifact":{$exists:false}}},{$group:{_id:{kind:"$kind"},count:{$sum:1}}},{$sort:{count:-1}}])
[
{ _id: { kind: 'split-duckdb-index' }, count: 3658 },
{ _id: { kind: 'split-descriptive-statistics' }, count: 1872 },
{ _id: { kind: 'config-parquet-and-info' }, count: 1765 },
{ _id: { kind: 'split-first-rows-from-streaming' }, count: 322 },
{ _id: { kind: 'split-first-rows-from-parquet' }, count: 72 },
{ _id: { kind: 'split-opt-in-out-urls-scan' }, count: 60 },
{ _id: { kind: 'dataset-config-names' }, count: 21 }
]
```
But most of them are set as crashed when deploying and are never retried, even if they are fast and straightforward to process.
Should we retry those jobs in backfill? I think we should differentiate the ones that are easy to process against those that are difficult (primarily because of OOMs), maybe retry once or twice, and set a different error so that we can identify which of them are caused by limited resources.
|
https://github.com/huggingface/dataset-viewer/issues/2218
|
closed
|
[
"question"
] | 2023-12-19T15:22:30Z
| 2024-01-09T20:32:58Z
| null |
AndreaFrancis
|
pytorch/benchmark
| 2,094
|
how to get the memory test job
|
https://arxiv.org/pdf/2304.14226.pdf: the paper says TorchBench can do memory tests, but I can't find any test jobs for memory testing.
https://github.com/pytorch/benchmark/actions
|
https://github.com/pytorch/benchmark/issues/2094
|
closed
|
[] | 2023-12-19T14:18:29Z
| 2023-12-20T01:59:46Z
| null |
GuWei007
|
pytorch/TensorRT
| 2,551
|
❓ [Question] Error regarding the operation of pytorch_quantization:/lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.32' not found
|
## ❓ Question
<!-- Your question -->
When I run finetune_qat.py for vgg I get the error:
```
python finetune_qat.py
Traceback (most recent call last):
File "/home/incar/tms/source/tensortclassicify/finetune_qat.py", line 16, in <module>
from pytorch_quantization import nn as quant_nn
File "/home/incar/miniconda3/envs/timm/lib/python3.10/site-packages/pytorch_quantization/__init__.py", line 20, in <module>
from .quant_modules import *
File "/home/incar/miniconda3/envs/timm/lib/python3.10/site-packages/pytorch_quantization/quant_modules.py", line 23, in <module>
from pytorch_quantization import nn as quant_nn
File "/home/incar/miniconda3/envs/timm/lib/python3.10/site-packages/pytorch_quantization/nn/__init__.py", line 19, in <module>
from pytorch_quantization.nn.modules.tensor_quantizer import *
File "/home/incar/miniconda3/envs/timm/lib/python3.10/site-packages/pytorch_quantization/nn/modules/tensor_quantizer.py", line 24, in <module>
from pytorch_quantization.tensor_quant import QuantDescriptor, tensor_quant, fake_tensor_quant, scaled_e4m3
File "/home/incar/miniconda3/envs/timm/lib/python3.10/site-packages/pytorch_quantization/tensor_quant.py", line 28, in <module>
from pytorch_quantization import cuda_ext
ImportError: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.32' not found (required by /home/incar/miniconda3/envs/timm/lib/python3.10/site-packages/pytorch_quantization/cuda_ext.cpython-310-x86_64-linux-gnu.so)
```
## Environment
> Build information about Torch-TensorRT can be found by turning on debug messages
- PyTorch Version (e.g., 1.0):
'2.1.2+cu121'
- CPU Architecture:
intel
- OS (e.g., Linux):
ubuntu 20.04
- How you installed PyTorch (`conda`, `pip`, `libtorch`, source):
pip install torch torchvision torchaudio
- Build command you used (if compiling from source):
pip install nvidia-pyindex sphinx-glpi-theme prettytable pyyaml absl-py scipy
pip install -i https://pypi.ngc.nvidia.com pytorch-quantization
- Are you using local sources or building from archives:
no
- Python version:
3.10
- CUDA version:
12.2
- GPU models and configuration:
- Any other relevant information:
## Additional context
So, how can I run pytorch_quantization?
|
https://github.com/pytorch/TensorRT/issues/2551
|
open
|
[
"question"
] | 2023-12-19T10:16:49Z
| 2024-02-16T02:29:47Z
| null |
tms2003
|
huggingface/optimum
| 1,608
|
XENOVA conversion issues
|
### System Info
```shell
using the requirements.txt in Xenova for environment.
https://github.com/xenova/transformers.js/blob/main/scripts/requirements.txt
```
### Who can help?
@xenova
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction (minimal, reproducible, runnable)
"Error while initializing BPE: Token `_</w>` out of vocabulary"
### Expected behavior
Been trying to run BlenderBot 90M, 400M, and 1B distilled.
Have had lots of issues, but I'll start with this one.
Attempt 1: loading from a local copy after pulling the large files (git-lfs) from the HF repo.
```python
tokenizer = AutoTokenizer.from_pretrained(model)
model = ORTModelForSeq2SeqLM.from_pretrained(model)
inputs = tokenizer("what is a black hole", return_tensors="pt")
gen_tokens = model.generate(**inputs)
response = tokenizer.batch_decode(gen_tokens)
```
Attempt 2: loading directly from the repo using a pipeline.
```python
from transformers import AutoTokenizer, pipeline
from optimum.onnxruntime import ORTModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("Xenova/blenderbot_small-90M")
model = ORTModelForSeq2SeqLM.from_pretrained("Xenova/blenderbot_small-90M")
onnx_pipe = pipeline("conversational", model=model, tokenizer=tokenizer)
text = "what is a black hole"
response = onnx_pipe(text)
```
Both cases give this error: "Error while initializing BPE: Token `_</w>` out of vocabulary"
|
https://github.com/huggingface/optimum/issues/1608
|
closed
|
[
"bug"
] | 2023-12-19T02:11:58Z
| 2023-12-19T04:54:00Z
| 3
|
gidzr
|
pytorch/torchx
| 802
|
Why can't tracker entrypoint be specified in .torchxconfig
|
## ❓ Questions and Help
### Please note that this issue tracker is not a help form and this issue will be closed.
Before submitting, please ensure you have gone through our
[documentation](https://pytorch.org/torchx).
### Question
The [documentation](https://pytorch.org/torchx/main/tracker.html#user-job-configuration-advanced) is somewhat confusing and is marked for Advanced use after mentioning the mechanism to reference entrypoint, but is there a reason we can't also specify the tracker's entrypoint right in `.torchxconfig` in addition to those discoverable via `entry_points.txt`? E.g.:
```
[torchx:tracker]
my_tracker=my_module:my_function
[tracker:my_tracker]
...
```
|
https://github.com/meta-pytorch/torchx/issues/802
|
open
|
[] | 2023-12-18T21:26:02Z
| 2023-12-19T17:39:10Z
| 2
|
clumsy
|
huggingface/safetensors
| 409
|
Doesn't work with versions of torch where "meta" dtype is not supported.
|
### System Info
This is on my mac where I was just testing the interface. It seems like this could easily be fixed.
```
...
>>> from safetensors.torch import save_file
>>> x
{'a': tensor([1., 1., 1., 1., 1., 1., 1., 1., 1., 1.])}
>>> x['a'].device
device(type='cpu')
>>> save_file(x, filename='foo')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.9/site-packages/safetensors/torch.py", line 281, in save_file
serialize_file(_flatten(tensors), filename, metadata=metadata)
File "/usr/local/lib/python3.9/site-packages/safetensors/torch.py", line 460, in _flatten
shared_pointers = _find_shared_tensors(tensors)
File "/usr/local/lib/python3.9/site-packages/safetensors/torch.py", line 72, in _find_shared_tensors
if v.device != torch.device("meta") and storage_ptr(v) != 0 and storage_size(v) != 0:
RuntimeError: Expected one of cpu, cuda, xpu, mkldnn, opengl, opencl, ideep, hip, msnpu, xla, vulkan device type at start of device string: meta
>>> safetensors.__version__
'0.4.1'
>>> torch.__version__
'1.8.1'
```
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Reproduction
Install torch 1.8.1 and safetensors 0.4.1 (this is current safetensor version in pip default channel)
run the code above (sorry I have not reduced this to a script but it's the most minimal example of using safetensors)
### Expected behavior
save_file should work with older versions of torch, like 1.8.1
|
https://github.com/huggingface/safetensors/issues/409
|
closed
|
[
"Stale"
] | 2023-12-18T15:51:28Z
| 2024-01-23T01:49:25Z
| null |
danpovey
|
huggingface/candle
| 1,457
|
How to manually quantize a phi-2 model, starting from safetensors files
|
Hi
I have fine-tuned a phi-2 model using LoRA.
I merged the adapter with the base model to get a trained one.
I now have a bunch of safetensors files.
How is it possible to convert these files into a GGUF file (the llama.cpp converter does not support phi)?
In other words, how is it possible to achieve the same as model-v2-q4k.gguf in lmz/candle-quantized-phi?
|
https://github.com/huggingface/candle/issues/1457
|
closed
|
[] | 2023-12-18T15:14:37Z
| 2023-12-18T15:58:12Z
| null |
ghost
|
huggingface/optimum
| 1,605
|
Static Quantization - Token classification
|
Hi,
I am following the code [here](https://github.com/huggingface/optimum/tree/main/examples/onnxruntime/quantization/token-classification) for doing static quantization on my token classification model.
The inference time for the statically quantized model is almost the same as for the non-quantized one. I have tried dynamic quantization too, and it shows some improvement in terms of latency, but I need more latency improvements.
Do I have to do anything additional to lower/improve the inference time beyond what is mentioned [here](https://github.com/huggingface/optimum/tree/main/examples/onnxruntime/quantization/token-classification) for static quantization? Can anyone please help me?
|
https://github.com/huggingface/optimum/issues/1605
|
open
|
[
"quantization"
] | 2023-12-18T13:31:33Z
| 2024-10-09T09:21:22Z
| 0
|
akshay-babbar
|
huggingface/diffusers
| 6,211
|
[Examples] When will you support training scripts for text-to-video in diffusers?
|
I want to train SVD (Stable Video Diffusion) in diffusers. Can you support this feature in the examples?
Thanks for your contributions.
|
https://github.com/huggingface/diffusers/issues/6211
|
closed
|
[
"stale"
] | 2023-12-18T08:26:57Z
| 2024-01-26T15:05:32Z
| null |
jiaxiangc
|
huggingface/optimum
| 1,604
|
Table Transformer to ONNX
|
### Feature request
Hi all,
I am trying to convert the Table Transformer model from transformers (pretrained) to ONNX. The error reads something like "'table-transformer' is not a supported format".
Is there any way to convert Table Transformer (TATR) to an ONNX model? Any help would be appreciated.
Thanks.
### Motivation
Motivation for this is, I am working on developing a light weight table structure recognition model, ONNX model would help me in that regard.
### Your contribution
None
|
https://github.com/huggingface/optimum/issues/1604
|
closed
|
[
"feature-request",
"onnx"
] | 2023-12-18T07:18:21Z
| 2024-02-28T08:52:49Z
| 3
|
balajiChundi
|
huggingface/safetensors
| 407
|
Does safetensors save the model's hierarchical structure? Is it similar to ONNX?
|
If safetensors saves the model's hierarchical structure, how can one access this structure? Is it possible to read it directly, like with ONNX? Can I directly load a model from safetensors?
If the hierarchical structure of the model is not preserved, does it mean that the original model must be rebuilt from config.json?
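For context, a minimal sketch of my current understanding (assuming the file only holds a flat name-to-tensor mapping and the architecture still comes from config.json plus the model class):
```python
from safetensors.torch import load_file
from transformers import AutoConfig, AutoModel

# The safetensors file itself is just a flat dict of tensor name -> tensor,
# with no layer/module graph stored alongside it.
state_dict = load_file("model.safetensors")
print(list(state_dict.keys())[:5])

# Rebuilding the actual model requires the config + model class,
# and the weights are then loaded into that structure.
config = AutoConfig.from_pretrained("path/to/model_dir")  # reads config.json
model = AutoModel.from_config(config)
model.load_state_dict(state_dict, strict=False)
```
Is that correct, or is there more structure stored in the file than I think?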
|
https://github.com/huggingface/safetensors/issues/407
|
closed
|
[
"Stale"
] | 2023-12-17T15:04:55Z
| 2024-02-24T01:45:09Z
| 3
|
ZDragonX
|
huggingface/datasets
| 6,507
|
Where is glue_metric.py?
|
> @Frankie123421 what was the resolution to this?
use glue_metric.py instead of glue.py in load_metric
_Originally posted by @Frankie123421 in https://github.com/huggingface/datasets/issues/2117#issuecomment-905093763_
|
https://github.com/huggingface/datasets/issues/6507
|
closed
|
[] | 2023-12-17T09:58:25Z
| 2023-12-18T11:42:49Z
| null |
Mcccccc1024
|
huggingface/peft
| 1,278
|
How to add trainable parameters? (bugs in 'modules_to_save')
|
### System Info
Hi,
How can I train other weights in the model rather than keeping them fixed during LoRA training?
### Who can help?
@BenjaminBossan Hi, I find you are active recently so I @ you here..
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder
- [ ] My own task or dataset (give details below)
### Reproduction
```
self.model, self.peft_optimizer, _, self.peft_lr_scheduler = deepspeed.initialize(
config=training_args.deepspeed,
model=model,
model_parameters=optimizers['model_parameters'] if self.training_args.do_train else None,
optimizer=hf_optimizer,
lr_scheduler=hf_lr_scheduler
)
```
I add the parameters I want to train in `hf_optimizer`, but those parameters still do not change
### Expected behavior
the gradient of those parameters added to `hf_optimizer` should not be None
|
https://github.com/huggingface/peft/issues/1278
|
closed
|
[] | 2023-12-17T05:34:09Z
| 2024-01-29T15:03:39Z
| null |
shawnricecake
|
pytorch/audio
| 3,717
|
AV-HuBERT integration with torchaudio.pipelines.Wav2Vec2FABundle
|
### 🚀 The feature
How would someone go about configuring AV-HuBERT to work with `torchaudio.pipelines.Wav2Vec2FABundle`? It currently only supports [MMS_FA](https://pytorch.org/audio/stable/pipelines.html#pertrained-models)
### Motivation, pitch
Currently the `torchaudio.pipelines.Wav2Vec2FABundle` forced aligner only supports [MMS_FA](https://pytorch.org/audio/stable/pipelines.html#pertrained-models).
This is a request to add support for an AV-ASR, namely AV-HuBERT. The feature could also be a tutorial on how to extend the list of supported models that are multimodal speech+video.
### Alternatives
_No response_
### Additional context
_No response_
|
https://github.com/pytorch/audio/issues/3717
|
open
|
[] | 2023-12-16T01:04:05Z
| 2023-12-16T01:04:05Z
| 0
|
bejjani
|
huggingface/accelerate
| 2,262
|
When I trained with two processes, the gradients of the parameters could not be shared and I ended up with two different models. How do I solve this problem?
|
When I trained with two processes, the gradients of the parameters could not be shared and I ended up with two different models. Did anyone meet this problem before? How can it be solved?
|
https://github.com/huggingface/accelerate/issues/2262
|
closed
|
[] | 2023-12-15T13:48:34Z
| 2024-06-11T12:26:07Z
| null |
zypsjtu
|
huggingface/datasets
| 6,501
|
OverflowError: value too large to convert to int32_t
|
### Describe the bug

### Steps to reproduce the bug
just loading datasets
### Expected behavior
how can I fix it
### Environment info
pip install /mnt/cluster/zhangfan/study_info/LLaMA-Factory/peft-0.6.0-py3-none-any.whl
pip install huggingface_hub-0.19.4-py3-none-any.whl tokenizers-0.15.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl transformers-4.36.1-py3-none-any.whl pyarrow_hotfix-0.6-py3-none-any.whl datasets-2.15.0-py3-none-any.whl tyro-0.5.18-py3-none-any.whl trl-0.7.4-py3-none-any.whl
done
|
https://github.com/huggingface/datasets/issues/6501
|
open
|
[] | 2023-12-15T10:10:21Z
| 2025-06-27T04:27:14Z
| 1
|
zhangfan-algo
|
pytorch/kineto
| 851
|
In Overview page, time unit error
|
Time unit error

|
https://github.com/pytorch/kineto/issues/851
|
closed
|
[
"question"
] | 2023-12-15T04:15:45Z
| 2024-04-23T15:23:24Z
| null |
Aiuan
|
huggingface/diffusers
| 6,178
|
How to train Stable Diffusion with DDPM?
|
I want to train Stable Diffusion with DDPM, but I can't find the code in this project. I found a lot of training code elsewhere on the internet, but most of it is distillation code on pre-trained models, not the original DDPM training code. I also tried to implement the original training code myself, but I couldn't get good results. Could you provide me with the code for this part if it's convenient for you?
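To make the question concrete, this is roughly the training step I have been trying to write myself (a sketch assuming the standard noise-prediction objective with diffusers' DDPMScheduler; `latents` and `text_emb` are placeholders for the VAE latents and text-encoder output):
```python
import torch
import torch.nn.functional as F
from diffusers import DDPMScheduler

scheduler = DDPMScheduler(num_train_timesteps=1000)

def ddpm_training_step(unet, latents, text_emb):
    # Sample random noise and a random timestep per example in the batch.
    noise = torch.randn_like(latents)
    bsz = latents.shape[0]
    timesteps = torch.randint(
        0, scheduler.config.num_train_timesteps, (bsz,), device=latents.device
    ).long()

    # Forward-diffuse the clean latents to the sampled timesteps.
    noisy_latents = scheduler.add_noise(latents, noise, timesteps)

    # The UNet predicts the added noise; the loss is plain MSE against it.
    noise_pred = unet(noisy_latents, timesteps, encoder_hidden_states=text_emb).sample
    return F.mse_loss(noise_pred, noise)
```
Is this the right objective, or am I missing something that explains the bad results I got?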
|
https://github.com/huggingface/diffusers/issues/6178
|
closed
|
[] | 2023-12-15T02:43:07Z
| 2023-12-15T02:54:06Z
| null |
MenSanYan
|
huggingface/dataset-viewer
| 2,208
|
Add a collection with datasets infos
|
While working on enabling private datasets (#39) under conditions (isPro, isEnterprise), I thought we missed a place where we control the access to the dataset.
I think the first step in the DAG, instead of dataset-config-names, should be more about the dataset characteristics: if it's private or public, maybe if it's gated (not sure if it's useful info), if the user is pro or if the org is enterprise, if the viewer is disabled through the README (see https://github.com/huggingface/datasets-server/issues/2207), if the dataset is in the block list.
All that information could go to a new step called `dataset-status` or something similar.
The content could be:
```json
{
"dataset": "namespace/dataset",
"private": true,
"proUser": false,
"enterpriseOrg": true,
"disabledFromReadme": false,
"gated": false,
"blocked": false,
}
```
And a second step, called `dataset-enabled`, that would depend on `dataset-status`, and would return:
- 200 `{enabled: true}` if all the conditions are met
- 404 if we don't want to disclose the existence of the dataset, or if it does not exist
- 501 if it's not implemented
- 403? 404? if the dataset viewer is not enabled (private dataset, no pro user/enterprise org)
Then, the following steps would propagate the error if so, or if 200, will act as currently.
I think it's clearer to have two different steps: one to collect the data, another one to take a decision on this basis. We could also have everything in one cache entry, but I think the logic for maintenance would be harder (we would have to add info like: is that dataset private, is the user pro, etc. in the error details, or in the content, etc. to be able to check them regularly)
|
https://github.com/huggingface/dataset-viewer/issues/2208
|
closed
|
[
"question",
"refactoring / architecture",
"P2"
] | 2023-12-14T13:59:42Z
| 2024-01-11T14:30:03Z
| null |
severo
|
huggingface/dataset-viewer
| 2,207
|
Backfill job processes datasets with disabled viewer?
|
If I read the code correctly, the backfill cronjob does not check if the dataset viewer is disabled (`viewer: false` in the README).
If we want to implement the dataset viewer for private datasets, under conditions (isPro, isEnterprise), we will have to check these conditions before adding jobs.
|
https://github.com/huggingface/dataset-viewer/issues/2207
|
closed
|
[
"bug",
"question",
"P2"
] | 2023-12-14T13:01:53Z
| 2024-02-06T16:03:10Z
| null |
severo
|
huggingface/huggingface_hub
| 1,907
|
How to fix "VBox(children=(HTML(value='<center> <img..." error? When trying login()
|
### Describe the bug
Hello. I am doing as shown below, but it doesn't show the enter-token panel as it is supposed to.
What could be the reason?

Pip freeze is as below
```
alembic @ file:///home/conda/feedstock_root/build_artifacts/alembic_1701459233889/work
anyio @ file:///home/conda/feedstock_root/build_artifacts/anyio_1700835416766/work
archspec @ file:///home/conda/feedstock_root/build_artifacts/archspec_1699370045702/work
argon2-cffi @ file:///home/conda/feedstock_root/build_artifacts/argon2-cffi_1692818318753/work
argon2-cffi-bindings @ file:///home/conda/feedstock_root/build_artifacts/argon2-cffi-bindings_1695386553988/work
arrow @ file:///home/conda/feedstock_root/build_artifacts/arrow_1696128962909/work
asttokens @ file:///home/conda/feedstock_root/build_artifacts/asttokens_1698341106958/work
async-generator==1.10
async-lru @ file:///home/conda/feedstock_root/build_artifacts/async-lru_1690563019058/work
attrs @ file:///home/conda/feedstock_root/build_artifacts/attrs_1683424013410/work
Babel @ file:///home/conda/feedstock_root/build_artifacts/babel_1698174530262/work
beautifulsoup4 @ file:///home/conda/feedstock_root/build_artifacts/beautifulsoup4_1680888073205/work
bleach @ file:///home/conda/feedstock_root/build_artifacts/bleach_1696630167146/work
blinker @ file:///home/conda/feedstock_root/build_artifacts/blinker_1698890160476/work
boltons @ file:///home/conda/feedstock_root/build_artifacts/boltons_1677499911949/work
Brotli @ file:///home/conda/feedstock_root/build_artifacts/brotli-split_1695989787169/work
cached-property @ file:///home/conda/feedstock_root/build_artifacts/cached_property_1615209429212/work
certifi @ file:///home/conda/feedstock_root/build_artifacts/certifi_1700303426725/work/certifi
certipy==0.1.3
cffi @ file:///home/conda/feedstock_root/build_artifacts/cffi_1696001724357/work
charset-normalizer @ file:///home/conda/feedstock_root/build_artifacts/charset-normalizer_1698833585322/work
colorama @ file:///home/conda/feedstock_root/build_artifacts/colorama_1666700638685/work
comm @ file:///home/conda/feedstock_root/build_artifacts/comm_1691044910542/work
conda @ file:///home/conda/feedstock_root/build_artifacts/conda_1699392346065/work
conda-libmamba-solver @ file:///home/conda/feedstock_root/build_artifacts/conda-libmamba-solver_1700148543755/work/src
conda-package-handling @ file:///home/conda/feedstock_root/build_artifacts/conda-package-handling_1691048088238/work
conda_package_streaming @ file:///home/conda/feedstock_root/build_artifacts/conda-package-streaming_1691009212940/work
cryptography @ file:///home/conda/feedstock_root/build_artifacts/cryptography-split_1701563208210/work
debugpy @ file:///home/conda/feedstock_root/build_artifacts/debugpy_1695534290440/work
decorator @ file:///home/conda/feedstock_root/build_artifacts/decorator_1641555617451/work
defusedxml @ file:///home/conda/feedstock_root/build_artifacts/defusedxml_1615232257335/work
entrypoints @ file:///home/conda/feedstock_root/build_artifacts/entrypoints_1643888246732/work
exceptiongroup @ file:///home/conda/feedstock_root/build_artifacts/exceptiongroup_1700579780973/work
executing @ file:///home/conda/feedstock_root/build_artifacts/executing_1698579936712/work
fastjsonschema @ file:///home/conda/feedstock_root/build_artifacts/python-fastjsonschema_1700055509243/work/dist
filelock==3.13.1
fqdn @ file:///home/conda/feedstock_root/build_artifacts/fqdn_1638810296540/work/dist
fsspec==2023.12.2
greenlet @ file:///home/conda/feedstock_root/build_artifacts/greenlet_1698243379066/work
huggingface-hub==0.19.4
idna @ file:///home/conda/feedstock_root/build_artifacts/idna_1701026962277/work
importlib-metadata @ file:///home/conda/feedstock_root/build_artifacts/importlib-metadata_1701632192416/work
importlib-resources @ file:///home/conda/feedstock_root/build_artifacts/importlib_resources_1699364556997/work
ipykernel @ file:///home/conda/feedstock_root/build_artifacts/ipykernel_1698244021190/work
ipython @ file:///home/conda/feedstock_root/build_artifacts/ipython_1701703101339/work
ipython-genutils==0.2.0
ipywidgets==8.1.1
isoduration @ file:///home/conda/feedstock_root/build_artifacts/isoduration_1638811571363/work/dist
jedi @ file:///home/conda/feedstock_root/build_artifacts/jedi_1696326070614/work
Jinja2 @ file:///home/conda/feedstock_root/build_artifacts/jinja2_1654302431367/work
json5 @ file:///home/conda/feedstock_root/build_artifacts/json5_1688248289187/work
jsonpatch @ file:///home/conda/feedstock_root/build_artifacts/jsonpatch_1695536281965/work
jsonpointer @ file:///home/conda/feedstock_root/build_artifacts/jsonpointer_1695397236330/work
jsonschema @ file:///home/conda/feedstock_root/build_artifacts/jsonschema-meta_1700159890288/work
jsonschema-specifications @ file:///home/conda/feedstock_root/build_artifacts/jsonschema-specifications_1701365715051/w
|
https://github.com/huggingface/huggingface_hub/issues/1907
|
closed
|
[
"bug"
] | 2023-12-14T11:45:44Z
| 2025-03-15T08:03:44Z
| null |
FurkanGozukara
|
huggingface/unity-api
| 17
|
Android support
|
Great repo! My question is - does it work on Android?
I did some research but couldn't find much - except for some comments on [YouTube](https://www.youtube.com/watch?v=Ngmb7l7tO0I) that speech recognition doesn't really work on Android ("_when i export to an a Android Device the text always is "you", no matter what did i say. I don't know if needs another configuration because in the unity editor works fine_").
Could you please clarify?
Thank you!
|
https://github.com/huggingface/unity-api/issues/17
|
open
|
[
"question"
] | 2023-12-14T11:15:56Z
| 2024-01-18T10:56:45Z
| null |
dogadogan
|
huggingface/alignment-handbook
| 76
|
can we inference with lora adapter after running the SFT ?
|
I trained the model using SFT on a custom dataset with a LoRA config, which produced a LoRA adapter. Can we run inference with it, i.e. the base model with this adapter on top, or should we merge it?
|
https://github.com/huggingface/alignment-handbook/issues/76
|
closed
|
[] | 2023-12-14T10:55:20Z
| 2023-12-28T07:14:29Z
| 2
|
Tejaswi-kashyap-006
|
huggingface/accelerate
| 2,251
|
When a tensor is generated from some_func(A.shape) (where A is a tensor), the generated tensor is located on the CPU, not on A's device
|
How do I solve this? I have tried tensor.to(A.device) and tensor.to(accelerator.device), but it does not seem to work.
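For context, a minimal sketch of the pattern in question and the workaround I would expect to work (assuming `some_func` ultimately calls a factory function like `torch.zeros`):
```python
import torch

A = torch.randn(4, 8, device="cuda" if torch.cuda.is_available() else "cpu")

# A.shape is just a tuple-like object, so anything built from it defaults to CPU:
mask_cpu = torch.zeros(A.shape)                    # lives on CPU regardless of A
# Creating the tensor directly on A's device avoids the mismatch:
mask_dev = torch.zeros(A.shape, device=A.device)

assert mask_dev.device == A.device
```
Is there a reason this would still fail under accelerate in my case?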
|
https://github.com/huggingface/accelerate/issues/2251
|
closed
|
[] | 2023-12-14T09:18:15Z
| 2023-12-14T14:38:17Z
| null |
weizhenhuan
|
pytorch/serve
| 2,853
|
Torchserve Error: number of batch response mismatched
|
### 🐛 Describe the bug
We deployed an NER model on an n1-standard-8 machine without a GPU, with the config properties below. When we kept the batch size at 1, it took a long time to process simultaneous requests. When we try to increase the batch size, we get the error below (we tried different batch sizes such as 8, 16, 32, and 64, and max workers of 1 and 8). I want to process multiple requests simultaneously. Please suggest a solution. Do I need to change the handler script, and if yes, how? How can I increase throughput?
### Error logs
Response: response_data: {'code': 503, 'type': 'InternalServerException', 'message': 'number of batch response mismatched'}
### Installation instructions
Yes, we are using docker container to deploy the model on vertex ai
### Model Packaing
Using docker and creating a custom prediction container and packaging all the serving scripts like handler.py, config properties etc
### config.properties
inference_address=http://0.0.0.0:8090
management_address=http://0.0.0.0:8091
metrics_address=http://0.0.0.0:8092
install_py_dep_per_model=true
prefer_direct_buffer=true
job_queue_size=10000
async_logging=true
number_of_netty_threads=8
netty_client_threads=8
default_workers_per_model=1
models={\
"description": {\
"1.0": {\
"defaultVersion": true,\
"marName": "description.mar",\
"minWorkers": 1,\
"maxWorkers": 8,\
"batchSize": 16,\
"maxBatchDelay": 65,\
"responseTimeout": 100\
}\
}\
}
### Versions
we are using this base image
pytorch/torchserve:latest-gpu
### Repro instructions
we carried out performance testing using 5/10/20 simultaneous users hitting vertex ai endpoint but avg time is around 20 seconds which is very high for 20 simultaneous users.
### Possible Solution
How to optimize the config parameters? Do I need to update handler script? Please suggest a way
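For reference, my current understanding of the handler contract (a rough sketch, not our actual handler) is that with batching enabled, `preprocess` receives a list of requests and the handler must return a list of exactly the same length, otherwise "number of batch response mismatched" is raised:
```python
from ts.torch_handler.base_handler import BaseHandler

class NerHandler(BaseHandler):
    def preprocess(self, data):
        # With batchSize > 1, `data` is a list with one entry per request.
        texts = [d.get("body") or d.get("data") for d in data]
        return texts

    def postprocess(self, inference_output):
        # TorchServe expects one response element per request in the batch;
        # returning fewer or more elements triggers the mismatch error.
        return [str(out) for out in inference_output]
```
If that is the issue, please confirm whether our handler needs to be restructured along these lines.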
|
https://github.com/pytorch/serve/issues/2853
|
closed
|
[
"triaged"
] | 2023-12-14T08:33:11Z
| 2024-01-18T20:11:46Z
| 9
|
rajeshmore1
|
pytorch/TensorRT
| 2,541
|
❓ [Question] Is it possible to export unet's tensorrt engine as a file in stable diffusion?
|
## ❓ Question
Hello. I am currently trying to infer the stable diffusion XL inpaint model using your package.
model link : https://huggingface.co/diffusers/stable-diffusion-xl-1.0-inpainting-0.1
I referred to your example code and modified it as follows.
```python
import torch
from diffusers import AutoPipelineForInpainting
from diffusers.utils import load_image
import torch_tensorrt
model_id = "diffusers/stable-diffusion-xl-1.0-inpainting-0.1"
device = "cuda"
# Instantiate Stable Diffusion Pipeline with FP16 weights
pipe = AutoPipelineForInpainting.from_pretrained(
model_id, variant="fp16", torch_dtype=torch.float16
)
pipe = pipe.to(device)
backend = "torch_tensorrt"
# Optimize the UNet portion with Torch-TensorRT
pipe.unet = torch.compile(
pipe.unet,
backend=backend,
options={
"truncate_long_and_double": True,
"precision": torch.float16,
},
dynamic=False,
)
# %%
# Inference
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png"
image = load_image(img_url).resize((1024, 1024))
mask_image = load_image(mask_url).resize((1024, 1024))
prompt = "a tiger sitting on a park bench"
image = pipe(
prompt=prompt,
image=image,
mask_image=mask_image,
guidance_scale=8.0,
num_inference_steps=20,
strength=0.99,
).images[0]
image.save("inpaint-result.png")
```
On my gpu machine the conversion to tensorrt takes over 15 minutes. Since I can't do this conversion every time, I'm trying to find a way to save it in file format such as ".trt" file and use it.
When looking in your documentation, it was difficult to find such a feature. Do you support these features? If so, please let me know.
## What you have already tried
Described above
## Environment
docker container : nvcr.io/nvidia/pytorch:23.11-py3
gpu : p40
## Additional context
<!-- Add any other context about the problem here. -->
|
https://github.com/pytorch/TensorRT/issues/2541
|
open
|
[
"question"
] | 2023-12-14T08:13:19Z
| 2023-12-15T22:48:48Z
| null |
0-chan-kor
|
huggingface/peft
| 1,265
|
When generating outputs, how can I get the probability of the outputs? Is there any parameter to make the model output probabilities?
|
### Feature request
xx
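In case it helps clarify the request, this is the kind of thing I am after (a sketch using the underlying transformers generate API with a causal LM; I am not sure whether peft exposes anything more direct):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("The capital of France is", return_tensors="pt")
out = model.generate(
    **inputs,
    max_new_tokens=5,
    return_dict_in_generate=True,
    output_scores=True,
)

# Per-token log-probabilities of the generated tokens.
scores = model.compute_transition_scores(out.sequences, out.scores, normalize_logits=True)
token_probs = scores.exp()                 # probability of each generated token
sequence_prob = token_probs.prod(dim=-1)   # crude overall probability of the continuation
```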
### Motivation
xx
### Your contribution
xx
|
https://github.com/huggingface/peft/issues/1265
|
closed
|
[] | 2023-12-14T08:05:34Z
| 2023-12-14T10:37:19Z
| null |
ShawnALiu
|
huggingface/transformers
| 28,025
|
How to combine two pretrained models in huggingface transformers?
|
### Feature request
I want to combine two pretrained models (LLaMA and BERT) in a new Python class. More specifically, the way I've tried is to define a new class C that inherits from LLaMA and loads BERT in C's \_\_init\_\_ function.

So that I can use C.from_pretrained('llama_ckpt_dir') to load the two models together.
`model=C.from_pretrained('llama_ckpt_dir',low_cpu_mem_usage=True)`
After I use C.save_pretrained(), even though the checkpoint keeps the full structure of LLaMA and BERT, BERT's params are all randomly initialized (Gaussian-initialized weights, all-zero biases). (I checked this by torch.load-ing the saved C checkpoint and printing it out.)
Sincerely requesting some help; what should be done?
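In code, the kind of composition I am aiming for looks roughly like this (a sketch only; `C` and the checkpoint paths are placeholders, and I have left out how the two outputs get combined):
```python
import torch.nn as nn
from transformers import AutoModel, AutoModelForCausalLM

class C(nn.Module):
    """Wraps a LLaMA checkpoint and a BERT checkpoint as two submodules,
    so that state_dict()/load_state_dict() cover both sets of weights."""

    def __init__(self, llama_path, bert_path):
        super().__init__()
        self.llama = AutoModelForCausalLM.from_pretrained(llama_path)
        self.bert = AutoModel.from_pretrained(bert_path)

    def forward(self, llama_inputs, bert_inputs):
        llama_out = self.llama(**llama_inputs)
        bert_out = self.bert(**bert_inputs)
        return llama_out, bert_out
```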
### Motivation
Since the Trainer can be passed only one model at a time, this seems like a useful feature for anyone who wants to do things like training two models together.
But there is another difficulty: how to deal with two totally different tokenizers from BERT and LLaMA (even though this is not required for the Trainer, since the tokenizer is usually only used for data preprocessing, I hope I can fix this so that I can completely turn C into a full HF model).
### Your contribution
I'm not sure what I can help, but I can fully support anything that can contribute to this issue.
|
https://github.com/huggingface/transformers/issues/28025
|
closed
|
[] | 2023-12-14T04:45:51Z
| 2024-01-03T10:26:31Z
| null |
rangehow
|
huggingface/chat-ui
| 631
|
Can we add full version number/build number on the landingpage?
|
Can we add the full version number/build number, or something similar, on the landing page?
This would help distinguish between different installations.
If you go to https://huggingface.co/chat/, it looks like this:

If you go to https://huggingfaceh4-zephyr-chat.hf.space/, it looks like this:

So the version seems to be the same, but the buttons on the right side seem to indicate that there are differences between the versions, I would guess? (If not, is HuggingChat a custom build?)
|
https://github.com/huggingface/chat-ui/issues/631
|
open
|
[
"enhancement"
] | 2023-12-13T10:50:19Z
| 2023-12-14T14:26:31Z
| 4
|
patchie
|
huggingface/optimum
| 1,592
|
Can optimum.bettertransformer support the LLaVA model?
|
### System Info
```shell
Local NVIDIA env:
(llava) xuyang@nobisuke:~$ nvcc -V
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2023 NVIDIA Corporation
Built on Fri_Jan__6_16:45:21_PST_2023
Cuda compilation tools, release 12.0, V12.0.140
Build cuda_12.0.r12.0/compiler.32267302_0
Python=3.10.4
Torch==2.0.1+cu117
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction (minimal, reproducible, runnable)
```
from optimum.bettertransformer import BetterTransformer
model = BetterTransformer.transform(model)
```
### Expected behavior
Recently, we sought to apply optimum.bettertransformer to LLaVA for fine-tuning. The code ran successfully and we found that memory usage decreased significantly.
However, in https://huggingface.co/docs/optimum/v1.15.0/bettertransformer/overview, we found that LLaVA is not in the support list.
Therefore, we want to confirm: can BetterTransformer be employed for pre-training or fine-tuning of LLaVA now?
|
https://github.com/huggingface/optimum/issues/1592
|
closed
|
[
"bug"
] | 2023-12-13T09:08:35Z
| 2023-12-13T12:37:13Z
| 1
|
xiaovhua
|
huggingface/blog
| 1,702
|
How to introduce new alphabets in Whisper fine-tuning
|
Dear @sanchit-gandhi,
I was following your tutorial, [Fine-Tune Whisper For Multilingual ASR with 🤗 Transformers](https://huggingface.co/blog/fine-tune-whisper), to fine-tune Whisper with a dataset in the Amharic language. Amharic is used in Whisper training as speech-translation only, [Amharic audio -> corresponding English translation text]. Hence the Amharic alphabets are unseen in Whisper training.
The dataset I am trying to fine-tune with is [Amharic audio -> corresponding text in Amharic characters]. It consists of 92.28 hours (32901 instances) for training and 9.12 hours (3139 instances) for the testing set.
My data sources are:
1. https://github.com/getalp/ALFFA_PUBLIC/tree/master/ASR/AMHARIC and
2. https://www.findke.ovgu.de/findke/en/Research/Data+Sets/Amharic+Speech+Corpus.html
I tried the tiny, base, and small model sizes. In my first run with whisper-small, I observed bad performance, but when I tried to play around with some parameters, including the model size, I was not even able to run the code.
I am not quite sure how to introduce the Amharic language characters other than giving the corresponding text as I have seen in the Hindi example.
I would appreciate your comment regarding the language whose characters were not seen in the Whisper training because it was treated as a speech translation only.
Thank you!
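One sanity check I have been running on my side (a small sketch, assuming Whisper's byte-level BPE tokenizer can already encode arbitrary Unicode text, so no new alphabet tokens are strictly required; whether "amharic" is accepted as the language argument is my assumption based on Amharic being in Whisper's language list):
```python
from transformers import WhisperTokenizer

tokenizer = WhisperTokenizer.from_pretrained(
    "openai/whisper-small", language="amharic", task="transcribe"
)

text = "ሰላም እንዴት ነህ"  # example Amharic text
ids = tokenizer(text).input_ids
roundtrip = tokenizer.decode(ids, skip_special_tokens=True)
print(roundtrip == text)  # if True, the characters survive encoding/decoding
```
If the round trip works, is simply training on [Amharic audio -> Amharic text] pairs enough, or does something else need to change?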
|
https://github.com/huggingface/blog/issues/1702
|
open
|
[] | 2023-12-13T02:47:31Z
| 2024-10-02T02:16:12Z
| null |
mequanent
|
huggingface/chat-ui
| 629
|
Unable to use Azure AD for OpenID signin
|
Azure AD does not return the `picture` claim for the `profile` scope which results in a Zod validation error and authentication failing with `HTTP 500`:
```
chat-ui-chat-ui-1 | 21:07:21 28|index | ZodError: [
chat-ui-chat-ui-1 | 21:07:21 28|index | {
chat-ui-chat-ui-1 | 21:07:21 28|index | "code": "invalid_type",
chat-ui-chat-ui-1 | 21:07:21 28|index | "expected": "string",
chat-ui-chat-ui-1 | 21:07:21 28|index | "received": "undefined",
chat-ui-chat-ui-1 | 21:07:21 28|index | "path": [
chat-ui-chat-ui-1 | 21:07:21 28|index | "picture"
chat-ui-chat-ui-1 | 21:07:21 28|index | ],
chat-ui-chat-ui-1 | 21:07:21 28|index | "message": "Required"
chat-ui-chat-ui-1 | 21:07:21 28|index | }
chat-ui-chat-ui-1 | 21:07:21 28|index | ]
chat-ui-chat-ui-1 | 21:07:21 28|index | at get error [as error] (file:///app/node_modules/zod/lib/index.mjs:538:31)
chat-ui-chat-ui-1 | 21:07:21 28|index | at ZodEffects.parse (file:///app/node_modules/zod/lib/index.mjs:638:22)
chat-ui-chat-ui-1 | 21:07:21 28|index | at updateUser (file:///app/build/server/chunks/7-74fde01e.js:34:6)
chat-ui-chat-ui-1 | 21:07:21 28|index | at load (file:///app/build/server/chunks/7-74fde01e.js:126:9)
chat-ui-chat-ui-1 | 21:07:21 28|index | at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
chat-ui-chat-ui-1 | 21:07:21 28|index | at async load_server_data (file:///app/build/server/index.js:1932:18)
chat-ui-chat-ui-1 | 21:07:21 28|index | at async file:///app/build/server/index.js:3303:18 {
chat-ui-chat-ui-1 | 21:07:21 28|index | issues: [
chat-ui-chat-ui-1 | 21:07:21 28|index | {
chat-ui-chat-ui-1 | 21:07:21 28|index | code: 'invalid_type',
chat-ui-chat-ui-1 | 21:07:21 28|index | expected: 'string',
chat-ui-chat-ui-1 | 21:07:21 28|index | received: 'undefined',
chat-ui-chat-ui-1 | 21:07:21 28|index | path: [Array],
chat-ui-chat-ui-1 | 21:07:21 28|index | message: 'Required'
chat-ui-chat-ui-1 | 21:07:21 28|index | }
chat-ui-chat-ui-1 | 21:07:21 28|index | ],
chat-ui-chat-ui-1 | 21:07:21 28|index | addIssue: [Function (anonymous)],
chat-ui-chat-ui-1 | 21:07:21 28|index | addIssues: [Function (anonymous)],
chat-ui-chat-ui-1 | 21:07:21 28|index | errors: [
chat-ui-chat-ui-1 | 21:07:21 28|index | {
chat-ui-chat-ui-1 | 21:07:21 28|index | code: 'invalid_type',
chat-ui-chat-ui-1 | 21:07:21 28|index | expected: 'string',
chat-ui-chat-ui-1 | 21:07:21 28|index | received: 'undefined',
chat-ui-chat-ui-1 | 21:07:21 28|index | path: [Array],
chat-ui-chat-ui-1 | 21:07:21 28|index | message: 'Required'
chat-ui-chat-ui-1 | 21:07:21 28|index | }
chat-ui-chat-ui-1 | 21:07:21 28|index | ]
chat-ui-chat-ui-1 | 21:07:21 28|index | }
```
|
https://github.com/huggingface/chat-ui/issues/629
|
closed
|
[
"support"
] | 2023-12-12T21:22:19Z
| 2024-02-19T09:39:51Z
| 8
|
zacps
|
huggingface/chat-ui
| 628
|
isModelsModalOpen is not defined in ChatIntroduction.svelte probably after recent update ?
|
Hi, I'm getting this error after updating to the latest version.
I am running:
{
'chat-ui': '0.6.0',
npm: '10.2.4',
node: '21.3.0',
acorn: '8.11.2',
ada: '2.7.4',
ares: '1.20.1',
base64: '0.5.1',
brotli: '1.0.9',
cjs_module_lexer: '1.2.2',
cldr: '44.0',
icu: '74.1',
llhttp: '9.1.3',
modules: '120',
napi: '9',
nghttp2: '1.58.0',
nghttp3: '0.7.0',
ngtcp2: '0.8.1',
openssl: '3.0.12+quic',
simdutf: '4.0.4',
tz: '2023c',
undici: '5.27.2',
unicode: '15.1',
uv: '1.46.0',
uvwasi: '0.0.19',
v8: '11.8.172.17-node.17',
zlib: '1.2.13.1-motley-5daffc7'
}
```
> chat-ui@0.6.0 dev
> vite dev
VITE v4.3.9 ready in 1206 ms
➜ Local: http://localhost:5173/
➜ Network: use --host to expose
(node:1526125) [DEP0040] DeprecationWarning: The `punycode` module is deprecated. Please use a userland alternative instead.
(Use `node --trace-deprecation ...` to show where the warning was created)
12:13:23 AM [vite-plugin-svelte] /home/user/public_html/chatui3/src/lib/components/chat/ChatIntroduction.svelte:53:7 'isModelsModalOpen' is not defined
12:13:23 AM [vite-plugin-svelte] /home/user/public_html/chatui3/src/lib/components/chat/ChatIntroduction.svelte:54:53 'isModelsModalOpen' is not defined
12:13:23 AM [vite-plugin-svelte] /home/user/public_html/chatui3/src/lib/components/chat/ChatIntroduction.svelte:64:22 'isModelsModalOpen' is not defined
ReferenceError: isModelsModalOpen is not defined
at /home/user/public_html/chatui3/src/lib/components/chat/ChatIntroduction.svelte:61:8
at Object.$$render (/home/user/public_html/chatui3/node_modules/svelte/src/runtime/internal/ssr.js:156:16)
at eval (/home/user/public_html/chatui3/src/lib/components/chat/ChatMessages.svelte:75:99)
at Object.$$render (/home/user/public_html/chatui3/node_modules/svelte/src/runtime/internal/ssr.js:156:16)
at eval (/home/user/public_html/chatui3/src/lib/components/chat/ChatWindow.svelte:116:102)
at Object.$$render (/home/user/public_html/chatui3/node_modules/svelte/src/runtime/internal/ssr.js:156:16)
at /home/user/public_html/chatui3/src/routes/+page.svelte:57:25
at Object.$$render (/home/user/public_html/chatui3/node_modules/svelte/src/runtime/internal/ssr.js:156:16)
at Object.default (/home/user/public_html/chatui3/.svelte-kit/generated/root.svelte:50:42)
at eval (/home/user/public_html/chatui3/src/routes/+layout.svelte:203:39)
```
|
https://github.com/huggingface/chat-ui/issues/628
|
closed
|
[
"support"
] | 2023-12-12T18:49:31Z
| 2023-12-24T07:40:42Z
| 7
|
DrShivang
|
huggingface/autotrain-advanced
| 389
|
How to disable the default --multi_gpu flag?
|
File "/app/env/lib/python3.10/site-packages/accelerate/commands/launch.py", line 822, in _validate_launch_command
raise ValueError("You need to use at least 2 processes to use `--multi_gpu`.")
ValueError: You need to use at least 2 processes to use `--multi_gpu`.
How do I disable this from the default provided params?
Can autotrain be used with the free CPU version?
Thank you
|
https://github.com/huggingface/autotrain-advanced/issues/389
|
closed
|
[] | 2023-12-12T13:32:03Z
| 2023-12-15T09:21:52Z
| null |
FiveTechSoft
|
huggingface/chat-ui
| 627
|
Rlhf data collection feature
|
Is it possible to add a way to generate multiple drafts for a given input, and then, based on what the user picks, save that data so it can be used for RLHF?
|
https://github.com/huggingface/chat-ui/issues/627
|
open
|
[
"enhancement",
"front",
"back"
] | 2023-12-12T13:29:06Z
| 2023-12-14T08:53:14Z
| 0
|
nivibilla
|
huggingface/transformers
| 27,974
|
how to replace the existing token in a tokenizer
|
### Feature request
I have a tokenizer which has lots of reserved tokens, like below:
```
'<reserved_7>': 100,
'<reserved_8>': 101,
'<reserved_9>': 102,
'<reserved_10>': 103,
'<reserved_11>': 104,
'<reserved_12>': 105,
'<reserved_13>': 106,
'<reserved_14>': 107,
```
I want to replace the '<reserved_7>' with '<|im_start|>' and replace '<reserved_8>' with '<|im_end|>'
what I want to get is a tokenizer which can act as below:
tokenizer.encode('<|im_start|>') => 100
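The only approach I have found so far is to edit the serialized fast tokenizer by hand, roughly like below (a fragile sketch that assumes a tokenizer.json with a BPE-style "model" -> "vocab" mapping; the added_tokens and merges sections may need the same treatment, and other tokenizer types store the vocab differently):
```python
import json

with open("tokenizer/tokenizer.json", encoding="utf-8") as f:
    tok = json.load(f)

vocab = tok["model"]["vocab"]
for old, new in [("<reserved_7>", "<|im_start|>"), ("<reserved_8>", "<|im_end|>")]:
    vocab[new] = vocab.pop(old)  # keep the original id, swap the surface form

with open("tokenizer/tokenizer.json", "w", encoding="utf-8") as f:
    json.dump(tok, f, ensure_ascii=False)
```
Is there a supported API for this instead of editing the file directly?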
### Motivation
I want to replace the '<reserved_7>' with '<|im_start|>' and replace '<reserved_8>' with '<|im_end|>'
### Your contribution
no
|
https://github.com/huggingface/transformers/issues/27974
|
closed
|
[] | 2023-12-12T12:59:53Z
| 2025-05-05T19:18:29Z
| null |
muziyongshixin
|
pytorch/TensorRT
| 2,530
|
❓ [Question] The stable diffusion example doesn't work
|
## ❓ Question
<!-- Your question -->
## What you have already tried
https://github.com/pytorch/TensorRT/blob/main/examples/dynamo/torch_compile_stable_diffusion.py
I tried executing the above Python code, but conversion to TensorRT failed as shown below.
```bash
WARNING:torch_tensorrt.dynamo.backend.backends:TRT conversion failed on the subgraph. See trace above. Returning GraphModule forward instead.
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/torch_tensorrt/dynamo/backend/backends.py", line 93, in _pretraced_backend
trt_compiled = compile_module(
File "/usr/local/lib/python3.10/dist-packages/torch_tensorrt/dynamo/compile.py", line 244, in compile_module
trt_module = convert_module(
File "/usr/local/lib/python3.10/dist-packages/torch_tensorrt/dynamo/conversion/conversion.py", line 33, in convert_module
module_outputs = module(*torch_inputs)
File "/usr/local/lib/python3.10/dist-packages/torch/fx/graph_module.py", line 726, in call_wrapped
return self._wrapped_call(self, *args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/fx/graph_module.py", line 305, in __call__
raise e
File "/usr/local/lib/python3.10/dist-packages/torch/fx/graph_module.py", line 292, in __call__
return super(self.cls, obj).__call__(*args, **kwargs) # type: ignore[misc]
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1519, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1528, in _call_impl
return forward_call(*args, **kwargs)
File "<eval_with_key>.14", line 6, in forward
view_10 = torch.ops.aten.view.default(permute_10, [2, -1, 320]); permute_10 = None
File "/usr/local/lib/python3.10/dist-packages/torch/_ops.py", line 499, in __call__
return self._op(*args, **kwargs or {})
File "/usr/local/lib/python3.10/dist-packages/torch/utils/_stats.py", line 20, in wrapper
return fn(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/_subclasses/fake_tensor.py", line 1323, in __torch_dispatch__
return self.dispatch(func, types, args, kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/_subclasses/fake_tensor.py", line 1621, in dispatch
r = func(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/_ops.py", line 499, in __call__
return self._op(*args, **kwargs or {})
RuntimeError: view size is not compatible with input tensor's size and stride (at least one dimension spans across two contiguous subspaces). Use .reshape(...) instead.
```
Is this an example python that actually passes? Or is there an environment version that needs to be set for this example?
## Environment
> Build information about Torch-TensorRT can be found by turning on debug messages
I used the latest version of the pytorch container, nvcr.io/nvidia/pytorch:23.11-py3, and pip installed the latest versions of diffusers and transformers.
## Additional context
None
|
https://github.com/pytorch/TensorRT/issues/2530
|
closed
|
[
"question"
] | 2023-12-12T10:35:01Z
| 2024-10-25T10:30:09Z
| null |
0-chan-kor
|
huggingface/chat-ui
| 623
|
ChatUI with Docker - Permissions Issue
|
I'm trying to use the ChatUI space with Docker. I have a private, custom model which I've trained.
I want to access it in a private space using Docker ChatUI
I seem to be running into permissions errors.
Things I've tried:
Following the instructions set out here: https://huggingface.co/blog/Llama2-for-non-engineers (I used Llama2 with a custom dataset)
Creating it with / without the MongoDB URI
Adding an existing secret as the HF_TOKEN
Creating a new "HUGGING_FACE_HUB_TOKEN" in my settings and in the new space and using that
Adding the new token as a secret in the space where the model was generated
Hardcoding the access token in .env.local.template to see if it gives a temp fix (it didn't)
Does it matter if I don't have a centralised secret that is explicitly named as "HF_TOKEN"?
Error:
huggingface_hub.utils._errors.RepositoryNotFoundError: 401 Client Error. (Request ID: Root=1-6576f9fe-00986ef531649f933739e793;0d286b3c-5e65-45c1-a1f9-7efea56654dd)
Error: DownloadError
Repository Not Found for url: https://huggingface.co/api/models/<USERNAME>/<MODELNAME>.
Please make sure you specified the correct repo_id and repo_type.
If you are trying to access a private or gated repo, make sure you are authenticated.
Invalid username or password.
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
curl: (7) Failed to connect to 127.0.0.1 port 8080: Connection refused
Warning: Transient problem: connection refused Will retry in 10 seconds. 59
Warning: retries left.
|
https://github.com/huggingface/chat-ui/issues/623
|
open
|
[
"support"
] | 2023-12-12T08:10:31Z
| 2023-12-28T13:58:22Z
| 1
|
aidansys17
|
huggingface/text-generation-inference
| 1,332
|
How can I set log output to local file
|
### Feature request
I want to set the TGI log to file instead of stdout.
### Motivation
I want to set the TGI log to file instead of stdout.
### Your contribution
How can I use command-line parameters or environment variables to set the log output to a file?
|
https://github.com/huggingface/text-generation-inference/issues/1332
|
closed
|
[
"Stale"
] | 2023-12-12T07:54:26Z
| 2024-01-18T01:46:56Z
| null |
soulseen
|
pytorch/serve
| 2,849
|
Broken pipe on big response tensors
|
### 🐛 Describe the bug
We have a model which essentially does image segmentation of sorts.
The output tensor is of this size: `[batch, 920, 920]`, fp32.
I keep getting broken pipe errors in this:
From my debugging, it essentially fails after I return this tensor from my `postprocess` method in base handler.
Is there a limit to response size for torchserve?
Thanks for the help!
### Error logs
the main container logs:
```
hariomapp-torchserve-1 | java.lang.InterruptedException: null
hariomapp-torchserve-1 | at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:1679) ~[?:?]
hariomapp-torchserve-1 | at java.util.concurrent.LinkedBlockingDeque.pollFirst(LinkedBlockingDeque.java:515) ~[?:?]
hariomapp-torchserve-1 | at java.util.concurrent.LinkedBlockingDeque.poll(LinkedBlockingDeque.java:677) ~[?:?]
hariomapp-torchserve-1 | at org.pytorch.serve.wlm.Model.pollBatch(Model.java:367) ~[model-server.jar:?]
hariomapp-torchserve-1 | at org.pytorch.serve.wlm.BatchAggregator.getRequest(BatchAggregator.java:36) ~[model-server.jar:?]
hariomapp-torchserve-1 | at org.pytorch.serve.wlm.WorkerThread.run(WorkerThread.java:194) [model-server.jar:?]
hariomapp-torchserve-1 | at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) [?:?]
hariomapp-torchserve-1 | at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) [?:?]
hariomapp-torchserve-1 | at java.lang.Thread.run(Thread.java:833) [?:?]
```
Model logs
```
2023-12-12T07:11:26,936 [INFO ] W-9000-msk_fracture_4.0.0-stdout MODEL_LOG - Backend worker process died.
2023-12-12T07:11:26,936 [INFO ] W-9000-msk_fracture_4.0.0-stdout MODEL_LOG - Traceback (most recent call last):
2023-12-12T07:11:26,936 [INFO ] W-9000-msk_fracture_4.0.0-stdout MODEL_LOG - File "/home/venv/lib/python3.9/site-packages/ts/model_service_worker.py", line 258, in <module>
2023-12-12T07:11:26,936 [INFO ] W-9000-msk_fracture_4.0.0-stdout MODEL_LOG - worker.run_server()
2023-12-12T07:11:26,936 [INFO ] W-9000-msk_fracture_4.0.0-stdout MODEL_LOG - File "/home/venv/lib/python3.9/site-packages/ts/model_service_worker.py", line 226, in run_server
2023-12-12T07:11:26,936 [INFO ] W-9000-msk_fracture_4.0.0-stdout MODEL_LOG - self.handle_connection(cl_socket)
2023-12-12T07:11:26,936 [INFO ] W-9000-msk_fracture_4.0.0-stdout MODEL_LOG - File "/home/venv/lib/python3.9/site-packages/ts/model_service_worker.py", line 183, in handle_connection
2023-12-12T07:11:26,936 [INFO ] W-9000-msk_fracture_4.0.0-stdout MODEL_LOG - cl_socket.sendall(resp)
2023-12-12T07:11:26,936 [INFO ] W-9000-msk_fracture_4.0.0-stdout MODEL_LOG - BrokenPipeError: [Errno 32] Broken pipe
2023-12-12T07:11:28,676 [INFO ] W-9000-msk_fracture_4.0.0-stdout MODEL_LOG - s_name_part0=/home/model-server/tmp/.ts.sock, s_name_part1=9000, p
```
### Installation instructions
Using docker, simply ran the stock image in dockerhub
compose file:
```yml
version: '3'
services:
torchserve:
image: pytorch/torchserve:latest-gpu
ports:
- 9080:8080
- 9081:8081
- 9082:8082
- 7070:7070
- 7071:7071
volumes:
- ./modelstore:/home/model-server/model-store
environment:
- TS_METRICS_MODE=prometheus
command: torchserve --model-store /home/model-server/model-store
```
### Model Packaging
I simply take a tensor as input and return the raw tensor generated by the model as output.
Essentially I get a `tuple[dict[str, Tensor], dict[str, Tensor]]` from the model; all tensor values have the same size, with the batch size as the first dimension.
handler
```python
from ts.torch_handler.base_handler import BaseHandler
import pickle
import base64
import logging
import torch
logger = logging.getLogger(__name__)
class ModelHandler(BaseHandler):
def preprocess(self, data):
all_tensors = [pickle.loads(d["body"]) for d in data]
result = torch.cat(all_tensors, 0)
        result = result.to(self.device)  # .to() is not in-place; keep the returned tensor
return result
def _single_result(self, data, i):
"""
we get this:
{
"90_rot": tensor[1.000, 2.999, etc.],
...other keys, same structure
}
We take the index'th element out in value, so its tensor[1.00] but its size is torch.Size([])
t[i].tolist() gives a number, the actual number we want to send back
But remote expects a [number] format, so we send that
"""
return {
k: [v[i].tolist()] for k, v in data.items()
}
def _get_len_batch(self, data):
"""The final dict has a str[dict, tensor[length]]. The length is the batch size
It is guaranteed that for each key, the length of the tensor is the same
"""
key = next(iter(data))
return len(data[key])
def _single_tuple(sel
|
https://github.com/pytorch/serve/issues/2849
|
open
|
[
"triaged"
] | 2023-12-12T07:30:27Z
| 2023-12-29T11:17:16Z
| 3
|
hariom-qure
|
huggingface/alignment-handbook
| 74
|
A question about the SFTTrainer (also a theoretical question about SFT in general)
|
I have a general question about Supervised Fine Tuning (SFT) for Dialogue applications.
Should the SFT process use the same LM objective (next-token prediction) that is used in pre-training a language model?
The "Dialogue" task is predicting "assistant" tokens, right? Shouldn't the objective be predicting only those tokens? Is one way to do this is to set labels for only assistant tokens and ignore the labels on others?
The SFTTrainer [implementation](https://github.com/huggingface/trl/blob/main/trl/trainer/sft_trainer.py#L381) does not set labels - as far as I understand, this leads to "labels" being cloned from "input_ids" and shifted inside the model (within the transformers code), i.e. the plain next-token prediction objective over all tokens.
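For what it's worth, a minimal sketch of restricting the loss to the assistant part is to pass a completion-only collator to SFTTrainer; this assumes trl's `DataCollatorForCompletionOnlyLM`, a `### Assistant:` marker in the formatted text (the marker string is just an example), and `model`/`dataset`/`tokenizer` from the usual setup:
```python
from trl import SFTTrainer, DataCollatorForCompletionOnlyLM

# Tokens before the response template get label -100, so only the assistant
# part contributes to the next-token prediction loss.
collator = DataCollatorForCompletionOnlyLM(
    response_template="### Assistant:", tokenizer=tokenizer
)

trainer = SFTTrainer(
    model=model,
    train_dataset=dataset,
    dataset_text_field="text",
    tokenizer=tokenizer,
    data_collator=collator,
    packing=False,  # completion-only masking requires packing to be disabled
)
trainer.train()
```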
More on a philosophical note - if SFT uses the same objective as pre-training, why shouldn't that be called "Fine Tuning" the model (on a dialogue dataset, of course) rather than "Supervised Fine Tuning"? What am I missing? Is there a reference paper that explains this well, and the right approach to SFT for Dialogue applications?
It is not obvious, hence the question. For example, the [InstructGPT](https://arxiv.org/abs/2203.02155) paper mentions SFT but mainly redirects to the (seemingly) first attempt at SFT in [this](https://arxiv.org/pdf/2109.10862.pdf) paper, which talks about a "Summarization" task but not a "Dialogue" task.
In that paper, when human labelers are asked to summarize and the paper says that "Behavioral Cloning" is used to finetune the LLM for this task, I'd imagine that only the "Summary" section is considered the label, not the entire prompt/document. Following that principle, for "Dialogue" tasks, intuitively, I'd imagine that only the "assistant" turns should be part of the labels.
(By the way, I already asked [this](https://github.com/huggingface/trl/issues/1083) in the trl repository as well, but I am not sure which is the best repository for the question - this one is for alignment tasks in which SFT is a step, hence posting here too.)
|
https://github.com/huggingface/alignment-handbook/issues/74
|
open
|
[] | 2023-12-12T06:54:02Z
| 2024-01-22T14:34:15Z
| 3
|
PradeepKadubandi
|
huggingface/transformers.js
| 453
|
Summarization Parameters not working
|
### Question
I've tried several of the supported summarization models with the code used in the browser extension example.
The only one I get any results from in a reasonable time is t5-small.
My problem with it is that no matter which parameters I pass in, the result is always the same length.
I've traced through the code and it appears that the config params do get passed in.
I've tried max_new_tokens, min_new_tokens, and max_length - no joy.
I initially pinned 2.5.3 and most recently just let the CDN pick the version (looks like 2.10.x) - no joy, same thing.
Could someone please provide me with an example of running a summarization task with the t5-small model where the length parameters actually affect the output?
|
https://github.com/huggingface/transformers.js/issues/453
|
open
|
[
"question"
] | 2023-12-12T06:21:52Z
| 2023-12-19T21:52:32Z
| null |
kwlayman
|
huggingface/safetensors
| 400
|
torch.nn.Module named_parameters() seem to be failing for safetensors
|
### System Info
safetensors==0.4.1
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Reproduction
I noticed this issue with the new Mixtral model:
https://github.com/vllm-project/vllm/issues/2020
Is there any way to fix this with safetensors?
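One way to narrow this down (a sketch; the shard filename is a placeholder) is to inspect the keys stored in the safetensors shards directly and compare them with what `named_parameters()` reports:
```python
from safetensors.torch import load_file

# Placeholder path: point this at one of the downloaded Mixtral shards.
state_dict = load_file("model-00001-of-00019.safetensors")
for name, tensor in state_dict.items():
    print(name, tuple(tensor.shape), tensor.dtype)
```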
### Expected behavior
Load the Mixtral model in safetensors format.
|
https://github.com/huggingface/safetensors/issues/400
|
closed
|
[
"Stale"
] | 2023-12-11T18:54:06Z
| 2024-01-17T01:48:50Z
| 1
|
0-hero
|
huggingface/optimum
| 1,583
|
Add support for Chatglm2 & qwen onnx models
|
### Feature request
I need to export ChatGLM2 & Qwen models to ONNX using HF Optimum.
ChatGLM2: model card -> https://huggingface.co/THUDM/chatglm2-6b
Qwen: model card -> https://huggingface.co/Qwen/Qwen-7B-Chat
### Motivation
I would like to make the process of exporting LLM models to ONNX simpler. There should be generic boilerplate code that can export a model to ONNX by simply passing the Hugging Face model_id.
### Your contribution
I have this piece of code for the export; I'm using it to export ChatGLM2: https://gist.github.com/manishghop/9be5aee6ed3d7551c751cc5d9f7eb8c3
I use it for both ChatGLM2 & Qwen by simply updating the model_id.
Is there a way to run inference with these ONNX models?
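For architectures that Optimum already supports, the usual path looks like the sketch below. This is only a sketch: ChatGLM2 and Qwen are custom `trust_remote_code` architectures, so it may not work for them until export support is added, and the model id is a placeholder.
```python
from optimum.onnxruntime import ORTModelForCausalLM
from transformers import AutoTokenizer

model_id = "gpt2"  # placeholder for an architecture with ONNX export support
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = ORTModelForCausalLM.from_pretrained(model_id, export=True)  # exports to ONNX on the fly

inputs = tokenizer("Hello, my name is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```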
|
https://github.com/huggingface/optimum/issues/1583
|
closed
|
[] | 2023-12-11T15:22:59Z
| 2024-04-24T10:21:48Z
| 4
|
manishghop
|
huggingface/peft
| 1,247
|
How to save parameters in prompt_encoder layers in p-tuning?
|
I want to resume training from a checkpoint in p-tuning, but the model only saves the parameters of the prompt_embeddings.
<img width="370" alt="image" src="https://github.com/huggingface/peft/assets/58416622/a085224f-32f2-409c-9a51-77c7438bc6a2">
|
https://github.com/huggingface/peft/issues/1247
|
closed
|
[] | 2023-12-11T02:44:59Z
| 2024-01-19T15:03:32Z
| null |
lyt719
|
huggingface/optimum-benchmark
| 102
|
How to evaluate a model that already exists locally and hasn't been uploaded yet, "model=?"
|

i really want to know how to load qwen model, thank you very much
|
https://github.com/huggingface/optimum-benchmark/issues/102
|
closed
|
[] | 2023-12-10T08:35:59Z
| 2024-01-11T08:18:17Z
| null |
WCSY-YG
|
huggingface/transformers
| 27,928
|
[Question] What is the main difference between "AutoModelForCausalLM" and "PeftModelForCausalLM"?
|
I also posted this in the peft repo; however, this issue is also related to transformers, so I am asking my question here as well.
The peft issue is here: https://github.com/huggingface/peft/issues/1245
Hello, sorry for the naive question.
I noticed that the ``model.generate()`` function behaves differently when running inference right after training with ```trainer.model``` versus after merge and unload. (All parameters are the same.)
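For reference, a minimal sketch of the two inference paths being compared (assuming `trainer` and tokenized `inputs` from the training setup):
```python
# Path 1: generate with the PeftModel straight from the trainer
# (LoRA adapters are applied on the fly at each forward pass).
peft_model = trainer.model
out_adapter = peft_model.generate(**inputs, max_new_tokens=64)

# Path 2: fold the LoRA weights into the base model, then generate.
# Note: with a 4-bit (Linear4bit) base, merging involves dequantize/requantize
# rounding, so small output differences after merge_and_unload are expected.
merged_model = peft_model.merge_and_unload()
out_merged = merged_model.generate(**inputs, max_new_tokens=64)
```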
So I checked the two different objects with a simple print.
The difference was the object that wraps the model.
1. ```model = trainer.model```
```
PeftModelForCausalLM(
(base_model): LoraModel(
(model): LlamaForCausalLM(
(model): LlamaModel(
(embed_tokens): ModulesToSaveWrapper(
(original_module): Embedding(32008, 5120)
(modules_to_save): ModuleDict(
(default): Embedding(32008, 5120)
)
)
(layers): ModuleList(
(0-39): 40 x LlamaDecoderLayer(
(self_attn): LlamaAttention(
(q_proj): Linear4bit(
(lora_dropout): ModuleDict(
(default): Dropout(p=0.1, inplace=False)
)
(lora_A): ModuleDict(
(default): Linear(in_features=5120, out_features=64, bias=False)
)
(lora_B): ModuleDict(
(default): Linear(in_features=64, out_features=5120, bias=False)
)
(lora_embedding_A): ParameterDict()
(lora_embedding_B): ParameterDict()
(base_layer): Linear4bit(in_features=5120, out_features=5120, bias=False)
)
(k_proj): Linear4bit(
(lora_dropout): ModuleDict(
(default): Dropout(p=0.1, inplace=False)
)
(lora_A): ModuleDict(
(default): Linear(in_features=5120, out_features=64, bias=False)
)
(lora_B): ModuleDict(
(default): Linear(in_features=64, out_features=5120, bias=False)
)
(lora_embedding_A): ParameterDict()
(lora_embedding_B): ParameterDict()
(base_layer): Linear4bit(in_features=5120, out_features=5120, bias=False)
)
(v_proj): Linear4bit(
(lora_dropout): ModuleDict(
(default): Dropout(p=0.1, inplace=False)
)
(lora_A): ModuleDict(
(default): Linear(in_features=5120, out_features=64, bias=False)
)
(lora_B): ModuleDict(
(default): Linear(in_features=64, out_features=5120, bias=False)
)
(lora_embedding_A): ParameterDict()
(lora_embedding_B): ParameterDict()
(base_layer): Linear4bit(in_features=5120, out_features=5120, bias=False)
)
(o_proj): Linear4bit(
(lora_dropout): ModuleDict(
(default): Dropout(p=0.1, inplace=False)
)
(lora_A): ModuleDict(
(default): Linear(in_features=5120, out_features=64, bias=False)
)
(lora_B): ModuleDict(
(default): Linear(in_features=64, out_features=5120, bias=False)
)
(lora_embedding_A): ParameterDict()
(lora_embedding_B): ParameterDict()
(base_layer): Linear4bit(in_features=5120, out_features=5120, bias=False)
)
(rotary_emb): LlamaRotaryEmbedding()
)
(mlp): LlamaMLP(
(gate_proj): Linear4bit(
(lora_dropout): ModuleDict(
(default): Dropout(p=0.1, inplace=False)
)
(lora_A): ModuleDict(
(default): Linear(in_features=5120, out_features=64, bias=False)
)
(lora_B): ModuleDict(
(default): Linear(in_features=64, out_features=13824, bias=False)
)
(lora_embedding_A): ParameterDict()
(lora_embedding_B): ParameterDict()
(base_layer): Linear4bit(in_features=5120, out_features=13824, bias=False)
)
(up_proj): Linear4bit(
(lora_dropout): ModuleDict(
(default): Dropout(p=0.1, inplace=False)
)
(lora_A): ModuleDict(
(default): Linear(in_features=5120, out_features=64, bias=False)
)
(lora_B): ModuleDict(
(default): Linear(in_features=64, out_features=13824, bias=False)
)
(lora_embedding_A): ParameterDict()
(lora_embedding_B): ParameterDict()
(base_layer): Linear4bit(i
|
https://github.com/huggingface/transformers/issues/27928
|
closed
|
[] | 2023-12-10T03:10:36Z
| 2024-02-01T00:49:07Z
| null |
daehuikim
|
huggingface/peft
| 1,245
|
[Question] What is the main difference between "AutoModelForCausalLM" and "PeftModelForCausalLM"?
|
Because this is also related to "transformers", I posted this question in the transformers repo as well.
The transformers issue is here: https://github.com/huggingface/transformers/issues/27928
Hello, sorry for the naive question.
I noticed that the ``model.generate()`` function behaves differently when running inference right after training with ```trainer.model``` versus after merge and unload. (All parameters are the same.)
So I checked the two different objects with a simple print.
The difference was the object that wraps the model.
1. ```model = trainer.model```
```
PeftModelForCausalLM(
(base_model): LoraModel(
(model): LlamaForCausalLM(
(model): LlamaModel(
(embed_tokens): ModulesToSaveWrapper(
(original_module): Embedding(32008, 5120)
(modules_to_save): ModuleDict(
(default): Embedding(32008, 5120)
)
)
(layers): ModuleList(
(0-39): 40 x LlamaDecoderLayer(
(self_attn): LlamaAttention(
(q_proj): Linear4bit(
(lora_dropout): ModuleDict(
(default): Dropout(p=0.1, inplace=False)
)
(lora_A): ModuleDict(
(default): Linear(in_features=5120, out_features=64, bias=False)
)
(lora_B): ModuleDict(
(default): Linear(in_features=64, out_features=5120, bias=False)
)
(lora_embedding_A): ParameterDict()
(lora_embedding_B): ParameterDict()
(base_layer): Linear4bit(in_features=5120, out_features=5120, bias=False)
)
(k_proj): Linear4bit(
(lora_dropout): ModuleDict(
(default): Dropout(p=0.1, inplace=False)
)
(lora_A): ModuleDict(
(default): Linear(in_features=5120, out_features=64, bias=False)
)
(lora_B): ModuleDict(
(default): Linear(in_features=64, out_features=5120, bias=False)
)
(lora_embedding_A): ParameterDict()
(lora_embedding_B): ParameterDict()
(base_layer): Linear4bit(in_features=5120, out_features=5120, bias=False)
)
(v_proj): Linear4bit(
(lora_dropout): ModuleDict(
(default): Dropout(p=0.1, inplace=False)
)
(lora_A): ModuleDict(
(default): Linear(in_features=5120, out_features=64, bias=False)
)
(lora_B): ModuleDict(
(default): Linear(in_features=64, out_features=5120, bias=False)
)
(lora_embedding_A): ParameterDict()
(lora_embedding_B): ParameterDict()
(base_layer): Linear4bit(in_features=5120, out_features=5120, bias=False)
)
(o_proj): Linear4bit(
(lora_dropout): ModuleDict(
(default): Dropout(p=0.1, inplace=False)
)
(lora_A): ModuleDict(
(default): Linear(in_features=5120, out_features=64, bias=False)
)
(lora_B): ModuleDict(
(default): Linear(in_features=64, out_features=5120, bias=False)
)
(lora_embedding_A): ParameterDict()
(lora_embedding_B): ParameterDict()
(base_layer): Linear4bit(in_features=5120, out_features=5120, bias=False)
)
(rotary_emb): LlamaRotaryEmbedding()
)
(mlp): LlamaMLP(
(gate_proj): Linear4bit(
(lora_dropout): ModuleDict(
(default): Dropout(p=0.1, inplace=False)
)
(lora_A): ModuleDict(
(default): Linear(in_features=5120, out_features=64, bias=False)
)
(lora_B): ModuleDict(
(default): Linear(in_features=64, out_features=13824, bias=False)
)
(lora_embedding_A): ParameterDict()
(lora_embedding_B): ParameterDict()
(base_layer): Linear4bit(in_features=5120, out_features=13824, bias=False)
)
(up_proj): Linear4bit(
(lora_dropout): ModuleDict(
(default): Dropout(p=0.1, inplace=False)
)
(lora_A): ModuleDict(
(default): Linear(in_features=5120, out_features=64, bias=False)
)
(lora_B): ModuleDict(
(default): Linear(in_features=64, out_features=13824, bias=False)
)
(lora_embedding_A): ParameterDict()
(lora_embedding_B): ParameterDict()
(base_layer): Linear4bit
|
https://github.com/huggingface/peft/issues/1245
|
closed
|
[] | 2023-12-10T03:08:54Z
| 2023-12-11T11:15:25Z
| null |
daehuikim
|
pytorch/serve
| 2,841
|
Not able to get the data for inference when using custom handler
|
Hi team, I have created my own custom handler by referencing the base handler and the vision handler. What I am observing is that when I pass data to the model for inference, the data is not reaching the hosted model endpoint.
The exact error I am getting is:
```
2023-12-09T20:08:03,580 [INFO ] W-9000-vit_l_16_1.0-stdout MODEL_LOG - Invoking custom service failed.
2023-12-09T20:08:03,580 [INFO ] W-9000-vit_l_16_1.0-stdout MODEL_LOG - Traceback (most recent call last):
2023-12-09T20:08:03,580 [INFO ] W-9000-vit_l_16_1.0-stdout MODEL_LOG - File "/opt/conda/envs/pytorch/lib/python3.10/site-packages/ts/service.py", line 120, in predict
2023-12-09T20:08:03,581 [INFO ] W-9000-vit_l_16_1.0-stdout MODEL_LOG - ret = self._entry_point(input_batch, self.context)
2023-12-09T20:08:03,581 [INFO ] W-9000-vit_l_16_1.0-stdout MODEL_LOG - File "/tmp/models/6ffe80d83e5341da81fe21bda0d735e0/custom_handler.py", line 139, in handle
2023-12-09T20:08:03,581 [INFO ] W-9000-vit_l_16_1.0-stdout MODEL_LOG - model_input = self.data_preprocess(data)
2023-12-09T20:08:03,582 [INFO ] W-9000-vit_l_16_1.0-stdout MODEL_LOG - File "/tmp/models/6ffe80d83e5341da81fe21bda0d735e0/custom_handler.py", line 91, in data_preprocess
2023-12-09T20:08:03,583 [INFO ] W-9000-vit_l_16_1.0-stdout MODEL_LOG - image = Image.open(io.BytesIO(image))
2023-12-09T20:08:03,585 [INFO ] W-9000-vit_l_16_1.0-stdout MODEL_LOG - File "/opt/conda/envs/pytorch/lib/python3.10/site-packages/PIL/Image.py", line 3280, in open
2023-12-09T20:08:03,586 [INFO ] W-9000-vit_l_16_1.0-stdout MODEL_LOG - raise UnidentifiedImageError(msg)
2023-12-09T20:08:03,586 [INFO ] W-9000-vit_l_16_1.0-stdout MODEL_LOG - PIL.UnidentifiedImageError: cannot identify image file <_io.BytesIO object at 0x7f7677de3ce0>
```
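From the traceback, the bytes handed to `Image.open` do not look like image data. A minimal defensive decode for one batch entry might look like the sketch below; it assumes the client really sends image bytes, either raw or base64-encoded, under the usual `data`/`body` keys:
```python
import base64
import io

from PIL import Image


def decode_image(row):
    """Decode one TorchServe batch entry into a PIL image (sketch, not the real handler)."""
    payload = row.get("data") or row.get("body")
    if isinstance(payload, str):
        # Some clients send base64 text instead of raw bytes.
        payload = base64.b64decode(payload)
    return Image.open(io.BytesIO(payload)).convert("RGB")
```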
---
When I printed my "data" before passing it for preprocessing, this is what I got:
```
2023-12-09T19:43:42,421 [INFO ] W-9000-vit_l__1.0-stdout MODEL_LOG - data: [{'data': bytearray(b'{"payload":{"allShortcutsEnabled":false,"fileTree":{"examples/image_classifier/mnist/test_data":{"items":[{"name":"0.png","path":"examples/image_classifier/mnist/test_data/0.png","contentType":"file"},{"name":"1.png","path":"examples/image_classifier/mnist/test_data/1.png","contentType":"file"},{"name":"2.png","path":"examples/image_classifier/mnist/test_data/2.png","contentType":"file"},{"name":"3.png","path":"examples/image_classifier/mnist/test_data/3.png","contentType":"file"},{"name":"4.png","path":"examples/image_classifier/mnist/test_data/4.png","contentType":"file"},{"name":"5.png","path":"examples/image_classifier/mnist/test_data/5.png","contentType":"file"},{"name":"6.png","path":"examples/image_classifier/mnist/test_data/6.png","contentType":"file"},{"name":"7.png","path":"examples/image_classifier/mnist/test_data/7.png","contentType":"file"},{"name":"8.png","path":"examples/image_classifier/mnist/test_data/8.png","contentType":"file"},{"name":"9.png","path":"examples/image_classifier/mnist/test_data/9.png","contentType":"file"}],"totalCount":10},"examples/image_classifier/mnist":{"items":[{"name":"screenshots","path":"examples/image_classifier/mnist/screenshots","contentType":"directory"},{"name":"test_data","path":"examples/image_classifier/mnist/test_data","contentType":"directory"},{"name":"torchdata","path":"examples/image_classifier/mnist/torchdata","contentType":"directory"},{"name":"Docker.md","path":"examples/image_classifier/mnist/Docker.md","contentType":"file"},{"name":"README.md","path":"examples/image_classifier/mnist/README.md","contentType":"file"},{"name":"config.properties","path":"examples/image_classifier/mnist/config.properties","contentType":"file"},{"name":"mnist.py","path":"examples/image_classifier/mnist/mnist.py","contentType":"file"},{"name":"mnist_cnn.pt","path":"examples/image_classifier/mnist/mnist_cnn.pt","contentType":"file"},{"name":"mnist_handler.py","path":"examples/image_classifier/mnist/mnist_handler.py","contentType":"file"},{"name":"mnist_ts.json","path":"examples/image_classifier/mnist/mnist_ts.json","contentType":"file"}],"totalCount":10},"examples/image_classifier":{"items":[{"name":"alexnet","path":"examples/image_classifier/alexnet","contentType":"directory"},{"name":"densenet_161","path":"examples/image_classifier/densenet_161","contentType":"directory"},{"name":"mnist","path":"examples/image_classifier/mnist","contentType":"directory"},{"name":"near_real_time_video","path":"examples/image_classifier/near_real_time_video","contentType":"directory"},{"name":"resnet_152_batch","path":"examples/image_classifier/resnet_152_batch","contentType":"directory"},{"name":"resnet_18","path":"examples/image_classifier/resnet_18","contentType":"directory"},{"name":"squeezenet","path":"examples/image_classifier/squeezenet","contentType":"directory"},{"name":"vgg_16","path":"examples/image_classifier/vgg_16","contentType":"directory"},{"name":"README.md","path":"examples/image_classifier/README.md","conten
|
https://github.com/pytorch/serve/issues/2841
|
closed
|
[
"triaged_wait",
"support"
] | 2023-12-09T20:10:19Z
| 2023-12-23T17:13:36Z
| 2
|
yogendra-yatnalkar
|
huggingface/diffusers
| 6,113
|
How to use the models from sd_control_collection hf repo in diffusers
|
How to load/convert the models at https://huggingface.co/lllyasviel/sd_control_collection/tree/main with diffusers?
```
>>> pipe = diffusers.StableDiffusionPipeline.from_single_file("diffusers_xl_canny_full.safetensors")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/ubuntu/.local/lib/python3.10/site-packages/diffusers/loaders/single_file.py", line 261, in from_single_file
pipe = download_from_original_stable_diffusion_ckpt(
File "/home/ubuntu/.local/lib/python3.10/site-packages/diffusers/pipelines/stable_diffusion/convert_from_ckpt.py", line 1436, in download_from_original_stable_diffusion_ckpt
converted_unet_checkpoint = convert_ldm_unet_checkpoint(
File "/home/ubuntu/.local/lib/python3.10/site-packages/diffusers/pipelines/stable_diffusion/convert_from_ckpt.py", line 426, in convert_ldm_unet_checkpoint
new_checkpoint["time_embedding.linear_1.weight"] = unet_state_dict["time_embed.0.weight"]
KeyError: 'time_embed.0.weight'
```
I am also not able to convert it via the HF script: https://github.com/huggingface/diffusers/blob/main/scripts/convert_original_controlnet_to_diffusers.py
We are able to run it through the AUTOMATIC1111 webui (https://github.com/AUTOMATIC1111). How can it be used with diffusers?
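A sketch that may work here, assuming `diffusers_xl_canny_full.safetensors` is an SDXL ControlNet checkpoint (as the name suggests): load it through `ControlNetModel` rather than `StableDiffusionPipeline`, then plug it into a ControlNet pipeline.
```python
import torch
from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline

# ControlNet single-file checkpoints are control nets, not full pipelines,
# which is why StableDiffusionPipeline.from_single_file fails on them.
controlnet = ControlNetModel.from_single_file(
    "diffusers_xl_canny_full.safetensors", torch_dtype=torch.float16
)

pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")
```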
|
https://github.com/huggingface/diffusers/issues/6113
|
closed
|
[] | 2023-12-09T14:11:26Z
| 2024-06-11T18:22:03Z
| null |
anilsathyan7
|
pytorch/TensorRT
| 2,525
|
❓[Question] The only valid use of a module is looking up an attribute but found...
|
## ❓ Question
<!-- Your question -->
Hello, I have a TorchScript model that I am trying to compile with Torch-TensorRT:
```py
import cv2
import numpy as np
import torch
from torchvision.transforms import ToTensor
import torch_tensorrt
if __name__ == "__main__":
# Load the pre-trained model
model = torch.jit.load('model.jit')
# Define sample points and bounding box labels
pts_sampled = np.array([[100, 100], [800, 800]])
bbox = torch.reshape(torch.tensor(pts_sampled), [1, 1, 2, 2])
bbox_labels = torch.reshape(torch.tensor([2, 3]), [1, 1, 2])
# Read and preprocess the image
image = cv2.imread('image.jpg')
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
img_tensor = ToTensor()(image)
# Compile the model with TensorRT
with torch_tensorrt.logging.debug():
trt_model = torch_tensorrt.compile(model,
inputs=[img_tensor[None, ...].cuda(),
bbox.cuda(),
bbox_labels.cuda()],
enabled_precisions={torch.float32},
workspace_size=2000000000,
truncate_long_and_double=True
)
```
This returns the following debug information and error:
```sh
INFO: [Torch-TensorRT] - ir was set to default, using TorchScript as ir
DEBUG: [Torch-TensorRT] - TensorRT Compile Spec: {
"Inputs": [
Input(shape=(1,3,1080,1920,), dtype=Float, format=Contiguous/Linear/NCHW, tensor_domain=[0, 2))Input(shape=(1,1,2,2,), dtype=Long, format=Contiguous/Linear/NCHW, tensor_domain=[0, 2))Input(shape=(1,1,2,), dtype=Long, format=Contiguous/Linear/NCHW, tensor_domain=[0, 2)) ]
"Enabled Precision": [Float, ]
"TF32 Disabled": 0
"Sparsity": 0
"Refit": 0
"Debug": 0
"Device": {
"device_type": GPU
"allow_gpu_fallback": False
"gpu_id": 0
"dla_core": -1
}
"Engine Capability": Default
"Num Avg Timing Iters": 1
"Workspace Size": 2000000000
"DLA SRAM Size": 1048576
"DLA Local DRAM Size": 1073741824
"DLA Global DRAM Size": 536870912
"Truncate long and double": 1
"Allow Shape tensors": 0
"Torch Fallback": {
"enabled": True
"min_block_size": 3
"forced_fallback_operators": [
]
"forced_fallback_modules": [
]
}
}
DEBUG: [Torch-TensorRT] - init_compile_spec with input vector
DEBUG: [Torch-TensorRT] - Settings requested for Lowering:
torch_executed_modules: [
]
Traceback (most recent call last):
File "/home/jupyter/main.py", line 79, in <module>
trt_model = torch_tensorrt.compile(model,
File "/home/jupyter/venv/lib/python3.9/site-packages/torch_tensorrt/_compile.py", line 133, in compile
return torch_tensorrt.ts.compile(
File "/home/jupyter/venv/lib/python3.9/site-packages/torch_tensorrt/ts/_compiler.py", line 139, in compile
compiled_cpp_mod = _C.compile_graph(module._c, _parse_compile_spec(spec))
RuntimeError:
temporary: the only valid use of a module is looking up an attribute but found = prim::SetAttr[name="W"](%self.1, %345)
```
I am looking to understand what my options are and what I can change to compile this successfully.
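For illustration, this error usually points at an attribute assignment inside `forward()`; the names below are hypothetical, but the offending pattern and a compile-friendly rewrite look like this:
```python
import torch


class Bad(torch.nn.Module):
    def forward(self, x):
        # prim::SetAttr: mutating module state during forward is rejected
        # when Torch-TensorRT lowers/freezes the TorchScript module.
        self.W = x.mean()
        return x * self.W


class Good(torch.nn.Module):
    def forward(self, x):
        # Keep forward() pure: compute the value locally (or return it)
        # instead of writing it back onto the module.
        w = x.mean()
        return x * w
```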
## Environment
```sh
PyTorch version: 2.0.1+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Debian GNU/Linux 11 (bullseye) (x86_64)
GCC version: (Debian 10.2.1-6) 10.2.1 20210110
Clang version: Could not collect
CMake version: version 3.27.9
Libc version: glibc-2.31
Python version: 3.9.2 (default, Feb 28 2021, 17:03:44) [GCC 10.2.1 20210110] (64-bit runtime)
Python platform: Linux-5.10.0-26-cloud-amd64-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA L4
Nvidia driver version: 525.105.17
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 8
On-line CPU(s) list: 0-7
Thread(s) per core: 2
Core(s) per socket: 4
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) CPU @ 2.20GHz
Stepping: 7
CPU MHz: 2200.222
BogoMIPS: 4400.44
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 128 KiB
L1i cache: 128 KiB
L2 cache:
|
https://github.com/pytorch/TensorRT/issues/2525
|
closed
|
[
"question",
"component: lowering"
] | 2023-12-08T23:09:04Z
| 2024-06-11T18:33:42Z
| null |
edmuthiah
|