repo
stringclasses 147
values | number
int64 1
172k
| title
stringlengths 2
476
| body
stringlengths 0
5k
| url
stringlengths 39
70
| state
stringclasses 2
values | labels
listlengths 0
9
| created_at
timestamp[ns, tz=UTC]date 2017-01-18 18:50:08
2026-01-06 07:33:18
| updated_at
timestamp[ns, tz=UTC]date 2017-01-18 19:20:07
2026-01-06 08:03:39
| comments
int64 0
58
⌀ | user
stringlengths 2
28
|
|---|---|---|---|---|---|---|---|---|---|---|
huggingface/setfit
| 423
|
[Q] How to examine correct/wrong predictions in trainer.evaluate()
|
Hello,
After doing "metrics = trainer.evalute()" as shown in the example code, is there a way to examine which rows in the evaluation data set were predicted correctly?
Thanks!
|
https://github.com/huggingface/setfit/issues/423
|
closed
|
[
"question"
] | 2023-09-25T23:41:53Z
| 2023-11-24T13:04:45Z
| null |
youngjin-lee
|
huggingface/chat-ui
| 461
|
The custom endpoint response doesn't stream even though the endpoint is sending streaming content
|
@nsarrazin I'm transmitting the streaming response to the chat UI, but it displays all the content simultaneously rather than progressively streaming the text generation part. Can you help me address this issue?
Reference: #380
|
https://github.com/huggingface/chat-ui/issues/461
|
open
|
[
"support"
] | 2023-09-25T07:43:57Z
| 2023-10-29T11:21:04Z
| 2
|
nandhaece07
|
huggingface/autotrain-advanced
| 279
|
How to run AutoTrain Advanced UI locally
|
How to run AutoTrain Advanced UI locally 😢
|
https://github.com/huggingface/autotrain-advanced/issues/279
|
closed
|
[] | 2023-09-25T07:25:51Z
| 2024-04-09T03:20:17Z
| null |
LronDC
|
huggingface/transformers.js
| 328
|
[Question] React.js serve sentence bert in browser keep reporting models not found.
|
my codes:
```javascript
export const useInitTransformers = () => {
const init = async () => {
// @ts-ignore
env.allowLocalModels = false;
extractor = await pipeline(
"feature-extraction",
"Xenova/all-mpnet-base-v2",
);
};
return { init };
};
```
I'm building a frontend with React that can serve sentence bert directly in browser, but no idea why even i add the line
`env.allowLocalModels = false`
before pipeline loading the model. In the production environment, it's still trying to access model locally `/models/...`, but which will never exists in this usecase.
**Is there any way i can bypass this check and directly pull the model from remote?**

|
https://github.com/huggingface/transformers.js/issues/328
|
closed
|
[
"question"
] | 2023-09-24T15:51:47Z
| 2024-10-18T13:30:11Z
| null |
bianyuanop
|
pytorch/tutorials
| 2,569
|
💡 [REQUEST] - <title>
|
### 🚀 Descirbe the improvement or the new tutorial
This tutorial “A GENTLE INTRODUCTION TO TORCH.AUTOGRAD”, the gradients of the error w.r.t. parameters, Q w.r.t a, I think the result should be a 2x2 matrix but not a 2-d vector, according to the matrix calculus.
### Existing tutorials on this topic
_No response_
### Additional context
_No response_
cc @albanD
|
https://github.com/pytorch/tutorials/issues/2569
|
closed
|
[
"question",
"core"
] | 2023-09-24T11:24:53Z
| 2023-10-27T19:23:44Z
| null |
haoyunliang
|
pytorch/vision
| 7,987
|
How to update RegionProposalNetwork loss function in Faster RCNN?
|
Excuse me if this question is stupid, but I can't seem to figure out how to do this…
I want to update the loss function of the RPN in FasterRCNN. See these lines [here](https://github.com/pytorch/vision/blob/beb4bb706b5e13009cb5d5586505c6d2896d184a/torchvision/models/detection/generalized_rcnn.py#L104-L105), which calls the `compute_loss` function [here](https://github.com/pytorch/vision/blob/main/torchvision/models/detection/rpn.py#L298). I want to modify the `compute_loss` function (the second link).
I’m trying to update this `compute_loss` function in my code like so:
```rpn.RegionProposalNetwork.compute_loss = custom_loss```
However, this is not working i.e. it has no effect. Any idea how to update the RPN’s loss function?
|
https://github.com/pytorch/vision/issues/7987
|
closed
|
[] | 2023-09-24T09:16:17Z
| 2023-10-05T14:46:37Z
| null |
darian69
|
pytorch/pytorch
| 109,958
|
How to compile torch 2.0.1 version from source?
|
### 🐛 Describe the bug
While I was using 'git clone --branch v2.0.1 https://github.com/pytorch/pytorch.git & python setup.py develop', and 'Building wheel torch-1.14.0a0+410ce96' version was being built.
### Versions
I also checked the version.txt, it shows '2.0.0a0' which should be the version in v2.0.1 tag branch.
So how should I compile torch 2.0.1 version from source? Thanks!
|
https://github.com/pytorch/pytorch/issues/109958
|
open
|
[
"oncall: releng",
"triaged"
] | 2023-09-24T00:53:04Z
| 2023-09-25T11:01:11Z
| null |
tonylin52
|
huggingface/candle
| 944
|
Question: How to tokeninize text for Llama?
|
Hello everybody,
How can I tokenize text to use with Llama? I want to fine-tune Llama on my custom data, so how can I tokenize from a String and then detokenize the logits into a String?
I have looked at the Llama example for how to detokenize, but cannot find any clear documentation on how the implementation actually works for outputting results during training.
Thanks!
|
https://github.com/huggingface/candle/issues/944
|
closed
|
[] | 2023-09-23T18:19:56Z
| 2023-09-23T23:01:13Z
| null |
EricLBuehler
|
huggingface/transformers.js
| 327
|
Calling pipeline returns `undefined`. What are possible reasons?
|
The repository if you need it ▶▶▶ [China Cups](https://github.com/piscopancer/china-cups)
## Next 13.5 / server-side approach
Just started digging into your library. Sorry for stupidity.
### `src/app/api/translate/route.ts` 👇
```ts
import { NextRequest, NextResponse } from 'next/server'
import { PipelineSingleton } from '@/utils/pipeline'
export async function GET(request: NextRequest) {
const text = request.nextUrl.searchParams.get('text')
if (!text) {
return NextResponse.json(
{
error: 'Missing text',
},
{ status: 400 },
)
}
const translator = await PipelineSingleton.getInstance()
const translation = await translator(text)
console.log(translation) // undefined
return NextResponse.json(translation)
}
```
### `src/utils/pipeline.ts` 👇
This singleton must be fine, I suppose.
```ts
import { Pipeline, pipeline } from '@xenova/transformers'
import { PretrainedOptions } from '@xenova/transformers/types/models'
function DeclarePipeline() {
return class PipelineSingleton {
static task = 'question-answering'
static model = undefined as undefined | string
static instance = null as null | Promise<Pipeline>
static async getInstance(options?: PretrainedOptions) {
if (!this.instance) {
this.instance = pipeline(this.task, this.model, options)
}
return this.instance
}
}
}
export const PipelineSingleton = (() => {
if (process.env.NODE_ENV !== 'production') {
const gl = global as any
if (!gl.PipelineSingleton) {
gl.PipelineSingleton = DeclarePipeline()
}
return gl.PipelineSingleton
}
return DeclarePipeline()
})() as ReturnType<typeof DeclarePipeline>
```
### `src/app/page.tsx`This is how I query it 👇
Btw, no errors occur on this stage
```tsx
export default async function HomePage({ searchParams }: THomePage) {
const text = 'Hello'
const translation = await axios.get(`/translate?text=${text}`).then((res) => res.data())
// const translation = await fetch(`/translate?text=${encodeURIComponent(text)}`).then((res) => res.json())
return <pre>{JSON.stringify(translation)}</pre>
```
## One more very important thing
When I **manually** go to `http://localhost:3000/api/translate?text=Hello` I very happily get this error:
```
⨯ TypeError: Value is not JSON serializable
at serializeJavascriptValueToJSONString (node:internal/deps/undici/undici:1203:15)
at Response.json (node:internal/deps/undici/undici:6746:55)
at NextResponse.json (webpack-internal:///(rsc)/./node_modules/next/dist/server/web/spec-extension/response.js:66:35)
at GET (webpack-internal:///(rsc)/./src/app/api/translate/route.ts:24:95)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async C:\web-dev\next\china-cups\node_modules\next\dist\compiled\next-server\app-route.runtime.dev.js:1:66877
```
👆 the browser cannot load this url if text=... is present 😟.
💖
|
https://github.com/huggingface/transformers.js/issues/327
|
closed
|
[
"question"
] | 2023-09-23T15:57:24Z
| 2023-09-24T06:55:08Z
| null |
piscopancer
|
pytorch/TensorRT
| 2,340
|
❓ [Question] Why import torch_tensorrt set log level to info automatically?
|
## ❓ Question
The default log level of python is warning.
Why import torch_tensorrt set log level to info automatically?
How could I set log level back to warning?
```
import logging
import torch_tensorrt
logging.info("INFO")
logging.warning("WARNING")
logging.error("ERROR")
```
stderr outputs:
```
INFO:root:INFO
WARNING:root:WARNING
ERROR:root:ERROR
```
what I want:
```
WARNING:root:WARNING
ERROR:root:ERROR
```
## What you have already tried
Below statements doesn't work
```
torch_tensorrt.logging.set_reportable_log_level(torch_tensorrt.logging.Level.Warning)
# or
logging.basicConfig(level=logging.WARNING)
```
## Environment
> Build information about Torch-TensorRT can be found by turning on debug messages
Docker image from: nvcr.io/nvidia/pytorch:23.05-py3
|
https://github.com/pytorch/TensorRT/issues/2340
|
open
|
[
"question",
"No Activity"
] | 2023-09-23T13:51:10Z
| 2024-01-01T00:02:44Z
| null |
KindRoach
|
huggingface/optimum
| 1,410
|
Export TrOCR to ONNX
|
I was trying to export my fine-tuned TrOCR model to ONNX using following command. I didn't get any errors, but in onnx folder only encoder model is saved.
```
!python -m transformers.onnx --model=model_path --feature=vision2seq-lm onnx/ --atol 1e-2
```
So, regarding this, I have 2 questions.
1. How to save decoder_model.onnx, so that I can use [this inference script](https://gist.github.com/mht-sharma/f38c670930ac7df413c07327e692ee39).
2. If it is not possible to export the decoder model to ONNX, how can I perform inference using encoder_model.onnx? According to my understanding, model.generate() takes time to generate output, while the decode method doesn't consume as much time compared to the generate method. Is there any way to use encoder_model.onnx with the existing decoder model in order to optimize response time?
```
p = processor(image, return_tensors="pt").pixel_values
generated_ids = model.generate(
p,
do_sample=True,
top_k=5,
top_p=0.1,
num_beams=4,
num_return_sequences=1,
output_scores=True,
use_cache=True,
return_dict_in_generate=True
)
generated_text = processor.batch_decode(generated_ids.sequences, skip_special_tokens=True)[0]
```
Please correct me if this approach to optimize response time is wrong.
Thanks.
|
https://github.com/huggingface/optimum/issues/1410
|
closed
|
[
"onnx"
] | 2023-09-23T09:19:50Z
| 2024-10-15T16:21:52Z
| 2
|
VallabhMahajan1
|
pytorch/pytorch
| 109,880
|
[FSDP ]How to convert sharded_state_dict files into full_state_dict offline without distributed process
|
### 🚀 The feature, motivation and pitch
Currently, if I use FSDP with 128 gpus and save checkpoints with sharded_state_dict to avoid gathering the full_state_dict on rank0 for saving, there is no way to obtain the full_state_dict ckpt offline.
The only way to obtain full_state_dict is to launch the exact 128GPU distributed process with FSDP to load that sharded_state_dict model, then switch to full_state_dict config and save the ckpt to files, which is originally problem we wanted to avoid.
I cannot read the sharded_state_dict file (with `torch.load()`) individually either, except if I launch a 128gpu distributed process to read it. The file contain `ShardedTensor` which requires the same world_size=128 to load.
I would like to have an offline script to read each sharded file and write iterative to a pytorch_model_0.bin, pytorch_model_1.bin, pytorch_model_2.bin...
And then we can load the model with `AutoModelForCausalLM.from_pretrained(...)` by loading each `.bin`
Thanks!
### Alternatives
_No response_
### Additional context
_No response_
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu @penguinwu @fegin
|
https://github.com/pytorch/pytorch/issues/109880
|
closed
|
[
"oncall: distributed",
"triaged",
"module: fsdp"
] | 2023-09-22T13:44:11Z
| 2024-05-16T01:16:12Z
| null |
nxphi47
|
pytorch/tutorials
| 2,566
|
[BUG] - Per sample gradients using function transforms not working for RNN
|
### Add Link
Hello!
I'm working on a optimization algorithm that requires computing the per sample gradients. Assuming the batch size is $N$ and the number of model parameters is $M$, I want to calculate $\partial \log p(\mathbf{x}^{(i)};\theta)/\partial \theta_j$, which is an $N \times M$ matrix. I found the [[PER-SAMPLE-GRADIENTS](https://pytorch.org/tutorials/intermediate/per_sample_grads.html)](https://pytorch.org/tutorials/intermediate/per_sample_grads.html) tutorial and began my own experiments. As a proof of concept, I defined a generative model with a tractable likelihood, such as MADE (Masked Autoencoder for Distribution Estimation), PixelCNN, RNN, etc., and sepcified the `log_prob` and `sample` methods. I utilized the function transforms methods mentioned in the tutorial, but currently, it only works for MADE (I believed it would work for NADE and PixelCNN too, since these models need only one forward pass to calculate the log likelihood of $\mathbf{x}$. For RNN however, both sampling and inference require $N$ forward pass).
Below, I've provided my code snippets, and I'm interested in figuring out why it's not working for RNN. Making it work for RNN would significantly reduce the number of parameters for my research purpose.
Thank you!
### Describe the bug
```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F
torch.manual_seed(0)
class MADE(nn.Module):
'''A simple one-layer MADE (Masked Autoencoder for Distribution Estimation)'''
def __init__(self, n=10, device='cpu', *args, **kwargs):
super().__init__()
self.n = n
self.device = device
self.weight = nn.Parameter(torch.randn(self.n, self.n) / math.sqrt(self.n))
self.bias = nn.Parameter(torch.zeros(self.n))
mask = torch.tril(torch.ones(self.n, self.n), diagonal=-1)
self.register_buffer('mask', mask)
def pred_logits(self, x):
return F.linear(x, self.mask * self.weight, self.bias)
def forward(self, x):
logits = self.pred_logits(x)
log_probs = - F.binary_cross_entropy_with_logits(logits, x, reduction='none')
return log_probs.sum(-1)
@torch.no_grad()
def sample(self, batch_size):
x = torch.zeros(batch_size, self.n, dtype=torch.float, device=self.device)
for i in range(self.n):
logits = self.pred_logits(x)[:, i]
x[:, i] = torch.bernoulli(torch.sigmoid(logits))
return x
class GRUModel(nn.Module):
'''GRU for density estimation'''
def __init__(self, n=10, input_size=2, hidden_size=8, device='cpu'):
super().__init__()
self.n = n
self.input_size = input_size # input_size=2 when x is binary
self.hidden_size = hidden_size
self.device = device
self.gru_cell = nn.GRUCell(self.input_size, self.hidden_size)
self.fc_layer = nn.Linear(self.hidden_size, 1)
def pred_logits(self, x, h=None):
x = torch.stack([x, 1 - x], dim=1) # 1 -> (1, 0), 0 -> (0, 1), (batch_size, 2)
h_next = self.gru_cell(x, h) # h_{i+1}
logits = self.fc_layer(h_next).squeeze(1)
return h_next, logits
def forward(self, x):
log_prob_list = []
x = torch.cat([torch.zeros(x.shape[0], 1, dtype=torch.float, device=self.device), x], dim=1) # cat x_0
h = torch.zeros(x.shape[0], self.hidden_size, dtype=torch.float, device=self.device) # h_0
for i in range(self.n):
h, logits = self.pred_logits(x[:, i], h)
log_prob = - F.binary_cross_entropy_with_logits(logits, x[:, i + 1], reduction='none')
log_prob_list.append(log_prob)
return torch.stack(log_prob_list, dim=1).sum(dim=1)
@torch.no_grad()
def sample(self, batch_size):
x = torch.zeros(batch_size, self.n + 1, dtype=torch.float, device=self.device)
for i in range(self.n):
h, logits = self.pred_logits(x[:, i], h=None if i == 0 else h)
x[:, i + 1] = torch.bernoulli(torch.sigmoid(logits))
return x[:, 1:]
if __name__ == '__main__':
model = MADE()
# model = GRUModel()
# Sample from the generative model
samples = model.sample(128)
# Then I use the function transforms methods mentioned in the tutorial
# to calculate the per sample mean
from torch.func import functional_call, grad, vmap
params = {k: v.detach() for k, v in model.named_parameters()}
def loss_fn(log_probs):
return log_probs.mean(0)
def compute_loss(params, sample):
batch = sample.unsqueeze(0)
log_prob = functional_call(model, (params,), (batch,))
loss = loss_fn(log_prob)
return loss
ft_compute_grad = grad(compute_loss)
ft_compute_sample_grad = vmap(ft_compute_grad, in_dims=(None, 0))
ft_per_sample_grads = ft_compute_sample_grad(params, samples)
print(ft_pe
|
https://github.com/pytorch/tutorials/issues/2566
|
closed
|
[
"question"
] | 2023-09-22T02:15:18Z
| 2023-10-26T16:03:36Z
| null |
bnuliujing
|
huggingface/chat-ui
| 459
|
Chats Stop generation button is broken?
|
whenever I'm using the Chat UI on hf.co/chat, and I press the stop generation button it deletes both the prompt and the response?
|
https://github.com/huggingface/chat-ui/issues/459
|
open
|
[
"support"
] | 2023-09-21T19:38:38Z
| 2023-10-08T00:44:44Z
| 4
|
VatsaDev
|
huggingface/chat-ui
| 457
|
Custom Models breaking Chat-ui
|
Setting a custom model in .env.local is now breaking chat-ui for me. @jackielii @nsarrazin
If I start mongo and then run ```npm run dev``` with a .env.local file including only the mongo url, there is no issue.
Then I add the following:
```
MODELS=`[
{
"name": "OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5",
"datasetName": "OpenAssistant/oasst1",
"description": "A good alternative to ChatGPT",
"websiteUrl": "https://open-assistant.io",
"userMessageToken": "<|prompter|>", # This does not need to be a token, can be any string
"assistantMessageToken": "<|assistant|>", # This does not need to be a token, can be any string
"userMessageEndToken": "<|endoftext|>", # Applies only to user messages. Can be any string.
"assistantMessageEndToken": "<|endoftext|>", # Applies only to assistant messages. Can be any string.
"preprompt": "Below are a series of dialogues between various people and an AI assistant. The AI tries to be helpful, polite, honest, sophisticated, emotionally aware, and humble-but-knowledgeable. The assistant is happy to help with almost anything, and will do its best to understand exactly what is needed. It also tries to avoid giving false or misleading information, and it caveats when it isn't entirely sure about the right answer. That said, the assistant is practical and really does its best, and doesn't let caution get too much in the way of being useful.\n-----\n",
"promptExamples": [
{
"title": "Write an email from bullet list",
"prompt": "As a restaurant owner, write a professional email to the supplier to get these products every week: \n\n- Wine (x10)\n- Eggs (x24)\n- Bread (x12)"
}, {
"title": "Code a snake game",
"prompt": "Code a basic snake game in python, give explanations for each step."
}, {
"title": "Assist in a task",
"prompt": "How do I make a delicious lemon cheesecake?"
}
],
"parameters": {
"temperature": 0.9,
"top_p": 0.95,
"repetition_penalty": 1.2,
"top_k": 50,
"truncate": 1000,
"max_new_tokens": 1024,
"stop": ["<|endoftext|>"] # This does not need to be tokens, can be any list of strings
}
}
]`
```
and now I get:
```
Unexpected token
in JSON at position 424
SyntaxError: Unexpected token
in JSON at position 424
at JSON.parse (<anonymous>)
at eval (/Users/ronanmcgovern/TR/chat-ui/src/lib/server/models.ts:75:14)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async instantiateModule (file:///Users/ronanmcgovern/TR/chat-ui/node_modules/vite/dist/node/chunks/dep-e8f070e8.js:54405:9
```
The specific line of code being referenced is this:
```
"Based on the conversation history (my previous questions are: {{previousMessages}}), give me an appropriate query to answer my question for google search. You should not say more than query. You should not say any words except the query. For the context, today is {{currentDate}}" +
```
|
https://github.com/huggingface/chat-ui/issues/457
|
closed
|
[
"support"
] | 2023-09-21T11:12:42Z
| 2023-09-21T16:03:30Z
| 10
|
RonanKMcGovern
|
huggingface/datasets
| 6,252
|
exif_transpose not done to Image (PIL problem)
|
### Feature request
I noticed that some of my images loaded using PIL have some metadata related to exif that can rotate them when loading.
Since the dataset.features.Image uses PIL for loading, the loaded image may be rotated (width and height will be inverted) thus for tasks as object detection and layoutLM this can create some inconsistencies (between input bboxes and input images).
For now there is no option in datasets.features.Image to specify that. We need to do the following when preparing examples (when preparing images for training, test or inference):
```
from PIL import Image, ImageOps
pil = ImageOps.exif_transpose(pil)
```
reference: https://stackoverflow.com/a/63950647/5720150
Is it possible to add this by default to the datasets.feature.Image ? or to add the option to do the ImageOps.exif_transpose?
Thank you
### Motivation
Prevent having inverted data related to exif metadata that may affect object detection tasks
### Your contribution
Changing in datasets.featrues.Image I can help with that.
|
https://github.com/huggingface/datasets/issues/6252
|
closed
|
[
"enhancement"
] | 2023-09-21T08:11:46Z
| 2024-03-19T15:29:43Z
| 2
|
rhajou
|
pytorch/TensorRT
| 2,335
|
❓ [Question] Bert lost a lot of accuracy when using fp16
|
## ❓ Question
BERT Text Classification model run in fp16 gets huge different result compared to fp32
## What you have already tried
<!-- A clear and concise description of what you have already done. -->
## Environment
> Build information about Torch-TensorRT can be found by turning on debug messages
- PyTorch Version (e.g., 1.0): 1.13
- CPU Architecture:
- OS (e.g., Linux): REHL8
- How you installed PyTorch (`conda`, `pip`, `libtorch`, source): pip
- Build command you used (if compiling from source):
- Are you using local sources or building from archives:
- Python version: 3.8.10
- CUDA version:11.7
- GPU models and configuration: Tesla T4
- Any other relevant information:
Torch-TensorRT Version: 1.3
## Additional context
Model converted from TouchScript to TensorRT
```
enabled_precisions= {torch.half} # run with 16-bit precision
trt_model = torch_tensorrt.compile(model, inputs=inputs, enabled_precisions=enabled_precisions,
truncate_long_and_double=True, require_full_compilation=False
)
```
The logs
``` shell
WARNING: [Torch-TensorRT] - For input input_ids.1, found user specified input dtype as Long, however when inspecting the graph, the input type expected was inferred to be Float
The compiler is going to use the user setting Long
This conflict may cause an error at runtime due to partial compilation being enabled and therefore
compatibility with PyTorch's data type convention is required.
If you do indeed see errors at runtime either:
- Remove the dtype spec for input_ids.1
- Disable partial compilation by setting require_full_compilation to True
WARNING: [Torch-TensorRT] - For input token_type_ids.1, found user specified input dtype as Long, however when inspecting the graph, the input type expected was inferred to be Float
The compiler is going to use the user setting Long
This conflict may cause an error at runtime due to partial compilation being enabled and therefore
compatibility with PyTorch's data type convention is required.
If you do indeed see errors at runtime either:
- Remove the dtype spec for token_type_ids.1
- Disable partial compilation by setting require_full_compilation to True
WARNING: [Torch-TensorRT] - For input attention_mask.1, found user specified input dtype as Long, however when inspecting the graph, the input type expected was inferred to be Double
The compiler is going to use the user setting Long
This conflict may cause an error at runtime due to partial compilation being enabled and therefore
compatibility with PyTorch's data type convention is required.
If you do indeed see errors at runtime either:
- Remove the dtype spec for attention_mask.1
- Disable partial compilation by setting require_full_compilation to True
WARNING: [Torch-TensorRT] - Data types for input tensors have been modified by inserting aten::to operations which cast INT64 inputs to INT32. To disable this, please recompile using INT32 inputs
WARNING: [Torch-TensorRT] - Truncating intermediate graph input type from at::kLong to at::kInt
WARNING: [Torch-TensorRT] - Truncating intermediate graph input type from at::kLong to at::kInt
WARNING: [Torch-TensorRT] - Truncating intermediate graph input type from at::kLong to at::kInt
WARNING: [Torch-TensorRT] - Truncating intermediate graph input type from at::kLong to at::kInt
WARNING: [Torch-TensorRT] - Truncating intermediate graph input type from at::kLong to at::kInt
WARNING: [Torch-TensorRT] - Truncating intermediate graph input type from at::kLong to at::kInt
WARNING: [Torch-TensorRT] - Truncating intermediate graph input type from at::kLong to at::kInt
WARNING: [Torch-TensorRT] - Truncating intermediate graph input type from at::kLong to at::kInt
WARNING: [Torch-TensorRT] - Truncating intermediate graph input type from at::kLong to at::kInt
WARNING: [Torch-TensorRT] - Truncating intermediate graph input type from at::kLong to at::kInt
WARNING: [Torch-TensorRT] - Truncating intermediate graph input type from at::kLong to at::kInt
WARNING: [Torch-TensorRT] - Truncating intermediate graph input type from at::kLong to at::kInt
WARNING: [Torch-TensorRT TorchScript Conversion Context] - CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage. See `CUDA_MODULE_LOADING` in https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#env-vars
WARNING: [Torch-TensorRT] - Truncating weight (constant in the graph) from Float64 to Float32
WARNING: [Torch-TensorRT] - There may be undefined behavior using dynamic shape and aten::size without setting allow_shape_tensors
WARNING: [Torch-TensorRT] - Truncating weight (constant in the graph) from Int64 to Int32
WARNING: [Torch-TensorRT] - There may be undefined behavior using dynamic shape and aten::size without setting allow_shape_tensors
WARNING: [Torch-TensorRT] - There may be undefined behavior using dyn
|
https://github.com/pytorch/TensorRT/issues/2335
|
closed
|
[
"question",
"No Activity"
] | 2023-09-21T07:50:12Z
| 2024-05-07T06:37:23Z
| null |
HenryYuen128
|
huggingface/optimum
| 1,401
|
BUG: running python file called onnx.py causes circular errors.
|
### System Info
```shell
latest optimum, python 3.10, linux cpu.
```
### Who can help?
@JingyaHuang, @echarlaix, @michaelbenayoun
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction (minimal, reproducible, runnable)
https://github.com/huggingface/optimum/issues/1177
Description of Bug:
If I create a py file to run my own scripts, and name it "onnx.py", it wreaks all kinds of havoc. Specifically circular errors. It took me a while to figure it was caused by "onnx.py" being a reserved name. This is the first time I've ever come across such an issue. I'm not sure if other modules prevent these issues by ringfencing their scope to specific folders or namespaces.. or whether it's just bad luck.
Is it possible to ringfence this kind of issue by either renaming the internal onnx.py file to something that users would never use OR, customize a validation check that tells user which filenames are reserved, OR at least updating the error message so that users don't need half a day to figure out what's causing the issue?
Many thanks
### Expected behavior
That either I can use any filename for my script.py (eg. onnx.py) without issues
OR
There's a really clear error message that states "please do not use the following reserved names for your python scripts: eg1.py, eg2.py, etc"
Much appreciated
|
https://github.com/huggingface/optimum/issues/1401
|
open
|
[
"bug"
] | 2023-09-21T04:12:49Z
| 2023-10-05T14:32:40Z
| 1
|
gidzr
|
huggingface/diffusers
| 5,124
|
How to fine tune checkpoint .safetensor
|
### Describe the bug
I tried to fine tuning a model from a checkpoint (i.e https://civitai.com/models/119202/talmendoxl-sdxl-uncensored-full-model)I converted the checkpoint to diffuser format using this library:
https://github.com/waifu-diffusion/sdxl-ckpt-converter/
The model converted works fine for inference and the training script works fine if I use a standard base i.e.: "stabilityai/stable-diffusion-xl-base-1.0", but I have error when start from converted model
### Reproduction
download checkpoint: https://civitai.com/models/119202/talmendoxl-sdxl-uncensored-full-model
convert using: https://github.com/waifu-diffusion/sdxl-ckpt-converter/
tstart training with:
!accelerate launch train_text_to_image_lora_sdxl.py \
--pretrained_model_name_or_path="/content/drive/MyDrive/talmendoxlSDXL_v11Beta" \
--pretrained_vae_model_name_or_path="madebyollin/sdxl-vae-fp16-fix" \
--dataset_name="$INSTANCE_DIR_PARSED" \
--caption_column="text" \
--resolution=1024 \
--train_batch_size=1 \
--num_train_epochs=$TRAIN_EPOCHS \
--checkpointing_steps=1000000 \
--learning_rate=$LEARNING_RATE \
--lr_scheduler="constant" \
--lr_warmup_steps=0 \
--seed=42 \
--output_dir="$OUTPUT_DIR" \
--enable_xformers_memory_efficient_attention \
--gradient_checkpointing \
--mixed_precision="fp16" \
--use_8bit_adam
### Logs
```shell
You are using a model of type clip_text_model to instantiate a model of type . This is not supported for all configurations of models and can yield errors.
You are using a model of type clip_text_model to instantiate a model of type . This is not supported for all configurations of models and can yield errors.
{'clip_sample_range', 'dynamic_thresholding_ratio', 'variance_type', 'thresholding'} was not found in config. Values will be initialized to default values.
Traceback (most recent call last):
File "/content/diffusers/examples/text_to_image/train_text_to_image_lora_sdxl.py", line 1271, in <module>
main(args)
File "/content/diffusers/examples/text_to_image/train_text_to_image_lora_sdxl.py", line 554, in main
text_encoder_one = text_encoder_cls_one.from_pretrained(
File "/usr/local/lib/python3.10/dist-packages/transformers/modeling_utils.py", line 2740, in from_pretrained
raise EnvironmentError(
OSError: Error no file named pytorch_model.bin, tf_model.h5, model.ckpt.index or flax_model.msgpack found in directory /content/drive/MyDrive/talmendoxlSDXL_v11Beta.
Traceback (most recent call last):
File "/usr/local/bin/accelerate", line 8, in <module>
sys.exit(main())
File "/usr/local/lib/python3.10/dist-packages/accelerate/commands/accelerate_cli.py", line 45, in main
args.func(args)
File "/usr/local/lib/python3.10/dist-packages/accelerate/commands/launch.py", line 979, in launch_command
simple_launcher(args)
File "/usr/local/lib/python3.10/dist-packages/accelerate/commands/launch.py", line 628, in simple_launcher
raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)
subprocess.CalledProcessError: Command '['/usr/bin/python3', 'train_text_to_image_lora_sdxl.py', '--pretrained_model_name_or_path=/content/drive/MyDrive/talmendoxlSDXL_v11Beta', '--pretrained_vae_model_name_or_path=madebyollin/sdxl-vae-fp16-fix', '--dataset_name=/content/instancefolder_parsed', '--caption_column=text', '--resolution=1024', '--train_batch_size=1', '--num_train_epochs=1', '--checkpointing_steps=1000000', '--learning_rate=2e-05', '--lr_scheduler=constant', '--lr_warmup_steps=0', '--seed=42', '--output_dir=/content/lora-trained-xl-colab', '--enable_xformers_memory_efficient_attention', '--gradient_checkpointing', '--mixed_precision=fp16', '--use_8bit_adam']' returned non-zero exit status 1.
```
### System Info
- `diffusers` version: 0.21.0.dev0
- Platform: Linux-5.15.120+-x86_64-with-glibc2.35
- Python version: 3.10.12
- PyTorch version (GPU?): 2.0.1+cu118 (True)
- Huggingface_hub version: 0.17.2
- Transformers version: 4.33.2
- Accelerate version: 0.21.0
- xFormers version: 0.0.21
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@williamberman, @patrickvonplaten, @sayakpau
|
https://github.com/huggingface/diffusers/issues/5124
|
closed
|
[
"bug",
"stale"
] | 2023-09-20T22:45:38Z
| 2023-11-22T15:06:19Z
| null |
EnricoBeltramo
|
pytorch/text
| 2,205
|
Declaring _MapStyleDataset inside function makes it unpicklable
|
## 🐛 Bug
**Describe the bug**
When trying to use a Dataset that was converted to map-style using `data.functional.to_map_style_dataset`, I encountered the following error message:
> ...
> File "/usr/lib/python3.8/multiprocessing/reduction.py", line 60, in dump
> ForkingPickler(file, protocol).dump(obj)
> AttributeError: Can't pickle local object 'to_map_style_dataset.<locals>._MapStyleDataset'
After some research, I found the list of what is picklable [here](https://docs.python.org/3/library/pickle.html#what-can-be-pickled-and-unpickled) and found that for a class to be picklable, it has to be from the top level of a module
This isn't the case for `_MapStyleDataset` as it is declared within the `to_map_style_dataset` function
The fix seems simple enough (declare `_MapStyleDataset` outside the function) so I would like to know if there was anything making it undesireable ? If not, I'll create a PR for it but I would like some opinions on it
|
https://github.com/pytorch/text/issues/2205
|
open
|
[] | 2023-09-20T12:27:34Z
| 2023-09-20T12:27:34Z
| 0
|
AnthoJack
|
huggingface/diffusers
| 5,118
|
how to use controlnet's reference_only fuction with diffusers??
|
### Model/Pipeline/Scheduler description
can anyone help me to understand how to use controlnet's reference_only fuction with diffusers
### Open source status
- [ ] The model implementation is available
- [ ] The model weights are available (Only relevant if addition is not a scheduler).
### Provide useful links for the implementation
_No response_
|
https://github.com/huggingface/diffusers/issues/5118
|
closed
|
[
"stale"
] | 2023-09-20T10:17:53Z
| 2023-11-08T15:07:34Z
| null |
sudip550
|
pytorch/TensorRT
| 2,327
|
❓ [Question] dynamc engines & interpolation align_corners=True
|
## ❓ Question
<!-- Your question -->
## What you have already tried
I used the latest docker with tag 23.08-py3. When converting model doing interpolation with align_corners=True and dynamic input, I got error as below.
```
RuntimeError: [Error thrown at core/conversion/converters/impl/interpolate.cpp:412] Expected !(align_corners && ctx->input_is_dynamic) to be true but got false
Torch-TensorRT currently does not support the compilation of dynamc engines from code using PyTorch [bi/tri]linear interpolation via scale factor and align_corners=True
```
And I found this check did exist in code with tag v1.4.0, but not in main branch. Will I need to clone the latest code and recompile torch-tensorrt to escape frome this error and will it work? Or any other simple way ?
<!-- A clear and concise description of what you have already done. -->
## Environment
nvcr.io/nvidia/pytorch:23.08-py3
## Additional context
<!-- Add any other context about the problem here. -->
|
https://github.com/pytorch/TensorRT/issues/2327
|
open
|
[
"question",
"component: converters"
] | 2023-09-20T07:25:34Z
| 2023-11-30T10:57:37Z
| null |
ArtemisZGL
|
huggingface/transformers.js
| 321
|
[Question] Image Embeddings for ViT
|
Is it possible to get image embeddings using Xenova/vit-base-patch16-224-in21k model? We use feature_extractor to get embeddings for sentences. Can we use feature_extractor to get image embeddings?
```js
const model_id = "Xenova/vit-base-patch16-224-in21k";
const image = await RawImage.read("https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/football-match.jpg");
const classifier = await pipeline("image-classification", model_id);
const { image_embeddings } = await classifier.processor.feature_extractor(image);
```
|
https://github.com/huggingface/transformers.js/issues/321
|
closed
|
[
"question"
] | 2023-09-20T01:22:08Z
| 2024-01-13T01:25:03Z
| null |
hadminh
|
huggingface/optimum
| 1,395
|
TensorrtExecutionProvider documentation
|
### System Info
```shell
main, docs
```
### Who can help?
@fxmarty
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction (minimal, reproducible, runnable)
The method described in the docs for [TRT engine building](https://huggingface.co/docs/optimum/onnxruntime/usage_guides/gpu#tensorrt-engine-build-and-warmup) is outdated, first mentioned [here](https://github.com/huggingface/optimum/issues/842#issuecomment-1568766399), I tested the dynamic shapes method in `optimum-benchmark` [here](https://github.com/huggingface/optimum-benchmark/pull/55#issuecomment-1721180586).
### Expected behavior
We can update the docs with this snippet:
```python
provider_options = {
"trt_engine_cache_enable": True,
"trt_engine_cache_path": "tmp/trt_cache_gpt2_example",
"trt_profile_min_shapes": "input_ids:1x16,attention_mask:1x16",
"trt_profile_max_shapes": "input_ids:1x64,attention_mask:1x64",
"trt_profile_opt_shapes": "input_ids:1x32,attention_mask:1x32",
}
ort_model = ORTModelForCausalLM.from_pretrained(
"gpt2",
export=True,
use_cache=False,
provider="TensorrtExecutionProvider",
provider_options=provider_options,
)
ort_model.generate(
input_ids=torch.tensor([[1] * 16]).to("cuda"),
max_new_tokens=64-16,
min_new_tokens=64-16,
pad_token_id=0,
eos_token_id=0,
)
```
though it's still not clear to me what's the effect of `trt_profile_opt_shapes`.
|
https://github.com/huggingface/optimum/issues/1395
|
open
|
[
"documentation",
"onnxruntime"
] | 2023-09-19T09:06:17Z
| 2023-09-19T09:57:26Z
| 1
|
IlyasMoutawwakil
|
huggingface/transformers.js
| 317
|
How to use xenova/transformers in VSCode Extension
|
Hey guys! I am trying to use xenova/transformers in CodeStory, we roll a vscode extension as well and I am hitting issues with trying to get the import working, here's every flavor of importing the library which I have tried to date.
```
const TransformersApi = Function('return import("@xenova/transformers")')();
const { pipeline, env } = await TransformersApi;
```
```
const { pipeline, env } = await import('@xenova/transformers')
```
```
const TransformersApi = require('@xenova/transformers');
const { pipeline, env } = await TransformersApi;
```
I think the crux of the issue is the node environment which VSCode uses which does not allow any of these to work, and I keep getting the deaded:
```
Error [ERR_REQUIRE_ESM]: require() of ES Module /Applications/Aide.app/Contents/Resources/app/extensions/codestory/node_modules/@xenova/transformers/src/transformers.js from /Applications/Aide.app/Contents/Resources/app/extensions/codestory/out/llm/embeddings/sentenceTransformers.js not supported.
Instead change the require of transformers.js in /Applications/Aide.app/Contents/Resources/app/extensions/codestory/out/llm/embeddings/sentenceTransformers.js to a dynamic import() which is available in all CommonJS modules.
```
after checking the js code which is generated, it ends up including the require word:
```
__importStar(require('@xenova/transformers'))
```
when I used the first option which was a function I got a very weird error btw:
```
[Extension Host] TypeError: A dynamic import callback was not specified.
at new NodeError (node:internal/errors:399:5)
at importModuleDynamicallyCallback (node:internal/process/esm_loader:39:9)
at eval (eval at <anonymous> (/Applications/Aide.app/Contents/Resources/app/extensions/codestory/out/llm/embeddings/sentenceTransformers.js:46:41), <anonymous>:3:1)
```
this is mostly comping from the node version which VSCode uses itself.
Do you guys have any suggestions on what I can do about this? Thanks!
|
https://github.com/huggingface/transformers.js/issues/317
|
open
|
[
"question"
] | 2023-09-19T01:35:21Z
| 2024-07-27T20:36:37Z
| null |
theskcd
|
huggingface/candle
| 894
|
How to fine-tune Llama?
|
Hello everybody,
I am trying to fine-tune the Llama model, but cannot load the safetensors file. I have modified the training loop for debugging and development:
```rust
pub fn run(args: &crate::TrainingCmd, common_args: &crate::Args) -> Result<()> {
let config_path = match &args.config {
Some(config) => std::path::PathBuf::from(config),
None => {
let api = hf_hub::api::sync::Api::new().unwrap();
println!("loading the model weights from {}", args.model_id);
let api = api.model(args.model_id.clone());
api.get(&args.which_model).unwrap()
}
};
let device = candle_examples::device(common_args.cpu)?;
let config = Config::tiny();
let mut varmap = candle_nn::VarMap::new();
let vb = candle_nn::VarBuilder::from_varmap(&varmap, DType::F32, &device);
varmap.load(config_path).unwrap();
/*let cache = Cache::new(false, &config, vb.pp("rot"))?;
let model = Llama::load(vb, &cache, config, true)?;
let params = candle_nn::ParamsAdamW {
lr: args.learning_rate,
..Default::default()
};
let mut opt = candle_nn::AdamW::new(varmap.all_vars(), params)?;
for (batch_index, batch) in batch_iter.enumerate() {
let (inp, tgt) = batch?;
let logits = model.forward(&inp, 0)?;
let loss = candle_nn::loss::cross_entropy(&logits.flatten_to(1)?, &tgt.flatten_to(1)?)?;
opt.backward_step(&loss)?;
if batch_index > 0 && batch_index % 1000 == 0 {
varmap.save("checkpoint.safetensors")?
}
}*/
Ok(())
}
```
I realize this error is likely because I cannot use VarMap::load to load such a large safetensors file (as described [here](https://github.com/huggingface/safetensors/blob/main/README.md#benefits)). However, how can I use VarMap (or something else that allows me to modifiy the tensor map) to load the weights? If there is not such a method, how should I implement this myself?
Thank you!
Eric
|
https://github.com/huggingface/candle/issues/894
|
closed
|
[] | 2023-09-18T22:18:04Z
| 2023-09-21T10:05:57Z
| null |
EricLBuehler
|
huggingface/candle
| 891
|
How to do fine-tuning?
|
Hello everybody,
I was looking through the Candle examples and cannot seem to find an example of fine-tuning for Llama. It appears the only example present is for training from scratch. How should I fine-tune a pretrained model on my own data? Or, more generally, how should I fine tune a model that it loaded from a safetensor file (and whose VarBuilder is immutable as discussed in #883)?
Thanks!
Eric
|
https://github.com/huggingface/candle/issues/891
|
closed
|
[] | 2023-09-18T18:37:42Z
| 2024-07-08T15:13:01Z
| null |
EricLBuehler
|
huggingface/transformers
| 26,218
|
How to manually set the seed of randomsampler generator when training using transformers trainer
|
### System Info
I used a [script](https://github.com/huggingface/transformers/blob/v4.33.0/examples/pytorch/language-modeling/run_clm.py) to continue pre-training the llama2 model. In the second epoch, the loss began to explode, so I chose to reload the checkpoint to continue training, but the loss changes were completely consistent with before, which made me doubt the iteration of the dataset is always consistent. So I tried modifying the [seed.](https://github.com/huggingface/transformers/blob/v4.33.0/examples/pytorch/language-modeling/run_clm.py#L309C33-L309C33) But in the end, my training loss is always consistent, and the state I print randomsampler is always the same.
I hope someone can tell me how to solve this problem, including where the seed of this generator is specified.
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
transformers==4.33.0
pytorch==1.13.1
accelerate==0.21.0
deepspeed==0.10.0
### Expected behavior
I hope that the sampling of training data set should be different every time.
|
https://github.com/huggingface/transformers/issues/26218
|
closed
|
[] | 2023-09-18T14:19:11Z
| 2023-11-20T08:05:37Z
| null |
young-chao
|
pytorch/tutorials
| 2,563
|
Multiple GPU example limited to one GPU
|
https://github.com/pytorch/tutorials/blob/646c8b6368e4f43acc808e0ddddc569153d6a30f/beginner_source/blitz/data_parallel_tutorial.py#L60
Isn't this line limiting the example to **one** GPU no matter how many GPUs are available?
cc @sekyondaMeta @svekars @carljparker @NicolasHug @kit1980 @subramen
|
https://github.com/pytorch/tutorials/issues/2563
|
closed
|
[
"question",
"easy",
"docathon-h2-2023"
] | 2023-09-18T13:13:55Z
| 2023-11-06T17:51:57Z
| null |
9cpluss
|
huggingface/transformers.js
| 313
|
[Question] How to use remote models for automatic-speech-recognition
|
I have an html file that is
```
<!DOCTYPE html>
<html>
<body>
<script type="module">
import { pipeline,env } from 'https://cdn.jsdelivr.net/npm/@xenova/transformers@2.6.0';
env.allowLocalModels = false;
const transcriber = await pipeline('automatic-speech-recognition', 'Xenova/whisper-tiny.en');
let output = await transcriber('https://xenova.github.io/transformers.js/audio/jfk.wav', { return_timestamps: true })
console.log(output)
</script>
</body>
</html>
```
I'm just trying to load the model, but it seems to be requesting from local url rather than hugging face. How can I enable remote models?
|
https://github.com/huggingface/transformers.js/issues/313
|
closed
|
[
"question"
] | 2023-09-18T04:56:52Z
| 2023-09-18T05:19:00Z
| null |
LehuyH
|
huggingface/candle
| 883
|
Question: How to properly use VarBuilder?
|
Hello everybody,
I am working on implementing LoRA and want to use the VarBuilder system. However, when I try to get a tensor with get_with_hints, I get a CannotFindTensor Err. To create the Tensor, I do:
```rust
vb.pp("a").get_with_hints(
...lora specific shape...
"weight",
...lora specific hints...
)
```
However, this fails with the CannotFindTensor error. How can I create the Tensor, or perhaps am I using the API incorrectly?
Thanks!
Eric
|
https://github.com/huggingface/candle/issues/883
|
closed
|
[] | 2023-09-17T20:40:27Z
| 2023-09-17T21:02:24Z
| null |
EricLBuehler
|
pytorch/xla
| 5,599
|
Stubs or wheels for other OSes/architectures
|
## ❓ Questions and Help
I'm new to torch/xla. One development pattern which I use, and which I expect to be common, is to write software on one system (eg M-series Mac laptop) which is intended to be run elsewhere. Project docs for torch/xla regarding installation specify downloading a wheel which is Linux x86 specific.
Even if my training and inference will run on Linux x86 systems, efficient and correct development strongly benefits from tools like type checkers, pylint, etc, which can quickly catch errors like incorrect methods or arguments -- but only work if _some_ amenable representation of libraries is available in the development environment.
In my current attempts to use torch/xla so far, merely following public docs has let me exercise distributed xla training in target environments, but my local branch, being unable to install the library, cannot do basic static analysis checks and certainly cannot run unit tests on modules which import xla code.
At a bare minimum the project could at least produce documentation recommending how developers on other platforms can develop against the library even if they cannot run its full range of behaviors, without having to do all development in a container.
|
https://github.com/pytorch/xla/issues/5599
|
closed
|
[
"question"
] | 2023-09-17T19:03:47Z
| 2025-04-29T13:22:51Z
| null |
abeppu
|
huggingface/transformers.js
| 310
|
How to load model from the static folder path in nextjs or react or vanilla js?
|
<!-- QUESTION GOES HERE -->
|
https://github.com/huggingface/transformers.js/issues/310
|
closed
|
[
"question"
] | 2023-09-17T14:13:57Z
| 2023-09-27T08:36:29Z
| null |
adnankarim
|
huggingface/safetensors
| 360
|
The default file format used when loading the model?
|
I guess that huggingface loads .safetensor files by default when loading models. Is this mandatory? Can I choose to load files in. bin format? (Because I only downloaded weights in bin format, and it reported an error “ could not find a file in safeTensor format”). I do not find related infomation in docs.
Thanks for your help.
|
https://github.com/huggingface/safetensors/issues/360
|
closed
|
[] | 2023-09-15T14:56:13Z
| 2023-09-19T10:34:57Z
| 1
|
Kong-Aobo
|
huggingface/diffusers
| 5,055
|
How to download config.json if it is not in the root directory.
|
Is there any way to download vae for a model where config.json is not in the root directory?
```python
vae = AutoencoderKL.from_pretrained("redstonehero/kl-f8-anime2")
```
For example, as shown above, there is no problem if config.json exists in the root directory, but if it does not exist, an error will occur.
```python
vae = AutoencoderKL.from_pretrained("hakurei/waifu-diffusion")
```
I would be glad to get your advice.
|
https://github.com/huggingface/diffusers/issues/5055
|
closed
|
[] | 2023-09-15T11:37:47Z
| 2023-09-16T00:15:58Z
| null |
suzukimain
|
pytorch/torchx
| 766
|
Is this repository no longer maintained?
|
## ❓ Questions and Help
### Please note that this issue tracker is not a help form and this issue will be closed.
Before submitting, please ensure you have gone through our
[documentation](https://pytorch.org/torchx).
### Question
Torch elastic redirects to this repository but it doesn't seem very active, is there a slack/ discord channel? I want to run DDP on kubernetes, is there another way i am not aware of? If torchx is best way, i'd like to contribute! Any pointers where i could start?
|
https://github.com/meta-pytorch/torchx/issues/766
|
closed
|
[] | 2023-09-15T10:37:43Z
| 2023-09-15T22:03:01Z
| 4
|
ccharest93
|
huggingface/transformers.js
| 305
|
[Question] Can I work with Peft models through the API?
|
Let's say I have the following code in Python. How would I translate that to js?
````
import torch
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
peft_model_id = "samwit/bloom-7b1-lora-tagger"
config = PeftConfig.from_pretrained(peft_model_id)
model = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path, return_dict=True, load_in_8bit=True, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
# Load the Lora model
model = PeftModel.from_pretrained(model, peft_model_id)
````
|
https://github.com/huggingface/transformers.js/issues/305
|
open
|
[
"question"
] | 2023-09-14T21:02:59Z
| 2023-09-16T00:16:03Z
| null |
chrisfel-dev
|
pytorch/TensorRT
| 2,320
|
❓ [Question] How to use C++ bindings for torch tensorrt with CMake?
|
## ❓ Question
I would like to know how to use the examples provided [here](https://github.com/pytorch/TensorRT/tree/v1.4.0/examples/torchtrt_runtime_example) with CMake. The instructions seem to indicate only how to use it with a makefile. CMake is not able to find `torchtrt`, exactly as described in #1207, but unfortunately that issue has been closed without actually resolving it.
I get the following error:
```
CMake Error at CMakeLists.txt:6 (find_package):
By not providing "Findtorchtrt.cmake" in CMAKE_MODULE_PATH this project has
asked CMake to find a package configuration file provided by "torchtrt",
but CMake did not find one.
Could not find a package configuration file provided by "torchtrt" with any
of the following names:
torchtrtConfig.cmake
torchtrt-config.cmake
Add the installation prefix of "torchtrt" to CMAKE_PREFIX_PATH or set
"torchtrt_DIR" to a directory containing one of the above files. If
"torchtrt" provides a separate development package or SDK, be sure it has
been installed.
```
## What you have already tried
I noticed that there is a `python3.8/dist-packages/torch/share/cmake/Torch/TorchConfig.cmake`, but there are no cmake files at all in my torch_tensorrt installation, which otherwise works perfectly fine:
```
root@jetson:/opt/inference/TensorRT# find /usr/local/lib/python3.8/dist-packages -name *.cmake | grep Torch
/usr/local/lib/python3.8/dist-packages/torch/share/cmake/Torch/TorchConfigVersion.cmake
/usr/local/lib/python3.8/dist-packages/torch/share/cmake/Torch/TorchConfig.cmake
root@jetson:/opt/inference/TensorRT# find /usr/local/lib/python3.8/dist-packages -name *.cmake | grep tensorrt
root@jetson:/opt/inference/TensorRT#
```
I noticed that `torchtrtConfig.cmake` is [mentioned in the CMakeLists.txt](https://github.com/pytorch/TensorRT/blob/v1.4.0/CMakeLists.txt#L38), but it doesn't exist anywhere in my installation. Am I supposed to install Torch TensorRT with CMake in order to use the C++ API in a CMake project?
## Environment
> Build information about Torch-TensorRT can be found by turning on debug messages
- PyTorch Version (e.g., 1.0): 2.0
- CPU Architecture: aarch64
- OS (e.g., Linux): L4T
- How you installed PyTorch (`conda`, `pip`, `libtorch`, source): L4T docker container
- Build command you used (if compiling from source): Building from source as per instructions via bazel
- Are you using local sources or building from archives:
- Python version: 3.8
- CUDA version:
- GPU models and configuration: Jetson Orin NX 16GB
- Any other relevant information:
|
https://github.com/pytorch/TensorRT/issues/2320
|
closed
|
[
"question",
"No Activity"
] | 2023-09-14T18:42:13Z
| 2023-12-28T22:10:34Z
| null |
janblumenkamp
|
pytorch/TensorRT
| 2,319
|
❓ [Question] How do I load the torch tensorRT model on multiple gpus
|
## ❓ Question
In [TorchServe](https://github.com/pytorch/serve), we have this concept of workers. In a multi-GPU node, we can assign each GPU to a worker.
I am noticing that tensorRT model is getting loaded on GPU 0 even though we specify the correct GPU ID
for each worker.```torch.jit.load(model_pt_path, map_location=self.device)```
How do we load a tensorRT model in a a device id which is not 0 ?
## What you have already tried
I have tried loading a torchscript model, Here, it loads on all 4 GPUs
Using ```torch.jit.load(model_pt_path, map_location=self.device)``` to load the same model on each of the 4 GPUs
```
2023-09-14T18:32:19,333 [INFO ] W-9000-resnet-18_1.0-stdout MODEL_LOG - cuda:1
2023-09-14T18:32:19,333 [INFO ] W-9000-resnet-18_1.0-stdout MODEL_LOG - !!!!!!!!!!!!!!!!!!!
2023-09-14T18:32:19,355 [INFO ] W-9003-resnet-18_1.0-stdout MODEL_LOG - Torch TensorRT enabled
2023-09-14T18:32:19,356 [INFO ] W-9003-resnet-18_1.0-stdout MODEL_LOG - cuda:0
2023-09-14T18:32:19,356 [INFO ] W-9003-resnet-18_1.0-stdout MODEL_LOG - !!!!!!!!!!!!!!!!!!!
2023-09-14T18:32:19,357 [INFO ] W-9002-resnet-18_1.0-stdout MODEL_LOG - Torch TensorRT enabled
2023-09-14T18:32:19,357 [INFO ] W-9002-resnet-18_1.0-stdout MODEL_LOG - cuda:3
2023-09-14T18:32:19,357 [INFO ] W-9002-resnet-18_1.0-stdout MODEL_LOG - !!!!!!!!!!!!!!!!!!!
2023-09-14T18:32:19,359 [INFO ] W-9001-resnet-18_1.0-stdout MODEL_LOG - Torch TensorRT enabled
2023-09-14T18:32:19,359 [INFO ] W-9001-resnet-18_1.0-stdout MODEL_LOG - cuda:2
2023-09-14T18:32:19,359 [INFO ] W-9001-resnet-18_1.0-stdout MODEL_LOG - !!!!!!!!!!!!!!!!!!!
```
<img width="843" alt="Screenshot 2023-09-14 at 11 39 36 AM" src="https://github.com/pytorch/TensorRT/assets/16617092/c5f9c16b-1866-4c80-b105-9fca3219a78d">
### Have a simpler repro
```
import torch
import torch_tensorrt
model = torch.jit.load("trt_model_fp16.pt","cuda:1")
```
<img width="839" alt="Screenshot 2023-09-14 at 1 28 20 PM" src="https://github.com/pytorch/TensorRT/assets/16617092/f5be8d91-491f-4efd-ad09-3e22118cc56a">
## Environment
> Build information about Torch-TensorRT can be found by turning on debug messages
- PyTorch Version (e.g., 1.0):3.9
- CPU Architecture:
- OS (e.g., Linux): Ubuntu 20.04
- How you installed PyTorch (`conda`, `pip`, `libtorch`, source): pip
- Build command you used (if compiling from source):
- Are you using local sources or building from archives: pip
- Python version: 3.9
- CUDA version: 11.7
- GPU models and configuration: T4
- Any other relevant information:
## Additional context
<!-- Add any other context about the problem here. -->
|
https://github.com/pytorch/TensorRT/issues/2319
|
closed
|
[
"question",
"component: runtime",
"bug: triaged [verified]"
] | 2023-09-14T18:41:36Z
| 2023-09-27T19:55:28Z
| null |
agunapal
|
huggingface/diffusers
| 5,042
|
How to give number of inference steps to Wuerstchen prior pipeline
|
**this below working with default DEFAULT_STAGE_C_TIMESTEPS but it always generates with exactly 29 number of prior inference steps**
```
prior_output = prior_pipeline(
prompt=prompt,
height=height,
width=width,
num_inference_steps=prior_num_inference_steps,
timesteps=DEFAULT_STAGE_C_TIMESTEPS,
negative_prompt=negative_prompt,
guidance_scale=prior_guidance_scale,
num_images_per_prompt=num_images_per_prompt,
generator=generator,
callback=callback_prior,
)
```
when i make it like below i got this error
```
prior_output = prior_pipeline(
prompt=prompt,
height=height,
width=width,
prior_num_inference_steps = prior_num_inference_steps,
# timesteps=DEFAULT_STAGE_C_TIMESTEPS,
negative_prompt=negative_prompt,
guidance_scale=prior_guidance_scale,
num_images_per_prompt=num_images_per_prompt,
generator=generator,
callback=callback_prior,
)
```
`TypeError: WuerstchenPriorPipeline.__call__() got an unexpected keyword argument 'prior_num_inference_steps'`
But the documentation showing it???
https://huggingface.co/docs/diffusers/main/en/api/pipelines/wuerstchen
`prior_num_inference_steps (Union[int, Dict[float, int]], optional, defaults to 30) — The number of prior denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference. For more specific timestep spacing, you can pass customized prior_timesteps`
@sayakpaul @dome272 @patrickvonplaten @williamberman
**Here below entire code. what I want is being able to set any number of prior and decoder number of inference steps**
```
prior_output = prior_pipeline(
prompt=prompt,
height=height,
width=width,
prior_num_inference_steps = prior_num_inference_steps,
# timesteps=DEFAULT_STAGE_C_TIMESTEPS,
negative_prompt=negative_prompt,
guidance_scale=prior_guidance_scale,
num_images_per_prompt=num_images_per_prompt,
generator=generator,
callback=callback_prior,
)
if PREVIEW_IMAGES:
for _ in range(len(DEFAULT_STAGE_C_TIMESTEPS)):
r = next(prior_output)
if isinstance(r, list):
yield r
prior_output = r
decoder_output = decoder_pipeline(
image_embeddings=prior_output.image_embeddings,
prompt=prompt,
num_inference_steps = decoder_num_inference_steps,
# timesteps=decoder_timesteps,
guidance_scale=decoder_guidance_scale,
negative_prompt=negative_prompt,
generator=generator,
output_type="pil",
).images
yield decoder_output
```
|
https://github.com/huggingface/diffusers/issues/5042
|
closed
|
[
"bug"
] | 2023-09-14T15:21:31Z
| 2023-09-20T07:41:19Z
| null |
FurkanGozukara
|
huggingface/chat-ui
| 440
|
Web Search not working
|
i have been having this issues where it just searches something but then never shows me the answer it shows max tokens
i just keep seeing this
first i see the links of the resources
but then it does nothing at all

i just see this and do not even get the model response
|
https://github.com/huggingface/chat-ui/issues/440
|
closed
|
[
"support"
] | 2023-09-14T13:50:15Z
| 2023-09-20T14:16:49Z
| 5
|
bilalazhar72
|
pytorch/xla
| 5,569
|
Questions about the return value of lazyTensor pytorch xla subgraph
|
Using lazyTensor, pytorch xla will generate an xla subgraph, and its subgraph will add relevant conditions to the liveTensor trained in the current step as the return of the subgraph.
my question is:
1. What does the LiveTensor here mean, and what is the design basis for the value returned by the xla diagram? That is, what can be returned as an xla diagram. The return here refers to the ROOT node in xla.
2. What is the concept of xla image return here? Is it the return from the XLA device to the HOST device? Or what does it mean?
Below I have given a section on using xm.mark_step() to trigger a compile and run code in the training process.
std::shared_ptr<XLAGraphExecutor::Async>
XLAGraphExecutor::SyncTensorsGraphInternal(
std::vector<XLATensorPtr>* tensors, absl::Span<const std::string> devices,
const SyncTensorsConfig& config, bool warm_up_cache_only) {
tensorflow::profiler::TraceMe activity(
"SyncTensorsGraphInternal", tensorflow::profiler::TraceMeLevel::kInfo);
SyncTensorCollection coll = CollectSyncTensors(*tensors, config);
if (coll.indices.empty()) {
/* Enure previous execution is complete before exiting this
* function */
TensorCollectionBarrier(&coll);
return nullptr;
}
DebugUtil::SaveTensorsGraphInfo("ScheduleSyncTensorsGraph", *tensors,
&coll.indices);
std::vector<torch::lazy::Value> ir_values;
std::vector<torch::lazy::BackendDataPtr> tensor_data_vec;
ExtractIRAndPrepareXlaData_(tensors, coll.config, coll.indices, ir_values,
tensor_data_vec);
PostOrderData po_data = RunPostOrder(ir_values, &coll);
coll.hash = torch::lazy::HashCombine(
coll.hash, torch::lazy::Hash(po_data.parameter_sequence));
TF_VLOG(4) << "Parameter sequence graph hash "
<< torch::lazy::HashToString(coll.hash);
std::shared_ptr<Async> async =
TryRunCachedSync(tensors, &coll, &po_data, tensor_data_vec);
if (async != nullptr) {
return async;
}
CompilationResult compile_result =
Compile(*tensors, devices, coll, &po_data, ir_values);
TORCH_LAZY_VALUE_METRIC("TensorsGraphSize", compile_result.emitted_nodes);
TF_VLOG(5) << "TensorsGraphSize=" << compile_result.emitted_nodes;
auto cached_computation = std::make_shared<CachedComputation>(
std::move(compile_result.computation), compile_result.is_sharded);
GetComputationCache()->Add(coll.hash, cached_computation);
if (warm_up_cache_only) {
return nullptr;
} else {
return ScheduleSyncTensorsGraph(
tensors, &coll, std::move(compile_result.parameters_data),
compile_result.device.toString(), std::move(cached_computation),
tensor_data_vec);
}
}
|
https://github.com/pytorch/xla/issues/5569
|
open
|
[
"question",
"runtime"
] | 2023-09-14T13:08:17Z
| 2025-04-29T13:46:42Z
| null |
ckfgihub
|
huggingface/chat-ui
| 438
|
running the app with websearch fails
|
Hey after adding the serper api key I'm trying to run the app locally "nmp run dev" and I get an issue related to websearch:
```
[vite]: Rollup failed to resolve import "@xenova/transformers" from "C:/Users/username/chat-ui/src/lib/server/websearch/sentenceSimilarity.ts".
This is most likely unintended because it can break your application at runtime.
If you do want to externalize this module explicitly add it to
`build.rollupOptions.external`
error during build:
Error: [vite]: Rollup failed to resolve import "@xenova/transformers" from "C:/Users/username/chat-ui/src/lib/server/websearch/sentenceSimilarity.ts".
This is most likely unintended because it can break your application at runtime.
If you do want to externalize this module explicitly add it to
`build.rollupOptions.external`
at viteWarn (file:///C:/Users/username/chat-ui/node_modules/vite/dist/node/chunks/dep-df561101.js:48142:27)
at onRollupWarning (file:///C:/Users/username/chat-ui/node_modules/vite/dist/node/chunks/dep-df561101.js:48174:9)
at onwarn (file:///C:/Users/username/chat-ui/node_modules/vite/dist/node/chunks/dep-df561101.js:47902:13)
at file:///C:/Users/username/chat-ui/node_modules/rollup/dist/es/shared/node-entry.js:24152:13
at Object.logger [as onLog] (file:///C:/Users/username/chat-ui/node_modules/rollup/dist/es/shared/node-entry.js:25825:9)
at ModuleLoader.handleInvalidResolvedId (file:///C:/Users/rachel_shalom/chat-ui/node_modules/rollup/dist/es/shared/node-entry.js:24738:26)
at file:///C:/Usersusername/chat-ui/node_modules/rollup/dist/es/shared/node-entry.js:24698:26
```
how do I externelize this module and should I? anyone had this issue?
|
https://github.com/huggingface/chat-ui/issues/438
|
closed
|
[
"support"
] | 2023-09-14T11:21:35Z
| 2023-09-14T12:08:00Z
| 2
|
RachelShalom
|
pytorch/TensorRT
| 2,318
|
❓ Why cant't I compile Torch-TensorRT 1.0.0?
|
## ❓Why cant't I compile Torch-TensorRT 1.0.0?
## What you have already tried
I've been trying to compile versions 1.0.0 and 1.1.0 of Torch-TensorRT in my Jetson Xavier NX 16GB, I had followed the official guides of installation mentioned in this [issue](https://github.com/pytorch/TensorRT/discussions/1077).
## Environment
I have the next environment:
- Jetpack 4.6
- Python 3.6.9
- Pytorch 1.10.0
- Torchvision 0.11.1
- CUDA 10.2
- CUDNN 8.2.1
- TensorRT 8.0.1.6
- GPU models and configuration: Jetson Xavier NX with JetPack 4.6
## The error
Finally, when I launch python3 py/setup.py install --use-cxx11-abi the error happend
`running install
using CXX11 ABI build
Jetpack version: 4.6
building libtorchtrt
INFO: Analyzed target //:libtorchtrt (0 packages loaded, 0 targets configured).
INFO: Found 1 target...
ERROR: /home/iovi/SW/TensorRT/core/lowering/BUILD:10:11: Compiling core/lowering/register_trt_placeholder_ops.cpp failed: (Exit 1): gcc failed: error executing command /usr/bin/gcc -U_FORTIFY_SOURCE -fstack-protector -Wall -Wunused-but-set-parameter -Wno-free-nonheap-object -fno-omit-frame-pointer -g0 -O2 '-D_FORTIFY_SOURCE=1' -DNDEBUG -ffunction-sections ... (remaining 61 arguments skipped)
Use --sandbox_debug to see verbose messages from the sandbox
core/lowering/register_trt_placeholder_ops.cpp:16:34: error: invalid user-defined conversion from 'torch::jit::<lambda(torch::jit::Stack&)>' to 'torch::jit::OperationCreator {aka std::function<void(std::vector<c10::IValue>*)> (*)(const torch::jit::Node*)}' [-fpermissive]
aliasAnalysisFromSchema()),
^
core/lowering/register_trt_placeholder_ops.cpp:15:24: note: candidate is: torch::jit::<lambda(torch::jit::Stack&)>::operator void (*)(torch::jit::Stack&)() const <near match>
[](Stack& stack) { /*noop*/ },
^
core/lowering/register_trt_placeholder_ops.cpp:15:24: note: no known conversion from 'void (*)(torch::jit::Stack&) {aka void (*)(std::vector<c10::IValue>&)}' to 'torch::jit::OperationCreator {aka std::function<void(std::vector<c10::IValue>*)> (*)(const torch::jit::Node*)}'
In file included from external/libtorch/include/torch/csrc/jit/runtime/custom_operator.h:5:0,
from core/lowering/register_trt_placeholder_ops.cpp:1:
external/libtorch/include/torch/csrc/jit/runtime/operator.h:98:3: note: initializing argument 2 of 'torch::jit::Operator::Operator(std::__cxx11::string, torch::jit::OperationCreator, c10::AliasAnalysisKind)'
Operator(
^~~~~~~~
Target //:libtorchtrt failed to build
Use --verbose_failures to see the command lines of failed build steps.
INFO: Elapsed time: 225,037s, Critical Path: 54,01s
INFO: 46 processes: 13 internal, 33 processwrapper-sandbox.
FAILED: Build did NOT complete successfully
`
|
https://github.com/pytorch/TensorRT/issues/2318
|
closed
|
[
"question",
"No Activity"
] | 2023-09-14T09:30:20Z
| 2024-01-01T00:02:46Z
| null |
VictorIOVI
|
huggingface/diffusers
| 5,032
|
How to unfuse_lora only the first one after I have added multiple lora?
|
base.load_lora_weights("models/safetensors/SDXL/国风插画SDXL.safetensors")
base.fuse_lora(lora_scale=.7)
base.load_lora_weights("models/safetensors/SDXL/sd_xl_offset_example-lora_1.0.safetensors")
base.fuse_lora(lora_scale=.8)
Now, When I execute unfuse_lora() only the most recent one has been unfuse .
so,how to unfuse '国风插画SDXL.safetensors' or unfuse all lora weights
|
https://github.com/huggingface/diffusers/issues/5032
|
closed
|
[
"stale"
] | 2023-09-14T08:10:46Z
| 2023-10-30T15:06:34Z
| null |
yanchaoguo
|
pytorch/kineto
| 804
|
Will PyTorch Profiler TensorBoard Plugin continue to evolve? It seems that it cannot support PyTorch 2.0
|
https://github.com/pytorch/kineto/issues/804
|
closed
|
[
"question",
"plugin"
] | 2023-09-14T02:21:09Z
| 2023-12-28T16:44:59Z
| null |
BadTrasher
|
|
huggingface/optimum
| 1,384
|
Documentation Request: Table or heuristic for Ortmodel Method to Encoder/Decoder to .onnx File to Task
|
### Feature request
Hi there
Could you provide either a table (where explicit rules apply - see attached image), or a heuristic, so I can tell which ML models, optimised file types, with which tasks, apply to which inference methods and inference tasks?
The example table below will help to clarify, and isn't necessarily prescriptive, because I may have mixed some concepts.
In case you mention, yes - I'm aware that it's possible to run a pipeline with the wrong model, and an error message will spit out all the accepted architectures/models (roberta, gpt, etc) for a method type. However,
a) this is very time-consuming, hit and miss, and
b) these 'lists' don't explain the relationships to the underlying architectures and files.. (ie. model_merged, encoder-decoder, encoder only, decoder only, that result from the pytorch, safetensor files.)
For example, will all models exported/optimised for text-generation always be encoder-decoder and always use the ORTSeq2SeqModel method (for illustrative purposes), or will this depend on a combination of the original model architecture and the task applied during optimisation, which may result in one or more usable methods for inference?
It's a massive learning curve for me, but seems it would be relatively straightforward to someone who works with this stuff . It probably just needs to go from peoples' heads into a document.
Thanks muchly! it'll be a massive time saver and help with conceptual understanding.
### Motivation
I'm trying to understand how to mix and match the models, optimisations, tasks, and inference methods.. Been trawling HF, ONNX, and general information but cannot find anything like this that exists, and would save a BUNCH of testing trial and error time. (like I've wasted directly and indirectly almost a week of trialling and there's probably very simple rules for this)
Part of the time wasted has been selecting models and running CLI command to optimise/quantize for a task, only to discover I have no idea with ORTModel method to use, as these don't relate to task but model architecture instead (or a combination of both), and brute forcing an understanding with testing and trying to come up with my own heuristics.
Maybe this type of knowledge is assumed? but for newbs like me it's extremely daunting and feels like I may be trying to re-invent the wheel.
### Your contribution
(table for illustrative purposes.. the dummy data is wrong.. )

|
https://github.com/huggingface/optimum/issues/1384
|
closed
|
[
"Stale"
] | 2023-09-14T01:45:38Z
| 2025-04-24T02:11:24Z
| 4
|
gidzr
|
pytorch/rl
| 1,522
|
[BUG] It's not clear how to call an advantage module with batched envs and pixel observations.
|
## Describe the bug
When you get a tensordict rollout of shape `(N_envs, N_steps, C, H, W)` out of a collector and you want to apply an advantage module that starts with `conv2d` layers:
1. directly applying the module will crash with the `conv2d` layer complaining about the input size e.g. `RuntimeError: Expected 3D (unbatched) or 4D (batched) input to conv2d, but got input of size: [2, 128, 4, 84, 84]`
2. flattening the tensordict first with `rollout.reshape(-1)` so that it has shape `[B, C, H, W]` and then calling the advantage module will run but issue the warning `torchrl/objectives/value/advantages.py:99: UserWarning: Got a tensordict without a time-marked dimension, assuming time is along the last dimension.` leaving you unsure of wether the advantages were computed correctly.
So it's not clear how one should proceed.
- [x] I have checked that there is no similar issue in the repo (**required**)
- [x] I have read the [documentation](https://github.com/pytorch/rl/tree/main/docs/) (**required**)
- [x] I have provided a minimal working example to reproduce the bug (**required**)
|
https://github.com/pytorch/rl/issues/1522
|
open
|
[
"bug"
] | 2023-09-13T21:04:29Z
| 2024-03-27T16:37:49Z
| null |
skandermoalla
|
huggingface/optimum
| 1,379
|
Can't use bettertransformer to train vit?
|
### System Info
```shell
Traceback (most recent call last):
File "test_bettertransformer_vit.py", line 95, in <module>
main()
File "test_bettertransformer_vit.py", line 92, in main
test_train_time()
File "test_bettertransformer_vit.py", line 86, in test_train_time
out_vit = model(pixel_values).last_hidden_state
File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/opt/conda/lib/python3.8/site-packages/transformers/models/vit/modeling_vit.py", line 587, in forward
encoder_outputs = self.encoder(
File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/opt/conda/lib/python3.8/site-packages/transformers/models/vit/modeling_vit.py", line 413, in forward
layer_outputs = layer_module(hidden_states, layer_head_mask, output_attentions)
File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/root/.local/lib/python3.8/site-packages/optimum/bettertransformer/models/encoder_models.py", line 1186, in forward
raise NotImplementedError(
NotImplementedError: Training and Autocast are not implemented for BetterTransformer + ViT. Please open an issue.
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction (minimal, reproducible, runnable)
def test_train_time():
model = ViTModel.from_pretrained(model_pth).to('cuda')
processor = ViTImageProcessor.from_pretrained(model_pth)
pixel_values=clip_process(processor, pic_pth).cuda()
if args.flash:
model = model.to_bettertransformer()
model.train()
begin_time = time.time()
for i in range(args.nums):
out_vit = model(pixel_values).last_hidden_state
print('use flash: {}, train vit time {:.2f}'.format(args.flash, time.time() - begin_time))
### Expected behavior
none
|
https://github.com/huggingface/optimum/issues/1379
|
closed
|
[
"bug"
] | 2023-09-13T12:49:53Z
| 2025-02-20T08:38:26Z
| 1
|
lijiaoyang
|
pytorch/examples
| 1,190
|
main.py: TensorBoard in case of Multi-processing Distributed Data Parallel Training
|
Dear developers
It is so great that you've provided a examples/imagenet/main.py script which looks amazing.
I'm looking how to setup a _Multi-processing Distributed Data Parallel Training_, for instance 8 GPUs on a single node but I can also use multi-nodes multi-gpus. I must say that I have never had so great infrastructure that I'm discovering at the same times.
Now, I was used to view the evolution of the Accuracies (Top 1, Top 5, train/val) during the training (rather common isn't it), but looking at the code (main.py) I do not see the
```python
from torch.utils.tensorboard import SummaryWriter
...
writer = SummaryWriter(logs_dir)
...
```
and similar code used in the train/validate routines like
```python
if writer is not None:
suffix = "train"
writer.add_scalar(f'top5_{suffix}', top5.avg, global_step=epoch)
writer.add_scalar(f'top1_{suffix}', top1.avg, global_step=epoch)
```
Now, in the multi-gpus processing I would imagine that one has to deal with "which gpu among the whole sets of gpus should/must do the job". But I am pretty sure that many experts are doing such things routinely.
Is there a foreseen new version of main.py that would integrate such TensorBoard features in case of Multi-processing Distributed Data Parallel Training? In the mean while may be someone can help to setup such modifications.
|
https://github.com/pytorch/examples/issues/1190
|
open
|
[] | 2023-09-13T11:19:44Z
| 2023-09-13T11:19:44Z
| 0
|
jecampagne
|
huggingface/text-generation-inference
| 1,015
|
how to text-generation-benchmark through the local tokenizer
|
The command i run in docker is
```
text-generation-benchmark --tokenizer-name /data/checkpoint-5600/
```
The error log is
```
2023-09-12T11:22:01.245495Z INFO text_generation_benchmark: benchmark/src/main.rs:132: Loading tokenizer
2023-09-12T11:22:01.245966Z INFO text_generation_benchmark: benchmark/src/main.rs:141: Downloading tokenizer
2023-09-12T11:22:31.270784Z WARN cached_path::cache: /root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/cached-path-0.6.1/src/cache.rs:564: ETAG fetch failed for https://huggingface.co//data/checkpoint-5600//resolve/main/tokenizer.json, retrying in 1957 milliseconds...
2023-09-12T11:23:03.228297Z WARN cached_path::cache: /root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/cached-path-0.6.1/src/cache.rs:564: ETAG fetch failed for https://huggingface.co//data/checkpoint-5600//resolve/main/tokenizer.json, retrying in 2202 milliseconds...
2023-09-12T11:23:35.430766Z WARN cached_path::cache: /root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/cached-path-0.6.1/src/cache.rs:564: ETAG fetch failed for https://huggingface.co//data/checkpoint-5600//resolve/main/tokenizer.json, retrying in 4671 milliseconds...
2023-09-12T11:24:10.102170Z ERROR cached_path::cache: /root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/cached-path-0.6.1/src/cache.rs:555: Max retries exceeded for https://huggingface.co//data/checkpoint-5600//resolve/main/tokenizer.json
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: "Model \"/data/checkpoint-5600/\" on the Hub doesn't have a tokenizer"', benchmark/src/main.rs:153:78
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
Aborted (core dumped)
```
I notice `Downloading tokenizer` in error log, and i feel very strange about it beacause `/data/checkpoint-5600/` is my local model path. So i find the src code as following:
https://github.com/huggingface/text-generation-inference/blob/1f69fb9ed4fb91fe0bb9b94edda5729c67e6f02a/benchmark/src/main.rs#L134-L154
But i notice that only `tokenizer_config.json` in my local model path but no `tokenizer.json`. And i see that it is the same is as the hub model, for example https://huggingface.co/openlm-research/open_llama_7b_v2/tree/main
Then i want to bypass by renaming `tokenizer_config.json` to `tokenizer.json` in my local model path, it still doesn't work:
```
2023-09-12T11:29:52.461487Z INFO text_generation_benchmark: benchmark/src/main.rs:132: Loading tokenizer
2023-09-12T11:29:52.462513Z INFO text_generation_benchmark: benchmark/src/main.rs:138: Found local tokenizer
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: Error("expected `,` or `}`", line: 2, column: 18)', benchmark/src/main.rs:139:69
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
Aborted (core dumped)
```
Finally i want to know the `tokenizer_config.json` and `tokenizer.json` expressed here are the same thing?
|
https://github.com/huggingface/text-generation-inference/issues/1015
|
closed
|
[
"Stale"
] | 2023-09-12T12:10:41Z
| 2024-06-07T09:39:32Z
| null |
jessiewiswjc
|
huggingface/autotrain-advanced
| 260
|
How to create instruction dataset (Q&A) for fine-tuning from PDFs?
|
https://github.com/huggingface/autotrain-advanced/issues/260
|
closed
|
[] | 2023-09-12T02:54:07Z
| 2023-12-18T15:31:13Z
| null |
mahimairaja
|
|
huggingface/transformers.js
| 295
|
[Question] Issue with deploying model to Vercel using NextJS and tRPC
|
Hi I'm trying to deploy my model to Vercel via NextJS and tRPC and have the .cache folder generated using the postinstall script
```
// @ts-check
let fs = require("fs-extra");
let path = require("path");
async function copyXenovaToLocalModules() {
const paths = [["../../../node_modules/@xenova", "../node_modules/@xenova"]];
for (const pathTuple of paths) {
const [src, dest] = [
path.join(__dirname, pathTuple[0]),
path.join(__dirname, pathTuple[1]),
];
await fs.remove(dest).catch(() => {});
await fs.copy(src, dest).catch(() => {});
// Create .cache folder for dest paths
const cacheDir = path.join(dest, "transformers", ".cache");
await fs.mkdir(cacheDir).catch(() => {});
}
}
copyXenovaToLocalModules();
```
When I run this, I get the following error:
```
env {
backends: {
onnx: { wasm: [Object], webgl: {}, logLevelInternal: 'warning' },
tfjs: {}
},
__dirname: '/vercel/path0/packages/api/node_modules/@xenova/transformers',
version: '2.5.4',
allowRemoteModels: true,
remoteHost: 'https://huggingface.co/',
remotePathTemplate: '{model}/resolve/{revision}/',
allowLocalModels: true,
localModelPath: '/vercel/path0/packages/api/node_modules/@xenova/transformers/models/',
useFS: true,
useBrowserCache: false,
useFSCache: true,
cacheDir: '/vercel/path0/packages/api/node_modules/@xenova/transformers/.cache/',
useCustomCache: false,
customCache: null
}
An error occurred while writing the file to cache: [Error: ENOENT: no such file or directory, mkdir '/vercel'] {
errno: -2,
code: 'ENOENT',
syscall: 'mkdir',
path: '/vercel'
}
An error occurred while writing the file to cache: [Error: ENOENT: no such file or directory, mkdir '/vercel'] {
errno: -2,
code: 'ENOENT',
syscall: 'mkdir',
path: '/vercel'
}
An error occurred while writing the file to cache: [Error: ENOENT: no such file or directory, mkdir '/vercel'] {
errno: -2,
code: 'ENOENT',
syscall: 'mkdir',
path: '/vercel'
}
```
Can someone help me with this?
|
https://github.com/huggingface/transformers.js/issues/295
|
closed
|
[
"question"
] | 2023-09-11T11:13:11Z
| 2023-09-12T15:23:17Z
| null |
arnabtarwani
|
huggingface/transformers.js
| 291
|
[Question] Using transformers.js inside an Obsidian Plugin
|
I'm trying to run transfomer.js inside of Obsidian but running into some errors:
<img width="698" alt="Screenshot 2023-09-10 at 3 05 43 PM" src="https://github.com/xenova/transformers.js/assets/11430621/a6b4b83e-6a1e-44bb-9a46-c3966d058146">
This code is triggering the issues:
```js
class MyClassificationPipeline {
static task = "text-classification";
static model = "Xenova/distilbert-base-uncased-finetuned-sst-2-english";
static instance = null;
static async getInstance(progress_callback = null) {
if (this.instance === null) {
// Dynamically import the Transformers.js library
console.log('before import')
let { pipeline, env } = await import("@xenova/transformers");
console.log('after import')
// NOTE: Uncomment this to change the cache directory
// env.cacheDir = './.cache';
this.instance = pipeline(this.task, this.model, {
progress_callback,
});
}
return this.instance;
}
}
export default MyClassificationPipeline;
// Comment out this line if you don't want to start loading the model as soon as the server starts.
// If commented out, the model will be loaded when the first request is received (i.e,. lazily).
// MyClassificationPipeline.getInstance();
```
[Link to source](https://github.com/different-ai/obsidian-ml/blob/master/embeddings.js)
[These are the lines that are calling the code above](https://github.com/different-ai/obsidian-ml/blob/0bd169c6e0c3f385e7238a78c585932fe0320bc9/hello.js#L27-L29)
Context about Obsidian plugins:
- Obsidian plugin is just a single imported js file.
- Most of the time it's bundled using esbuild.
In my case, this is [my esbuild setup](https://github.com/different-ai/obsidian-ml/blob/master/esbuild.config.mjs)
----
How should I be tackling this, what would be the recommended way to bundle transformer.js?
|
https://github.com/huggingface/transformers.js/issues/291
|
open
|
[
"question"
] | 2023-09-10T22:12:07Z
| 2024-04-30T13:52:06Z
| null |
benjaminshafii
|
huggingface/candle
| 807
|
How to use the kv_cache?
|
Hi, how would I use the kv_cache? Let's say I want a chat like type of thing, how would I save the kv_cache and load it so that all the tokens won't have to be computed again?
|
https://github.com/huggingface/candle/issues/807
|
closed
|
[] | 2023-09-10T21:39:31Z
| 2025-11-22T23:18:58Z
| null |
soupslurpr
|
huggingface/transformers
| 26,061
|
How to perform batch inference?
|
### Feature request
I want to pass a list of tests to model.generate.
text = "hey there"
inputs = tokenizer(text, return_tensors="pt").to(0)
out = model.generate(**inputs, max_new_tokens=184)
print(tokenizer.decode(out[0], skip_special_tokens=True))
### Motivation
I want to do batch inference.
### Your contribution
Testing
|
https://github.com/huggingface/transformers/issues/26061
|
closed
|
[] | 2023-09-08T20:59:37Z
| 2023-10-23T16:04:20Z
| null |
ryanshrott
|
pytorch/vision
| 7,947
|
Why image shape different between Image.open and torchvision.io.read_image
|
### 🐛 Describe the bug
EXIF image:

I have a JPEG image above with EXIF information and I tried to load this image into pytorch for augmentation.
1. try with opencv
```
import cv2
img = cv2.imread("1.jpg")
print(img.shape[0], img.shape[1])
```
the result is
```
201 151
```
2. try with pillow
```
from PIL import Image
img3 = Image.open("1.jpg")
print(img3.size)
```
the result is
```
(201, 151)
```
3. try with torchvison.io
```
import torchvision as tv
img4 = tv.io.read_image("1.jpg")
print(img4.shape)
```
the result is
```
torch.Size([3, 151, 201])
```
The result of torchvison.io is in [image_channels, image_height, image_width] format, which means the image is not rotated. However, opencv and pillow will deal with the EXIF information and rotate the image to the correct orientation.
I wonder if torchvision.io.read_image misses the EXIF information in jpeg or not?
### Versions
Name: torchvision
Version: 0.9.1
Summary: image and video datasets and models for torch deep learning
Home-page: https://github.com/pytorch/vision
Name: Pillow
Version: 9.4.0
Summary: Python Imaging Library (Fork)
Home-page: https://python-pillow.org
|
https://github.com/pytorch/vision/issues/7947
|
closed
|
[
"question"
] | 2023-09-08T10:17:45Z
| 2023-09-25T09:40:25Z
| null |
kero-ly
|
pytorch/tutorials
| 2,554
|
Autograd - M factor missing in Matrix Vector Multiplication?
|
In [this](https://github.com/pytorch/tutorials/blob/main/beginner_source/blitz/autograd_tutorial.py) tutorial, once the vector v is multiplied by the Jacobian, shouldn't there be an additional factor of M in the results?
cc @albanD @sekyondaMeta @svekars @carljparker @NicolasHug @kit1980 @subramen
|
https://github.com/pytorch/tutorials/issues/2554
|
closed
|
[
"question",
"core",
"medium"
] | 2023-09-08T08:51:18Z
| 2023-11-02T19:30:44Z
| null |
sudz123
|
huggingface/text-generation-inference
| 998
|
How to insert a custom stop symbol, like </s>?
|
### Feature request
nothing
### Motivation
nothing
### Your contribution
nothing
|
https://github.com/huggingface/text-generation-inference/issues/998
|
closed
|
[] | 2023-09-08T07:06:08Z
| 2023-09-08T07:13:38Z
| null |
babytdream
|
huggingface/safetensors
| 355
|
Safe tensors cannot be easily freed!
|
### System Info
Hi,
I am using the safetensors for loading Falcon-180B model. I am loading the ckpts one by one on CPU, and then try to remove the tensors by simply calling `del` function. However, I am seeing that CPU memory keeps increasing until it runs out of memory and system crashes (I am also calling `gc.collect()` after deleting tensors). Is there any good way to release the safetensor memory.
Thanks,
Reza
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Reproduction
```
from safetensors.torch import load_file
sd_ = load_file(ckpt_path)
lens = len(sd_.keys())
for _ in range(lens):
data = sd_.popitem()
del data
del sd_
gc.collect()
```
### Expected behavior
release the memory after calling `gc.collect()`
|
https://github.com/huggingface/safetensors/issues/355
|
closed
|
[
"Stale"
] | 2023-09-07T22:13:15Z
| 2024-08-30T10:22:01Z
| 4
|
RezaYazdaniAminabadi
|
huggingface/transformers.js
| 285
|
The generate API always returns the same number of tokens as output nomatter what is min_tokens
|
Here is the code I am trying
```js
import { pipeline } from '@xenova/transformers';
import { env } from '@xenova/transformers';
let generator = await pipeline('text2text-generation', 'Xenova/LaMini-Flan-T5-783M');
let output = await generator('write a blog on Kubernetes?', {
max_new_tokens: 512,min_new_tokens:512,min_length:300
});
console.log(output)
```
So no matter whatever is min_new_tokens or min_length (even if I try one of them only), output just remains same length
|
https://github.com/huggingface/transformers.js/issues/285
|
closed
|
[
"bug"
] | 2023-09-07T13:30:39Z
| 2023-09-17T21:57:14Z
| null |
allthingssecurity
|
huggingface/chat-ui
| 430
|
Server does not support event stream content error for custom endpoints
|
is there anyone faced the issue such as "Server does not support event stream content" when parsing the custom endpoint results.
what is the solution for this error?
In order to reproduce the issue,
User enter prompts saying "how are you" -> call goes to custom endpoint -> Endpoint returns response as string -> error popsup "Server does not support event stream content"
|
https://github.com/huggingface/chat-ui/issues/430
|
closed
|
[] | 2023-09-07T10:01:18Z
| 2023-09-15T00:01:56Z
| 3
|
nandhaece07
|
huggingface/sentence-transformers
| 2,300
|
How to convert embedding vector to text ?
|
I use the script below to convert text to embeddings
```
model = SentenceTransformer('all-MiniLM-L6-v2')
embeddings = model.encode(text)
```
But how to convert embeddings to text ?
|
https://github.com/huggingface/sentence-transformers/issues/2300
|
closed
|
[] | 2023-09-07T09:19:22Z
| 2025-09-01T11:44:34Z
| null |
chengzhen123
|
huggingface/transformers.js
| 283
|
[Question] Model type for tt/ee not found, assuming encoder-only architecture
|
Reporting this as requested by the warning message, but as a question because I'm not entirely sure if it's a bug:

Here's the code I ran:
```js
let quantized = false; // change to `true` for a much smaller model (e.g. 87mb vs 345mb for image model), but lower accuracy
let { AutoProcessor, CLIPVisionModelWithProjection, RawImage, AutoTokenizer, CLIPTextModelWithProjection } = await import('https://cdn.jsdelivr.net/npm/@xenova/transformers@2.5.4/dist/transformers.min.js');
let imageProcessor = await AutoProcessor.from_pretrained('Xenova/clip-vit-base-patch16');
let visionModel = await CLIPVisionModelWithProjection.from_pretrained('Xenova/clip-vit-base-patch16', {quantized});
let tokenizer = await AutoTokenizer.from_pretrained('Xenova/clip-vit-base-patch16');
let textModel = await CLIPTextModelWithProjection.from_pretrained('Xenova/clip-vit-base-patch16', {quantized});
function cosineSimilarity(A, B) {
if(A.length !== B.length) throw new Error("A.length !== B.length");
let dotProduct = 0, mA = 0, mB = 0;
for(let i = 0; i < A.length; i++){
dotProduct += A[i] * B[i];
mA += A[i] * A[i];
mB += B[i] * B[i];
}
mA = Math.sqrt(mA);
mB = Math.sqrt(mB);
let similarity = dotProduct / (mA * mB);
return similarity;
}
// get image embedding:
let image = await RawImage.read('https://i.imgur.com/RKsLoNB.png');
let imageInputs = await imageProcessor(image);
let { image_embeds } = await visionModel(imageInputs);
console.log(image_embeds.data);
// get text embedding:
let texts = ['a photo of an astronaut'];
let textInputs = tokenizer(texts, { padding: true, truncation: true });
let { text_embeds } = await textModel(textInputs);
console.log(text_embeds.data);
let similarity = cosineSimilarity(image_embeds.data, text_embeds.data);
console.log(similarity);
```
|
https://github.com/huggingface/transformers.js/issues/283
|
closed
|
[
"question"
] | 2023-09-07T05:01:34Z
| 2023-09-08T13:17:07Z
| null |
josephrocca
|
huggingface/safetensors
| 354
|
Is it possible to append to tensors along a primary axis?
|
### Feature request
it would be really cool to be able to append to a safetensor file so you can continue to add data along, say, a batch dimension
### Motivation
for logging data during train runs that can be visualized from an external tool. something like a live application that lazily loads the saved data. this is super useful for reinforcement learning
### Your contribution
i could submit a PR if necessary.
|
https://github.com/huggingface/safetensors/issues/354
|
closed
|
[
"Stale"
] | 2023-09-06T17:54:56Z
| 2023-12-11T01:48:44Z
| 2
|
verbiiyo
|
huggingface/huggingface_hub
| 1,643
|
We couldn't connect to 'https://huggingface.co/' to load this model and it looks like distilbert-base-uncased is not the path to a directory conaining a config.json file. Checkout your internet connection or see how to run the library in offline mode at 'https://huggingface.co/docs/transformers/installation#offline-mode'.
|
### System Info
Hello, I have been using hugging face transformers with a lot of success. I have been able to create many successful fine-tuned pre-trained text classification models using various HF transformers and have been using HF integration with SageMaker in a SageMaker conda_pytorch_310 notebook.
my code looks like this:
```!pip install "transformers==4.17.0" "datasets[s3]==1.18.4" --upgrade```
``` tokenizer = AutoTokenizer.from_pretrained(tokenizer_name)```
Yesterday I was able to successfully download, fine tune and make inferences using distilbert-base-uncased, and today I am getting: ```OSError: We couldn't connect to 'https://huggingface.co/' to load this model and it looks like mattmdjaga/segformer_b2_clothes is not the path to a directory conaining a config.json file.
Checkout your internet connection or see how to run the library in offline mode at 'https://huggingface.co/docs/transformers/installation#offline-mode'.```
Looking through the traceback I see: ```HTTPError: 429 Client Error: Too Many Requests for url: https://huggingface.co/mattmdjaga/segformer_b2_clothes/resolve/main/config.json
During handling of the above exception, another exception occurred:```
....
```File ~/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/transformers/file_utils.py:2052, in _raise_for_status(request) 2050 raise RevisionNotFoundError((f"404 Client Error: Revision Not Found for url: {request.url}"))-> 2052 request.raise_for_status()```
I have tried many different models, both text classification and non-text classification and getting the same error. This worked yesterday and nothing has changed since then. I also have confirmed that nothing has changed on our end to cause this error ,and confirmed all the model names.
Any insights would be appreciated!
@Wauplin
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
tokenizer = AutoTokenizer.from_pretrained(tokenizer_name)
### Expected behavior
model successfully downloads
|
https://github.com/huggingface/huggingface_hub/issues/1643
|
closed
|
[] | 2023-09-06T17:18:45Z
| 2023-09-07T15:51:12Z
| null |
a-rhodes-vcu
|
huggingface/setfit
| 417
|
Passing multiple evaluation metrics to SetFitTrainer
|
Hi there, after reading the docs I find that one can easily get the f1 score or accuracy by passing the respective string as the `metric` argument to the trainer. However, how can I get both or even other metrics, such as f1_per_class?
Thanks :)
|
https://github.com/huggingface/setfit/issues/417
|
closed
|
[
"question"
] | 2023-09-06T11:38:08Z
| 2023-11-24T13:31:08Z
| null |
fhamborg
|
huggingface/optimum
| 1,357
|
[RFC] MusicGen `.to_bettertransformer()` integration
|
### Feature request
Add support for MusicGen Better Transformer integration. MusicGen is composed of three sub-models:
1. Text encoder: maps the text inputs to a sequence of hidden-state representations. The pre-trained MusicGen models use a frozen text encoder from either T5 or Flan-T5
2. MusicGen decoder: a language model (LM) that auto-regressively generates audio tokens (or codes) conditional on the encoder hidden-state representations. The pre-trained MusicGen models use the BART decoder structure
3. Audio codec: used to encode an audio prompt to use as prompt tokens, and recover the audio waveform from the audio tokens predicted by the decoder. The pre-trained MusicGen models use the [EnCodec model](https://huggingface.co/docs/transformers/main/model_doc/encodec)
=> the text encoder uses the T5 attention module, and the MusicGen decoder uses the BART attention module. Thus, there are no extra attention layers we need to add to optimum. The audio codec is not transformer based, so we don't need to export it to better transformer.
The question is simply how to get the integration working with the sub-model structure. The config file for MusicGen is nested in the same way as the model structure, containing sub-configs for each of the three components: https://huggingface.co/docs/transformers/main/model_doc/musicgen#transformers.MusicgenConfig
=> this means that the text encoder config is accessed as `config.text_encoder`, and the text encoder model as `model.text_encoder`. Likewise, the MusicGen decoder config is accessed as `config.decoder`, and the text encoder model as `model.decoder`. We need to export the pairs of {models, configs} to their better transformer counterparts, e.g. {`model.text_encoder`, `config.text_encoder`} -> `better_transformer_text_encoder`, and {`model.decoder`, `config.decoder`} -> `better_transformer_decoder`.
Ideally, we'd like to be able to export the entire model to better transformer in one go:
```python
from transformers import MusicgenForConditionalGeneration
model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-small")
model = model.to_bettertransformer()
```
However. we can't simply export {`model`, `config`} like this, since the top-level config does not contain the config attributes for the sub-models. It's just a place-holder for the sub-model configs.
A simple workaround is to export the text encoder and decoder separately:
```python
from transformers import MusicgenForConditionalGeneration
model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-small")
model.text_encoder = model.text_encoder.to_bettertransformer()
model.decoder = model.decoder.to_bettertransformer()
```
=> but this diverges from the better transformer API
### Motivation
~9M MusicGen [downloads](https://huggingface.co/models?search=facebook/musicgen) per month -> huge interest in running the model!
### Your contribution
Happy to help with the integration!
|
https://github.com/huggingface/optimum/issues/1357
|
closed
|
[] | 2023-09-06T10:25:50Z
| 2024-01-10T17:31:44Z
| 1
|
sanchit-gandhi
|
pytorch/serve
| 2,569
|
Failure in loading Deepspeed large model example
|
### 🐛 Describe the bug
I am trying to follow the example to perform inference with the OPT-30B model according to this example: https://github.com/pytorch/serve/tree/master/examples/large_models/deepspeed
However, as specified in the [model-config.yaml](https://github.com/pytorch/serve/blob/master/examples/large_models/deepspeed/opt/model-config.yaml) file, a `checkpoints.json` file is required. This file gets used here: https://github.com/pytorch/serve/blob/master/ts/handler_utils/distributed/deepspeed.py#L40
As a result, the model fails to load. The error logs are attached below.
### Error logs
```
2023-09-05T23:22:14,652 [INFO ] W-29500-opt_1.0-stdout MODEL_LOG - Failed to load model opt, exception Cannot copy out of meta tensor; no data!
2023-09-05T23:22:14,652 [INFO ] W-29500-opt_1.0-stdout MODEL_LOG - Traceback (most recent call last):
2023-09-05T23:22:14,653 [INFO ] W-29500-opt_1.0-stdout MODEL_LOG - File "/opt/conda/lib/python3.10/site-packages/ts/model_service_worker.py", line 131, in load_model
2023-09-05T23:22:14,653 [INFO ] W-29500-opt_1.0-stdout MODEL_LOG - service = model_loader.load(
2023-09-05T23:22:14,653 [INFO ] W-29500-opt_1.0-stdout MODEL_LOG - File "/opt/conda/lib/python3.10/site-packages/ts/model_loader.py", line 135, in load
2023-09-05T23:22:14,653 [INFO ] W-29500-opt_1.0-stdout MODEL_LOG - initialize_fn(service.context)
2023-09-05T23:22:14,653 [INFO ] W-29500-opt_1.0-stdout MODEL_LOG - File "/home/model-server/tmp/models/c1130e4b01c345b9be913ef8414518cb/custom_handler.py", line 55, in initialize
2023-09-05T23:22:14,653 [INFO ] W-29500-opt_1.0-stdout MODEL_LOG - ds_engine = get_ds_engine(self.model, ctx)
2023-09-05T23:22:14,653 [INFO ] W-29500-opt_1.0-stdout MODEL_LOG - File "/opt/conda/lib/python3.10/site-packages/ts/handler_utils/distributed/deepspeed.py", line 35, in get_ds_engine
2023-09-05T23:22:14,653 [INFO ] W-29500-opt_1.0-stdout MODEL_LOG - ds_engine = deepspeed.init_inference(
2023-09-05T23:22:14,653 [INFO ] W-29500-opt_1.0-stdout MODEL_LOG - File "/opt/conda/lib/python3.10/site-packages/deepspeed/__init__.py", line 342, in init_inference
2023-09-05T23:22:14,653 [INFO ] W-29500-opt_1.0-stdout MODEL_LOG - engine = InferenceEngine(model, config=ds_inference_config)
2023-09-05T23:22:14,653 [INFO ] W-29500-opt_1.0-stdout MODEL_LOG - File "/opt/conda/lib/python3.10/site-packages/deepspeed/inference/engine.py", line 154, in __init__
2023-09-05T23:22:14,653 [INFO ] W-29500-opt_1.0-stdout MODEL_LOG - self.module.to(device)
2023-09-05T23:22:14,653 [INFO ] W-29500-opt_1.0-stdout MODEL_LOG - File "/opt/conda/lib/python3.10/site-packages/transformers/modeling_utils.py", line 2053, in to
2023-09-05T23:22:14,653 [INFO ] W-29500-opt_1.0-stdout MODEL_LOG - return super().to(*args, **kwargs)
2023-09-05T23:22:14,653 [INFO ] W-29500-opt_1.0-stdout MODEL_LOG - File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1145, in to
2023-09-05T23:22:14,653 [INFO ] W-29500-opt_1.0-stdout MODEL_LOG - return self._apply(convert)
2023-09-05T23:22:14,653 [INFO ] W-29500-opt_1.0-stdout MODEL_LOG - File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 797, in _apply
2023-09-05T23:22:14,653 [INFO ] W-29500-opt_1.0-stdout MODEL_LOG - module._apply(fn)
2023-09-05T23:22:14,653 [INFO ] W-29500-opt_1.0-stdout MODEL_LOG - File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 797, in _apply
2023-09-05T23:22:14,653 [INFO ] W-29500-opt_1.0-stdout MODEL_LOG - module._apply(fn)
2023-09-05T23:22:14,653 [INFO ] W-29500-opt_1.0-stdout MODEL_LOG - File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 797, in _apply
2023-09-05T23:22:14,653 [INFO ] W-29500-opt_1.0-stdout MODEL_LOG - module._apply(fn)
2023-09-05T23:22:14,653 [INFO ] W-29500-opt_1.0-stdout MODEL_LOG - File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 820, in _apply
2023-09-05T23:22:14,653 [INFO ] W-29500-opt_1.0-stdout MODEL_LOG - param_applied = fn(param)
2023-09-05T23:22:14,654 [INFO ] W-29500-opt_1.0-stdout MODEL_LOG - File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1143, in convert
2023-09-05T23:22:14,654 [INFO ] W-29500-opt_1.0-stdout MODEL_LOG - return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
2023-09-05T23:22:14,654 [INFO ] W-29500-opt_1.0-stdout MODEL_LOG - NotImplementedError: Cannot copy out of meta tensor; no data!
```
### Installation instructions
Docker image URI: `763104351884.dkr.ecr.us-east-1.amazonaws.com/pytorch-inference:2.0.1-gpu-py310-cu118-ubuntu20.04-ec2`
EC2 instance: `g5dn.24xlarge`
### Model Packaing
Created model artifact by following this example:
https://github.com/pytorch/serve/tree/master/examples/large_models/deepspeed
### config.properties
_No response_
### Versions
```
---------------------------
|
https://github.com/pytorch/serve/issues/2569
|
open
|
[
"question",
"triaged",
"example"
] | 2023-09-05T23:35:46Z
| 2023-09-11T17:35:14Z
| null |
sachanub
|
huggingface/diffusers
| 4,906
|
How to check whether the image is flagged as inappropriate automated?
|
Is there a way to know whether the generated image (without seeing it) was flagged as inappropriate?
|
https://github.com/huggingface/diffusers/issues/4906
|
closed
|
[] | 2023-09-05T17:51:07Z
| 2023-09-07T05:49:46Z
| null |
sarmientoj24
|
huggingface/diffusers
| 4,905
|
How to convert pretrained SDXL .safetensors model to diffusers folder format
|
As SDXL is gaining adoption, more and more community based models pop up that that are just saved as a .safetensors file. E.g the popular Realistic Vision: https://civitai.com/models/139562?modelVersionId=154590
When running train_dreambooth_lora_sdxl.py, the training script expects the diffusers folder format to accelerate text encoder, unet etc. As far as I know, there is no possible way to use `StableDiffusionXLPipeline.from_single_file()` to do the same.
Is there a way to convert a SDXL 1.0 fine-tuned .safetensors file to the diffusers folder format?
I found this but it doesn't seem to be applicable to SDXL scripts/convert_lora_safetensor_to_diffusers.py.
|
https://github.com/huggingface/diffusers/issues/4905
|
closed
|
[] | 2023-09-05T17:01:27Z
| 2023-09-06T09:55:54Z
| null |
agcty
|
huggingface/transformers.js
| 280
|
[Question] How to run multiple pipeline or multiple modal?
|
<!-- QUESTION GOES HERE -->
I am trying to transcribe from audio source and need to do multi language translation. I had tried transcribing using Xenova/whisper- and and take text input and feed in to "Xenova/m2m100_418M" modal but due to multiple pipeline it's failed. Is there any way to achieve
this?
|
https://github.com/huggingface/transformers.js/issues/280
|
closed
|
[
"question"
] | 2023-09-05T11:33:44Z
| 2023-11-01T11:32:15Z
| null |
sundarshahi
|
huggingface/optimum
| 1,346
|
BetterTransfomer Support for the GPTBigCode model
|
### Feature request
is it possible to support GPTBigCode with BetterTransformer?
https://huggingface.co/docs/transformers/model_doc/gpt_bigcode
### Motivation
A very popular Decoder model for Code.
### Your contribution
hope you can achieve it. Thanks.
|
https://github.com/huggingface/optimum/issues/1346
|
closed
|
[] | 2023-09-04T16:52:56Z
| 2023-09-08T14:51:17Z
| 5
|
amarazad
|
pytorch/TensorRT
| 2,284
|
❓ [Question] Timeline for TensorRT 9.0 support
|
## ❓ Question
What is the timeline to support TensorRT 9.0 ?
## What you have already tried
Using Nvidia's 9.0 TensorRT [release](https://github.com/NVIDIA/TensorRT/tree/release/9.0) is incompatible with the latest version of torch-tensorrt (which requires TensorRT 8.6).
|
https://github.com/pytorch/TensorRT/issues/2284
|
closed
|
[
"question"
] | 2023-09-04T07:26:02Z
| 2023-09-06T16:56:33Z
| null |
tdeboissiere
|
pytorch/serve
| 2,564
|
[Docs] More information regarding text generation & LLM inference
|
### 📚 The doc issue
I am new to TorchServe and was looking for some features that I need to be able to consider using TorchServe for LLM text generation.
Today, there are a couple inference serving solutions out there, including [text-generation-inference](https://github.com/huggingface/text-generation-inference) and [vLLM](https://vllm.ai). It would be great if the documentation can mention how TorchServe compares with these at the moment. For instance,
- Does TorchServe support continuous batching?
- Does TorchServe support paged attention?
- Does TorchServe support streaming generated text through its inference API?
- What are some LLMs that TorchServe is known to work well with, e.g. Llama2, Falcon? Apart from the Hugging Face integration example provided.
### Suggest a potential alternative/fix
A dedicated page for text generation and LLM inference could make sense given that there would be a lot of people interested in this.
|
https://github.com/pytorch/serve/issues/2564
|
open
|
[
"documentation",
"question",
"llm"
] | 2023-09-03T17:40:16Z
| 2023-09-05T17:45:08Z
| null |
jaywonchung
|
huggingface/chat-ui
| 426
|
`stream` is not supported for this model
|
Hello Eperts,
Trying to run https://github.com/huggingface/chat-ui by providing models like EleutherAI/pythia-1b, gpt2-large. With all these models, there is this consitent error
{"error":["Error in `stream`: `stream` is not supported for this model"]}
Although I can see that hosted inference API for these models are working well from their hugging face pages like this: https://huggingface.co/gpt2-large
Could someone please help?
|
https://github.com/huggingface/chat-ui/issues/426
|
open
|
[
"question",
"models"
] | 2023-09-02T05:30:47Z
| 2023-12-24T16:39:21Z
| null |
newUserForTesting
|
huggingface/diffusers
| 4,871
|
How to run "StableDiffusionXLPipeline.from_single_file"?
|
I got an error when I ran the following code and it got an error on the line "pipe = StableDiffusionXLPipeline." and how to solve it?
notes:
I don't have a model refiner, I just want to run a model with a DIffuser XL
```
from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline
import torch
pipe = StableDiffusionXLPipeline.from_single_file(
"/content/model/model.safetensors", torch_dtype=torch.float16).to("cuda")
image = pipe(
prompt,
negative_prompt=negative_prompt,
width=Width,
height=Height,
guidance_scale=7,
target_size=(1024,1024),
original_size=(4096,4096),
num_inference_steps=25
).images[0]
```
```
/usr/local/lib/python3.10/dist-packages/transformers/models/clip/feature_extraction_clip.py:28: FutureWarning: The class CLIPFeatureExtractor is deprecated and will be removed in version 5 of Transformers. Please use CLIPImageProcessor instead.
warnings.warn(
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
[<ipython-input-2-67122e524ae5>](https://localhost:8080/#) in <cell line: 4>()
2 import torch
3
----> 4 pipe = StableDiffusionXLPipeline.from_single_file(
5 "/content/model/model.safetensors", torch_dtype=torch.float16).to("cuda")
6
1 frames
[/usr/local/lib/python3.10/dist-packages/diffusers/pipelines/stable_diffusion/convert_from_ckpt.py](https://localhost:8080/#) in download_from_original_stable_diffusion_ckpt(checkpoint_path, original_config_file, image_size, prediction_type, model_type, extract_ema, scheduler_type, num_in_channels, upcast_attention, device, from_safetensors, stable_unclip, stable_unclip_prior, clip_stats_path, controlnet, load_safety_checker, pipeline_class, local_files_only, vae_path, vae, text_encoder, tokenizer, config_files)
1564 )
1565 else:
-> 1566 pipe = pipeline_class(
1567 vae=vae,
1568 text_encoder=text_model,
TypeError: StableDiffusionXLPipeline.__init__() got an unexpected keyword argument 'safety_checker'
```
|
https://github.com/huggingface/diffusers/issues/4871
|
closed
|
[] | 2023-09-01T22:42:25Z
| 2023-09-09T03:35:53Z
| null |
Damarcreative
|
huggingface/optimum
| 1,334
|
Enable CLI export of decoder-only models without present outputs
|
### Feature request
Currently `optimum-cli export onnx` only supports exporting text-generation models with present outputs (`--task text-generation`) or with past+present outputs (``--task text-generation-with-past`). It would be useful to be able to export a variant without any caching structures if they will not be used.
Example of how `--task text-generation` is not sufficient for this usecase:
<details>
```
optimum-cli export onnx --model facebook/opt-125m --task text-generation TEST
...
Validating ONNX model TEST/decoder_model.onnx...
-[✓] ONNX model output names match reference model (present.7.key, present.2.key, present.3.key, present.2.value, present.3.value, present.10.value, logits, present.8.key, present.0.value, present.10.key, present.1.key, present.1.value, present.11.key, present.9.value, present.6.value, present.4.value, present.7.value, present.5.value, present.5.key, present.8.value, present.9.key, present.4.key, present.6.key, present.0.key, present.11.value)
- Validating ONNX Model output "logits":
-[✓] (2, 16, 50272) matches (2, 16, 50272)
-[x] values not close enough, max diff: 3.719329833984375e-05 (atol: 1e-05)
- Validating ONNX Model output "present.0.key":
-[✓] (2, 12, 16, 64) matches (2, 12, 16, 64)
-[✓] all values close (atol: 1e-05)
- Validating ONNX Model output "present.0.value":
-[✓] (2, 12, 16, 64) matches (2, 12, 16, 64)
-[✓] all values close (atol: 1e-05)
- Validating ONNX Model output "present.1.key":
-[✓] (2, 12, 16, 64) matches (2, 12, 16, 64)
-[✓] all values close (atol: 1e-05)
- Validating ONNX Model output "present.1.value":
-[✓] (2, 12, 16, 64) matches (2, 12, 16, 64)
-[✓] all values close (atol: 1e-05)
- Validating ONNX Model output "present.2.key":
-[✓] (2, 12, 16, 64) matches (2, 12, 16, 64)
-[✓] all values close (atol: 1e-05)
- Validating ONNX Model output "present.2.value":
-[✓] (2, 12, 16, 64) matches (2, 12, 16, 64)
-[✓] all values close (atol: 1e-05)
- Validating ONNX Model output "present.3.key":
-[✓] (2, 12, 16, 64) matches (2, 12, 16, 64)
-[✓] all values close (atol: 1e-05)
- Validating ONNX Model output "present.3.value":
-[✓] (2, 12, 16, 64) matches (2, 12, 16, 64)
-[✓] all values close (atol: 1e-05)
- Validating ONNX Model output "present.4.key":
-[✓] (2, 12, 16, 64) matches (2, 12, 16, 64)
-[x] values not close enough, max diff: 1.8358230590820312e-05 (atol: 1e-05)
- Validating ONNX Model output "present.4.value":
-[✓] (2, 12, 16, 64) matches (2, 12, 16, 64)
-[✓] all values close (atol: 1e-05)
- Validating ONNX Model output "present.5.key":
-[✓] (2, 12, 16, 64) matches (2, 12, 16, 64)
-[✓] all values close (atol: 1e-05)
- Validating ONNX Model output "present.5.value":
-[✓] (2, 12, 16, 64) matches (2, 12, 16, 64)
-[✓] all values close (atol: 1e-05)
- Validating ONNX Model output "present.6.key":
-[✓] (2, 12, 16, 64) matches (2, 12, 16, 64)
-[✓] all values close (atol: 1e-05)
- Validating ONNX Model output "present.6.value":
-[✓] (2, 12, 16, 64) matches (2, 12, 16, 64)
-[✓] all values close (atol: 1e-05)
- Validating ONNX Model output "present.7.key":
-[✓] (2, 12, 16, 64) matches (2, 12, 16, 64)
-[✓] all values close (atol: 1e-05)
- Validating ONNX Model output "present.7.value":
-[✓] (2, 12, 16, 64) matches (2, 12, 16, 64)
-[✓] all values close (atol: 1e-05)
- Validating ONNX Model output "present.8.key":
-[✓] (2, 12, 16, 64) matches (2, 12, 16, 64)
-[✓] all values close (atol: 1e-05)
- Validating ONNX Model output "present.8.value":
-[✓] (2, 12, 16, 64) matches (2, 12, 16, 64)
-[✓] all values close (atol: 1e-05)
- Validating ONNX Model output "present.9.key":
-[✓] (2, 12, 16, 64) matches (2, 12, 16, 64)
-[✓] all values close (atol: 1e-05)
- Validating ONNX Model output "present.9.value":
-[✓] (2, 12, 16, 64) matches (2, 12, 16, 64)
-[✓] all values close (atol: 1e-05)
- Validating ONNX Model output "present.10.key":
-[✓] (2, 12, 16, 64) matches (2, 12, 16, 64)
-[✓] all values close (atol: 1e-05)
- Validating ONNX Model output "present.10.value":
-[✓] (2, 12, 16, 64) matches
|
https://github.com/huggingface/optimum/issues/1334
|
closed
|
[] | 2023-09-01T15:56:27Z
| 2023-09-13T11:43:36Z
| 3
|
mgoin
|
huggingface/transformers.js
| 274
|
[Question] How to convert to ONNX a fine-tuned model
|
Hi, we're playing with this library to see if it can be useful for our project. I find it very easy and well done (congratulations).
The idea is not to use it directly as a frontend library but via node.js.
We've tried scripting a model directly from HF (google/flan-t5-small) and it worked but we're having trouble using a fine-tuned model.
Here what we tried. We fine-tuned a model (again google/flan-t5-small) and then converted it using the onnx script (in README.md).
The script generated the following files:
```
onnx/decoder_model_quantized.onnx
onnx/decoder_model.onnx
onnx/encoder_model_quantized.onnx
onnx/encoder_model.onnx
config.json
generation_config.json
quantize_config.json
special_tokens_map.json
spice.model
tokenizer_config.json
tokenizer.json
```
But when we tried to use it it gave us this error:
`local_files_only=true` or `env.allowRemoteModels=false` and file was not found locally at ./models/google/flan-t5-small-2/onnx/decoder_model_merged_quantized.onnx
Some advice or useful doc/link?
Thanks
|
https://github.com/huggingface/transformers.js/issues/274
|
open
|
[
"question"
] | 2023-09-01T15:27:21Z
| 2023-09-01T16:12:12Z
| null |
mrddter
|
huggingface/datasets
| 6,203
|
Support loading from a DVC remote repository
|
### Feature request
Adding support for loading a file from a DVC repository, tracked remotely on a SCM.
### Motivation
DVC is a popular version control system to version and manage datasets. The files are stored on a remote object storage platform, but they are tracked using Git. Integration with DVC is possible through the `DVCFileSystem`.
I have a Gitlab repository where multiple files are tracked using DVC and stored in a GCP bucket. I would like to be able to load these files using `datasets` directly using an URL. My goal is to write a generic code that abstracts the storage layer, such that my users will only have to pass in an `fsspec`-compliant URL and the corresponding files will be loaded.
### Your contribution
I managed to instantiate a `DVCFileSystem` pointing to a Gitlab repo from a `fsspec` chained URL in [this pull request](https://github.com/iterative/dvc/pull/9903) to DVC.
```python
from fsspec.core import url_to_fs
fs, _ = url_to_fs("dvc::https://gitlab.com/repository/group/my-repo")
```
From now I'm not sure how to continue, it seems that `datasets` expects the URL to be fully qualified like so: `dvc::https://gitlab.com/repository/group/my-repo/my-folder/my-file.json` but this fails because `DVCFileSystem` expects the URL to point to the root of an SCM repo. Is there a way to make this work with `datasets`?
|
https://github.com/huggingface/datasets/issues/6203
|
closed
|
[
"enhancement"
] | 2023-09-01T14:04:52Z
| 2023-09-15T15:11:27Z
| 4
|
bilelomrani1
|
huggingface/optimum
| 1,328
|
Documentation for OpenVINO missing half()
|
### System Info
```shell
N/A
```
### Who can help?
@echarlaix
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction (minimal, reproducible, runnable)
The documentation for OpenVINO is missing information does not have any information about using `half()` to run models on GPU. The docs used to have this information, but it was removed.
Is this not required anymore? I.e. perhaps `model.to("GPU")` does this automatically? If so, how would one run on GPU with FP32 precision?
### Expected behavior
half() documented with a small example
|
https://github.com/huggingface/optimum/issues/1328
|
closed
|
[
"bug"
] | 2023-08-31T20:44:28Z
| 2023-08-31T20:46:34Z
| 1
|
ngaloppo
|
huggingface/autotrain-advanced
| 249
|
How to save model locally after sft
|
I am wondering how to save model locally after sft
|
https://github.com/huggingface/autotrain-advanced/issues/249
|
closed
|
[] | 2023-08-31T14:59:04Z
| 2023-08-31T17:01:44Z
| null |
Diego0511
|
huggingface/chat-ui
| 425
|
Is it possible to modify it so that .env.local environment variables are set at runtime?
|
Currently for every different deployment of Chat-UI it is required to rebuild the Docker image with different .env.local environment variables. Is it theoretically possible to have it so that 1 image can be used for all deployments, but with different secrets passed at runtime? What environment variables and for what reason are truly needed at build time for Chat-UI to function? In #204 it says `HF_ACCESS_TOKEN` is needed at build time, but what if we use `OPENID` authentication instead? Is there anything else blocking this type of use case?
|
https://github.com/huggingface/chat-ui/issues/425
|
open
|
[
"enhancement",
"back",
"hacktoberfest"
] | 2023-08-31T12:55:17Z
| 2024-03-14T20:05:38Z
| 4
|
martinkozle
|
huggingface/text-generation-inference
| 959
|
How to enter the docker image to modify the environment
|
### System Info
dokcer image: ghcr.io/huggingface/text-generation-inference:1.0.2
### Information
- [X] Docker
- [ ] The CLI directly
### Tasks
- [ ] An officially supported command
- [X] My own modifications
### Reproduction
I want to enter the image to modify the environment,like: tiktoken.
`docker run -it ghcr.io/huggingface/text-generation-inference:1.0.2 /bin/bash`
I get:
error: unexpected argument '/bin/bash' found
Usage: text-generation-launcher [OPTIONS]
### Expected behavior
no error
thx!
|
https://github.com/huggingface/text-generation-inference/issues/959
|
closed
|
[] | 2023-08-31T11:14:13Z
| 2023-08-31T20:12:55Z
| null |
Romaosir
|
huggingface/safetensors
| 352
|
Attempt to convert `PygmalionAI/pygmalion-2.7b` to `safetensors`
|
### System Info
- `transformers` version: 4.32.1
- Platform: Linux-5.15.0-1039-gcp-x86_64-with-glibc2.31
- Python version: 3.9.5
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.3
- Accelerate version: 0.20.3
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1+cu118 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Reproduction
Hey guys I am trying to save the `PygmalionAI/pygmalion-2.7b` weights to `safetensors`. Based on [this thread](https://github.com/huggingface/text-generation-inference/issues/922#issuecomment-1698942643) I have manually downloaded the [weights](https://huggingface.co/PygmalionAI/pygmalion-2.7b/resolve/main/pytorch_model.bin) and tried to run the following:
```
weights = torch.load("pytorch_model.bin")
weights = {k: v.clone().contiguous() for k, v in weights.items()}
save_file(weights, "model.safetensors")
```
and everything went well. However, when trying to load the model I encounter the following issue:
```
AttributeError: 'NoneType' object has no attribute 'get'
```
I inspected the files and can't figure out what goes wrong... I have pushed everything to `https://huggingface.co/JulesBelveze/pygmalion-2.7b-safetensors`
Any recommendation on how to proceed would be awesome 🤓
Cheers!
### Expected behavior
Expecting the following code snippet to load properly load the model (and not throw the above error)
```
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("JulesBelveze/pygmalion-2.7b-safetensors")
```
|
https://github.com/huggingface/safetensors/issues/352
|
closed
|
[
"Stale"
] | 2023-08-31T10:25:19Z
| 2023-12-11T01:48:45Z
| 2
|
JulesBelveze
|
huggingface/autotrain-advanced
| 246
|
how to load the fine-tuned model in the local?
|
hi
thz for your super convenient package makes easier for cookies like me to fine-tune a new model. However, as a cookie, I dont really know how to load my fine-tuned model and apply.
I was fine-tuning in Google colab and download on my PC but know how to call it out?
thz bro
|
https://github.com/huggingface/autotrain-advanced/issues/246
|
closed
|
[] | 2023-08-31T08:15:11Z
| 2023-12-18T15:31:11Z
| null |
kennyluke1023
|
huggingface/diffusers
| 4,849
|
how to use multiple GPUs to train textual inversion?
|
I train the textual inversion fine tuning cat toy example from [here](https://github.com/huggingface/diffusers/tree/main/examples/textual_inversion)
my env:
diffusers: 0.20.0
torch: 1.12.1+cu113
accelerate: 0.22.0
train script, as follow:
```
CUDA_VISIBLE_DEVICES="0,1,2,3" python -u textual_inversion.py --pretrained_model_name_or_path=$MODEL_NAME --train_data_dir=$DATA_DIR --learnable_property="object" --placeholder_token="<cat-toy>" --initializer_token="toy" --resolution=512 --train_batch_size=1 --gradient_accumulation_steps=4 --max_train_steps=3000 --learning_rate=5.0e-04 --scale_lr --lr_scheduler="constant" --lr_warmup_steps=0 --output_dir="textual_inversion_cat"
```
But it only trained in cuda:0, Is there any way to solve the problem of training on a multi gpus?Thanks.
|
https://github.com/huggingface/diffusers/issues/4849
|
closed
|
[] | 2023-08-31T02:56:39Z
| 2023-09-11T01:07:49Z
| null |
Adorablepet
|
pytorch/xla
| 5,525
|
Query bazel deps of XLAC.so?
|
## ❓ Questions and Help
I'm trying to see bazel dependencies of `//:_XLAC.so` target by running the following command (as described in [bazel guide](https://bazel.build/query/guide))
```
bazel query "deps(//:_XLAC.so)"
```
It shows me the following errors:
```bash
ERROR: An error occurred during the fetch of repository 'mkl_dnn_acl_compatible'
ERROR: no such package '@mkl_dnn_acl_compatible//': Unable to load package for @tsl//tensorflow/third_party/mkl_dnn:mkldnn_acl.BUILD: BUILD file not found in directory 'tensorflow/third_party/mkl_dnn' of external repository @tsl.
ERROR: Evaluation of query "deps(//:_XLAC.so)" failed
```
Full output:
```bash
root@dd45b88976fe:~/workspace/pytorch/xla# bazel query "deps(//:_XLAC.so)"
Starting local Bazel server and connecting to it...
DEBUG: /root/.cache/bazel/_bazel_root/346fc8b061ac3bdcc6b91de97c708483/external/xla/third_party/repo.bzl:132:14:
Warning: skipping import of repository 'tf_runtime' because it already exists.
DEBUG: /root/.cache/bazel/_bazel_root/346fc8b061ac3bdcc6b91de97c708483/external/xla/third_party/repo.bzl:132:14:
Warning: skipping import of repository 'llvm-raw' because it already exists.
DEBUG: /root/.cache/bazel/_bazel_root/346fc8b061ac3bdcc6b91de97c708483/external/tsl/third_party/repo.bzl:132:14:
Warning: skipping import of repository 'pybind11_bazel' because it already exists.
DEBUG: /root/.cache/bazel/_bazel_root/346fc8b061ac3bdcc6b91de97c708483/external/tsl/third_party/repo.bzl:132:14:
Warning: skipping import of repository 'pybind11' because it already exists.
INFO: Repository mkl_dnn_acl_compatible instantiated at:
/root/workspace/pytorch/xla/WORKSPACE:76:15: in <toplevel>
/root/.cache/bazel/_bazel_root/346fc8b061ac3bdcc6b91de97c708483/external/xla/workspace2.bzl:90:19: in workspace
/root/.cache/bazel/_bazel_root/346fc8b061ac3bdcc6b91de97c708483/external/tsl/workspace2.bzl:636:21: in workspace
/root/.cache/bazel/_bazel_root/346fc8b061ac3bdcc6b91de97c708483/external/tsl/workspace2.bzl:165:20: in _tf_repositories
/root/.cache/bazel/_bazel_root/346fc8b061ac3bdcc6b91de97c708483/external/tsl/third_party/repo.bzl:136:21: in tf_http_archive
Repository rule _tf_http_archive defined at:
/root/.cache/bazel/_bazel_root/346fc8b061ac3bdcc6b91de97c708483/external/tsl/third_party/repo.bzl:89:35: in <toplevel>
ERROR: An error occurred during the fetch of repository 'mkl_dnn_acl_compatible':
Traceback (most recent call last):
File "/root/.cache/bazel/_bazel_root/346fc8b061ac3bdcc6b91de97c708483/external/tsl/third_party/repo.bzl", line 55, column 31, in _tf_http_archive_impl
link_dict = _get_link_dict(ctx, ctx.attr.link_files, ctx.attr.build_file)
File "/root/.cache/bazel/_bazel_root/346fc8b061ac3bdcc6b91de97c708483/external/tsl/third_party/repo.bzl", line 47, column 54, in _get_link_dict
link_dict[ctx.path("BUILD.bazel")] = ctx.path(Label(build_file))
Error in path: Unable to load package for @tsl//tensorflow/third_party/mkl_dnn:mkldnn_acl.BUILD: BUILD file not found in directory 'tensorflow/third_party/mkl_dnn' of external repository @tsl. Add a BUILD file to a directory to mark it as a package.
ERROR: /root/workspace/pytorch/xla/WORKSPACE:76:15: fetching _tf_http_archive rule //external:mkl_dnn_acl_compatible: Traceback (most recent call last):
File "/root/.cache/bazel/_bazel_root/346fc8b061ac3bdcc6b91de97c708483/external/tsl/third_party/repo.bzl", line 55, column 31, in _tf_http_archive_impl
link_dict = _get_link_dict(ctx, ctx.attr.link_files, ctx.attr.build_file)
File "/root/.cache/bazel/_bazel_root/346fc8b061ac3bdcc6b91de97c708483/external/tsl/third_party/repo.bzl", line 47, column 54, in _get_link_dict
link_dict[ctx.path("BUILD.bazel")] = ctx.path(Label(build_file))
Error in path: Unable to load package for @tsl//tensorflow/third_party/mkl_dnn:mkldnn_acl.BUILD: BUILD file not found in directory 'tensorflow/third_party/mkl_dnn' of external repository @tsl. Add a BUILD file to a directory to mark it as a package.
ERROR: /root/.cache/bazel/_bazel_root/346fc8b061ac3bdcc6b91de97c708483/external/xla/xla/service/cpu/BUILD:1008:11: no such package '@mkl_dnn_acl_compatible//': Unable to load package for @tsl//tensorflow/third_party/mkl_dnn:mkldnn_acl.BUILD: BUILD file not found in directory 'tensorflow/third_party/mkl_dnn' of external repository @tsl. Add a BUILD file to a directory to mark it as a package. and referenced by '@xla//xla/service/cpu:runtime_matmul_mkl'
ERROR: /root/.cache/bazel/_bazel_root/346fc8b061ac3bdcc6b91de97c708483/external/xla/xla/service/cpu/BUILD:944:11: no such package '@mkl_dnn_acl_compatible//': Unable to load package for @tsl//tensorflow/third_party/mkl_dnn:mkldnn_acl.BUILD: BUILD file not found in directory 'tensorflow/third_party/mkl_dnn' of external repository @tsl. Add a BUILD file to a directory to mark it as a package. and referenced by '@xla//xla/service/cpu:runtime_con
|
https://github.com/pytorch/xla/issues/5525
|
open
|
[
"question",
"build"
] | 2023-08-30T21:27:58Z
| 2025-04-30T12:34:57Z
| null |
apivovarov
|
huggingface/chat-ui
| 423
|
AI response appears without user message, then both appear after refresh.
|
I was experimenting with my own back-end and was wanting to get a feel for the interface. Here is what my code looks like:
```py
import json
import random
from fastapi import FastAPI, Request
from fastapi.responses import Response, StreamingResponse
app = FastAPI()
async def yielder():
yield "data:" + json.dumps(
{
"details": {
"finish_reason": "length",
"generated_tokens": 1,
"seed": None,
},
"generated_text": "what is happening",
"token": {"id": random.randrange(0, 2**32), "logprob": -0.34, "special": False, "text": "it's alive!"},
},separators=(',', ':')
) + "\n\n\n"
@app.post("/generate")
@app.post("/")
async def generate(request: Request):
reqj = await request.json()
print(reqj)
return StreamingResponse(
yielder(),
media_type="text/event-stream",
headers={"Content-Type": "text/event-stream"},
)
```
Upon sending a message, "hi", I get this:

After refreshing the page, everything is rendered properly:

What's going on?
Here is what I used as a reference, which was recommended to me on the HF Discord: [link](https://github.com/gururise/openai_text_generation_inference_server/blob/main/server.py)
Thanks in advance.
|
https://github.com/huggingface/chat-ui/issues/423
|
closed
|
[] | 2023-08-30T19:04:14Z
| 2023-09-13T19:44:23Z
| 5
|
konst-aa
|
huggingface/datasets
| 6,195
|
Force to reuse cache at given path
|
### Describe the bug
I have run the official example of MLM like:
```bash
python run_mlm.py \
--model_name_or_path roberta-base \
--dataset_name togethercomputer/RedPajama-Data-1T \
--dataset_config_name arxiv \
--per_device_train_batch_size 10 \
--preprocessing_num_workers 20 \
--validation_split_percentage 0 \
--cache_dir /project/huggingface_cache/datasets \
--line_by_line \
--do_train \
--pad_to_max_length \
--output_dir /project/huggingface_cache/test-mlm
```
it successfully runs and at my cache folder has `cache-1982fea76aa54a13_00001_of_00020.arrow`..... `cache-1982fea76aa54a13_00020_of_00020.arrow ` as tokenization cache of `map` method. And the cache works fine every time I run the command above.
However, when I switched to jupyter notebook (since I do not want to load datasets every time when I changed other parameters not related to the dataloading). It is not recognizing the cache files and starts to re-run the entire tokenization process.
I changed my code to
```python
tokenized_datasets = raw_datasets["train"].map(
tokenize_function,
batched=True,
num_proc=data_args.preprocessing_num_workers,
remove_columns=[text_column_name],
load_from_cache_file=True,
desc="Running tokenizer on dataset line_by_line",
# cache_file_names= {"train": "cache-1982fea76aa54a13.arrow"}
cache_file_name="cache-1982fea76aa54a13.arrow",
new_fingerprint="1982fea76aa54a13"
)
```
it still does not recognize the previously cached files and trying to re-run the tokenization process.
### Steps to reproduce the bug
use jupyter notebook for dataset map function.
### Expected behavior
the map function accepts the given cache_file_name and new_fingerprint then load the previously cached files.
### Environment info
- `datasets` version: 2.14.4.dev0
- Platform: Linux-3.10.0-1160.59.1.el7.x86_64-x86_64-with-glibc2.10
- Python version: 3.8.8
- Huggingface_hub version: 0.16.4
- PyArrow version: 12.0.1
- Pandas version: 2.0.3
|
https://github.com/huggingface/datasets/issues/6195
|
closed
|
[] | 2023-08-30T18:44:54Z
| 2023-11-03T10:14:21Z
| 2
|
Luosuu
|
huggingface/trl
| 713
|
How to use custom evaluate function with multi-gpu deepspeed
|
I am trying to use `deepspeed` multi-gpu training with `SFTTrainer` for a hh-rlhf. My modified trainer looks something like this
```python
class SFTCustomEvalTrainer(SFTTrainer):
def evaluate(
self,
eval_dataset = None,
ignore_keys = None,
metric_key_prefix: str = "eval",
):
breakpoint()
.... custom eval code
```
However, I only want to run one instance of evaluate on the 0th GPU. When using `--nproc_per_node 2`, I get two processes entering the breakpoint in customized `evaluate` function. How can I restrict deepspeed to only use one GPU for evaluation and multi-gpu for training?
|
https://github.com/huggingface/trl/issues/713
|
closed
|
[] | 2023-08-30T17:33:40Z
| 2023-11-10T15:05:23Z
| null |
abaheti95
|
huggingface/optimum
| 1,323
|
Optimisation and Quantisation for Translation models / tasks
|
### Feature request
Currently, the opimisation and quantisation functions look for mode.onnx in a folder, and will perform opt and quant on those files. When exporting a translation targeted ONNX, multiple files for encoding and decoding, and these can't be optimised or quantised.
I've tried a hacky approach to change names of each of these files and then applying opt and quant, and this fails. I suspect it's more than just namings.
Is it possible to optimise and quant translation ONNX files in future?
### Motivation
I would like to get smaller more efficient translation models
### Your contribution
Nothing really that I can contribute to building the solution, as I don't have that level of experience and understanding.
|
https://github.com/huggingface/optimum/issues/1323
|
closed
|
[] | 2023-08-30T06:36:17Z
| 2023-09-29T00:47:39Z
| 2
|
gidzr
|
huggingface/datasets
| 6,193
|
Dataset loading script method does not work with .pyc file
|
### Describe the bug
The huggingface dataset library specifically looks for ‘.py’ file while loading the dataset using loading script approach and it does not work with ‘.pyc’ file.
While deploying in production, it becomes an issue when we are restricted to use only .pyc files. Is there any work around for this ?
### Steps to reproduce the bug
1. Create a dataset loading script to read the custom data.
2. compile the code to make sure that .pyc file is created
3. Delete the loading script and re-run the code. Usually, python should make use of complied .pyc files. However, in this case, the dataset library errors out with the message that it's unable to find the data loader loading script.
### Expected behavior
The code should make use of .pyc file and run without any error.
### Environment info
NA
|
https://github.com/huggingface/datasets/issues/6193
|
open
|
[] | 2023-08-29T19:35:06Z
| 2023-08-31T19:47:29Z
| 3
|
riteshkumarumassedu
|
huggingface/transformers.js
| 270
|
[Question] How to stop warning log
|
I am using NodeJS to serve a translation model.
There are so many warning log when translation processing. How to stop this?
`2023-08-29 23:04:32.061 node[3167:31841] 2023-08-29 23:04:32.061977 [W:onnxruntime:, graph.cc:3490 CleanUnusedInitializersAndNodeArgs] Removing initializer '/model/decoder/layers.2/encoder_attn_layer_norm/Constant_output_0'. It is not used by any node and should be removed from the model.
2023-08-29 23:04:32.061 node[3167:31841] 2023-08-29 23:04:32.061987 [W:onnxruntime:, graph.cc:3490 CleanUnusedInitializersAndNodeArgs] Removing initializer '/model/decoder/layers.0/encoder_attn_layer_norm/Constant_output_0'. It is not used by any node and should be removed from the model.
2023-08-29 23:04:32.062 node[3167:31841] 2023-08-29 23:04:32.061997 [W:onnxruntime:, graph.cc:3490 CleanUnusedInitializersAndNodeArgs] Removing initializer '/model/decoder/layers.4/self_attn_layer_norm/Constant_1_output_0'. It is not used by any node and should be removed from the model.`
|
https://github.com/huggingface/transformers.js/issues/270
|
open
|
[
"question"
] | 2023-08-29T16:08:41Z
| 2025-08-02T15:48:45Z
| null |
tuannguyen90
|
huggingface/chat-ui
| 420
|
Error: ENOSPC: System limit for number of file watchers reached
|
Error: ENOSPC: System limit for number of file watchers reached, watch '/home/alvyn/chat-ui/vite.config.ts'
at FSWatcher.<computed> (node:internal/fs/watchers:247:19)
at Object.watch (node:fs:2418:34)
at createFsWatchInstance (file:///home/alvyn/chat-ui/node_modules/vite/dist/node/chunks/dep-e8f070e8.js:50470:17)
at setFsWatchListener (file:///home/alvyn/chat-ui/node_modules/vite/dist/node/chunks/dep-e8f070e8.js:50517:15)
at NodeFsHandler._watchWithNodeFs (file:///home/alvyn/chat-ui/node_modules/vite/dist/node/chunks/dep-e8f070e8.js:50672:14)
at NodeFsHandler._handleFile (file:///home/alvyn/chat-ui/node_modules/vite/dist/node/chunks/dep-e8f070e8.js:50736:23)
at NodeFsHandler._addToNodeFs (file:///home/alvyn/chat-ui/node_modules/vite/dist/node/chunks/dep-e8f070e8.js:50978:21)
at async file:///home/alvyn/chat-ui/node_modules/vite/dist/node/chunks/dep-e8f070e8.js:51973:21
at async Promise.all (index 1)
Emitted 'error' event on FSWatcher instance at:
at FSWatcher._handleError (file:///home/alvyn/chat-ui/node_modules/vite/dist/node/chunks/dep-e8f070e8.js:52169:10)
at NodeFsHandler._addToNodeFs (file:///home/alvyn/chat-ui/node_modules/vite/dist/node/chunks/dep-e8f070e8.js:50986:18)
at async file:///home/alvyn/chat-ui/node_modules/vite/dist/node/chunks/dep-e8f070e8.js:51973:21
at async Promise.all (index 1) {
errno: -28,
syscall: 'watch',
code: 'ENOSPC',
path: '/home/alvyn/chat-ui/vite.config.ts',
filename: '/home/alvyn/chat-ui/vite.config.ts'
}
|
https://github.com/huggingface/chat-ui/issues/420
|
closed
|
[
"support"
] | 2023-08-29T14:54:49Z
| 2023-09-20T15:11:26Z
| 2
|
alvynabranches
|
huggingface/transformers.js
| 268
|
[Question] Chunks from transcription always empty text
|
This example works fine:

But ATM I am sending Float32 to the worker here (i also confirm the audio is valid by playing it back)
https://github.com/quantuminformation/coherency/blob/main/components/audio-recorder.js#L104
But after transcribing here:
https://github.com/quantuminformation/coherency/blob/main/worker.js#L140
my chunks only contain `""`


any ideas where my setup is going wrong?
|
https://github.com/huggingface/transformers.js/issues/268
|
open
|
[
"question"
] | 2023-08-29T13:49:00Z
| 2023-11-04T19:48:30Z
| null |
quantuminformation
|
huggingface/diffusers
| 4,831
|
How to preview the image during generation,any demo for gradio?
|
How to preview the image during generation,any demo for gradio?
|
https://github.com/huggingface/diffusers/issues/4831
|
closed
|
[] | 2023-08-29T13:32:07Z
| 2023-08-30T15:31:31Z
| null |
wodsoe
|
huggingface/transformers.js
| 267
|
[Question] multilingual-e5-* models don't work with pipeline
|
I just noticed that the `Xenova/multilingual-e5-*` model family doesn't work in the transformers.js pipeline for feature-extraction with your (@xenova) onnx versions on HF.
My code throws an error.
```Javascript
import { pipeline } from 'https://cdn.jsdelivr.net/npm/@xenova/transformers@2.5.4';
async function allocatePipeline() {
let pipe = await pipeline("feature-extraction", "Xenova/multilingual-e5-small");
let out = await pipe("I love transformers", { pooling: 'mean', normalize: false });
document.getElementById("output").innerHTML = out.data;
}
allocatePipeline();
```
Live example [here](https://geo.rocks/minimal-transformersjs-example-gte).
```
Uncaught (in promise) Error: An error occurred during model execution: "Missing the following inputs: token_type_ids.
at transformers@2.5.4:70:5612
at y (transformers@2.5.4:70:5971)
at M (transformers@2.5.4:70:8450)
at transformers@2.5.4:70:10792
at Function.forward (transformers@2.5.4:70:10799)
at Function._call (transformers@2.5.4:70:10675)
at Function.e [as model] (transformers@2.5.4:88:508)
at Function._call (transformers@2.5.4:73:1424)
at Function._call (transformers@2.5.4:73:6152)
at e (transformers@2.5.4:88:508)
```
However, HF user Supabase converted the models differently so that they are actually usable with the pipeline, e.g. [gte-small](https://huggingface.co/Supabase/gte-small#javascript). I noticed that Supabase added the vocab.txt file - is it possible that this or other files are missing in your versions or is there a more complex reason for this?
I'm pretty interested in the gte family as they are the most performant small models currently available (according to the MTEB leaderboard).
|
https://github.com/huggingface/transformers.js/issues/267
|
closed
|
[
"question"
] | 2023-08-29T12:39:26Z
| 2023-08-30T12:05:02Z
| null |
do-me
|
pytorch/xla
| 5,510
|
Kaggle Pytorch/XLA notebooks. How to import torch_xla?
|
I tried to use Kaggle [Pytorch/XLA notebooks](https://www.kaggle.com/code/aivovarov/pytorch-xla-2-0-on-kaggle/edit) with "Pin to original env" and "Always use the latest env" (in notebook options).
- pin to original env (2023-04-04_ uses python 3.7 , pytorch 1.13.0-cpu
- the latest env uses python 3.10, pytorch 2.0.0-cpu
Both envs do not have torch_xla package .
I tried to download [torch_xla-nightly wheel](https://storage.googleapis.com/pytorch-xla-releases/wheels/tpuvm/torch_xla-nightly-cp310-cp310-linux_x86_64.whl) but got error `wget: unable to resolve host address ‘storage.googleapis.com’`
Do we have any proven solution on how to use Pytorch/XLA with Kaggle?
|
https://github.com/pytorch/xla/issues/5510
|
open
|
[
"question"
] | 2023-08-28T20:15:19Z
| 2025-04-29T13:52:29Z
| null |
apivovarov
|
huggingface/transformers
| 25,803
|
[Model] How to evaluate Idefics Model's ability with in context examples?
|
Hi the recent release of Idefics-9/80B-Instruct model is superbly promising!
We would like to evaluate them on a customized benchmarks with in context examples. May I ask how should I arrange the prompt template, especially for `instruct` version?
We had some problems previously when evaluating the model on single images, the model will ramble and wont stop, but managed to resolve them somehow.
For single image we use the template to evaluate instruct version model.
```
User:<fake_token_around_image><image><fake_token_around_image>{prompt} Assistant:
```
Would it be perfectly correct (matching your training template?) or do you have better recommendation. Sorry we have a customized pipeline so it's not easy to adopt your designed `IdeficsProcessor`. 😭
Also we migrate the code on `image_attention_mask` with
```
# supporting idefics processing
def get_formatted_prompt(prompt: str="", in_context_prompts: list = []) -> str:
# prompts = [
# "User:",
# "https://hips.hearstapps.com/hmg-prod/images/cute-photos-of-cats-in-grass-1593184777.jpg",
# "Describe this image.\nAssistant: An image of two kittens in grass.\n",
# "User:",
# "http://images.cocodataset.org/train2017/000000190081.jpg",
# "Describe this image.\nAssistant:",
# ]
# prompts = f"User:<fake_token_around_image><image><fake_token_around_image>{prompt} Assistant:<answer>"
prompts = f"User:<fake_token_around_image><image><fake_token_around_image>{prompt} Assistant:"
return prompts
def get_image_attention_mask(output_input_ids, max_num_images, tokenizer, include_image=True):
# image_attention_mask, _ = image_attention_mask_for_packed_input_ids(output_input_ids, tokenizer)
# image_attention_mask = incremental_to_binary_attention_mask(image_attention_mask, num_classes=max_num_images)
if include_image:
image_attention_mask, _ = image_attention_mask_for_packed_input_ids(output_input_ids, tokenizer)
image_attention_mask = incremental_to_binary_attention_mask(
image_attention_mask, num_classes=max_num_images
)
else:
# in full language mode we set the image mask to all-0s
image_attention_mask = torch.zeros(
output_input_ids.shape[0], output_input_ids.shape[1], 1, dtype=torch.bool
)
return image_attention_mask
lang_x = self.tokenizer(
[
get_formatted_prompt(question, []),
],
return_tensors="pt",
)
image_attention_mask = get_image_attention_mask(lang_x['input_ids'], 1, self.tokenizer)
```
I have read all related blogs and docs but still got confused about the usage of `<end_of_utterance>`. Is it used to break the in context examples with query example?
My guess is
```
User:<fake_token_around_image><image><fake_token_around_image>{in_context_prompt} Assistant: {in_context_answer} <end_of_utterance> User:<fake_token_around_image><image><fake_token_around_image>{prompt} Assistant:
```
Besides, very curious that the model would generate the normal `<end_of_utterance>` at the last of sentence instead of normal llama's `<|endofchunk|>`?
|
https://github.com/huggingface/transformers/issues/25803
|
closed
|
[] | 2023-08-28T19:39:02Z
| 2023-10-11T08:06:48Z
| null |
Luodian
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.