| repo | number | title | body | url | state | labels | created_at | updated_at | comments | user |
|---|---|---|---|---|---|---|---|---|---|---|
huggingface/chat-ui
| 417
|
CodeLlama Instruct Configuration
|
Hello Guys,
Could you guide me in the right direction to get the configuration of the Code Llama Instruct model right?
I have this config so far:
```
{
  "name": "Code Llama",
  "endpoints": [{"url": "http://127.0.0.1:8080"}],
  "description": "Programming Assistant",
  "userMessageToken": "[INST]",
  "assistantMessageToken": "[/INST]",
  "parameters": {
    "temperature": 0.9,
    "top_p": 0.95,
    "repetition_penalty": 1.2,
    "top_k": 50,
    "truncate": 1000,
    "max_new_tokens": 1048
  }
}
```
The model starts with the "right" output, but then it produces garbage.
I am running the TGI backend.
Thx!
|
https://github.com/huggingface/chat-ui/issues/417
|
open
|
[
"support",
"models"
] | 2023-08-28T13:42:09Z
| 2023-09-13T18:17:50Z
| 9
|
schauppi
|
huggingface/transformers.js
| 265
|
Unexpected token
|
I added this code to my React project.
```
import { pipeline } from "@xenova/transformers";

async function sentimentAnalysis() {
  // Allocate a pipeline for sentiment-analysis
  let pipe = await pipeline("sentiment-analysis");
  let out = await pipe("I love transformers!");
  console.log(out);
}

sentimentAnalysis();
```
I am surprised the docs don't tell me to download a model, so I think this code will auto-download it... anyway I get this issue...
./node_modules/@xenova/transformers/src/env.js 38:84
Module parse failed: Unexpected token (38:84)
File was processed with these loaders:
* ./node_modules/babel-loader/lib/index.js
You may need an additional loader to handle the result of these loaders.
|
| var RUNNING_LOCALLY = FS_AVAILABLE && PATH_AVAILABLE;
> var __dirname = RUNNING_LOCALLY ? path.dirname(path.dirname(url.fileURLToPath(import.meta.url))) : './';
|
| // Only used for environments with access to file system
Seems like I need access to the filesystem... but that can't be right because this runs in the browser ... ?
|
https://github.com/huggingface/transformers.js/issues/265
|
closed
|
[
"question"
] | 2023-08-28T13:34:42Z
| 2023-08-28T16:00:10Z
| null |
patrickinminneapolis
|
huggingface/diffusers
| 4,814
|
How to add more weight to the text prompt in ControlNet?
|
Hi,
I want to know if there is a quick way of adding more weight to the text prompt in ControlNet during inference.
If so, which parameter needs to be changed?
Thanks,
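A minimal sketch of the two knobs I would try first; the model IDs and values below are only illustrative placeholders, and whether they fully answer the question depends on the pipeline version:
```python
import torch
from PIL import Image
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel

controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

control_image = Image.new("RGB", (512, 512))  # placeholder; use a real Canny/pose map here

image = pipe(
    prompt="a red sports car on a mountain road",
    image=control_image,
    guidance_scale=9.0,                 # higher -> the text prompt is weighted more strongly
    controlnet_conditioning_scale=0.7,  # lower -> the control image constrains the result less
).images[0]
```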
|
https://github.com/huggingface/diffusers/issues/4814
|
closed
|
[
"stale"
] | 2023-08-28T13:05:16Z
| 2023-10-30T15:07:45Z
| null |
miquel-espinosa
|
huggingface/autotrain-advanced
| 239
|
How to start without `pip install autotrain-advanced`
|
Dear,
Thanks for your work.
After installing through `pip`, running
**`autotrain llm --train --project_name my-llm --model luodian/llama-7b-hf --data_path . --use_peft --use_int4 --learning_rate 2e-4 --train_batch_size 12 --num_train_epochs 3 --trainer sft`**
fine-tunes on my own data.
If I want to run the project from source for fine-tuning, which function should I start from?
That is, which function do the `autotrain` command and its `llm` argument map to?
Best,
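One generic way to answer this locally, independent of this particular project, is to inspect the console-script entry point that `pip` registered; a small sketch (Python 3.10+ shown):
```python
from importlib.metadata import entry_points

# A pip-installed CLI such as `autotrain` is registered as a console-script entry point.
# Printing it reveals the "module:function" the command dispatches to, which is the
# function to start from when running from source.
for ep in entry_points(group="console_scripts"):  # on Python 3.8/3.9 use entry_points()["console_scripts"]
    if ep.name == "autotrain":
        print(ep.value)  # e.g. "some.module:main" -- the exact target depends on the installed version
```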
|
https://github.com/huggingface/autotrain-advanced/issues/239
|
closed
|
[] | 2023-08-28T10:02:37Z
| 2023-12-18T15:30:42Z
| null |
RedBlack888
|
huggingface/datasets
| 6,186
|
Feature request: add code example of multi-GPU processing
|
### Feature request
Would be great to add a code example of how to do multi-GPU processing with 🤗 Datasets in the documentation. cc @stevhliu
Currently the docs have a small [section](https://huggingface.co/docs/datasets/v2.3.2/en/process#map) on this saying "your big GPU call goes here"; however, it didn't work for me out of the box.
Let's say you have a PyTorch model that can do translation, and you have multiple GPUs. In that case, you'd like to duplicate the model on each GPU, each processing (translating) a chunk of the data in parallel.
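A minimal sketch of that setup, with the model moved to a GPU chosen from the worker's `rank` inside the mapped function; this exact pattern is my assumption rather than something the docs showed at the time:
```python
import torch
from datasets import load_dataset
from multiprocess import set_start_method
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

dataset = load_dataset("mlfoundations/datacomp_small")
tokenizer = AutoTokenizer.from_pretrained("facebook/nllb-200-distilled-600M")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/nllb-200-distilled-600M")

def translate_captions(batch, rank):
    # Each worker picks its own GPU from its rank instead of moving the model at module level.
    device = f"cuda:{rank % torch.cuda.device_count()}"
    model.to(device)
    inputs = tokenizer(batch["text"], padding=True, truncation=True, return_tensors="pt").to(device)
    translated = model.generate(
        **inputs, forced_bos_token_id=tokenizer.lang_code_to_id["eng_Latn"], max_length=30
    )
    batch["translated_text"] = tokenizer.batch_decode(translated, skip_special_tokens=True)
    return batch

if __name__ == "__main__":
    set_start_method("spawn")  # inside the main guard, to avoid "context has already been set"
    updated = dataset.map(translate_captions, with_rank=True, num_proc=2, batched=True, batch_size=256)
```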
Here's how I tried to do that:
```
from datasets import load_dataset
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from multiprocess import set_start_method
import torch
import os

dataset = load_dataset("mlfoundations/datacomp_small")

tokenizer = AutoTokenizer.from_pretrained("facebook/nllb-200-distilled-600M")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/nllb-200-distilled-600M")

# put model on each available GPU
# also, should I do it like this or use nn.DataParallel?
model.to("cuda:0")
model.to("cuda:1")

set_start_method("spawn")

def translate_captions(batch, rank):
    os.environ["CUDA_VISIBLE_DEVICES"] = str(rank % torch.cuda.device_count())
    texts = batch["text"]
    inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt").to(model.device)
    translated_tokens = model.generate(
        **inputs, forced_bos_token_id=tokenizer.lang_code_to_id["eng_Latn"], max_length=30
    )
    translated_texts = tokenizer.batch_decode(translated_tokens, skip_special_tokens=True)
    batch["translated_text"] = translated_texts
    return batch

updated_dataset = dataset.map(translate_captions, with_rank=True, num_proc=2, batched=True, batch_size=256)
```
I've personally tried running this script on a machine with 2 A100 GPUs.
## Error 1
Running the code snippet above from the terminal (python script.py) resulted in the following error:
```
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/multiprocess/spawn.py", line 116, in spawn_main
exitcode = _main(fd, parent_sentinel)
File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/multiprocess/spawn.py", line 125, in _main
prepare(preparation_data)
File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/multiprocess/spawn.py", line 236, in prepare
_fixup_main_from_path(data['init_main_from_path'])
File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/multiprocess/spawn.py", line 287, in _fixup_main_from_path
main_content = runpy.run_path(main_path,
File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/runpy.py", line 289, in run_path
return _run_module_code(code, init_globals, run_name,
File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/runpy.py", line 96, in _run_module_code
_run_code(code, mod_globals, init_globals,
File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/runpy.py", line 86, in _run_code
exec(code, run_globals)
File "/home/niels/python_projects/datacomp/datasets_multi_gpu.py", line 16, in <module>
set_start_method("spawn")
File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/multiprocess/context.py", line 247, in set_start_method
raise RuntimeError('context has already been set')
RuntimeError: context has already been set
```
## Error 2
Then, based on [this Stackoverflow answer](https://stackoverflow.com/a/71616344/7762882), I put the `set_start_method("spawn")` section in a try: catch block. This resulted in the following error:
```
File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/datasets/dataset_dict.py", line 817, in <dictcomp>
k: dataset.map(
File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 2926, in map
with Pool(nb_of_missing_shards, initargs=initargs, initializer=initializer) as pool:
File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/multiprocess/context.py", line 119, in Pool
return Pool(processes, initializer, initargs, maxtasksperchild,
File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/multiprocess/pool.py", line 215, in __init__
self._repopulate_pool()
File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/multiprocess/pool.py", line 306, in _repopulate_pool
return self._repopulate_pool_static(self._ctx, self.Process,
File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/multiprocess/pool.py", line 329, in _repopulate_pool_static
w.start()
File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/multiprocess/process.py", line 121, in start
self._popen = self._Popen(self)
File "/home/niels/anaconda3/envs/datacomp/l
|
https://github.com/huggingface/datasets/issues/6186
|
closed
|
[
"documentation",
"enhancement"
] | 2023-08-28T10:00:59Z
| 2024-10-07T09:39:51Z
| 18
|
NielsRogge
|
huggingface/autotrain-advanced
| 238
|
How to Train Consecutively Using Checkpoints
|
Hi, I've been using your project and it's been great.
I'm a complete beginner in the field of AI, so sorry for such a basic question.
Is there a way to train consecutively with checkpoints?
Thank you!
|
https://github.com/huggingface/autotrain-advanced/issues/238
|
closed
|
[] | 2023-08-28T08:31:30Z
| 2023-12-18T15:30:42Z
| null |
YOUNGASUNG
|
huggingface/transformers.js
| 264
|
[Question] TypeScript rewrite
|
Hi Joshua. I find your idea extremely exciting.
I am a frontend developer who has worked with TypeScript professionally for three years. Would you mind me doing a TypeScript rewrite, so this npm package can have a better DX? If I successfully transform the codebase into TypeScript and pass all the tests, would you mind merging it into main?
I just forked this repo. https://github.com/Lantianyou/transformers.js
|
https://github.com/huggingface/transformers.js/issues/264
|
open
|
[
"question"
] | 2023-08-28T08:29:06Z
| 2024-04-27T12:05:24Z
| null |
Lantianyou
|
huggingface/text-generation-inference
| 934
|
How to use a fine-tuned model in text-generation-inference
|
Hi team,
I fine-tuned the Llama 2 13B model and merged it using the `merge_and_upload()` functionality.
How can I use this merged model with text-generation-inference?
**Following command given an error**

**Error**

|
https://github.com/huggingface/text-generation-inference/issues/934
|
closed
|
[] | 2023-08-28T07:36:25Z
| 2023-08-28T08:53:28Z
| null |
chintanshrinath
|
huggingface/peft
| 869
|
How to correctly use Prefix Tuning?
|
### System Info
peft 0.5.0
transformers 4.32.0
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder
- [ ] My own task or dataset (give details below)
### Reproduction
```
model = AutoModelForSeq2SeqLM.from_pretrained('bigscience/T0pp', load_in_8bit=True)
model = prepare_model_for_int8_training(model)

config = PrefixTuningConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,
    num_virtual_tokens=100,
    token_dim=model.config.hidden_size,
    num_transformer_submodules=1,
    num_attention_heads=model.config.num_heads,
    num_layers=model.config.num_layers,
    encoder_hidden_size=1792,
)
model = get_peft_model(model, config)
```
### Expected behavior
I'm assuming `num_layers`, `num_attention_heads`, and `token_dim` need to match the base model. In the sample, `num_transformer_submodules` is 1, but an encoder-decoder model has two transformer stacks, right? Should this be 2?
When I run the code above I got
```
File "/python3.10/site-packages/transformers/models/t5/modeling_t5.py", line 551, in forward
position_bias = position_bias + mask # (batch_size, n_heads, seq_length, key_length)
RuntimeError: The size of tensor a (3) must match the size of tensor b (103) at non-singleton dimension 3
```
When I print out the shapes of `position_bias` and `mask`, `mask` has 100 more tokens than `position_bias`, seemingly on the decoder side. It also appears to be taking in the prefix embeddings.
|
https://github.com/huggingface/peft/issues/869
|
closed
|
[] | 2023-08-27T18:03:06Z
| 2024-11-05T09:49:01Z
| null |
Vincent-Li-9701
|
huggingface/transformers
| 25,783
|
How to re-tokenize the training set in each epoch?
|
I have a special tokenizer which tokenizes a sentence according to some probability distribution.
For example, 'I like green apple' -> '[I],[like],[green],[apple]' (30%) or '[I],[like],[green apple]' (70%).
Now, during training, I want the Trainer to re-tokenize the dataset in each epoch. How can I do that?
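One sketch of how I would approach this with 🤗 Datasets: apply the stochastic tokenizer lazily with `set_transform`, so every access (and therefore every epoch) re-samples the segmentation instead of reusing a cached one. `my_stochastic_tokenizer` below is a hypothetical stand-in for the custom tokenizer.
```python
from datasets import Dataset

raw = Dataset.from_dict({"text": ["I like green apple", "I like transformers"]})

def tokenize_on_the_fly(batch):
    # Hypothetical stand-in: returns freshly sampled input_ids on every call.
    return my_stochastic_tokenizer(batch["text"], truncation=True)

# set_transform applies the function lazily on every access, so the Trainer
# sees a new tokenization each time an example is fetched, i.e. each epoch.
raw.set_transform(tokenize_on_the_fly)
```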
|
https://github.com/huggingface/transformers/issues/25783
|
closed
|
[] | 2023-08-27T16:23:25Z
| 2023-09-01T13:01:43Z
| null |
tic-top
|
pytorch/rl
| 1,473
|
[Feature Request] How to create a compound actor?
|
## Motivation
I created an environment with a compound action space: a list of continuous values (robot joint angles) and a boolean value (suction gripper on or off).
In [the PPO tutorial](https://pytorch.org/rl/tutorials/coding_ppo.html) the policy_module is a ProbabilisticActor which takes "loc" and "scale" inputs. I want to make an actor which is a combination of this (for the joint angles) and something else that uses a Bernoulli distribution to generate boolean action values for the gripper.
It kind of looks like this may already be supported by using a TensorDictSequential, but it's not clear how that would work.
## Solution
I would like to see an example in the docs of a compound action space like this.
## Alternatives
Maybe there's another way where one actor is created for each type of action space? Then how to combine them for use with a DataCollector?
## Additional context
The environment is a robot arm manipulation scenario using box2d.
## Checklist
- [x] I have checked that there is no similar issue in the repo (**required**)
|
https://github.com/pytorch/rl/issues/1473
|
closed
|
[
"enhancement"
] | 2023-08-27T15:49:38Z
| 2023-11-03T17:54:54Z
| null |
hersh
|
huggingface/optimum
| 1,318
|
Is it possible to compile pipeline (with tokenizer) to ONNX Runtime?
|
### Feature request
Is it possible to compile the entire pipeline, tokenizer and transformer, to run with ONNX Runtime? My goal is to remove the `transformers` dependency entirely for runtime, to reduce serverless cold start.
### Motivation
I could not find any examples, and could not make this work, so I wonder if compiling tokenizer with ONNX is possible at all.
### Your contribution
I could try implementing this, or add an example to documentation if this is possible already.
|
https://github.com/huggingface/optimum/issues/1318
|
open
|
[
"feature-request",
"onnxruntime"
] | 2023-08-26T17:57:52Z
| 2023-08-28T07:58:13Z
| 1
|
j-adamczyk
|
huggingface/trl
| 695
|
Reward is getting lower and lower with each epoch, What can be the issue in training?
|
Hello,
I am trying to optimize a fine-tuned T5 model for a text generation task. At the moment, I am using the BLEU score (between two texts) as the reward function. Before optimization with PPO, the model produces an average BLEU score of 35%; with PPO, however, the reward keeps decreasing after each epoch. What am I doing wrong, or what should I look into? I am new to RL, and my understanding is that the goal of PPO is to improve the reward, or at least push it above the original BLEU score of 35% that the model achieved before PPO.
this is my code:
```
from transformers import AutoModelForSeq2SeqLM
# imports below added for completeness; tokenizer, valid_dataset, device and
# sentence_bleu (the author's BLEU scoring function) are defined elsewhere
import numpy as np
import torch
from trl import AutoModelForSeq2SeqLMWithValueHead, PPOConfig, PPOTrainer
from trl.core import LengthSampler

# loading the fine-tuned model
active_model = AutoModelForSeq2SeqLMWithValueHead.from_pretrained('small_gen_clean_prem/')
ref_model = AutoModelForSeq2SeqLMWithValueHead.from_pretrained('small_gen_clean_prem/')

batch_size = 200
config = PPOConfig(
    batch_size=batch_size,
    learning_rate=1.41e-5,
    mini_batch_size=16,
    gradient_accumulation_steps=1,  # if I set it to more than 1, I get an empty-tensors error
)
ppo_trainer = PPOTrainer(config, active_model, ref_model, tokenizer)

generation_kwargs = {
    "min_length": -1,
    "top_k": 0.0,
    "top_p": 1.0,
    "do_sample": True,
    "pad_token_id": tokenizer.eos_token_id,
}
output_min_length = 4
output_max_length = 512
output_length_sampler = LengthSampler(output_min_length, output_max_length)

score_all = []
for i in range(20):
    input_tensors = []
    output_tensors = []
    score_ = []
    for data in valid_dataset:
        query_txt = data['input']
        query_tensor = tokenizer.encode(query_txt, return_tensors="pt").to(device)
        input_tensors.append(query_tensor.squeeze(0))
        desired_txt = data['ground_truth']
        print('desired text\n:', desired_txt)
        response_tensor = ppo_trainer.generate([item for item in query_tensor], return_prompt=False, length_sampler=output_length_sampler, **generation_kwargs)
        response_txt = tokenizer.decode(response_tensor[0], skip_special_tokens=True, max_new_tokens=512)
        output_tensors.append(response_tensor[0].squeeze(0))
        score = sentence_bleu([response_txt.split(), desired_txt.split()])
        score_.append(score)
    reward = [torch.FloatTensor([score]) for score in score_]
    score_all.append(np.mean(score_))
    train_stats = ppo_trainer.step(input_tensors, output_tensors, reward)
```
In the graph attached, y-axis is average mean score in each epoch.
<img width="377" alt="scores_ppo" src="https://github.com/huggingface/trl/assets/25576435/a07c26d9-46a8-432e-bf07-60eaaa0aeedc">
|
https://github.com/huggingface/trl/issues/695
|
closed
|
[] | 2023-08-26T00:22:04Z
| 2023-11-01T15:06:14Z
| null |
sakinafatima
|
huggingface/dataset-viewer
| 1,733
|
Add API fuzzer to the tests?
|
Tools exist, see https://openapi.tools/
|
https://github.com/huggingface/dataset-viewer/issues/1733
|
closed
|
[
"question",
"tests"
] | 2023-08-25T21:44:10Z
| 2023-10-04T15:04:16Z
| null |
severo
|
huggingface/diffusers
| 4,778
|
[Discussion] How to allow for more dynamic prompt_embed scaling/weighting/fusion?
|
We have a couple of issues and requests for the community that ask for the possibility to **dynamically** change certain knobs of Stable Diffusion that are applied at **every denoising step**.
- 1. **Prompt fusion**, as stated [here](https://github.com/huggingface/diffusers/issues/4496). To implement prompt fusion in a general way, we need to give the user the possibility to define some kind of "prompt" scheduler where every denoising timestep can receive different `prompt_embeds` and `negative_prompt_embeds`.
=> A very obvious way to allow for this would be to accept a list of lists of prompts and a list of lists of `prompt_embeddings`.
- 2. **Dynamic prompt weighting**. A1111 and InvokeAI both have functionalities that allow to weight the prompt embeddings differently at each timestep. InvokeAI has this implemented in `compel` via a `conditioning_scheduler` see here: https://github.com/damian0815/compel/blob/d15e883bbbfae5b3fbd8d60065aa330c99a662b4/src/compel/compel.py#L93
Such a scheduler could for example allow the user to not just define a unique `prompt_embedding` condition (e.g. putting more word on a certain word), but also allowing to dynamically change that condition during the course of denoising.
This is also asked by SD.Next (cc @vladmandic).
=> Here we have a couple of options, the simplest is probably to just allow passing a list of `prompt_embeddings` assuming that the user just takes care of the prompt weighting themselves. We could then also nicely integrate this with `compel`.
- 3. **Dynamic `guidance_scale` / `cfg` weighting**. Many people have found that `cfg` scheduling works really well for `SDXL`. It's related to 2. as it's also a knob to tweak text embedding weights over the course of inference, but it's much more global, whereas 2. can be more condition-specific. This is also related to https://github.com/huggingface/diffusers/pull/4569#issuecomment-1678667625, which proposes dynamic scaling.
=> Here we could solve this by allowing the user to provide a list of `guidance_scales`. In addition, we could maybe introduce something like `guidance_scaling_type="static/dynamic"` to allow for #4569.
**Overall**:
=> It's not too difficult to make these features work, but it'll require some very good docs about `prompt_embeds` and `negative_prompt_embeds`. We also have to think about edge cases like SDXL, which has two text encoders, and about how this can be applied to other models such as Kandinsky and IF.
Curious to hear your thoughts here. I would also love to discuss a design proposal for how we can better support these things in a coherent, library-wide design @sayakpaul @williamberman @yiyixuxu @DN6
|
https://github.com/huggingface/diffusers/issues/4778
|
closed
|
[
"stale"
] | 2023-08-25T10:03:17Z
| 2023-11-09T21:42:39Z
| null |
patrickvonplaten
|
huggingface/transformers.js
| 260
|
[Question] CDN download for use in a worker
|
Is there a way to get this to work inside a worker:
```html
<script type="module">
import { pipeline } from 'https://cdn.jsdelivr.net/npm/@xenova/transformers@2.5.3';
</script>
```
I noticed you do this:
```js
import { pipeline, env } from "@xenova/transformers";
```
I'm trying to avoid any node modules for this project I am on
|
https://github.com/huggingface/transformers.js/issues/260
|
closed
|
[
"question"
] | 2023-08-24T18:24:51Z
| 2023-08-29T13:57:19Z
| null |
quantuminformation
|
huggingface/notebooks
| 428
|
How to load a fine-tuned IDEFICS model for inference?
|
Hi, I recently fine-tuned the IDEFICS model with PEFT, but I am not able to load the model back.
Is there any way to load the model with PEFT for inference?
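A minimal sketch of what I would try, assuming the adapter was trained on top of `HuggingFaceM4/idefics-9b` and saved to a local directory (both of which are assumptions here):
```python
import torch
from transformers import AutoProcessor, IdeficsForVisionText2Text
from peft import PeftModel

base_id = "HuggingFaceM4/idefics-9b"     # assumed base checkpoint used for fine-tuning
adapter_dir = "path/to/peft-adapter"     # assumed output directory of the PEFT training run

processor = AutoProcessor.from_pretrained(base_id)
base = IdeficsForVisionText2Text.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")

# Attach the trained adapter to the base model for inference.
model = PeftModel.from_pretrained(base, adapter_dir)
model.eval()
```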
|
https://github.com/huggingface/notebooks/issues/428
|
open
|
[] | 2023-08-24T13:39:22Z
| 2024-04-25T10:39:55Z
| null |
imrankh46
|
huggingface/peft
| 857
|
How to load a fine-tuned IDEFICS model with PEFT for inference?
|
### Feature request
Request for IDEFICS model.
### Motivation
I fine-tuned IDEFICS on a custom dataset, but when I load it, it shows an error.
### Your contribution
Add a class like AutoPeftModelforVisionTextToText(), to easily load the model.
|
https://github.com/huggingface/peft/issues/857
|
closed
|
[] | 2023-08-24T12:34:44Z
| 2023-09-01T15:46:50Z
| null |
imrankh46
|
huggingface/datasets
| 6,176
|
how to limit the size of memory mapped file?
|
### Describe the bug
Huggingface datasets use memory-mapped file to map large datasets in memory for fast access.
However, it seems that Hugging Face Datasets will occupy all available memory for memory-mapped files. This is troublesome because our cluster allocates only a small portion of memory to me (once the limit is exceeded, no more memory can be allocated), yet when the dataset checks the total memory, all of it is taken into account, which makes the dataset try to allocate more memory than allowed.
So is there a way to explicitly limit the size of the memory-mapped file?
### Steps to reproduce the bug
python
>>> from datasets import load_dataset
>>> dataset = load_dataset("c4", "en", streaming=True)
### Expected behavior
In a normal environment, this will not have any problem.
However, when the system allocates only a portion of the memory to the program, the dataset still checks the total memory of the machine, so all of it is taken into account and the dataset tries to allocate more memory than allowed.
### Environment info
linux cluster with SGE(Sun Grid Engine)
|
https://github.com/huggingface/datasets/issues/6176
|
open
|
[] | 2023-08-24T05:33:45Z
| 2023-10-11T06:00:10Z
| null |
williamium3000
|
huggingface/autotrain-advanced
| 225
|
How to run inference with the model
|
When I launch
**autotrain llm --train --project_name my-llm --model meta-llama/Llama-2-7b-hf --data_path . --use_peft --use_int4 --learning_rate 2e-4 --train_batch_size 2 --num_train_epochs 3 --trainer sft**
I have this output

**I have two questions.**
**1.** The output says that training is finished; however, I only see the log for 1 epoch. **Is there any way to see the 'training loss' for all 3 epochs?**
**2.** After training, I try to run inference with the Text Generation Inference HF application. However, I get an error because config.json is not in the model folder. The output model is shown below. **Why is this file not present? Should I do something more?**

|
https://github.com/huggingface/autotrain-advanced/issues/225
|
closed
|
[] | 2023-08-23T20:24:23Z
| 2023-12-18T15:30:40Z
| null |
amgomezdev
|
huggingface/autotrain-advanced
| 223
|
How to use captions with Dreambooth?
|
I'm trying to train an SDXL model with Dreambooth using captions for each image (I have found that this made quite a difference when training for style with the 1.5 model). How can I achieve that using autotrain? If I understand [this line](https://github.com/huggingface/autotrain-advanced/blob/main/src/autotrain/trainers/dreambooth/main.py#L290C13-L290C13) correctly, it will pick up the caption if it's in the file name, is that right? And if so, how does it play together with the specified prompt?
|
https://github.com/huggingface/autotrain-advanced/issues/223
|
closed
|
[] | 2023-08-23T15:32:16Z
| 2023-12-18T15:30:39Z
| null |
MaxGfeller
|
huggingface/trl
| 677
|
how to run reward_trainer.py
|
ValueError: Some specified arguments are not used by the HfArgumentParser: ['-f', '/Users/samittan/Library/Jupyter/runtime/kernel-32045810-5e16-48f4-8d44-c7a7f975f8a4.json']
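This error usually appears when a script's `HfArgumentParser` runs inside Jupyter and picks up the notebook's injected `-f /path/to/kernel.json` argument. A sketch of one workaround when running in a notebook (the dataclass below is a stand-in for the script's real arguments):
```python
from dataclasses import dataclass, field
from transformers import HfArgumentParser

@dataclass
class ScriptArguments:
    model_name: str = field(default="gpt2")  # stand-in for reward_trainer.py's actual arguments

parser = HfArgumentParser(ScriptArguments)
# Passing an explicit argv makes the parser ignore Jupyter's injected "-f ..." argument.
(script_args,) = parser.parse_args_into_dataclasses(args=["--model_name", "gpt2"])
print(script_args)
```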
|
https://github.com/huggingface/trl/issues/677
|
closed
|
[] | 2023-08-23T09:39:52Z
| 2023-11-02T15:05:32Z
| null |
samitTAN
|
huggingface/chat-ui
| 412
|
preprompt not being injected for Llama 2
|
1. When I alter the preprompt for a Llama 2 type model, it appears to have no impact. It's as though the preprompt is not there. Sample config for .env.local:
```
MODELS=`[
  {
    "name": "Trelis/Llama-2-7b-chat-hf-function-calling",
    "datasetName": "Trelis/function_calling_extended",
    "description": "function calling Llama-7B-chat",
    "websiteUrl": "https://research.Trelis.com",
    "preprompt": "Respond in French to all questions",
    "userMessageToken": "[INST]",
    "assistantMessageToken": "[/INST]",
    "parameters": {
      "temperature": 0.01,
      "top_p": 0.95,
      "repetition_penalty": 1.2,
      "top_k": 50,
      "truncate": 1000,
      "max_new_tokens": 1024
    },
    "endpoints": [{
      "url": "http://127.0.0.1:8080"
    }]
  }
]`
```
Other notes:
- The same model responds to changes in system message when run in colab.
- Here, with chat-ui, I'm running with a tgi server.
- Llama-chat has weird templating whereby the first system and user have to be wrapped in INST. The best that can be done with the default templating is just to separately wrap the system message and each user input in [INST] and [/INST]. That said, I don't think that deviation should be significant enough to mean that the preprompt is ignored... but maybe it is OR maybe I'm making some other mistake?
|
https://github.com/huggingface/chat-ui/issues/412
|
closed
|
[
"support",
"models"
] | 2023-08-23T09:15:24Z
| 2023-09-18T12:48:07Z
| 7
|
RonanKMcGovern
|
huggingface/unity-api
| 15
|
How to download the model locally and call the API
|
Because my internet connection is not very good, I would like to download the model to my local machine and use the Hugging Face API to call it. How can I achieve this?
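For the downloading half of the question, one option on the Python side is the `huggingface_hub` client, which fetches a full model snapshot to a local folder (whether the Unity API can then be pointed at that local path is a separate question, and the model ID below is just an example):
```python
from huggingface_hub import snapshot_download

# Downloads all files of the repository into a local directory for offline use.
local_dir = snapshot_download(repo_id="distilbert-base-uncased", local_dir="./models/distilbert-base-uncased")
print(local_dir)
```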
|
https://github.com/huggingface/unity-api/issues/15
|
closed
|
[] | 2023-08-23T08:08:40Z
| 2023-11-08T10:26:34Z
| null |
haldon98
|
huggingface/evaluate
| 485
|
How to use `SubTask` with metrics that require valid `config_name`
|
## Issue
Currently there does not seem to be a way to define the `config_name` of a metric for a `SubTask` inside an `evaluate.EvaluationSuite`.
## Version
evaluate version: 0.4.0
transformers version 4.32.0
Python version Python 3.10.6
## Example
For example, consider the following `EvaluationSuite`, which tries to run the "glue" metric, which requires a `config_name` when calling `evaluate.load`:
Code in `suite.py`:
```python
import evaluate
from evaluate.evaluation_suite import SubTask

class Suite(evaluate.EvaluationSuite):
    def __init__(self, name):
        super().__init__(name)
        self.preprocessor = lambda x: {"text": x["text"].lower()}
        self.suite = [
            SubTask(
                task_type="text-classification",
                data="glue",
                subset="sst2",
                split="validation[:10]",
                args_for_task={
                    "metric": "glue",
                    "input_column": "sentence",
                    "label_column": "label",
                    "label_mapping": {
                        "LABEL_0": 0.0,
                        "LABEL_1": 1.0
                    }
                }
            ),
        ]
```
Now consider running this `EvaluationSuite` with the following:
```python
from evaluate import EvaluationSuite
suite = EvaluationSuite.load('suite.py')
results = suite.run("gpt2")
```
Running this code results in the following error:
```
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
Cell In[60], line 2
1 suite = EvaluationSuite.load('suite.py')
----> 2 results = suite.run("gpt2")
File /localdisk/twilbers/src/notebooks/poc/glue/.venv/lib/python3.10/site-packages/evaluate/evaluation_suite/__init__.py:124, in EvaluationSuite.run(self, model_or_pipeline)
122 args_for_task["subset"] = task.subset
123 args_for_task["split"] = task.split
--> 124 results = task_evaluator.compute(**args_for_task)
126 results["task_name"] = task_name + "/" + task.subset if task.subset else task_name
127 results["data_preprocessor"] = str(task.data_preprocessor) if task.data_preprocessor is not None else None
File /localdisk/twilbers/src/notebooks/poc/glue/.venv/lib/python3.10/site-packages/evaluate/evaluator/text_classification.py:136, in TextClassificationEvaluator.compute(self, model_or_pipeline, data, subset, split, metric, tokenizer, feature_extractor, strategy, confidence_level, n_resamples, device, random_state, input_column, second_input_column, label_column, label_mapping)
127 metric_inputs, pipe_inputs = self.prepare_data(
128 data=data, input_column=input_column, second_input_column=second_input_column, label_column=label_column
129 )
130 pipe = self.prepare_pipeline(
131 model_or_pipeline=model_or_pipeline,
132 tokenizer=tokenizer,
133 feature_extractor=feature_extractor,
134 device=device,
135 )
--> 136 metric = self.prepare_metric(metric)
138 # Compute predictions
139 predictions, perf_results = self.call_pipeline(pipe, pipe_inputs)
File /localdisk/twilbers/src/notebooks/poc/glue/.venv/lib/python3.10/site-packages/evaluate/evaluator/base.py:447, in Evaluator.prepare_metric(self, metric)
445 metric = load(self.default_metric_name)
446 elif isinstance(metric, str):
--> 447 metric = load(metric)
449 return metric
File /localdisk/twilbers/src/notebooks/poc/glue/.venv/lib/python3.10/site-packages/evaluate/loading.py:735, in load(path, config_name, module_type, process_id, num_process, cache_dir, experiment_id, keep_in_memory, download_config, download_mode, revision, **init_kwargs)
731 evaluation_module = evaluation_module_factory(
732 path, module_type=module_type, revision=revision, download_config=download_config, download_mode=download_mode
733 )
734 evaluation_cls = import_main_class(evaluation_module.module_path)
--> 735 evaluation_instance = evaluation_cls(
736 config_name=config_name,
737 process_id=process_id,
738 num_process=num_process,
739 cache_dir=cache_dir,
740 keep_in_memory=keep_in_memory,
741 experiment_id=experiment_id,
742 hash=evaluation_module.hash,
743 **init_kwargs,
744 )
746 if module_type and module_type != evaluation_instance.module_type:
747 raise TypeError(
748 f"No module of module type '{module_type}' not found for '{path}' locally, or on the Hugging Face Hub. Found module of module type '{evaluation_instance.module_type}' instead."
749 )
File /localdisk/twilbers/src/notebooks/poc/glue/.venv/lib/python3.10/site-packages/evaluate/module.py:182, in EvaluationModule.__init__(self, config_name, keep_in_memory, cache_dir, num_process, process_id, seed, experiment_id, hash, max_conc
|
https://github.com/huggingface/evaluate/issues/485
|
open
|
[] | 2023-08-22T23:15:43Z
| 2023-08-23T16:38:18Z
| null |
tybrs
|
huggingface/diffusers
| 4,716
|
How to handle SDXL long prompt
|
### Describe the bug
I am unable to use prompt embeddings to handle a prompt that is longer than 77 tokens.
### Reproduction
```python
import itertools
import os.path
import random
import string
import time
import typing as typ
import torch
from diffusers import StableDiffusionXLPipeline
from tqdm import tqdm
import bb
from web_sdxl import seed_everything
seed_everything(42)
def generate_random_string(length):
    letters = string.ascii_letters
    result = ''.join(random.choice(letters) for _ in range(length))
    return result

def get_pipeline_embeds(pipeline, prompt, negative_prompt, device):
    """ Get pipeline embeds for prompts bigger than the maxlength of the pipe
    :param pipeline:
    :param prompt:
    :param negative_prompt:
    :param device:
    :return:
    """
    max_length = pipeline.tokenizer.model_max_length
    # simple way to determine length of tokens
    count_prompt = len(prompt.split(" "))
    count_negative_prompt = len(negative_prompt.split(" "))
    # create the tensor based on which prompt is longer
    if count_prompt >= count_negative_prompt:
        input_ids = pipeline.tokenizer(prompt, return_tensors="pt", truncation=False).input_ids.to(device)
        shape_max_length = input_ids.shape[-1]
        negative_ids = pipeline.tokenizer(negative_prompt, truncation=False, padding="max_length",
                                          max_length=shape_max_length, return_tensors="pt").input_ids.to(device)
    else:
        negative_ids = pipeline.tokenizer(negative_prompt, return_tensors="pt", truncation=False).input_ids.to(device)
        shape_max_length = negative_ids.shape[-1]
        input_ids = pipeline.tokenizer(prompt, return_tensors="pt", truncation=False, padding="max_length",
                                       max_length=shape_max_length).input_ids.to(device)
    concat_embeds = []
    neg_embeds = []
    for i in range(0, shape_max_length, max_length):
        concat_embeds.append(pipeline.text_encoder(input_ids[:, i: i + max_length])[0])
        neg_embeds.append(pipeline.text_encoder(negative_ids[:, i: i + max_length])[0])
    return torch.cat(concat_embeds, dim=1), torch.cat(neg_embeds, dim=1)
model_path = "fine_tuned_models/sdxl-sarit"
device = "mps" if torch.backends.mps.is_available() else "cpu"
out_dir: str = "gluta40"
age_prompts: typ.List[str] = [
"young asian girl",
"a photograph of an angel with sly expression, wearing a see-thru short roman style dress, beautiful asian mixed european woman face, beautiful eyes, black hair, looking down, hyper realistic and detailed, 16k",
]
hand_prompts: typ.List[str] = [
"left hand holding a gluta40 jar one hand, right hand is behind her back",
"right hand holding a gluta40 jar one hand, left hand is behind her back",
]
face_angle_prompts: typ.List[str] = [
"straight face",
]
hair_prompts: typ.List[str] = [
"black long tied hair",
"black long hair",
]
background_prompts: typ.List[str] = [
"no background, hold both hands, bad hands",
]
negative_prompt: str = "disfigured, disproportionate, bad anatomy, bad proportions, ugly, out of frame, mangled, asymmetric, cross-eyed, depressed, immature, stuffed animal, out of focus, high depth of field, cloned face, cloned head, age spot, skin blemishes, collapsed eyeshadow, asymmetric ears, imperfect eyes, unnatural, conjoined, missing limb, missing arm, missing leg, poorly drawn face, poorly drawn feet, poorly drawn hands, floating limb, disconnected limb, extra limb, malformed limbs, malformed hands, poorly rendered face, poor facial details, poorly rendered hands, double face, unbalanced body, unnatural body, lacking body, long body, cripple, cartoon, 3D, weird colors, unnatural skin tone, unnatural skin, stiff face, fused hand, skewed eyes, surreal, cropped head, group of people, too many fingers, bad hands, six fingers"
combined_list = list(itertools.product(age_prompts, hand_prompts, face_angle_prompts, hair_prompts, background_prompts))
random.shuffle(combined_list)
for item in tqdm(combined_list, total=len(combined_list)):
    age, hand, face_angle, hair, background = item
    if not os.path.exists(out_dir):
        os.makedirs(out_dir)
    prompt: str = ", ".join(item)
    print(prompt)
    out_filename: str = f"{out_dir}/{prompt.replace(' ', '_')}"
    if not os.path.exists(f"{out_filename}_0.png"):
        try:
            pipe = StableDiffusionXLPipeline.from_pretrained(model_path, safety_checker=None,
                                                             requires_safety_checker=False)
            pipe.to(device)
            prompt_embeds, negative_prompt_embeds = get_pipeline_embeds(pipe, prompt, negative_prompt, device)
            images = pipe(prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_prompt_embeds,
                          num_images_per_prompt=3, width=768,
|
https://github.com/huggingface/diffusers/issues/4716
|
closed
|
[
"bug"
] | 2023-08-22T16:28:25Z
| 2023-08-27T02:46:18Z
| null |
elcolie
|
huggingface/candle
| 547
|
How to turn off automatic translation for whisper
|
When I input a Chinese wav file, whisper outputs an English translation.
```
ls@LeeeSes-MacBook-Air ~/r/candle (main)> cargo run --release --features accelerate --example whisper -- --model small --language zh --input /Users/ls/Downloads/output.wav
Finished release [optimized] target(s) in 0.38s
Running `target/release/examples/whisper --model small --language zh --input /Users/ls/Downloads/output.wav`
Running on CPU, to run on GPU, build this example with `--features cuda`
loaded wav data: Header { audio_format: 1, channel_count: 1, sampling_rate: 16000, bytes_per_second: 32000, bytes_per_sample: 2, bits_per_sample: 16 }
pcm data loaded 287216
loaded mel: [1, 80, 4500]
0.0s -- 30.0s: This is a free online audio recorder application program. You can record sound from microphone. After recording, you can edit sound and edit any parts, adjust the balance and sound. Let's use the recording first.
30.0s -- 45.0s: I'm sorry.
```
|
https://github.com/huggingface/candle/issues/547
|
closed
|
[] | 2023-08-22T11:16:45Z
| 2023-08-22T18:52:40Z
| null |
LeeeSe
|
huggingface/trl
| 674
|
How to load the model and the checkpoint after training the model?
|
I trained my model using the code in sft_trainer.py, and I saved the checkpoint and the model in the same directory.
But I don't know how to load the model together with the checkpoint. Or rather, I just want to know whether `trainer.save_model(script_args.output_dir)` means I have saved a trained model, not just a checkpoint.
I have tried many ways to load the trained model, but I get errors like
```
RuntimeError: Error(s) in loading state_dict for PrefixEncoder:
Missing key(s) in state_dict: "embedding.weight".
```
So, how do I load the model?
|
https://github.com/huggingface/trl/issues/674
|
closed
|
[] | 2023-08-22T10:31:01Z
| 2023-11-27T21:34:30Z
| null |
ccwdb
|
huggingface/text-generation-inference
| 899
|
How to use multiple GPU cards with the text-generation-launcher tool?
|
### System Info
text-generation-launcher 1.0.0 how to use multi gpu cards?
### Information
- [ ] Docker
- [X] The CLI directly
### Tasks
- [X] An officially supported command
- [ ] My own modifications
### Reproduction
CUDA_VISIBLE_DEVICES=0,1,2,3 text-generation-launcher --model-id falcon-40b-instruct --sharded true --num-shard 1 --quantize bitsandbytes-fp4 does not use multiple A10 GPU cards. It fails on GPU 0 with OutOfMemoryError: CUDA out of memory.
### Expected behavior
Normal load the model and http post.
|
https://github.com/huggingface/text-generation-inference/issues/899
|
closed
|
[] | 2023-08-22T10:09:17Z
| 2023-08-22T10:13:06Z
| null |
luefei
|
huggingface/chat-ui
| 411
|
Chat-ui crashes TGI?
|
Hey!
When I deploy TGI Endpoint locally and test it with the following cli request:
`curl 127.0.0.1:8080/generate_stream \
-X POST \
-d '{"inputs":"def calculate_fibonacci(n:str):","parameters":{"max_new_tokens":100}}' \
-H 'Content-Type: application/json'`
It works without any problem. Even load tests with locust.io work without problems.
This is the response from tgi with the curl command:
`2023-08-22T08:29:52.944813Z INFO HTTP request{otel.name=POST /generate_stream http.client_ip= http.flavor=1.1 http.host=127.0.0.1:8080 http.method=POST http.route=/generate_stream http.scheme=HTTP http.target=/generate_stream http.user_agent=curl/7.82.0 otel.kind=server trace_id=772a4a52f29b540aac2b3b331ea5247a http.status_code=200 otel.status_code="OK"}:generate_stream{parameters=GenerateParameters { best_of: None, temperature: None, repetition_penalty: None, top_k: None, top_p: None, typical_p: None, do_sample: false, max_new_tokens: 100, return_full_text: None, stop: [], truncate: None, watermark: false, details: false, decoder_input_details: false, seed: None } total_time="5.639886919s" validation_time="153.888µs" queue_time="184.627µs" inference_time="5.639548636s" time_per_token="56.395486ms" seed="None"}: text_generation_router::server: router/src/server.rs:452: Success`
But if I want to call tgi with the chat-ui, it works the first time (I get a streaming response in the chat-ui), but then the tgi freezes?
EDIT: This is the output I get from tgi (I get two responses from tgi?):
`2023-08-22T11:38:32.027037Z INFO HTTP request{otel.name=POST / http.client_ip= http.flavor=1.1 http.host=127.0.0.1:8080 http.method=POST http.route=/ http.scheme=HTTP http.target=/ http.user_agent=undici otel.kind=server trace_id=a55b57fc395cc1f8fa59dcd111733cd4 http.status_code=200 otel.status_code="OK"}:compat_generate{default_return_full_text=false}:generate_stream{parameters=GenerateParameters { best_of: None, temperature: Some(0.9), repetition_penalty: Some(1.2), top_k: Some(50), top_p: Some(0.95), typical_p: None, do_sample: false, max_new_tokens: 1048, return_full_text: Some(false), stop: [], truncate: Some(1000), watermark: false, details: false, decoder_input_details: false, seed: None } total_time="1.803072692s" validation_time="139.35µs" queue_time="209.805µs" inference_time="1.802724034s" time_per_token="56.335126ms" seed="Some(14814785333613176252)"}: text_generation_router::server: router/src/server.rs:450: Success
`
`
2023-08-22T11:38:32.643776Z INFO HTTP request{otel.name=POST / http.client_ip= http.flavor=1.1 http.host=127.0.0.1:8080 http.method=POST http.route=/ http.scheme=HTTP http.target=/ http.user_agent=undici otel.kind=server trace_id=7064d891ae5c88c74aaba2f06cacd5d3}:compat_generate{default_return_full_text=false}:generate{parameters=GenerateParameters { best_of: None, temperature: None, repetition_penalty: None, top_k: None, top_p: None, typical_p: None, do_sample: false, max_new_tokens: 20, return_full_text: Some(false), stop: [], truncate: None, watermark: false, details: false, decoder_input_details: false, seed: None } total_time="519.787388ms" validation_time="77.98µs" queue_time="78.433µs" inference_time="519.63134ms" time_per_token="57.736815ms" seed="None"}: text_generation_router::server: router/src/server.rs:287: Success`
EDIT: I get the following output in my terminal with the second response from tgi:
`
SyntaxError: Unexpected token d in JSON at position 0
at JSON.parse (<anonymous>)
at Module.generateFromDefaultEndpoint (/Users/xx/Desktop/chat-ui/src/lib/server/generateFromDefaultEndpoint.ts:73:30)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async POST (/Users/xx/Desktop/chat-ui/src/routes/conversation/[id]/summarize/+server.ts:30:26)
at async Module.render_endpoint (/Users/xx/Desktop/chat-ui/node_modules/@sveltejs/kit/src/runtime/server/endpoint.js:47:20)
at async resolve (/Users/xx/Desktop/chat-ui/node_modules/@sveltejs/kit/src/runtime/server/respond.js:388:17)
at async Object.handle (/Users/xx/Desktop/chat-ui/src/hooks.server.ts:66:20)
at async Module.respond (/Users/xx/Desktop/chat-ui/node_modules/@sveltejs/kit/src/runtime/server/respond.js:259:20)
at async file:///Users/xx/Desktop/chat-ui/node_modules/@sveltejs/kit/src/exports/vite/dev/index.js:506:22`
chat-ui version: 0.5.0
tgi-version: 1.0.1
Chat-UI Model Config:
```
MODELS=`[
  {
    "name": "Vicuna",
    "datasetName": "OpenAssistant/oasst1",
    "endpoints": [{"url": "http://127.0.0.1:8080/generate_stream"}],
    "description": "A good alternative to ChatGPT",
    "websiteUrl": "https://open-assistant.io",
    "userMessageToken": "USER:",
    "assistantMessageToken": "ASSISTANT:",
    "messageEndToken": "</s>",
    "preprompt": "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.\n\n
|
https://github.com/huggingface/chat-ui/issues/411
|
open
|
[] | 2023-08-22T08:48:02Z
| 2023-08-23T06:45:26Z
| 0
|
schauppi
|
huggingface/accelerate
| 1,870
|
[Question] How to optimize two loss alternately with gradient accumulation?
|
I want to update a model by optimizing two loss alternately with gradient accumulation like this
```python
# Suppose gradient_accumulation is set to 2.
optimizer = optim(unet.parameters())

with accelerator.accumulate(unet):
    outputs = unet(input)
    loss1 = loss_func1(outputs)
    loss1.backward()
    optimizer.step()
    optimizer.zero_grad()

with accelerator.accumulate(unet):
    outputs = unet(input)
    loss2 = loss_func2(outputs)
    loss2.backward()
    optimizer.step()
    optimizer.zero_grad()
```
Is this correct? It appears from the [documentation](https://huggingface.co/docs/accelerate/usage_guides/gradient_accumulation#converting-it-to-accelerate) that `accelerator.accumulate` will normalize the loss and then backpropagate without updating the gradient until reaching `gradient_accumulation_steps`. My main concern is that the gradients accumulated by two different losses for the same model will affect each other.
Hope to find some help here, thanks in advance.
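For comparison, a runnable toy sketch of the single-backward alternative (summing the two losses inside one `accumulate` block); whether this matches the intended training scheme is an assumption on my part, and the model/data below are only stand-ins:
```python
import torch
from torch import nn, optim
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator

# Toy stand-ins for the real unet, losses, and data, just to make the pattern concrete.
unet = nn.Linear(8, 8)
optimizer = optim.AdamW(unet.parameters(), lr=1e-3)
loader = DataLoader(TensorDataset(torch.randn(16, 8)), batch_size=4)
loss_func1 = lambda out: out.pow(2).mean()
loss_func2 = lambda out: out.abs().mean()

accelerator = Accelerator(gradient_accumulation_steps=2)
unet, optimizer, loader = accelerator.prepare(unet, optimizer, loader)

for (x,) in loader:
    with accelerator.accumulate(unet):
        out = unet(x)
        loss = loss_func1(out) + loss_func2(out)  # one combined objective, one backward per step
        accelerator.backward(loss)                # lets Accelerate handle scaling/sync during accumulation
        optimizer.step()
        optimizer.zero_grad()
```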
|
https://github.com/huggingface/accelerate/issues/1870
|
closed
|
[] | 2023-08-21T12:49:19Z
| 2023-10-24T15:06:33Z
| null |
hkunzhe
|
huggingface/candle
| 538
|
How to disable openssl-sys being included?
|
I would like to stop openssl-sys from being included in my project when using candle, but I'm not sure how to do this. I tried adding the lines below to my Cargo.toml, but it didn't change anything. The reason is that when I try to compile my library for aarch64-linux-android, I get an error saying that pkg-config has not been configured to support cross-compilation and that I should install a sysroot for the target platform. I'd like to exclude openssl-sys anyway, since I won't be needing it and will be loading everything locally. Thanks.
```
hf-hub = { version = "0.2.0", default-features = false }
tokenizers = { version = "0.13.4", default-features = false }
```
|
https://github.com/huggingface/candle/issues/538
|
closed
|
[] | 2023-08-21T10:47:26Z
| 2023-08-21T20:38:57Z
| null |
soupslurpr
|
pytorch/pytorch
| 107,580
|
Doc is unclear on how to install pytorch with Cuda via pip
|
### 📚 The doc issue

I've been looking for almost a day at how to install torch with CUDA via pip, and the doc does not help at all.
### Suggest a potential alternative/fix
Explain clearly how to install pytorch using pip, with or without CUDA.
```
To install pytorch with CUDA using pip, you first need to install CUDA on your system if it is compatible with it and then install pytorch with the following command in your shell:
`pip install ...........`
```
|
https://github.com/pytorch/pytorch/issues/107580
|
open
|
[
"triaged",
"topic: docs"
] | 2023-08-21T09:57:56Z
| 2023-08-22T08:42:08Z
| null |
MidKnightXI
|
huggingface/optimum
| 1,298
|
Support BetterTransformer for the Baichuan LLM model
|
### Feature request
Is it possible to support the Baichuan model with BetterTransformer?
https://huggingface.co/baichuan-inc/Baichuan-13B-Chat
### Motivation
A very popular Chinese and English large language model.
### Your contribution
I hope you can add it. Thanks.
|
https://github.com/huggingface/optimum/issues/1298
|
closed
|
[
"feature-request",
"bettertransformer",
"Stale"
] | 2023-08-21T08:18:16Z
| 2025-05-04T02:17:22Z
| 1
|
BobLiu20
|
huggingface/candle
| 533
|
How to convert token to text?
|
Hello, thank you for this ML library in Rust. Sorry if this is a noob question; I'm new to machine learning and this is my first time trying to use a text generation model. I'm using the latest git version. In the quantized llama example, how would I convert a token to a string? I see the print_token function, but I want to convert the token to a string and maybe push it to a vector so I can return all the generated text when it has finished processing.
|
https://github.com/huggingface/candle/issues/533
|
closed
|
[] | 2023-08-21T06:36:08Z
| 2023-08-21T07:51:37Z
| null |
soupslurpr
|
huggingface/safetensors
| 333
|
Slow load weight values from a HF model on a big-endian machine with the latest code
|
### System Info
Python: 3.10
PyTorch: the latest main branch (i.e. 2.0.1+)
safetensors: 0.3.3
Platform: s390x (big-endian)
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Reproduction
I executed the following code using 0.3.1 and 0.3.3, and w/o safetensors.
```
import time
import torch
from transformers import T5ForConditionalGeneration, AutoTokenizer
try:
    import safetensors
    print("safetensors version:", safetensors.__version__)
except:
    print("safetensors not installed")
torch.serialization.set_default_load_endianness(torch.serialization.LoadEndianness.LITTLE)
model = "google/flan-t5-xxl"
tokenizer = AutoTokenizer.from_pretrained(model)
input_text = "The square root of x is the cube root of y. What is y to the power of 2, if x = 4?"
input = tokenizer(input_text, return_tensors="pt").input_ids
t0 = time.perf_counter()
#model = T5ForConditionalGeneration.from_pretrained(model, low_cpu_mem_usage=True, use_safetensors=False)
model = T5ForConditionalGeneration.from_pretrained(model, low_cpu_mem_usage=True, use_safetensors=True)
t1 = time.perf_counter()
print("load elapsed time:", t1-t0)
output = model.decoder.forward(input_ids=input) ## intentionally use decoder.forward() instead of generate()
t2 = time.perf_counter()
print("forward elapsed time:", t2-t1)
```
Findings
- The old version (0.3.1), which does not swap data, is considerably faster than 0.3.3 with data swapping, which we understand.
- 0.3.3 is a bit slower than `torch.load`, which implies we could have some room to improve.
The result is the best time of five tries after I downloaded model files into local file system.
```
$ python flan-t5.py
safetensors not installed
Loading checkpoint shards: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 5/5 [00:21<00:00, 4.37s/it]
load elapsed time: 22.09646322298795
forward elapsed time: 1.4204098680056632
```
```
$ python flan-t5.py
safetensors version: 0.3.3
Loading checkpoint shards: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 5/5 [00:25<00:00, 5.05s/it]
load elapsed time: 25.486608179984614
forward elapsed time: 1.4887599580106325
```
```
$ python flan-t5.py
safetensors version: 0.3.1
Loading checkpoint shards: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 5/5 [00:00<00:00, 35.73it/s]
load elapsed time: 0.37154227000428364
forward elapsed time: 1.1782474629580975
```
### Expected behavior
We expect that we can alleviate the overhead of swapping data. The overhead of 4x looks too large.
|
https://github.com/huggingface/safetensors/issues/333
|
closed
|
[
"Stale"
] | 2023-08-20T18:19:44Z
| 2023-12-12T01:48:51Z
| 9
|
kiszk
|
huggingface/chat-ui
| 409
|
Deploy Chat UI Spaces Docker template with a PEFT adapter
|
I tried to accomplish this, but the container failed to launch the chat-ui app, as it seems to assume the model would be a non-adapted model.
Is there a way to make it work?
|
https://github.com/huggingface/chat-ui/issues/409
|
closed
|
[
"bug",
"back"
] | 2023-08-20T05:26:50Z
| 2023-09-11T09:37:29Z
| 4
|
lrtherond
|
huggingface/datasets
| 6,163
|
Error type: ArrowInvalid Details: Failed to parse string: '[254,254]' as a scalar of type int32
|
### Describe the bug
I am getting the following error while trying to upload the CSV sheet to train a model. My CSV sheet content is exactly the same as shown in the example CSV file on the AutoTrain page. Attaching a screenshot of the error for reference. I have also tried converting the answer indices that are integers into strings by placing them in inverted commas, and also without inverted commas.
Can anyone please help me out?
FYI: I am using the Chrome browser.
Error type: ArrowInvalid
Details: Failed to parse string: '[254,254]' as a scalar of type int32

### Steps to reproduce the bug
Kindly let me know how to fix this?
### Expected behavior
Kindly let me know how to fix this?
### Environment info
Kindly let me know how to fix this?
|
https://github.com/huggingface/datasets/issues/6163
|
open
|
[] | 2023-08-19T11:34:40Z
| 2025-07-22T12:04:46Z
| 2
|
shishirCTC
|
huggingface/sentence-transformers
| 2,278
|
How to set the no. of epochs for fine-tuning SBERT?
|
Hello,
I am fine-tuning a bi-encoder SBERT model on domain-specific data for semantic similarity. No loss value is reported by the `fit` function from the package. Any idea how to know whether the model is overfitting or underfitting the dataset after each epoch? This would help me decide the appropriate number of epochs for fine-tuning.
Thank you.
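A minimal sketch of what I would try with the legacy `fit` API: attach an evaluator that is scored on a held-out dev set after each epoch, and watch for the score to plateau or degrade (the data below is placeholder):
```python
from torch.utils.data import DataLoader
from sentence_transformers import InputExample, SentenceTransformer, losses
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# Placeholder pairs; real training/dev sets would come from the domain-specific data.
train_examples = [InputExample(texts=["a query", "a matching document"], label=0.9)]
dev_examples = [InputExample(texts=["another query", "an unrelated document"], label=0.1)]

train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)
train_loss = losses.CosineSimilarityLoss(model)

# Scored on the dev set at the end of every epoch; a score that stops improving
# (while training continues) is the usual sign of overfitting.
evaluator = EmbeddingSimilarityEvaluator.from_input_examples(dev_examples, name="dev")

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    evaluator=evaluator,
    epochs=4,
    evaluation_steps=0,  # 0 = evaluate only at the end of each epoch
)
```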
|
https://github.com/huggingface/sentence-transformers/issues/2278
|
open
|
[] | 2023-08-18T18:14:05Z
| 2024-01-29T17:00:13Z
| null |
power-puff-gg
|
huggingface/setfit
| 409
|
model_head.pkl not found on HuggingFace Hub
|
I got this message:
"model_head.pkl not found on HuggingFace Hub, initialising classification head with random weights. You should TRAIN this model on a downstream task to use it for predictions and inference."
Is there something missing, or is this normal?
|
https://github.com/huggingface/setfit/issues/409
|
closed
|
[
"question"
] | 2023-08-18T07:52:20Z
| 2023-11-24T14:20:51Z
| null |
andysingal
|
huggingface/autotrain-advanced
| 216
|
How to do inference after training Llama 2
|
I trained the model using this command:
```
autotrain llm --train --project_name 'llama2-indo-testing' \
--model meta-llama/Llama-2-7b-hf \
--data_path data/ \
--text_column text \
--use_peft \
--use_int4 \
--learning_rate 2e-4 \
--train_batch_size 2 \
--num_train_epochs 3 \
--trainer sft \
--model_max_length 2048 \
--push_to_hub \
--repo_id fhadli/llama2-7b-hf-id \
--block_size 2048 \
> training.log
```
After that, I tried to load the model using this script:
```
from transformers import AutoTokenizer
import transformers
import torch
model = "/home/muhammad.fhadli/explorasi/llama2-indo/llama2-indo-testing"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)
```
but it gave me this error. Can someone please explain why I got this error, or what is the right way to do inference?
```
Traceback (most recent call last):
File "play.py", line 8, in <module>
pipeline = transformers.pipeline(
File "/home/muhammad.fhadli/.pyenv/versions/3.8.10/envs/llama/lib/python3.8/site-packages/transformers/pipelines/__init__.py", line 705, in pipeline
config = AutoConfig.from_pretrained(model, _from_pipeline=task, **hub_kwargs, **model_kwargs)
File "/home/muhammad.fhadli/.pyenv/versions/3.8.10/envs/llama/lib/python3.8/site-packages/transformers/models/auto/configuration_auto.py", line 983, in from_pretrained
config_dict, unused_kwargs = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs)
File "/home/muhammad.fhadli/.pyenv/versions/3.8.10/envs/llama/lib/python3.8/site-packages/transformers/configuration_utils.py", line 617, in get_config_dict
config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwargs)
File "/home/muhammad.fhadli/.pyenv/versions/3.8.10/envs/llama/lib/python3.8/site-packages/transformers/configuration_utils.py", line 672, in _get_config_dict
resolved_config_file = cached_file(
File "/home/muhammad.fhadli/.pyenv/versions/3.8.10/envs/llama/lib/python3.8/site-packages/transformers/utils/hub.py", line 388, in cached_file
raise EnvironmentError(
OSError: /home/muhammad.fhadli/explorasi/llama2-indo/llama2-indo-testing/ does not appear to have a file named config.json. Checkout 'https://huggingface.co//home/muhammad.fhadli/explorasi/llama2-indo/llama2-indo-testing//None' for available files.
```
Here is the content of my folder:
```
$ls /home/muhammad.fhadli/explorasi/llama2-indo/llama2-indo-testing/
adapter_config.json optimizer.pt rng_state_0.pth scheduler.pt tokenizer_config.json tokenizer.model training_args.bin
adapter_model.bin README.md rng_state_1.pth special_tokens_map.json tokenizer.json trainer_state.json
```
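Judging from the folder contents (adapter_config.json / adapter_model.bin but no config.json), the output is a PEFT adapter rather than a full model. A minimal sketch of how I would load it, assuming the base model is the `--model` used for training:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-2-7b-hf"  # assumption: the --model passed to autotrain
adapter_dir = "/home/muhammad.fhadli/explorasi/llama2-indo/llama2-indo-testing"

tokenizer = AutoTokenizer.from_pretrained(adapter_dir)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")

# The adapter-only folder has no config.json, so it must be attached to the base model.
model = PeftModel.from_pretrained(base, adapter_dir)
model.eval()
```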
|
https://github.com/huggingface/autotrain-advanced/issues/216
|
closed
|
[] | 2023-08-18T04:36:37Z
| 2023-12-18T15:30:38Z
| null |
muhammadfhadli1453
|
huggingface/diffusers
| 4,662
|
How to call a different scheduler when training a model from repo
|
I notice that the settings in train_dreambooth_lora_sdxl.py and the scheduler config from the repo seem to conflict. In the .py file the noise scheduler is DDPM, but whenever training starts it still indicates that I am using the repo config scheduler, i.e. EulerDiscreteScheduler. It used to be that you could specify the scheduler config by path, but that seems to have been deprecated at some point.
|
https://github.com/huggingface/diffusers/issues/4662
|
closed
|
[] | 2023-08-17T21:40:10Z
| 2023-08-18T04:18:11Z
| null |
jmaccall316
|
huggingface/transformers
| 25,576
|
How can i make a PR for autotokenzier to adapt RWKV world
|
### Feature request
Usually we use our own tokenizer with the transformers pipeline,
like this: https://github.com/xiaol/Huggingface-RWKV-World/blob/fca236afd5f2815b0dbe6c7ce3c92e51526e2e14/generate_hf_cfg.py#L79C1-L79C1
So far we have a lot of models using the new tokenizer, so using the pipeline with AutoTokenizer is critically needed.
How can I add a new tokenizer to AutoTokenizer so that this pipeline works smoothly?
Thank you.
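As a hedged sketch (not the official integration path), one way a custom tokenizer class can be made visible to `AutoTokenizer`, assuming it subclasses `PreTrainedTokenizer`; the class names and the "rwkv-world" model type are placeholders:
```python
from transformers import AutoConfig, AutoTokenizer, PretrainedConfig, PreTrainedTokenizer

class RwkvWorldConfig(PretrainedConfig):
    model_type = "rwkv-world"  # hypothetical model type for the new tokenizer

class RwkvWorldTokenizer(PreTrainedTokenizer):
    """Placeholder for the actual RWKV 'world' tokenizer implementation."""
    pass

AutoConfig.register("rwkv-world", RwkvWorldConfig)
AutoTokenizer.register(RwkvWorldConfig, slow_tokenizer_class=RwkvWorldTokenizer)
# After registration, AutoTokenizer.from_pretrained(...) can resolve repos of this model type
# to the custom class, so transformers.pipeline(...) no longer needs the tokenizer passed manually.
```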
### Motivation
1. Make it easy for everyone to use RWKV World smoothly, and RWKV v5 World is coming.
2. Support the Hugging Face community with these awesome models and make open source more open.
3. I really don't like that LLaMA models are always at the top of the open LLM leaderboards.
4. more...
### Your contribution
I made a lot of models based on RWKV 4 World (https://huggingface.co/xiaol), especially 128k-context models.
|
https://github.com/huggingface/transformers/issues/25576
|
closed
|
[] | 2023-08-17T16:36:44Z
| 2023-09-25T08:02:43Z
| null |
xiaol
|
huggingface/accelerate
| 1,854
|
How to further accelerate training with 24 cards for 1.3b+ models using accelerate?
|
I found that when using DeepSpeed ZeRO (stage 2 or 3) to train 1.3B-and-larger models (such as llama-7b or gpt-neo-1.3b), the training time on 8 x 32G V100 is almost the same as on 24 x 32G V100 (I guess because of the additional communication overhead introduced by DeepSpeed). Is there any way to further accelerate training by utilizing all 24 cards? Currently, Megatron-LM integration is limited to GPT-2 and GPT-J, and I'm not sure whether it would help.
|
https://github.com/huggingface/accelerate/issues/1854
|
closed
|
[] | 2023-08-17T15:01:09Z
| 2023-09-24T15:05:52Z
| null |
Micheallei
|
huggingface/datasets
| 6,156
|
Why not use self._epoch as seed to shuffle in distributed training with IterableDataset
|
### Describe the bug
Currently, distributed training with `IterableDataset` needs to pass fixed seed to shuffle to keep each node use the same seed to avoid overlapping.
https://github.com/huggingface/datasets/blob/a7f8d9019e7cb104eac4106bdc6ec0292f0dc61a/src/datasets/iterable_dataset.py#L1174-L1177
My question is: why not directly use `self._epoch`, which is set by `set_epoch`, as the seed? It's almost the same across nodes.
https://github.com/huggingface/datasets/blob/a7f8d9019e7cb104eac4106bdc6ec0292f0dc61a/src/datasets/iterable_dataset.py#L1790-L1801
If not using `self._epoch` as shuffling seed, what does this method do to prepare an epoch seeded generator?
https://github.com/huggingface/datasets/blob/a7f8d9019e7cb104eac4106bdc6ec0292f0dc61a/src/datasets/iterable_dataset.py#L1206
### Steps to reproduce the bug
As mentioned above.
### Expected behavior
As mentioned above.
### Environment info
Not related
|
https://github.com/huggingface/datasets/issues/6156
|
closed
|
[] | 2023-08-17T10:58:20Z
| 2023-08-17T14:33:15Z
| 3
|
npuichigo
|
huggingface/diffusers
| 4,643
|
When I load a ControlNet model, where is the inference code?
|
I have read the ControlNet code in diffusers/models/controlnet.py,
but when I load ControlNet weights, where is the code that runs inference?
Thanks.
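For reference, a minimal hedged sketch (model IDs are examples): `models/controlnet.py` only defines the network, while the pipeline's `__call__` in `pipelines/controlnet/pipeline_controlnet.py` is what actually runs the loaded weights during generation:
```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")
# pipe(prompt, image=canny_image) -> the pipeline's __call__ runs the denoising loop and calls
# controlnet(...) every step; that is the inference code path for the loaded weights.
```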
|
https://github.com/huggingface/diffusers/issues/4643
|
closed
|
[] | 2023-08-17T02:50:59Z
| 2023-08-17T04:55:28Z
| null |
henbucuoshanghai
|
huggingface/dataset-viewer
| 1,689
|
Handle breaking change in google dependency?
|
See https://huggingface.co/datasets/bigscience/P3/discussions/6#64dca122e3e44e8000c45616
Should we downgrade the dependency, or fix the datasets?
|
https://github.com/huggingface/dataset-viewer/issues/1689
|
closed
|
[
"question",
"dependencies",
"P2"
] | 2023-08-16T14:31:28Z
| 2024-02-06T14:59:59Z
| null |
severo
|
huggingface/optimum
| 1,286
|
Support BetterTransfomer for the GeneFormer model
|
### Feature request
Is it possible to support the Geneformer model with BetterTransformer?
https://huggingface.co/ctheodoris/Geneformer
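For context, a minimal sketch of the API being requested; `transform` raises an error when an architecture is not yet in the supported list, which is exactly what this request is about:
```python
from transformers import AutoModel
from optimum.bettertransformer import BetterTransformer

model = AutoModel.from_pretrained("ctheodoris/Geneformer")
model = BetterTransformer.transform(model)  # raises if the architecture is not yet supported
```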
### Motivation
It's a new paper with an active community in the Hugging Face repository. The training and inference speed is not fast enough.
### Your contribution
Nothing at this time, because I don't want to add it myself. I am requesting this because of this statement on the Hugging Face website:
Let us know by opening an issue in 🤗 Optimum if you want more models to be supported, or check out the [contribution guideline](https://huggingface.co/docs/optimum/bettertransformer/tutorials/contribute) if you want to add it by yourself!
|
https://github.com/huggingface/optimum/issues/1286
|
closed
|
[
"feature-request",
"bettertransformer",
"Stale"
] | 2023-08-16T03:32:48Z
| 2025-05-07T02:13:16Z
| 1
|
seyedmirnezami
|
pytorch/torchx
| 753
|
Feature: Support for Multiple NodeSelectors and Tolerations in TorchX for Kubernetes
|
## Description
<!-- concise description of the feature/enhancement -->
I’m currently working with TorchX in conjunction with Volcano scheduling for my training jobs on an Amazon EKS cluster. I’ve also integrated Karpenter autoscaler for effective node scaling. Additionally, I’m using managed node groups with labeled nodes that have specific taints applied.
Our internal data and machine learning teams have the requirement to specify NodeSelectors and Tolerations to target jobs on particular nodes or managed node groups. While referring to the documentation provided here: [TorchX Specifications](https://pytorch.org/torchx/main/specs.html), I observed that `capabilities={"node.kubernetes.io/instance-type": ""}` is used as the NodeSelector when the job is created through Volcano. However, this approach doesn't seem to allow for sending a list of labels, which our use case demands.
Furthermore, I’m also interested in incorporating tolerations into these jobs to ensure proper scheduling and execution in our environment. If any of you have experience in implementing NodeSelectors and Tolerations in TorchX within an Amazon EKS setup, I would highly appreciate your insights and advice.
If there’s no previous experience with this scenario, I’m considering raising a feature request to address these needs. Your guidance and input would be greatly valued.
**_NOTE TO MAINTAINERS_**
_I'm eager to contribute by creating a pull request for this exciting new feature, even though I'm still getting familiar with the repository and the whole PyTorch environment. Since I'm new to the process, I'd really appreciate some guidance on how to set up and run TorchX locally, as well as how to carry out unit and integration tests. This knowledge will be invaluable in making sure my contributions align well with the existing code and testing procedures. Thanks a lot for your support!_
## Motivation/Background
<!-- why is this feature/enhancement important? provide background context -->
In our current setup, we are utilizing TorchX, Volcano scheduling, and Karpenter autoscaling to manage training jobs on our Amazon EKS cluster. We have specific requirements to target jobs on nodes with certain labels and taints due to the nature of our workloads. However, the existing TorchX functionality only allows for specifying a single NodeSelector label, which is limiting for our use case. Additionally, we need the ability to incorporate tolerations into our job specifications for effective scheduling.
## Detailed Proposal
<!-- provide a detailed proposal -->
I propose enhancing the TorchX functionality to allow users to provide multiple `NodeSelector` labels as a `Dict[str, str]` and `tolerations` as a list of `V1Toleration` in the pod definition. This will enable users to precisely target nodes and managed node groups based on a wider range of labels and handle scheduling constraints effectively.
The changes will involve modifying the `role_to_pod` method to accept two new parameters:
**node_selectors: Dict[str, str]**: This parameter will allow users to provide multiple node selector labels for their jobs. Modifying the existing one to accept more than one.
**tolerations: List[V1Toleration]**: This parameter will allow users to provide tolerations to handle node taints effectively.
These parameters will be included in the pod specification when creating a new pod using TorchX and Volcano.
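To illustrate the proposal (this is not existing TorchX behaviour; the label keys and values below are examples), the resulting pod spec would simply carry the two fields already supported by the Kubernetes Python client:
```python
from kubernetes.client import V1PodSpec, V1Toleration

node_selectors = {
    "node.kubernetes.io/instance-type": "g5.2xlarge",
    "team.example.com/node-group": "ml-training",
}
tolerations = [
    V1Toleration(key="nvidia.com/gpu", operator="Exists", effect="NoSchedule"),
]

pod_spec = V1PodSpec(
    containers=[],                 # container list built by role_to_pod as it is today
    node_selector=node_selectors,  # proposed: multiple labels instead of a single one
    tolerations=tolerations,       # proposed: tolerations to match tainted node groups
)
```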
## Alternatives
<!-- discuss the alternatives considered and their pros/cons -->
An alternative approach would be to manually modify the generated pod specification after it's created using TorchX. However, this approach would require additional steps and could lead to inconsistencies between the job definition and the actual pod specification.
## Additional context/links
<!-- link to code, documentation, etc. -->
|
https://github.com/meta-pytorch/torchx/issues/753
|
open
|
[] | 2023-08-15T21:55:30Z
| 2023-08-15T22:02:33Z
| 0
|
vara-bonthu
|
pytorch/pytorch
| 107,238
|
How to export GNN with dict inputs correctly?
|
## Problem description
I am having an issue when exporting of PyTorch GNN model to ONNX. Here is my export code:
```
torch.onnx.export(
model=model,
args=(x_dict, edge_index_dict, edge_attr_dict, {}),
f=save_path,
verbose=False,
input_names=["x_dict", "edge_index_dict", "edge_attr_dict"],
output_names=["out"],
)
```
`x_dict, edge_index_dict, edge_attr_dict` are of type `Dict[str, torch.Tensor]` (hetero_data is formed [like this](https://github.com/emnigma/VSharp/blob/408ba9800362285f420b3d9b51116f4b2cbb3391/VSharp.ML.AIAgent/ml/data_loader_compact.py#L30))
In addition to the 3 inputs of my [model](https://github.com/emnigma/VSharp/blob/408ba9800362285f420b3d9b51116f4b2cbb3391/VSharp.ML.AIAgent/ml/models.py#L654)'s [forward](https://github.com/emnigma/VSharp/blob/408ba9800362285f420b3d9b51116f4b2cbb3391/VSharp.ML.AIAgent/ml/models.py#L659), torch.onnx.export generates 4 additional inputs, and when I try to use the exported model with onnxruntime I get a ValueError:
`ValueError: Required inputs (['edge_index', 'edge_index.5', 'edge_index.3', 'onnx::Reshape_9']) are missing from input feed (['x_dict', 'edge_index_dict', 'edge_attr_dict']).`
I get the feeling I am doing something wrong; how can I export my model correctly?
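One hedged workaround sketch (an assumption, not a verified fix for this model): wrap the model so the exporter sees flat, named tensor inputs instead of `Dict[str, Tensor]` arguments, and rebuild the dicts inside `forward`:
```python
import torch

class ExportWrapper(torch.nn.Module):
    """Flattens dict inputs into positional tensors for torch.onnx.export."""

    def __init__(self, model, x_keys, edge_keys, attr_keys):
        super().__init__()
        self.model = model
        self.x_keys, self.edge_keys, self.attr_keys = x_keys, edge_keys, attr_keys

    def forward(self, *tensors):
        n_x, n_e = len(self.x_keys), len(self.edge_keys)
        x_dict = dict(zip(self.x_keys, tensors[:n_x]))
        edge_index_dict = dict(zip(self.edge_keys, tensors[n_x:n_x + n_e]))
        edge_attr_dict = dict(zip(self.attr_keys, tensors[n_x + n_e:]))
        return self.model(x_dict, edge_index_dict, edge_attr_dict)

# Usage sketch: export the wrapper with one input name per tensor, in the same key order,
# and feed onnxruntime the same flat list of names.
```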
## Reproduction
here is a minimal reproduction script and dummy_data for it:
script: https://gist.github.com/emnigma/0b98cfbf3fff47be417c64489d83a2a2
data: https://gist.github.com/emnigma/e3ea559fe4db0adde886708f402473bb
## JIT trace output
I also tried tracing the model; here is the JIT trace result (`.code` output with `strict=False`):
```
def forward(self,
argument_1: Dict[str, Tensor],
argument_2: Dict[str, Tensor],
argument_3: Dict[str, Tensor]) -> Dict[str, Tensor]:
state_encoder = self.state_encoder
x = argument_1["game_vertex"]
x0 = argument_1["state_vertex"]
edge_index = argument_2["game_vertex to game_vertex"]
edge_index0 = argument_2["game_vertex in state_vertex"]
edge_index1 = argument_2["game_vertex history state_vertex"]
edge_index2 = argument_2["state_vertex parent_of state_vertex"]
edge_weight = argument_3["game_vertex history state_vertex"]
_0 = (state_encoder).forward(x, edge_index, x0, edge_index2, edge_index1, edge_weight, edge_index0, )
_1 = {"state_vertex": _0, "game_vertex": x}
return _1
```
## System Info
PyTorch version: 2.0.1
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 13.4.1 (arm64)
GCC version: Could not collect
Clang version: 14.0.3 (clang-1403.0.22.14.1)
CMake version: version 3.26.4
Libc version: N/A
Python version: 3.11.4 (main, Jul 5 2023, 08:40:20) [Clang 14.0.6 ] (64-bit runtime)
Python platform: macOS-13.4.1-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M1
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.25.0
[pip3] torch==2.0.1
[pip3] torch-geometric==2.3.1
[pip3] torch-scatter==2.1.1
[pip3] torch-sparse==0.6.17
[pip3] torchaudio==2.0.2
[pip3] torchvision==0.15.2a0
[conda] numpy 1.25.0 py311he598dae_0
[conda] numpy-base 1.25.0 py311hfbfe69c_0
[conda] pytorch 2.0.1 py3.11_0 pytorch
[conda] torch-geometric 2.3.1 pypi_0 pypi
[conda] torch-scatter 2.1.1 pypi_0 pypi
[conda] torch-sparse 0.6.17 pypi_0 pypi
[conda] torchaudio 2.0.2 py311_cpu pytorch
[conda] torchvision 0.15.2 cpu_py311he74fb5d_0
|
https://github.com/pytorch/pytorch/issues/107238
|
closed
|
[
"module: onnx",
"triaged"
] | 2023-08-15T15:43:12Z
| 2024-03-27T21:47:06Z
| null |
emnigma
|
huggingface/diffusers
| 4,618
|
How to use dreamshaperXL10_alpha2Xl10.safetensors with controlnet-canny-sdxl-1.0 ?
|
I want to use dreamshaperXL10_alpha2Xl10.safetensors with controlnet-canny-sdxl-1.0.
I downloaded the dreamshaperXL10_alpha2Xl10.safetensors file and tried to use:
```
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    './dreamshaperXL10_alpha2Xl10.safetensors',
    controlnet=controlnet,
    use_safetensors=True,
    torch_dtype=torch.float16,
    variant="fp16"
)
```
I got this error:
```
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
  File "/opt/conda/lib/python3.10/site-packages/diffusers/pipelines/pipeline_utils.py", line 908, in from_pretrained
    cached_folder = cls.download(
  File "/opt/conda/lib/python3.10/site-packages/diffusers/pipelines/pipeline_utils.py", line 1330, in download
    info = model_info(
  File "/opt/conda/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 110, in _inner_fn
    validate_repo_id(arg_value)
  File "/opt/conda/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 158, in validate_repo_id
    raise HFValidationError(
huggingface_hub.utils._validators.HFValidationError: Repo id must be in the form 'repo_name' or 'namespace/repo_name': './dream/dreamshaperXL10_alpha2Xl10.safetensors'. Use repo_type argument if needed.
```
Previously, I tried to use from_single_file instead of from_pretrained and got an error: from_single_file is not available with StableDiffusionXLControlNetPipeline.
Please help.
Thanks
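For context, a hedged workaround sketch (an assumption about the available API, not a verified fix): load the single-file checkpoint into a plain SDXL pipeline first, then rebuild the ControlNet pipeline from its components:
```python
import torch
from diffusers import (
    ControlNetModel,
    StableDiffusionXLControlNetPipeline,
    StableDiffusionXLPipeline,
)

controlnet = ControlNetModel.from_pretrained(
    "diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16
)
base = StableDiffusionXLPipeline.from_single_file(
    "./dreamshaperXL10_alpha2Xl10.safetensors", torch_dtype=torch.float16
)
pipe = StableDiffusionXLControlNetPipeline(**base.components, controlnet=controlnet)
```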
|
https://github.com/huggingface/diffusers/issues/4618
|
closed
|
[] | 2023-08-15T13:44:54Z
| 2023-08-22T01:31:37Z
| null |
arnold408
|
pytorch/pytorch
| 107,225
|
Is pytorch version 1.10.2 still maintained? What is the official EOM(End of Maintenance) date?
|
### 🐛 Describe the bug
Is pytorch version 1.10.2 still maintained? What is the official EOM(End of Maintenance) date?
### Versions
pytorch v1.10.2
cc @seemethere @malfet @svekars @carljparker
|
https://github.com/pytorch/pytorch/issues/107225
|
closed
|
[
"module: binaries",
"module: docs",
"oncall: releng",
"triaged"
] | 2023-08-15T12:36:25Z
| 2023-08-15T18:49:57Z
| null |
reBiocoder
|
pytorch/benchmark
| 1,825
|
how to run torchbenchmark in dynamo mode
|
Hi,
1. I want to test benchmark in dynamo mode, how can I run test_bench.py script?
2. When I add code:
`self.model = torch.compile(self.model)`
in BERT_pytorch __init__.py, then run:
`pytest test_bench.py -k "test_train[BERT_pytorch-cuda-eager]" --ignore_machine_config --benchmark-autosave`, it raises below errors:

how can I fix it? Thank you for you help~ @ezyang @orionr @romovpa @kostmo @zdevito
|
https://github.com/pytorch/benchmark/issues/1825
|
closed
|
[] | 2023-08-15T12:12:20Z
| 2023-08-16T05:46:53Z
| null |
Godlovecui
|
huggingface/peft
| 826
|
What is alpha? Alpha is not in the paper.
|
### Feature request
https://github.com/huggingface/peft/blob/main/src/peft/tuners/lora.py#L57
This alpha is not in the paper:
https://arxiv.org/abs/2106.09685
Where can I learn about this alpha?
Thank you!
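For reference, a small illustration of what the linked line does (values are arbitrary; this paraphrases the code rather than the paper, where the same idea appears as the α/r scaling in Section 4.1):
```python
lora_alpha, r = 16, 8
scaling = lora_alpha / r        # fixed scaling applied to the adapter update
# conceptually: output = W @ x + scaling * (lora_B @ (lora_A @ x))
print(scaling)                  # 2.0
```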
### Motivation
As per the title.
### Your contribution
As per the title.
|
https://github.com/huggingface/peft/issues/826
|
closed
|
[] | 2023-08-15T09:47:58Z
| 2023-09-23T15:03:19Z
| null |
XuJianzhi
|
huggingface/optimum
| 1,285
|
Merge patch into autogptq
|
### Feature request
Currently, there is a patch to get GPTQ quantization working:
```
# !pip install -q git+https://github.com/fxmarty/AutoGPTQ.git@patch-act-order-exllama
```
Is there a plan to try and merge that into the autogptq repo?
### Motivation
autogptq is slow to install. This is easily solved by using wheels, but I don't have wheels for this patch. Easiest would be for the patch to be released.
### Your contribution
Seems like the patch is a few tens of commits behind autogptq, so the first step would be to check whether doing a pr would create conflicts.
|
https://github.com/huggingface/optimum/issues/1285
|
closed
|
[] | 2023-08-14T16:24:14Z
| 2023-08-23T17:17:46Z
| 5
|
RonanKMcGovern
|
pytorch/pytorch
| 107,146
|
[libtorch C++] How can I do distributed training and inference with a libtorch model? Please show me a tutorial or example
|
### 🐛 Describe the bug
Hi, for libtorch I found the distributed package, but I don't know how to declare the distributed parameters to make a libtorch model train and run inference across multiple machines. I need your team's help; please show me an example of distributed model-training code. Thanks.
### Versions
libtorch 2.0
|
https://github.com/pytorch/pytorch/issues/107146
|
closed
|
[] | 2023-08-14T15:54:56Z
| 2023-08-14T18:34:09Z
| null |
mullerhai
|
huggingface/candle
| 443
|
What is the minimal requirements of Intel MKL version?
|
Hello, Thanks for the great work!
I've got an error while compiling with the `-features mkl` option.
For example `cargo install --git https://github.com/huggingface/candle.git candle-examples --examples bert -F mkl`
The error said
```bash
= note: /usr/bin/ld: /workspaces/Kuberian/searcher/target/debug/deps/libcandle_core-0afc8671b4dae8af.rlib(candle_core-0afc8671b4dae8af.candle_core.b11884625c01537d-cgu.13.rcgu.o): in function `candle_core::mkl::hgemm':
/usr/local/cargo/git/checkouts/candle-0c2b4fa9e5801351/60cd155/candle-core/src/mkl.rs:162: undefined reference to `hgemm_'
collect2: error: ld returned 1 exit status
= note: some `extern` functions couldn't be found; some native libraries may need to be installed or have their path specified
= note: use the `-l` flag to specify native libraries to link
= note: use the `cargo:rustc-link-lib` directive to specify the native libraries to link with Cargo (see https://doc.rust-lang.org/cargo/reference/build-scripts.html#cargorustc-link-libkindname)
```
I initially thought that I did not install intel mkl libs properly, but I found that
1. [intel-mkl-src](https://github.com/rust-math/intel-mkl-src) automatically downloads the required library from ghcr
2. `intel mkl 2020.01`, which automatically downloaded from [here](https://github.com/rust-math/rust-mkl-container), simply does not implement `hgemm` while they do implement `sgemm` and `dgemm`
3. the latest version of intel mkl does implement `hgemm`
So I tried the latest version of intel mkl, but it seems `intel-mkl-src` does not support it.
I'm wondering which `intel-mkl` version you use in your development environment?
|
https://github.com/huggingface/candle/issues/443
|
closed
|
[] | 2023-08-14T14:09:01Z
| 2024-02-03T16:43:34Z
| null |
iwanhae
|
huggingface/pytorch-image-models
| 1,917
|
how to change SqueezeExcite in efficientnet
|
I want to create EfficientNet networks using timm in which SqueezeExcite contains three parts ['Conv2d', 'SiLU', 'Conv2d'], but it currently contains four parts ['Conv2d', 'SiLU', 'Conv2d', 'sigmoid']. How should I modify it? Thank you.
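A hedged sketch of one possible approach (an assumption about the goal: dropping the trailing sigmoid gate), replacing the gate of every SqueezeExcite block after the model is created:
```python
import timm
import torch.nn as nn

model = timm.create_model("efficientnet_b0", pretrained=False)
for module in model.modules():
    if type(module).__name__ == "SqueezeExcite":
        module.gate = nn.Identity()   # removes the sigmoid, leaving Conv2d -> SiLU -> Conv2d
```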
|
https://github.com/huggingface/pytorch-image-models/issues/1917
|
closed
|
[
"enhancement"
] | 2023-08-14T11:45:05Z
| 2023-08-14T14:13:26Z
| null |
Yang-Changhui
|
huggingface/setfit
| 408
|
No tutorial or guideline for Few-shot learning on multiclass text classification
|
I just want to use SBERT for few-shot multiclass text classification, but I couldn't find any tutorial or explanation for it. Can you explain which "multi_target_strategy" and loss function I should use for multi-class text classification?
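For context, a hedged sketch (assuming the pre-1.0 `SetFitTrainer` API): plain multiclass labels need no `multi_target_strategy` at all (that option is meant for multi-label problems), and `CosineSimilarityLoss` is the usual default:
```python
from datasets import Dataset
from sentence_transformers.losses import CosineSimilarityLoss
from setfit import SetFitModel, SetFitTrainer

train_ds = Dataset.from_dict(
    {"text": ["great film", "terrible film", "it was okay"], "label": [2, 0, 1]}
)
model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")
trainer = SetFitTrainer(
    model=model, train_dataset=train_ds, loss_class=CosineSimilarityLoss, num_iterations=20
)
trainer.train()
preds = model(["what a fantastic day"])   # returns the predicted class ids
```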
|
https://github.com/huggingface/setfit/issues/408
|
open
|
[
"documentation",
"question"
] | 2023-08-14T09:02:18Z
| 2023-10-03T20:29:25Z
| null |
ByUnal
|
huggingface/diffusers
| 4,594
|
latents.requires_grad is false in my custom pipeline no matter what.
|
Hi, in my quest to make a flexible pipeline that can easily add new features instead of creating a pipeline for every variation, I made the following:
```
class StableDiffusionRubberPipeline(StableDiffusionPipeline):
call_funcs=[]
def __init__(
self,
vae: AutoencoderKL,
text_encoder: CLIPTextModel,
tokenizer: CLIPTokenizer,
unet: UNet2DConditionModel,
scheduler: KarrasDiffusionSchedulers,
safety_checker: StableDiffusionSafetyChecker,
feature_extractor: CLIPImageProcessor,
requires_safety_checker: bool = True,
):
self.before_init()
super().__init__(vae,text_encoder,tokenizer,unet,scheduler,safety_checker,feature_extractor,requires_safety_checker)
if hasattr(scheduler.config, "steps_offset") and scheduler.config.steps_offset != 1:
deprecation_message = (
f"The configuration file of this scheduler: {scheduler} is outdated. `steps_offset`"
f" should be set to 1 instead of {scheduler.config.steps_offset}. Please make sure "
"to update the config accordingly as leaving `steps_offset` might led to incorrect results"
" in future versions. If you have downloaded this checkpoint from the Hugging Face Hub,"
" it would be very nice if you could open a Pull request for the `scheduler/scheduler_config.json`"
" file"
)
deprecate("steps_offset!=1", "1.0.0", deprecation_message, standard_warn=False)
new_config = dict(scheduler.config)
new_config["steps_offset"] = 1
scheduler._internal_dict = FrozenDict(new_config)
if hasattr(scheduler.config, "clip_sample") and scheduler.config.clip_sample is True:
deprecation_message = (
f"The configuration file of this scheduler: {scheduler} has not set the configuration `clip_sample`."
" `clip_sample` should be set to False in the configuration file. Please make sure to update the"
" config accordingly as not setting `clip_sample` in the config might lead to incorrect results in"
" future versions. If you have downloaded this checkpoint from the Hugging Face Hub, it would be very"
" nice if you could open a Pull request for the `scheduler/scheduler_config.json` file"
)
deprecate("clip_sample not set", "1.0.0", deprecation_message, standard_warn=False)
new_config = dict(scheduler.config)
new_config["clip_sample"] = False
scheduler._internal_dict = FrozenDict(new_config)
if safety_checker is None and requires_safety_checker:
logger.warning(
f"You have disabled the safety checker for {self.__class__} by passing `safety_checker=None`. Ensure"
" that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered"
" results in services or applications open to the public. Both the diffusers team and Hugging Face"
" strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling"
" it only for use-cases that involve analyzing network behavior or auditing its results. For more"
" information, please have a look at https://github.com/huggingface/diffusers/pull/254 ."
)
if safety_checker is not None and feature_extractor is None:
raise ValueError(
"Make sure to define a feature extractor when loading {self.__class__} if you want to use the safety"
" checker. If you do not want to use the safety checker, you can pass `'safety_checker=None'` instead."
)
is_unet_version_less_0_9_0 = hasattr(unet.config, "_diffusers_version") and version.parse(
version.parse(unet.config._diffusers_version).base_version
) < version.parse("0.9.0.dev0")
is_unet_sample_size_less_64 = hasattr(unet.config, "sample_size") and unet.config.sample_size < 64
if is_unet_version_less_0_9_0 and is_unet_sample_size_less_64:
deprecation_message = (
"The configuration file of the unet has set the default `sample_size` to smaller than"
" 64 which seems highly unlikely. If your checkpoint is a fine-tuned version of any of the"
" following: \n- CompVis/stable-diffusion-v1-4 \n- CompVis/stable-diffusion-v1-3 \n-"
" CompVis/stable-diffusion-v1-2 \n- CompVis/stable-diffusion-v1-1 \n- runwayml/stable-diffusion-v1-5"
" \n- runwayml/stable-diffusion-inpainting \n you should change 'sample_size' to 64 in the"
" configuration file. Please make sure to update the config accordingly as leaving `sample_size=32`"
" in the config mi
|
https://github.com/huggingface/diffusers/issues/4594
|
closed
|
[] | 2023-08-13T15:02:22Z
| 2023-08-14T12:11:36Z
| null |
alexblattner
|
huggingface/datasets
| 6,153
|
custom load dataset to hub
|
### System Info
Kaggle notebook.
I transformed this dataset:
```
dataset = load_dataset("Dahoas/first-instruct-human-assistant-prompt")
```
into `formatted_dataset`:
```
Dataset({
features: ['message_tree_id', 'message_tree_text'],
num_rows: 33143
})
```
but I would like to know how to upload it to the Hub.
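For context, a minimal hedged sketch of the usual route (`push_to_hub`); the repo id and the tiny stand-in dataset are placeholders:
```python
from datasets import Dataset
from huggingface_hub import login

formatted_dataset = Dataset.from_dict(
    {"message_tree_id": ["example-id"], "message_tree_text": ["example text"]}
)  # stands in for the transformed dataset above
login()  # or set the HF_TOKEN environment variable / pass token=...
formatted_dataset.push_to_hub("your-username/first-instruct-human-assistant-prompt-formatted")
```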
### Who can help?
@ArthurZucker @younesbelkada
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
shared above
### Expected behavior
load dataset to hub
|
https://github.com/huggingface/datasets/issues/6153
|
closed
|
[] | 2023-08-13T04:42:22Z
| 2023-11-21T11:50:28Z
| 5
|
andysingal
|
huggingface/chat-ui
| 398
|
meta-llama/Llama-2-7b-chat-hf requires a pro subscription?
|
I ran the instructions to run locally, and ran into this.
I've been working on my own UI and thought I'd give this a shot, and if that's the route Hugging Face is going, I find it very disappointing. I was expecting the model to be hosted locally and routed through FastAPI or something.
|
https://github.com/huggingface/chat-ui/issues/398
|
closed
|
[] | 2023-08-12T03:56:55Z
| 2023-08-12T04:03:11Z
| 1
|
thistleknot
|
huggingface/chat-ui
| 397
|
Dynamically adjust `max_new_tokens`
|
Hi,
I am running a 4096 context length model behind TGI interface. My primary use case is summarization wherein some of my requests can be quite large.
I have set `truncate` to 4000 and that leaves `max_new_tokens` to be at most 4096-4000=96.
So, even if my input length is not 4000 tokens long, say it is only 1024 tokens long, I can only generate 96 token long response. In this case, `max_new_tokens` can be 4096-1024=3072.
Is it possible for `chat-ui` to dynamically adjust the `max_new_tokens` this way?
Thanks for the great work!
|
https://github.com/huggingface/chat-ui/issues/397
|
open
|
[
"question",
"back"
] | 2023-08-11T16:37:10Z
| 2023-09-18T12:49:49Z
| null |
abhinavkulkarni
|
huggingface/chat-ui
| 396
|
Long chat history
|
How do you manage a long chat history?
Do you truncate the history at some point and call the API only with the most recent messages?
|
https://github.com/huggingface/chat-ui/issues/396
|
closed
|
[
"question"
] | 2023-08-11T15:52:43Z
| 2023-09-18T12:50:07Z
| null |
keidev
|
huggingface/trl
| 638
|
How many and what kind of gpus needed to run the example?
|
For every script or project in the example directory, could you please tell us how many and what kind of gpus needed to run the experiments? Thanks a lot.
|
https://github.com/huggingface/trl/issues/638
|
closed
|
[] | 2023-08-11T14:12:34Z
| 2023-09-11T08:22:33Z
| null |
Wallace-222
|
huggingface/chat-ui
| 395
|
Error's out evetime I try to add a new model
|
I'm currently having a huge issue. I'm trying to easily add models to the chat UI. I made a folder and added a specific model to that folder, but I'm unable to actually use that model. I'm not sure what I'm doing wrong; I've stared at the docs for a few hours, re-reading them, and also looked it up on YouTube, but have found nothing. Currently, the code in my .env.local file looks like this:
MODELS=`[
{
"name": "Open Assistant epoch-3.5 LLM",
"datasetName": "OpenAssistant/oasst1",
"description": "A good alternative to ChatGPT",
"websiteUrl": "https://open-assistant.io",
"userMessageToken": "<|prompter|>",
"assistantMessageToken": "<|assistant|>",
"messageEndToken": "</s>",
"preprompt": "Below are a series of dialogues between various people and an AI assistant. The AI tries to be helpful, polite, honest, sophisticated, emotionally aware, and humble-but-knowledgeable. The assistant is happy to help with almost anything, and will do its best to understand exactly what is needed. It also tries to avoid giving false or misleading information, and it caveats when it isn't entirely sure about the right answer. That said, the assistant is practical and really does its best, and doesn't let caution get too much in the way of being useful.\n-----\n",
"promptExamples": [
{
"title": "Write an email from bullet list",
"prompt": "As a restaurant owner, write a professional email to the supplier to get these products every week: \n\n- Wine (x10)\n- Eggs (x24)\n- Bread (x12)"
}, {
"title": "Code a snake game",
"prompt": "Code a basic snake game in python, give explanations for each step."
}, {
"title": "Assist in a task",
"prompt": "How do I make a delicious lemon cheesecake?"
}
],
"parameters": {
"temperature": 0.9,
"top_p": 0.95,
"repetition_penalty": 1.2,
"top_k": 50,
"truncate": 1000,
"max_new_tokens": 1024
}
}
]`
,`[
{
"name": "Test LLM",
"datasetName": "OpenAssistant/oasst1",
"endpoints": [{"url": "/models/Wizard-Vicuna-30B-Uncensored-GPTQ-4bit--1g.act.order.safetensors"}]
"description": "A good alternative to ChatGPT",
"userMessageToken": "<|prompter|>",
"assistantMessageToken": "<|assistant|>",
"messageEndToken": "</s>",
"preprompt": "Below are a series of dialogues between various people and an AI assistant. The AI tries to be helpful, polite, honest, sophisticated, emotionally aware, and humble-but-knowledgeable. The assistant is happy to help with almost anything, and will do its best to understand exactly what is needed. It also tries to avoid giving false or misleading information, and it caveats when it isn't entirely sure about the right answer. That said, the assistant is practical and really does its best, and doesn't let caution get too much in the way of being useful.\n-----\n",
"promptExamples": [
{
"title": "Write an email from bullet list",
"prompt": "As a restaurant owner, write a professional email to the supplier to get these products every week: \n\n- Wine (x10)\n- Eggs (x24)\n- Bread (x12)"
}, {
"title": "Code a snake game",
"prompt": "Code a basic snake game in python, give explanations for each step."
}, {
"title": "Assist in a task",
"prompt": "How do I make a delicious lemon cheesecake?"
}
],
"parameters": {
"temperature": 0.9,
"top_p": 0.95,
"repetition_penalty": 1.2,
"top_k": 50,
"truncate": 1000,
"max_new_tokens": 1024
}
}
]`
I'm currently reusing everything from the default one, but I will strip everything from it to match the actual LLM. Any and all help is much appreciated.
|
https://github.com/huggingface/chat-ui/issues/395
|
closed
|
[
"support"
] | 2023-08-11T12:55:03Z
| 2023-09-11T09:35:55Z
| 3
|
Dom-Cogan
|
huggingface/dataset-viewer
| 1,662
|
Should we change 500 to another status code when the error comes from the dataset?
|
See #1661 for example.
Same for the "retry later" error: is 500 the most appropriate status code?
|
https://github.com/huggingface/dataset-viewer/issues/1662
|
open
|
[
"question",
"api",
"P2"
] | 2023-08-10T15:57:03Z
| 2023-08-14T15:36:27Z
| null |
severo
|
huggingface/datasets
| 6,139
|
Offline dataset viewer
|
### Feature request
The dataset viewer feature is very nice. It enables the user to easily view the dataset. However, when working for private companies, we cannot always upload the dataset to the Hub. Is there a way to create a dataset viewer offline, i.e. to run code that opens some kind of HTML page (or similar) that makes it easy to view the dataset?
### Motivation
I want to easily view my dataset even when it is hosted locally.
### Your contribution
N.A.
|
https://github.com/huggingface/datasets/issues/6139
|
closed
|
[
"enhancement",
"dataset-viewer"
] | 2023-08-10T11:30:00Z
| 2024-09-24T18:36:35Z
| 7
|
yuvalkirstain
|
huggingface/text-generation-inference
| 807
|
How to create a NCCL group on Kubernetes?
|
I am deploying text-generation-inference on EKS with each node having 1 NVIDIA A10G GPU.
How should I create a group such that a model like llama-2-13b-chat is able to use GPUs across nodes for inference?
|
https://github.com/huggingface/text-generation-inference/issues/807
|
closed
|
[
"Stale"
] | 2023-08-10T09:29:59Z
| 2024-04-17T01:45:28Z
| null |
rsaxena-rajat
|
pytorch/kineto
| 799
|
pytorch.profiler cannot profile aten:mm on GPU
|
I use torch.profiler to profile a program that runs a matmul on the GPU, and it seems the profiler does not record aten::mm correctly. There are stats in the GPU Kernel view,
<img width="2118" alt="image" src="https://github.com/pytorch/kineto/assets/11534916/dc126d48-1517-4af2-9200-8fd37aeaa6a4">
but no GPU kernel stats in Trace view.
<img width="1903" alt="image" src="https://github.com/pytorch/kineto/assets/11534916/d4d747af-9b88-47b3-88eb-a3e3a9d00ef1">
Sample code:
```python
import torch
a = torch.rand([1, 1024, 2048], device='cuda')
b = torch.rand([2048, 2048], device='cuda')
with torch.profiler.profile(
activities=[
torch.profiler.ProfilerActivity.CPU,
torch.profiler.ProfilerActivity.CUDA,
],
on_trace_ready=torch.profiler.tensorboard_trace_handler("./mm-profile")
):
torch.matmul(a, b)
```
|
https://github.com/pytorch/kineto/issues/799
|
closed
|
[
"question",
"plugin"
] | 2023-08-10T08:13:15Z
| 2024-04-23T15:50:55Z
| null |
scse-l
|
huggingface/chat-ui
| 394
|
Internal server error: Unexpected token ] in JSON at position 1090
|
1:58:23 AM [vite] Error when evaluating SSR module /src/lib/server/models.ts:
|- SyntaxError: Unexpected token ] in JSON at position 1090
at JSON.parse (<anonymous>)
at eval (/home/chat-ui/src/lib/server/models.ts:46:14)
at async instantiateModule (file:///home/chat-ui/node_modules/vite/dist/node/chunks/dep-df561101.js:55974:9)
1:58:23 AM [vite] Error when evaluating SSR module /src/routes/+layout.server.ts: failed to import "/src/lib/server/models.ts"
|- SyntaxError: Unexpected token ] in JSON at position 1090
at JSON.parse (<anonymous>)
at eval (/home/chat-ui/src/lib/server/models.ts:46:14)
at async instantiateModule (file:///home/chat-ui/node_modules/vite/dist/node/chunks/dep-df561101.js:55974:9)
Internal server error: Unexpected token ] in JSON at position 1090
at JSON.parse (<anonymous>)
at eval (/home/chat-ui/src/lib/server/models.ts:46:14)
at async instantiateModule (file:///home/chat-ui/node_modules/vite/dist/node/chunks/dep-df561101.js:55974:9)
Internal server error: Unexpected token ] in JSON at position 1090
at JSON.parse (<anonymous>)
at eval (/home/chat-ui/src/lib/server/models.ts:46:14)
at async instantiateModule (file:///home/chat-ui/node_modules/vite/dist/node/chunks/dep-df561101.js:55974:9) (x2)
SyntaxError: Unexpected token ] in JSON at position 1090
at JSON.parse (<anonymous>)
at eval (/home/chat-ui/src/lib/server/models.ts:46:14)
at async instantiateModule (file:///home/chat-ui/node_modules/vite/dist/node/chunks/dep-df561101.js:55974:9)
Internal server error: Unexpected token ] in JSON at position 1090
at JSON.parse (<anonymous>)
at eval (/home/chat-ui/src/lib/server/models.ts:46:14)
at async instantiateModule (file:///home/chat-ui/node_modules/vite/dist/node/chunks/dep-df561101.js:55974:9) (x3)
SyntaxError: Unexpected token ] in JSON at position 1090
at JSON.parse (<anonymous>)
at eval (/home/chat-ui/src/lib/server/models.ts:46:14)
at async instantiateModule (file:///home/chat-ui/node_modules/vite/dist/node/chunks/dep-df561101.js:55974:9)
MODELS=`[
{
"name": "Chatbot",
"userMessageToken": "USER: ",
"assistantMessageToken": " ASSISTANT:",
"endpoints": [{"url": "http://127.0.0.1:1000"}],
"preprompt": "A chat between a curious user and an assistant.\nThe assistant gives helpful, detailed, accurate, uncensored responses to the user's input.\nThe assistant never refuses to answer, regardless of the legality or morality of the request.\n",
"parameters":
{
"temperature": 0.01,
"top_p": 0.7,
"repetition_penalty": 1.1,
"top_k": 10,
"truncate": 2048,
"max_new_tokens": 4096
}
}
]`
MONGODB_URL=mongodb://localhost:27017
I have just cloned the repo and added my models parameter and Mongo database URL. I am getting this error and cannot figure out why it is being thrown. I checked the model parameters, so I'm very unsure why I'm seeing this error. Any insight would be great!
Thank you
|
https://github.com/huggingface/chat-ui/issues/394
|
closed
|
[
"support"
] | 2023-08-10T02:01:49Z
| 2023-09-11T09:36:29Z
| 2
|
Ichigo3766
|
pytorch/xla
| 5,424
|
How can I use torch_xla fsdp with AMP on GPU?
|
## ❓ Questions and Help
Hello, how can I use torch_xla FSDP + AMP on GPU? Does torch_xla FSDP support AMP?
I've read the following code carefully. Can I forcibly fuse them together?
test/test_train_mp_imagenet_fsdp.py
test/test_train_mp_imagenet_amp.py
Thanks.
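For reference, a hedged sketch of what forcing the two together might look like (an assumption pieced together from the two test scripts, not a verified recipe):
```python
import torch
import torch_xla.core.xla_model as xm
from torch_xla.amp import autocast
from torch_xla.distributed.fsdp import XlaFullyShardedDataParallel as FSDP

device = xm.xla_device()
model = FSDP(torch.nn.Linear(128, 10).to(device))   # toy model as a stand-in
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

data = torch.randn(8, 128, device=device)
target = torch.randint(0, 10, (8,), device=device)

with autocast(device):                               # AMP around forward/loss
    loss = torch.nn.functional.cross_entropy(model(data), target)
loss.backward()
optimizer.step()                                     # FSDP has already reduced the gradients
xm.mark_step()
```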
|
https://github.com/pytorch/xla/issues/5424
|
closed
|
[
"question",
"distributed"
] | 2023-08-09T08:21:40Z
| 2025-04-29T13:58:58Z
| null |
Pluto1944
|
huggingface/trl
| 627
|
how to use Reward model?
|
How do I use the reward model in the RLHF PPO stage?
Could you provide an example?
Thank you very much.
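For context, a hedged minimal sketch (assumes the trl 0.x `PPOTrainer` API; the model ids are the ones used in the trl sentiment example): the reward model only scores the generated responses, and those scores are passed to `PPOTrainer.step`:
```python
import torch
from transformers import AutoTokenizer, pipeline
from trl import AutoModelForCausalLMWithValueHead, PPOConfig, PPOTrainer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLMWithValueHead.from_pretrained("gpt2")
ppo_trainer = PPOTrainer(PPOConfig(batch_size=1, mini_batch_size=1), model, tokenizer=tokenizer)

reward_pipe = pipeline("text-classification", model="lvwerra/distilbert-imdb")  # reward model

query = tokenizer("The movie was", return_tensors="pt").input_ids[0]
response = ppo_trainer.generate(query, return_prompt=False, max_new_tokens=16)[0]
text = tokenizer.decode(query) + tokenizer.decode(response)
reward = torch.tensor(reward_pipe(text)[0]["score"])

stats = ppo_trainer.step([query], [response], [reward])   # PPO update driven by the reward
```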
|
https://github.com/huggingface/trl/issues/627
|
closed
|
[] | 2023-08-09T02:52:23Z
| 2023-08-12T02:04:17Z
| null |
zhuxiaosheng
|
huggingface/transformers.js
| 243
|
QW
|
Hi Joshua, how are you doing? I hope everything is going well. I just wanted to ask: if you know anybody who needs help or has issues with their Node.js backend code or their servers, it would be a great pleasure to help.
|
https://github.com/huggingface/transformers.js/issues/243
|
closed
|
[
"question",
"off-topic"
] | 2023-08-08T21:46:13Z
| 2023-08-09T19:55:55Z
| null |
jedLahrim
|
huggingface/peft
| 808
|
What is the correct way to apply LoRA on a custom model (not models on HuggingFace)?
|
Hi, most models in examples are `transformers` pretrained models.
However, I'm using a custom model and applying LoRA to it:
```
model = MyPytorchModel()
model = PeftModel(model, peft_config)
======= training... ========
model.save_pretrained(save_path)
```
Then, I reload my custom model and merge lora weight:
```
model = MyPytorchModel()
lora_model = PeftModel.from_pretrained(model, save_path)
model = lora_model.merge_and_unload()
```
Is this feasible? When I test the final `model`, its behavior does not differ from before loading the LoRA weights, as if `merge_and_unload()` has no effect at all. I want to know where the problem is.
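For comparison, a hedged sketch of the workflow usually shown for custom (non-Transformers) modules, using `get_peft_model` with explicit `target_modules` (the toy model below is a placeholder, not a diagnosis of the issue above):
```python
import torch.nn as nn
from peft import LoraConfig, get_peft_model

class MyPytorchModel(nn.Module):          # toy stand-in for the custom model
    def __init__(self):
        super().__init__()
        self.q_proj = nn.Linear(32, 32)
        self.v_proj = nn.Linear(32, 32)

    def forward(self, x):
        return self.v_proj(self.q_proj(x))

peft_config = LoraConfig(r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"])
model = get_peft_model(MyPytorchModel(), peft_config)
model.print_trainable_parameters()
```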
|
https://github.com/huggingface/peft/issues/808
|
closed
|
[] | 2023-08-08T17:10:36Z
| 2025-08-01T21:14:25Z
| null |
DtYXs
|
huggingface/diffusers
| 4,533
|
How to debug custom pipeline locally ?
|
Hi,
I built diffusers from source and I am using ControlNet. However, diffusers does not seem to load the custom pipeline from ```diffusers/examples/community/stable_diffusion_controlnet_img2img.py``` as I expected. Instead, it seems to download a new ```stable_diffusion_controlnet_img2img.py``` from the Hub and cache it somewhere else.
My question is: how can I make it load my local ```diffusers/examples/community/stable_diffusion_controlnet_img2img.py``` so that I can debug it locally?
Best,
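For context, a hedged sketch (an assumption about how `custom_pipeline` resolves paths): point it at a local directory containing the file saved as `pipeline.py`, instead of the community name that gets fetched and cached:
```python
import torch
from diffusers import ControlNetModel, DiffusionPipeline

controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)
pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    custom_pipeline="./my_local_pipeline",  # directory holding pipeline.py copied from examples/community
    torch_dtype=torch.float16,
)
```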
|
https://github.com/huggingface/diffusers/issues/4533
|
closed
|
[] | 2023-08-08T15:34:40Z
| 2023-08-09T12:17:42Z
| null |
pansanity666
|
huggingface/setfit
| 405
|
how to set the device id
|
How do I run multiple training runs on different GPU devices? I don't see any argument that allows me to set this. Thank you!
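A hedged workaround sketch (assumes each run lives in its own process): restrict GPU visibility before anything CUDA-related is imported, so each run is pinned to its own device:
```python
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "1"   # this process only sees physical GPU 1

from setfit import SetFitModel

model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")
# inside this process, "cuda:0" now maps to physical GPU 1
```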
|
https://github.com/huggingface/setfit/issues/405
|
open
|
[] | 2023-08-08T08:25:36Z
| 2023-08-08T08:25:36Z
| null |
vahuja4
|
pytorch/android-demo-app
| 331
|
What is the IValue type? Is it a Tensor?
|
What is the difference between IValue and Tensor?
Could you please share some references?
Thanks.
|
https://github.com/pytorch/android-demo-app/issues/331
|
open
|
[] | 2023-08-08T00:30:45Z
| 2023-08-08T00:30:45Z
| null |
NeighborhoodCoding
|
huggingface/transformers.js
| 239
|
[Question] Adding Custom or Unused Token
|
<!-- QUESTION GOES HERE -->
Is it possible to add a custom range as a token?
For example, for a price_list of $100-$200,
can we add a custom vocab entry like this to the vocab list?
vocab list:
nice
hello
__$100-$200__
fish
...
|
https://github.com/huggingface/transformers.js/issues/239
|
closed
|
[
"question"
] | 2023-08-07T18:32:20Z
| 2023-08-07T20:38:15Z
| null |
hadminh
|
huggingface/chat-ui
| 390
|
Can I hook it up to a retrieval system for a document chatbot?
|
I want to use the instructor-xl text embedding model and FAISS to create and retrieve from a vector store: sort of a chatbot for documents, or a domain-specific chatbot. Any ideas on how I can do it?
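For context, a hedged sketch of the retrieval half (library and model choices are assumptions; instructor-xl has its own loader, so a generic sentence-transformers model stands in here). The retrieved chunks would then be prepended to the chat prompt/preprompt:
```python
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

docs = ["Refund policy: 30 days.", "Shipping takes 3-5 days.", "Support hours are 9-17 CET."]
encoder = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")  # placeholder embedder

doc_vecs = np.asarray(encoder.encode(docs), dtype="float32")
faiss.normalize_L2(doc_vecs)
index = faiss.IndexFlatIP(doc_vecs.shape[1])   # cosine similarity via normalized inner product
index.add(doc_vecs)

query_vec = np.asarray(encoder.encode(["How long does delivery take?"]), dtype="float32")
faiss.normalize_L2(query_vec)
_, ids = index.search(query_vec, k=2)
context = "\n".join(docs[i] for i in ids[0])   # inject this into the system prompt
```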
|
https://github.com/huggingface/chat-ui/issues/390
|
open
|
[] | 2023-08-07T15:22:10Z
| 2024-02-22T12:55:41Z
| 9
|
adarshxs
|
huggingface/diffusers
| 4,507
|
How to train stable-diffusion-xl-base-1.0 without lora?
|
Hi, I want to train `stable-diffusion-xl-base-1.0` without LoRA; how do I do this?
I can run `train_text_to_image_lora_sdxl.py`.
But `train_text_to_image.py` with `MODEL_NAME="stabilityai/stable-diffusion-xl-base-1.0"` raises an error:
```
diffusers/models/unet_2d_condition.py:836 in forward │
│ 833 │ │ │ aug_emb = self.add_embedding(text_embs, image_embs) │
│ 834 │ │ elif self.config.addition_embed_type == "text_time": │
│ 835 │ │ │ # SDXL - style │
│ ❱ 836 │ │ │ if "text_embeds" not in added_cond_kwargs: │
│ 837 │ │ │ │ raise ValueError( │
│ 838 │ │ │ │ │ f"{self.__class__} has the config param `addition_ │
│ 839 │ │ │ │ ) │
╰──────────────────────────────────────────────────────────────────────────────╯
TypeError: argument of type 'NoneType' is not iterable
```
`added_cond_kwargs` is None in this case.
|
https://github.com/huggingface/diffusers/issues/4507
|
closed
|
[] | 2023-08-07T10:38:24Z
| 2023-08-14T07:25:49Z
| null |
KimmiShi
|
huggingface/text-generation-inference
| 782
|
What is the correct parameter combination for using dynamic RoPE scaling ?
|
Hi Team, First of all thanks for the awesome piece of software !!
I want to use `upstage/Llama-2-70b-instruct-v2` model with `--max-input-length=8192 --max-total-tokens=10240` which originally supports `max_position_embeddings=4096`.
I tried running the following command :
```
docker run -it --rm --gpus all --shm-size 80g --name llama2_70b_instruct_v2 -p 8560:80 -v ~/tgi_data:/data \
ghcr.io/huggingface/text-generation-inference:sha-f91e9d2 --num-shard=8 \
--model-id upstage/Llama-2-70b-instruct-v2 --revision 5f9c77b2c0397cf83d2f97740483f107c7109e8c \
--dtype=float16 \
--max-input-length=8192 --max-total-tokens=10240 --rope-scaling=dynamic --rope-factor=2.5 \
--max-batch-prefill-tokens=40100 \
```
1. Does it look correct ?
Though this ended up with:
```
Traceback (most recent call last):
File "/opt/conda/lib/python3.9/site-packages/text_generation_server/models/flash_causal_lm.py", line 727, in warmup
_, batch = self.generate_token(batch)
File "/opt/conda/lib/python3.9/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/opt/conda/lib/python3.9/site-packages/text_generation_server/models/flash_causal_lm.py", line 825, in generate_token
raise e
File "/opt/conda/lib/python3.9/site-packages/text_generation_server/models/flash_causal_lm.py", line 813, in generate_token
out = self.forward(
File "/opt/conda/lib/python3.9/site-packages/text_generation_server/models/flash_causal_lm.py", line 789, in forward
return self.model.forward(
File "/opt/conda/lib/python3.9/site-packages/text_generation_server/models/custom_modeling/flash_llama_modeling.py", line 475, in forward
hidden_states = self.model(
File "/opt/conda/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/opt/conda/lib/python3.9/site-packages/text_generation_server/models/custom_modeling/flash_llama_modeling.py", line 428, in forward
cos, sin = self.layers[0].self_attn.rotary_emb.get_cos_sin(
File "/opt/conda/lib/python3.9/site-packages/text_generation_server/utils/layers.py", line 470, in get_cos_sin
self._update_cos_sin_cache(dtype, position_ids.device, max_s)
File "/opt/conda/lib/python3.9/site-packages/text_generation_server/utils/layers.py", line 501, in _update_cos_sin_cache
newbase = self.base * ((self.scaling_factor * seq_len / self.max_position_embeddings) - (self.scaling_factor - 1)) ** (self.dim / (self.dim - 2))
NameError: name 'seq_len' is not defined
```
2. Looks like typo in the code, should it have been `seqlen` instead of `seq_len` ?
3. When I use the above model without RoPE scaling on 8x A100-40GB GPUs, it can churn out 1534 tokens per second with a prompt-heavy setup of ~883 input tokens and ~76 output tokens (best_of=1, so no hidden output tokens) per request.
Is this the expected performance, or can I do better with the above setup?
FYI: I tried fp16 on vLLM, GPTQ (4-bit), and bitsandbytes (8-bit) models, and all ended up with similar TPS (tokens per second).
|
https://github.com/huggingface/text-generation-inference/issues/782
|
closed
|
[] | 2023-08-07T05:58:14Z
| 2023-09-06T13:59:36Z
| null |
hrushikesh198
|
huggingface/transformers.js
| 238
|
[Question] Can you list all available models using tranformers.js?
|
Hey 👋
I was wondering if it's possible to list available models using the `transformers.js` package?
e.g.
> pipeline.getAvailableModels()
|
https://github.com/huggingface/transformers.js/issues/238
|
closed
|
[
"question"
] | 2023-08-07T01:53:35Z
| 2023-08-13T23:27:55Z
| null |
sambowenhughes
|
huggingface/chat-ui
| 389
|
Inject assistant message in the begining of the chat
|
Hey, is it possible to start a conversation with an assistant message showing up as the first message in the chat?
|
https://github.com/huggingface/chat-ui/issues/389
|
closed
|
[
"enhancement",
"question"
] | 2023-08-06T17:25:25Z
| 2023-09-18T12:52:16Z
| null |
matankley
|
huggingface/diffusers
| 4,494
|
How to convert a diffuser pipeline of XL to checkpoint or safetensors
|
I need to fine-tune the Stable Diffusion UNet (or something like that), and then convert the pipeline into a ckpt for webui usage.
Previously I used `scripts/convert_diffusers_to_original_stable_diffusion.py` for the conversion.
But currently it cannot convert an XL pipeline correctly, and the webui raises errors.
Thanks in advance.
|
https://github.com/huggingface/diffusers/issues/4494
|
closed
|
[
"stale",
"contributions-welcome"
] | 2023-08-06T13:06:54Z
| 2023-11-06T04:42:19Z
| null |
FeiiYin
|
huggingface/chat-ui
| 388
|
Is it down?
|
It doesn't load for me, and neither does your website.
|
https://github.com/huggingface/chat-ui/issues/388
|
closed
|
[] | 2023-08-06T08:54:47Z
| 2023-08-08T06:05:48Z
| 6
|
BenutzerEinsZweiDrei
|
huggingface/transformers.js
| 237
|
[Question] Ipynb for ONNX conversion?
|
Could you please share the code you're using to convert models to ONNX? I know your model cards say you're using Optimum, but when I try to do it myself, I get much larger ONNX files (in terms of disk space) and I don't know what I'm doing wrong.
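For context, a hedged sketch (an assumption about the size gap: the published transformers.js models ship 8-bit quantized ONNX weights) of exporting with Optimum and then dynamically quantizing:
```python
from optimum.onnxruntime import ORTModelForFeatureExtraction, ORTQuantizer
from optimum.onnxruntime.configuration import AutoQuantizationConfig

model = ORTModelForFeatureExtraction.from_pretrained(
    "sentence-transformers/all-MiniLM-L6-v2", export=True
)
model.save_pretrained("onnx-out")

quantizer = ORTQuantizer.from_pretrained("onnx-out")
qconfig = AutoQuantizationConfig.avx512_vnni(is_static=False, per_channel=False)
quantizer.quantize(save_dir="onnx-out-quantized", quantization_config=qconfig)  # much smaller file
```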
|
https://github.com/huggingface/transformers.js/issues/237
|
closed
|
[
"question"
] | 2023-08-06T08:45:19Z
| 2023-08-06T09:17:02Z
| null |
Mihaiii
|
huggingface/transformers.js
| 233
|
[Docs] Mention demo (GitHub pages) in Readme
|
I love your old demo page on GitHub pages (https://xenova.github.io/transformers.js/), as one can easily play with the models and copy code if needed.
Is there any reason it's not mentioned anymore (or not more visible) in the Readme?
(Sorry, added bug label accidentally, should be question instead)
|
https://github.com/huggingface/transformers.js/issues/233
|
closed
|
[
"question"
] | 2023-08-04T10:53:48Z
| 2023-12-06T15:01:38Z
| null |
do-me
|
pytorch/text
| 2,197
|
Does DataLoader(shuffle=True) really shuffle DBpedia dataset correctly?
|
According to [the docs][1], DBpedia dataset has 14 classes (labels) and 40000 texts for each class. Hence, if I create batches using `DataLoader(shuffle=True)` as follows:
```python
import torchtext.datasets as d
from torch.utils.data.dataloader import DataLoader
train = DataLoader(
d.DBpedia(split="train", root=".cache"),
batch_size=10000,
shuffle=True,
)
```
the labels should be uniformly distributed in each batch. But in practice, it seems that only a few labels are in each batch.
```python
for labels, texts in train:
print(len(set(labels.tolist())))
```
The output of the above code is:
```
1
1
1
2
2
2
2
3
3
3
3
4
4
3
3
.
.
.
```
How can I fix this? Or is my implementation wrong?
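A hedged guess at the cause (an assumption, not a confirmed diagnosis): with datapipe-backed datasets the shuffling happens inside a bounded buffer, and DBpedia appears to be stored sorted by label, so each buffer only spans a few classes. Enlarging the shuffle buffer on the datapipe itself would look like:
```python
import torchtext.datasets as d
from torch.utils.data.dataloader import DataLoader

pipe = d.DBpedia(split="train", root=".cache").shuffle(buffer_size=100_000)
train = DataLoader(pipe, batch_size=10000)
```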
P.S.
Interactive code is available on [GoogleColab][2]
[1]: https://pytorch.org/text/stable/datasets.html#dbpedia
[2]: https://colab.research.google.com/drive/10524PcR3_spf3fAh37hNbXdLeRVD6Sog?usp=sharing
|
https://github.com/pytorch/text/issues/2197
|
open
|
[] | 2023-08-04T10:34:52Z
| 2023-08-04T10:37:18Z
| 0
|
fujidaiti
|
pytorch/text
| 2,196
|
torchtext.datasets - requests.exceptions.ConnectionError
|
## 🐛 Bug
**Description of the bug**
When I try to use Multi30k dataset, I get this error:
```
requests.exceptions.ConnectionError:
This exception is thrown by __iter__ of HTTPReaderIterDataPipe(skip_on_error=False, source_datapipe=OnDiskCacheHolderIterDataPipe, timeout=None)
```
**To Reproduce**
```
from torchtext.datasets import Multi30k
SRC_LANGUAGE = 'de'
TGT_LANGUAGE = 'en'
train_iter = Multi30k(split='train', language_pair=(SRC_LANGUAGE, TGT_LANGUAGE))
next(iter(train_iter))
```
**Expected behavior**
Return a proper iterable where I can iterate over the dataset.
**Environment**
PyTorch version: 1.13.1+cpu
Is debug build: False
CUDA used to build PyTorch: Could not collect
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 11 Enterprise
GCC version: Could not collect
Clang version: Could not collect
CMake version: Could not collect
Libc version: N/A
Python version: 3.10.9 | packaged by Anaconda, Inc. | (main, Mar 1 2023, 18:18:15) [MSC v.1916 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.22621-SP0
Is CUDA available: False
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: GPU 0: GeForce GTX 1650
Nvidia driver version: 442.23
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture=9
CurrentClockSpeed=2592
DeviceID=CPU0
Family=198
L2CacheSize=1536
L2CacheSpeed=
Manufacturer=GenuineIntel
MaxClockSpeed=2592
Name=Intel(R) Core(TM) i7-10750H CPU @ 2.60GHz
ProcessorType=3
Revision=
Versions of relevant libraries:
[pip3] flake8==6.0.0
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.23.5
[pip3] numpydoc==1.5.0
[pip3] torch==1.13.1
[pip3] torchdata==0.5.1
[pip3] torchtext==0.14.1
[conda] Could not collect
**Additional context**
I've been running into issues with the Multi30K dataset for some time now. The issue that was occurring before was resolved by installing specific versions and combinations of the relevant torch libraries I specified. However, even this solution doesn't work anymore. Can you please fix what's broken with this cursed dataset?
Thank you.
|
https://github.com/pytorch/text/issues/2196
|
open
|
[] | 2023-08-04T09:25:28Z
| 2024-01-11T07:53:51Z
| 2
|
afurkank
|
huggingface/datasets
| 6,120
|
Lookahead streaming support?
|
### Feature request
From what I understand, a streaming dataset currently pulls the data and processes it as it is requested.
This can introduce significant latency when data is loaded into the training process, since we need to wait for each segment.
While the delays might be dataset-specific (or even mapping-instruction/tokenizer-specific),
is it possible to introduce a `streaming_lookahead` parameter for predictable workloads (even a shuffled dataset with a fixed seed)? We can predict in advance what the next few data samples will be and fetch them while the current set is being trained on.
With enough CPU and bandwidth to keep up with the training process, and a sufficiently large lookahead, this will reduce the latency involved in waiting for the dataset to be ready between batches.
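For comparison, a hedged workaround sketch of what is possible today (not the requested feature): wrapping the streamed dataset in a DataLoader whose workers prefetch upcoming batches while the current one is being trained on:
```python
from datasets import load_dataset
from torch.utils.data import DataLoader

stream = load_dataset("c4", "en", split="train", streaming=True).with_format("torch")
loader = DataLoader(stream, batch_size=8, num_workers=2, prefetch_factor=4)
```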
### Motivation
Faster streaming performance, while training over extra large TB sized datasets
### Your contribution
I currently use HF dataset, with pytorch lightning trainer for RWKV project, and would be able to help test this feature if supported.
|
https://github.com/huggingface/datasets/issues/6120
|
open
|
[
"enhancement"
] | 2023-08-04T04:01:52Z
| 2023-08-17T17:48:42Z
| 1
|
PicoCreator
|
huggingface/diffusers
| 4,459
|
How to convert a picture to a text embedding without training an image-specific model, as Textual Inversion does
|
CLIP text: tokens -> text_embedding -> text_features
CLIP image: img -> img_embedding -> img_features
How can I do the inversion img -> text_embedding without training every time?
|
https://github.com/huggingface/diffusers/issues/4459
|
closed
|
[
"stale"
] | 2023-08-04T01:46:25Z
| 2023-09-12T15:03:45Z
| null |
yanchaoguo
|
huggingface/datasets
| 6,116
|
[Docs] The "Process" how-to guide lacks description of `select_columns` function
|
### Feature request
The [how to process dataset guide](https://huggingface.co/docs/datasets/main/en/process) currently does not mention the [`select_columns`](https://huggingface.co/docs/datasets/main/en/package_reference/main_classes#datasets.Dataset.select_columns) function. It would be nice to include it in the guide.
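For reference, a minimal usage example of the function in question:
```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "b"], "label": [0, 1], "extra": [1.0, 2.0]})
ds = ds.select_columns(["text", "label"])   # keeps only the named columns
```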
### Motivation
This function is a commonly requested feature (see this [forum thread](https://discuss.huggingface.co/t/how-to-create-a-new-dataset-from-another-dataset-and-select-specific-columns-and-the-data-along-with-the-column/15120) and #5468 #5474). However, it has not been included in the guide since its implementation by PR #5480.
Mentioning it in the guide would help future users discover this added feature.
### Your contribution
I could submit a PR to add a brief description of the function to said guide.
|
https://github.com/huggingface/datasets/issues/6116
|
closed
|
[
"enhancement"
] | 2023-08-03T13:45:10Z
| 2023-08-16T10:02:53Z
| null |
unifyh
|
pytorch/TensorRT
| 2,167
|
❓ [Question] Is a INT8 calibrator specific to a given model or just specific to a dataset?
|
## ❓ Question
Is a INT8 calibrator specific to a given model or just specific to a dataset?
INT8 calibrators can be cached to accelerate further usage, which is nice. However, it's not clear from the documentation if the cached calibrator can only be used to calibrate the model it was used for TensorRT conversion or any model that uses the same calibration dataset.
As a practical example, let say that I'm training and comparing two classification neural networks A and B on the same dataset and with the same data preprocessing. I converted network A for TensorRT using INT8 quantization and saved the calibrator cache file. to disk. Can I use this calibrator to convert model B to TensorRT (which otherwise would have used the same calibration dataset as A)?
My intuition is that a calibrator is specific to given dataset **and** network and it cannot be reused for a different network.
|
https://github.com/pytorch/TensorRT/issues/2167
|
closed
|
[
"question"
] | 2023-08-03T11:38:16Z
| 2023-08-15T19:53:12Z
| null |
laclouis5
|
huggingface/diffusers
| 4,453
|
How to convert diffusers SDXL lora into safetensors that works with AUTO1111 webui
|
### Describe the bug
I trained a lora on SDXL with this diffusers script: https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/train_dreambooth_lora_sdxl.py
I get great results when using the output .bin with the diffusers inference code.
How can I convert the .bin to a .safetensors file that can be loaded in the AUTO1111 webui?
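For context, a hedged sketch of the file-format half of the conversion only (an assumption: A1111 additionally expects kohya-style key names, which this does not remap):
```python
import torch
from safetensors.torch import save_file

state_dict = torch.load("pytorch_lora_weights.bin", map_location="cpu")
save_file(state_dict, "pytorch_lora_weights.safetensors")
```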
### Reproduction
Train a lora on SDXL with this diffusers script: https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/train_dreambooth_lora_sdxl.py
The lora model cannot be loaded in AUTO1111 webui
### Logs
_No response_
### System Info
Python 3.10
### Who can help?
_No response_
|
https://github.com/huggingface/diffusers/issues/4453
|
closed
|
[
"bug",
"stale"
] | 2023-08-03T11:23:25Z
| 2023-09-12T15:03:46Z
| null |
wangqyqq
|
huggingface/text-generation-inference
| 765
|
How to benchmark a warmed local model by docker
|
### System Info
Using the docker run to connected local model and it worked:
`docker run --rm --name tgi --runtime=nvidia --gpus all -p 5001:5001 -v data/nfs/gdiist/model:/data k8s-master:5000/text-generation-inference:0.9.3 --model-id /data/llama-7b-hf --hostname 0.0.0.0 --port 5001 --dtype float16 `
```
2023-08-03T09:14:08.564776Z INFO text_generation_launcher: Starting Webserver
2023-08-03T09:14:08.587895Z WARN text_generation_router: router/src/main.rs:165: Could not find a fast tokenizer implementation for /data/llama-7b-hf
2023-08-03T09:14:08.587942Z WARN text_generation_router: router/src/main.rs:168: Rust input length validation and truncation is disabled
2023-08-03T09:14:08.587953Z WARN text_generation_router: router/src/main.rs:193: no pipeline tag found for model /data/llama-7b-hf
2023-08-03T09:14:08.595313Z INFO text_generation_router: router/src/main.rs:212: Warming up model
2023-08-03T09:14:11.767661Z INFO text_generation_router: router/src/main.rs:221: Connected
```
### Information
- [X] Docker
- [ ] The CLI directly
### Tasks
- [X] An officially supported command
- [ ] My own modifications
### Reproduction
I can't use `text-generation-benchmark` directly, so I entered the Docker container and ran the following commands:
`docker exec -it tgi /bin/bash`
`text-generation-benchmark --tokenizer-name data/nfs/gdiist/model/llama-7b-hf`
There are errors reported as follows:
```
2023-08-03T09:23:25.437223Z INFO text_generation_benchmark: benchmark/src/main.rs:126: Loading tokenizer
2023-08-03T09:23:25.437552Z INFO text_generation_benchmark: benchmark/src/main.rs:135: Downloading tokenizer
2023-08-03T09:23:26.218104Z ERROR cached_path::cache: /usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/cached-path-0.6.1/src/cache.rs:559: ETAG fetch for https://huggingface.co/data/nfs/gdiist/model/llama-7b-hf/resolve/main/tokenizer.json failed with fatal error
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: "Model \"data/nfs/gdiist/model/llama-7b-hf\" on the Hub doesn't have a tokenizer"', benchmark/src/main.rs:147:78
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
Aborted (core dumped)
```
I want to know whether this is caused by using a local model or by missing parameters.
### Expected behavior
1. Help me use the benchmark tool after `docker run`.
2. Tell me how to run a local model on 2 GPUs with `docker run`.
Thanks!
|
https://github.com/huggingface/text-generation-inference/issues/765
|
closed
|
[] | 2023-08-03T09:28:07Z
| 2023-10-16T01:50:10Z
| null |
Laych7
|
huggingface/diffusers
| 4,448
|
Outpainting results from diffusers' StableDiffusionControlNetPipeline are much worse than those from the A1111 webui. How to improve?
|
I am trying to outpaint some human images (mainly the lower-body part) with SD 1.5 conditioned on ControlNet's inpainting and OpenPose models. I have been using the A1111 webui with the ControlNet extension, and it has been working quite well.
Here are my settings in the webui:
<img width="774" alt="Screenshot 2023-08-03 at 15 08 30" src="https://github.com/huggingface/diffusers/assets/50854238/f5d2ed63-bd8e-467a-81cb-28293eb45fe4">

<img width="774" alt="Screenshot 2023-08-03 at 15 10 00" src="https://github.com/huggingface/diffusers/assets/50854238/8b9e6c76-3986-437a-9159-cb799d35131d">
Note that 2 ControlNet units are enabled, one for OpenPose and one for ControlNet's inpainting model. For OpenPose I enabled "Preview as Input" and uploaded my custom JSON file with all joints defined (although the lower-body joints are not visible in the input image).
Here is the result I get from the webui, which looks good:

Now, I'm trying to reproduce this result using diffusers' StableDiffusionControlNetPipeline. Below is my code:
```
import numpy as np
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel, DDIMScheduler
import torch
from diffusers.utils import load_image
import cv2
from PIL import Image

def make_inpaint_condition(image, image_mask):
    image = np.array(image.convert("RGB")).astype(np.float32) / 255.0
    image_mask = np.array(image_mask.convert("L")).astype(np.float32)
    assert image.shape[0:1] == image_mask.shape[0:1], "image and image_mask must have the same image size"
    image[image_mask < 128] = -1.0  # set as masked pixel
    image = np.expand_dims(image, 0).transpose(0, 3, 1, 2)
    image = torch.from_numpy(image)
    return image

controlnet_inpaint = ControlNetModel.from_pretrained('lllyasviel/control_v11p_sd15_inpaint',
                                                     torch_dtype=torch.float16)
controlnet_openpose = ControlNetModel.from_pretrained('lllyasviel/control_v11p_sd15_openpose',
                                                      torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained('runwayml/stable-diffusion-v1-5',
                                                         controlnet=[controlnet_inpaint, controlnet_openpose],
                                                         torch_dtype=torch.float16,
                                                         safety_checker=None).to('cuda')
pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)
pipe.enable_model_cpu_offload()
pipe.enable_xformers_memory_efficient_attention()

original_image = load_image('./image.png')
mask_image = load_image('./mask.png')
inpaint_condition_image = make_inpaint_condition(original_image, mask_image)
openpose_condition_image = load_image('./pose.png')

generated_img = pipe(prompt="best quality, photorealistic, empty background",
                     negative_prompt="lowres, bad hands, bad feet, worst quality",
                     num_inference_steps=20,
                     guidance_scale=10.0,
                     image=[inpaint_condition_image, openpose_condition_image]).images[0]
generated_img.save('./test.png')
```
and here is the result I get from diffusers:

The legs look much less realistic and the background is kind of noisy. I have been using the same SD model (SD v1.5), the same ControlNet models (v1.1 for OpenPose and inpainting), and the same sampler (DDIM), but the results from diffusers are much worse than those from the webui. What can I do to reproduce the results I get from the webui?
It also seems that with the diffusers pipeline, the unmasked part is slightly modified. Is there any post-processing applied to it?
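On the last point: one way to keep the unmasked region pixel-identical is to composite the original image back in as a post-processing step. A minimal sketch (filenames follow the script above; whether the webui does exactly this internally is an assumption):
```
from PIL import Image

# Mask is assumed to be white (255) where new content was generated
# and black (0) where the original should be kept.
original = Image.open("./image.png").convert("RGB")
mask = Image.open("./mask.png").convert("L")
generated = Image.open("./test.png").convert("RGB").resize(original.size)

# Keep generated pixels inside the mask, original pixels everywhere else.
result = Image.composite(generated, original, mask)
result.save("./test_composited.png")
```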
|
https://github.com/huggingface/diffusers/issues/4448
|
closed
|
[] | 2023-08-03T07:19:12Z
| 2023-08-30T05:35:03Z
| null |
xiyichen
|
huggingface/transformers
| 25,280
|
How to download files from HF spaces
|
### System Info
google colab
### Who can help?
@sanchit-gandhi @rock
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I tried:
```
from huggingface_hub import hf_hub_download,hf_hub_url
# model_path = hf_hub_download(repo_id="xinyu1205/recognize-anything", filename="tag2text_swin_14m.pth", local_dir = "/content")
```
but it throws an error saying the repo is not present.
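For reference, a hedged sketch of one possible cause: if "xinyu1205/recognize-anything" is a Space rather than a model repo, `hf_hub_download` needs `repo_type="space"` (and the file must actually live in that Space):
```
from huggingface_hub import hf_hub_download

# Assumes the target repo is a Space; the default repo_type="model"
# lookup would then fail with a "repo not found" style error.
model_path = hf_hub_download(
    repo_id="xinyu1205/recognize-anything",
    filename="tag2text_swin_14m.pth",
    repo_type="space",
    local_dir="/content",
)
```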
### Expected behavior
download the file
|
https://github.com/huggingface/transformers/issues/25280
|
closed
|
[] | 2023-08-03T07:02:03Z
| 2023-09-11T08:02:40Z
| null |
andysingal
|
huggingface/diffusers
| 4,445
|
How to fine-tune a LoRA model?
|
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
If I have a model from civitai, how do I fine-tune it with SD 1.5 or SDXL?
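Assuming the civitai download is a full single-file checkpoint (not a LoRA), one possible starting point is to convert it into the diffusers layout so the existing training scripts can consume it; a minimal sketch (the filename is a placeholder):
```
from diffusers import StableDiffusionPipeline

# Load a single-file civitai checkpoint (SD 1.5-based) into a diffusers pipeline
pipe = StableDiffusionPipeline.from_single_file("civitai_model.safetensors")

# Save it in diffusers layout so training scripts can point
# --pretrained_model_name_or_path at ./civitai_model_diffusers
pipe.save_pretrained("./civitai_model_diffusers")
```
For an SDXL-based checkpoint, `StableDiffusionXLPipeline.from_single_file` works analogously.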
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.
|
https://github.com/huggingface/diffusers/issues/4445
|
closed
|
[
"stale"
] | 2023-08-03T01:55:15Z
| 2023-09-12T15:03:49Z
| null |
kelisiya
|
pytorch/torchx
| 749
|
Passing additional build arguments to Dockerfile.torchx
|
## ❓ Questions and Help
### Please note that this issue tracker is not a help form and this issue will be closed.
Before submitting, please ensure you have gone through our
[documentation](https://pytorch.org/torchx).
### Question
Use case:
My team uses torchx to submit jobs to a remote scheduler such as AWS Batch. While building the Docker image, we want to use a private PyPI repository to install the Python dependencies.
It seems that the image build doesn't allow passing additional build arguments besides `Image` and `Workspace` ([reference](https://github.com/pytorch/torchx/blob/966c96f092bc89ad067b0bdb9eed8f7002dbcb46/torchx/workspace/docker_workspace.py#L122-L125)). We need to pass additional build arguments, such as the pip `index-url`, to point to our private PyPI repository during the image build process.
Does the torchx team have any recommendations on how to achieve our use case of passing additional build args while building the Docker image?
|
https://github.com/meta-pytorch/torchx/issues/749
|
open
|
[] | 2023-08-02T20:05:02Z
| 2023-10-04T22:35:48Z
| 4
|
anjali-chadha
|