| column | dtype | range |
| --- | --- | --- |
| repo | string | 147 distinct values |
| number | int64 | 1 to 172k |
| title | string | length 2 to 476 |
| body | string | length 0 to 5k |
| url | string | length 39 to 70 |
| state | string | 2 distinct values |
| labels | list | length 0 to 9 |
| created_at | timestamp[ns, tz=UTC] | 2017-01-18 18:50:08 to 2026-01-06 07:33:18 |
| updated_at | timestamp[ns, tz=UTC] | 2017-01-18 19:20:07 to 2026-01-06 08:03:39 |
| comments | int64 | 0 to 58 |
| user | string | length 2 to 28 |
pytorch/torchx
798
Combine / rename `dist.ddp` and `dist.spmd` into `dist.torchrun`
## Description Currently, `dist.ddp` and `dist.spmd` are basically identical (the latter being a lightweight wrapper on the former). Also, they could be named more explicitly — `dist.ddp` doesn't actually involve Distributed Data Parallel, it just calls `torchrun`. ## Motivation/Background <!-- why is this feature/enhancement important? provide background context --> All else equal, simplification and explicit naming are good. For example, users leveraging Fully Sharded Data Parallel instead of DDP may find it confusing that they should be using `dist.ddp`. ## Detailed Proposal <!-- provide a detailed proposal --> Refactor `components/dist.py` by combining the methods for `ddp` and `spmd` into one method called `torchrun`. Update docs, tests, examples, and callsites as appropriate. ## Alternatives <!-- discuss the alternatives considered and their pros/cons --> 1. Leave things as-is. 2. Remove `ddp` by rolling it into `spmd` and keep the `spmd` method, so `dist.spmd` is the only available command and it has a "good enough" name. ## Additional context/links <!-- link to code, documentation, etc. --> @danielbear
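A rough sketch of what the proposed component could look like (hypothetical, not the merged API; `ddp` and `j`/`script` are today's real component and parameters, while `torchrun` and the aliasing are the proposal):

```python
# Hypothetical sketch of the proposal: a single `torchrun` component in components/dist.py,
# with the old names kept as thin deprecated aliases during a transition period.
from torchx import specs
from torchx.components.dist import ddp


def torchrun(*script_args: str, script: str, j: str = "1x2", **kwargs) -> specs.AppDef:
    """Runs `script` via torchrun on j=<nnodes>x<nproc_per_node>."""
    return ddp(*script_args, script=script, j=j, **kwargs)


# deprecated aliases while callsites migrate
spmd = torchrun
```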
https://github.com/meta-pytorch/torchx/issues/798
open
[]
2023-12-08T21:23:31Z
2023-12-08T21:31:54Z
0
schmidt-ai
huggingface/tokenizers
1,410
How to create Tokenizer.json?
I have this tokenizer and I want to convert it to **tokenizer.json** format. - added_tokens.json - normalizer.json - special_tokens_map.json - config.json - preprocessor_config.json - vocab.json - merges.txt - pytorch_model.bin Is it possible to replace my tokenizer data with the original **tokenizer.json**? ``` import json j = open('hf/tokenizer.json') data = json.load(j) with open('medium-tokenizer/merges.txt') as f: merges = f.readlines() merges.pop(0) j = open('medium-tokenizer/vocab.json') vocab = json.load(j) j = open('medium-tokenizer/added_tokens.json') added_tokens = json.load(j) j = open('medium-tokenizer/normalizer.json') normalizer = json.load(j) data['added_tokens'] = added_tokens data['normalizer'] = normalizer data['model']['vocab'] = vocab data['model']['merges'] = merges with open("tokenizer.json", "w") as outfile: json.dump(data, outfile) ```
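One alternative to patching the JSON by hand (a sketch, not a verified answer for this exact checkpoint): since the folder contains the usual slow-tokenizer files (`vocab.json`, `merges.txt`, `added_tokens.json`, ...), loading it as a fast tokenizer in `transformers` and re-saving should produce a consolidated `tokenizer.json`.

```python
# Sketch: let transformers convert the slow tokenizer files into a fast tokenizer,
# then write out tokenizer.json. "medium-tokenizer" is the folder from the question.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("medium-tokenizer", use_fast=True)
tok.save_pretrained("medium-tokenizer-fast")    # writes tokenizer.json among other files
tok.backend_tokenizer.save("tokenizer.json")    # or write only the tokenizers-library file
```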
https://github.com/huggingface/tokenizers/issues/1410
closed
[ "Stale" ]
2023-12-08T09:41:18Z
2024-01-14T01:52:39Z
null
kenaii
huggingface/optimum
1,577
Support ORT inference for the Stable Diffusion XL inpaint model
### Feature request Hi all. We would like to convert the stable-diffusion-xl-inpaint model below to ONNX and run it using ORT. The conversion to ONNX went well using Optimum's CLI, but there doesn't seem to be a Python class for ORT inference. https://huggingface.co/diffusers/stable-diffusion-xl-1.0-inpainting-0.1 Is there a way to perform inference on this model with the optimum package? If not, do you have any plans to provide support? Thank you ### Motivation To run the SD-XL inpaint model with ORT ### Your contribution I can submit a PR if there is something I can help with
https://github.com/huggingface/optimum/issues/1577
closed
[ "feature-request", "Stale" ]
2023-12-08T09:21:06Z
2025-02-19T02:02:54Z
2
0-chan-kor
huggingface/chat-ui
617
Does Chat-UI support multithreading?
It may depend on Node.js, but I want to know about the CPU utilization.
https://github.com/huggingface/chat-ui/issues/617
closed
[ "question" ]
2023-12-08T05:36:18Z
2023-12-14T07:30:01Z
null
calycekr
huggingface/chat-ui
615
npm run error (latest git pull)
I created a .env.local as: ``` MONGODB_URL=mongodb://localhost:27017 MONGODB_DB_NAME=chat-ui MONGODB_DIRECT_CONNECTION=false COOKIE_NAME=hf-chat HF_TOKEN= HF_API_ROOT=https://api-inference.huggingface.co/models OPENAI_API_KEY= ``` Then I tried: ``` npm install #everything went fine npm run dev -- --host 0.0.0.0 ``` but I got the error below: ``` (node:770942) [DEP0040] DeprecationWarning: The `punycode` module is deprecated. Please use a userland alternative instead. (Use `node --trace-deprecation ...` to show where the warning was created) 11:47:42 AM [vite] Error when evaluating SSR module /src/lib/server/auth.ts: |- SyntaxError: "undefined" is not valid JSON at JSON.parse (<anonymous>) at /home/shuther/devProjects/chat-ui/src/lib/server/auth.ts:43:14 at async instantiateModule (file:///home/shuther/devProjects/chat-ui/node_modules/vite/dist/node/chunks/dep-e8f070e8.js:54405:9) 11:47:42 AM [vite] Error when evaluating SSR module /src/hooks.server.ts: failed to import "/src/lib/server/auth.ts" |- SyntaxError: "undefined" is not valid JSON at JSON.parse (<anonymous>) at /home/shuther/devProjects/chat-ui/src/lib/server/auth.ts:43:14 at async instantiateModule (file:///home/shuther/devProjects/chat-ui/node_modules/vite/dist/node/chunks/dep-e8f070e8.js:54405:9) SyntaxError: "undefined" is not valid JSON at JSON.parse (<anonymous>) at /home/shuther/devProjects/chat-ui/src/lib/server/auth.ts:43:14 at async instantiateModule (file:///home/shuther/devProjects/chat-ui/node_modules/vite/dist/node/chunks/dep-e8f070e8.js:54405:9) SyntaxError: "undefined" is not valid JSON at JSON.parse (<anonymous>) at /home/shuther/devProjects/chat-ui/src/lib/server/auth.ts:43:14 at async instantiateModule (file:///home/shuther/devProjects/chat-ui/node_modules/vite/dist/node/chunks/dep-e8f070e8.js:54405:9) ``` On the browser side, I have error 500 (nice picture)
https://github.com/huggingface/chat-ui/issues/615
closed
[ "support" ]
2023-12-07T10:59:53Z
2024-04-24T12:29:46Z
4
shuther
huggingface/chat-ui
614
Docker build - multiple errors - documentation
I can't find documentation on how to build it myself, so I tried: `docker-compose build up` But I got multiple errors, among them: > chat-ui/.env: line 23: unexpected character "\"" in variable name "\"PROVIDER_URL\": \"\"," Even `source .env` returned multiple errors; I tried to change the backtick (`) into a single quote (') with no luck. My goal was to build it and include it in a Docker Compose setup.
https://github.com/huggingface/chat-ui/issues/614
open
[ "support" ]
2023-12-07T10:55:04Z
2024-06-01T12:44:18Z
4
shuther
huggingface/text-generation-inference
1,318
How to run TGI installed locally without any UI
### System Info How to run TGI installed locally without any UI? `pip install text-generation` gives an error: ERROR: No matching distribution found for text-generation ### Information - [ ] Docker - [X] The CLI directly ### Tasks - [X] An officially supported command - [ ] My own modifications ### Reproduction pip install text-generation ### Expected behavior I need some help running TGI + my model on the command line
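For reference, `text-generation` on PyPI is only the Python client; it still needs a running TGI server (the Docker image or `text-generation-launcher`). A minimal sketch, assuming a server is already listening on port 8080 (that the pip error comes from an old pip/Python is only an assumption):

```python
# Sketch: query a locally running TGI server from a script, no UI involved.
# Assumes the server is reachable at http://127.0.0.1:8080.
from text_generation import Client

client = Client("http://127.0.0.1:8080")
print(client.generate("What is Deep Learning?", max_new_tokens=20).generated_text)

# streaming variant
for response in client.generate_stream("What is Deep Learning?", max_new_tokens=20):
    if not response.token.special:
        print(response.token.text, end="")
```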
https://github.com/huggingface/text-generation-inference/issues/1318
closed
[ "Stale" ]
2023-12-07T08:47:13Z
2024-01-13T01:46:40Z
null
poojitharamachandra
huggingface/autotrain-advanced
376
How to Autotrain Seq2Seq?
Hi everyone, I'm trying to finetune Helsinki-NLP/opus-mt-tc-big-ar-en on the local Arabic of Morocco, which is called Darija Arabic. The problem is that I'm unable to use Autotrain; I keep getting a 500 error code. ![Screenshot 2023-12-07 011848](https://github.com/huggingface/autotrain-advanced/assets/112639221/ece3ee15-9f89-44ff-bf51-c5231f1858e7) ![Screenshot 2023-12-07 011912](https://github.com/huggingface/autotrain-advanced/assets/112639221/2dea03ae-afcd-4e86-a7b3-d175ff6bc555) [output.csv](https://github.com/huggingface/autotrain-advanced/files/13593069/output.csv) FYI: I didn't modify the "Training Parameters (find params to copy-paste [here])" area, so I don't know if that's necessary.
https://github.com/huggingface/autotrain-advanced/issues/376
closed
[]
2023-12-07T00:22:46Z
2023-12-08T17:27:57Z
null
Lachkar-Ahmed-Salim
huggingface/autotrain-advanced
375
How to do a Seq2Seq Autotrain?
https://github.com/huggingface/autotrain-advanced/issues/375
closed
[]
2023-12-07T00:10:33Z
2023-12-11T09:41:24Z
null
Lachkar-Ahmed-Salim
huggingface/alignment-handbook
68
DPO alignment doesn't work on LoRA models as suggested
You claim that "[In practice, we find comparable performance for both full and LoRA fine-tuning, with the latter having the advantage of producing small adapter weights that are fast to upload and download from the Hugging Face Hub.](https://github.com/huggingface/alignment-handbook/tree/main/scripts#:~:text=In%20practice%2C%20we%20find%20comparable%20performance%20for%20both%20full%20and%20LoRA%20fine%2Dtuning%2C%20with%20the%20latter%20having%20the%20advantage%20of%20producing%20small%20adapter%20weights%20that%20are%20fast%20to%20upload%20and%20download%20from%20the%20Hugging%20Face%20Hub.)" However, when I try the Lora model DPO-aligned LLM that you have trained, [alignment-handbook/zephyr-7b-dpo-lora](https://huggingface.co/alignment-handbook/zephyr-7b-dpo-lora), I experience a total performance degradation. Here is an example of model output that seems confused: ![image](https://github.com/huggingface/alignment-handbook/assets/3280518/1c5eae99-9641-469a-bb73-b66a26a594d4) Even the training loss indicates that the model has not learned much <img width="773" alt="image" src="https://github.com/huggingface/alignment-handbook/assets/3280518/550451f4-4afb-470c-ace7-71b332bb5087"> Here is the training loss for the full model DPO alignment. ![image](https://github.com/huggingface/alignment-handbook/assets/3280518/902aaf32-0446-4ab1-8e38-28afcd456fed) Would you please do a clarification? Is my observation different from what you have experienced? Thanks
https://github.com/huggingface/alignment-handbook/issues/68
open
[]
2023-12-06T19:12:30Z
2023-12-07T09:43:32Z
1
Abe13
pytorch/xla
6,032
/content/content/q-e/bin/pw.x: error while loading shared libraries: libmkl_scalapack_lp64.so: cannot open shared object file: No such file or directory
I am using google colab and in the code section: I wrote: ! /content/content/q-e/bin/pw.x < 01.vc-relax.in < 01.vc-relax.out got an output like that: /content/content/q-e/bin/pw.x: error while loading shared libraries: libmkl_scalapack_lp64.so: cannot open shared object file: No such file or directory Can you help me to solve it?
https://github.com/pytorch/xla/issues/6032
open
[ "question" ]
2023-12-06T13:48:03Z
2025-04-24T14:53:55Z
null
safinmahmood
huggingface/alignment-handbook
66
How to specify another GPU to run rather than cuda:0?
I tried to modify the --gpu_ids parameter in recipes/accelerate_configs/multi_gpu.yaml; however, it didn't work, and the device was still 'cuda:0'.
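One common workaround (a sketch, not necessarily the handbook's intended mechanism) is to hide all but the desired GPU before CUDA is initialized, e.g. prefixing the launch with `CUDA_VISIBLE_DEVICES=1 accelerate launch ...`. The same idea in Python:

```python
# Sketch: expose only physical GPU 1, which then appears to the process as cuda:0.
# Must run before torch (or anything CUDA-related) is imported.
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "1"

import torch
print(torch.cuda.device_count())   # 1 visible device, addressed as cuda:0
```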
https://github.com/huggingface/alignment-handbook/issues/66
closed
[]
2023-12-06T10:48:25Z
2023-12-06T11:13:02Z
null
njupopsicle
huggingface/datasets
6,478
How to load data from lakefs
My dataset is stored on the company's lakeFS server. How can I write code to load the dataset? It would be great if you could provide code examples or some references.
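One possible direction (a sketch built on the assumption that the lakeFS server exposes its S3-compatible gateway): `datasets` can read S3-style URLs through fsspec/s3fs, so files on a lakeFS branch can be addressed as `s3://<repo>/<branch>/...`. The endpoint, repo, branch, and file names below are placeholders.

```python
# Hypothetical sketch: load parquet files stored in lakeFS via its S3 gateway.
from datasets import load_dataset

storage_options = {
    "key": "LAKEFS_ACCESS_KEY_ID",
    "secret": "LAKEFS_SECRET_ACCESS_KEY",
    "client_kwargs": {"endpoint_url": "https://lakefs.example.com"},
}
ds = load_dataset(
    "parquet",
    data_files="s3://my-repo/main/data/train.parquet",
    storage_options=storage_options,
)
```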
https://github.com/huggingface/datasets/issues/6478
closed
[]
2023-12-06T09:04:11Z
2024-07-03T19:13:57Z
null
d710055071
huggingface/tokenizers
1,407
How to add byte_fallback tokens?
# Alternative title How to make a tokenizer behaving similarly to Llama ## Background Llama tokenizer considers byte_fallback tokens **not special**. When it decodes, it doesn't remove these tokens other than special tokens (unk, pad, bos, eos). ## What I am trying to do I'm trying to create a tokenizer behaving like Llama. However, I **am only able** to add byte_fallback tokens as **special tokens**. ```python from tokenizers import Tokenizer from tokenizers import decoders, pre_tokenizers from tokenizers.models import BPE from tokenizers.processors import TemplateProcessing from tokenizers.trainers import BpeTrainer from tokenizers import AddedToken from datasets import load_dataset dataset = load_dataset("tapaco") def topaco_generator(): for i in dataset['train']: yield i['paraphrase'] bpe_trainer = BpeTrainer( special_tokens=["<unk>", "<s>", "</s>", "<pad>"] + [f"<0x{i:02X}>" for i in range(256)] # byte_fallback tokens ) tokenizer = Tokenizer(BPE(byte_fallback=True)) tokenizer.pre_tokenizer = pre_tokenizers.Sequence( [pre_tokenizers.Metaspace(), pre_tokenizers.Digits(individual_digits=True)] ) tokenizer.enable_padding(pad_id=3, pad_token="<pad>") tokenizer.post_processor = TemplateProcessing( single="<s> $A </s>", pair="<s> $A </s> $B </s>", special_tokens=[ ("<s>", 1), ("</s>", 2), ], ) tokenizer.decoder = decoders.Sequence( [ decoders.Metaspace(), decoders.ByteFallback(), ] ) # my attempt to add byte_fallback as non-special tokens # tokenizer.add_tokens([AddedToken(content=f"<0x{i:02X}>", special=True, normalized=False) for i in range(256)]) tokenizer.train_from_iterator(topaco_generator(), trainer=bpe_trainer) tokenizer.save("topaco_tokenizer.json") tokenizer = Tokenizer.from_file("topaco_tokenizer.json") text = "I love you more than I can say 🤗" encoded_text = tokenizer.encode(text) print(encoded_text.tokens) # My work around to preverse byte_fallback tokens # and remove other special tokens decoded_text = tokenizer.decode(encoded_text.ids, skip_special_tokens=False) print(decoded_text.removeprefix('<s> ').removesuffix('</s>')) ``` ## Problem No matter how I tried this line `tokenizer.add_tokens([AddedToken(content=f"<0x{i:02X}>", special=True, normalized=False) for i in range(256)])` with different position in my code (before training, after training) and with different parameters of AddedToken, I still can not achieve Llama's behavior.
https://github.com/huggingface/tokenizers/issues/1407
open
[ "bytefallback", "Feature Request" ]
2023-12-06T09:03:35Z
2024-08-27T01:57:04Z
null
dinhanhx
huggingface/transformers.js
432
Cannot download the model from huggingface
Because of network issues, we cannot download the model successfully when using transformers.js. How do we set a network proxy for the model download?
https://github.com/huggingface/transformers.js/issues/432
open
[ "question" ]
2023-12-06T08:18:58Z
2023-12-10T13:42:50Z
null
wujohns
huggingface/blog
1,677
How to achieve image-text matching with BLIP-2
Hi, Thanks to the authors for their work. I am trying to do image-text matching with BLIP-2, but I didn't find any examples of it. Can you give me some help or tips?
https://github.com/huggingface/blog/issues/1677
open
[]
2023-12-06T07:03:21Z
2023-12-06T07:08:48Z
null
wkqun555
pytorch/kineto
847
How does Kineto actually work?
Hello, everyone. I took a quick look at the source code of Kineto and it seems the most important part of Kineto is [CUPTI](https://docs.nvidia.com/cupti/r_main.html#r_main). I am curious how Kineto works; I have tried some examples of CUPTI and have some questions I hope someone could give me insight on. 1. How does Kineto get PyTorch function names? From my short experience with CUPTI programming, I know I can get CUDA runtime function names from CUPTI; here is a code snippet of it: ```c++ if (cbInfo->callbackSite == CUPTI_API_ENTER) { traceData->functionName = cbInfo->functionName; // get cuda function name CUPTI_CALL(cuptiGetTimestamp(&startTimestamp)); traceData->startTimestamp = startTimestamp; traceData->memcpy_bytes = ((cudaMemcpy_v3020_params *)(cbInfo->functionParams))->count; traceData->memcpy_kind = ((cudaMemcpy_v3020_params *)(cbInfo->functionParams))->kind; } ``` What confuses me is how Kineto gets PyTorch function names (e.g. `torch::autograd::AccumulateGrad`). Is this supported by CUPTI, or do you use other means to implement it? 2. What is the purpose of `KINETO_USE_DAEMON=1`? I quote a piece of a [blog](https://pytorch.org/blog/automated-trace-collection/): > First, we modified PyTorch to register with the Dynolog daemon on start up. This feature is switched on by setting the environment variable KINETO_USE_DAEMON=True. With this environment variable set to True, the PyTorch Profiler periodically polls Dynolog to check for on-demand tracing requests. So does it mean that if the env variable is not set, the PyTorch profiler is still enabled and it just doesn't send the captured trace info to the user? In other words, the env variable does not affect whether the PyTorch Profiler is enabled. Am I right? I have also opened a similar [issue](https://github.com/facebookincubator/dynolog/issues/195) in the dynolog repo but I haven't gotten any feedback yet. I would appreciate it if someone could answer these questions.
https://github.com/pytorch/kineto/issues/847
closed
[ "documentation", "question" ]
2023-12-06T06:48:28Z
2023-12-28T16:46:47Z
null
stricklandye
huggingface/diffusers
6,070
How to extend an existing class in diffusers
This is just for personal development. I want to write a new class that inherits from an existing class (e.g. `ControlNetModel`), and I added some new parameters to the `__init__` function, but I found that the `__init__` function is still the parent's implementation, whether I add the `register_to_config` decorator or not. I'd appreciate some advice.
https://github.com/huggingface/diffusers/issues/6070
closed
[]
2023-12-06T06:41:44Z
2024-09-25T14:44:04Z
null
OrangeSodahub
huggingface/diffusers
6,067
How to run the fine-tuned model?
Hi all, I used the instructions given [here](https://github.com/huggingface/diffusers/tree/main/examples/dreambooth) to fine_tune the model on dog pictures (as explained in the link). The fine_tuning has finished, and a folder called path-to-save-model has been created (that has the weights of the model). Now how do I use this output? Do I run test_dreambooth.py? (I tried running it but it gives error at "from test_examples_utils import ExamplesTestsAccelerate, run_command # noqa: E402" I appreciate it if someone can please let me know how to use the output of the trained model. Thank you
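For reference, the DreamBooth example's output directory is itself a diffusers pipeline, so a minimal sketch (folder name and prompt taken from the example, not a definitive answer to this issue) looks like:

```python
# Sketch: load the fine-tuned weights saved by train_dreambooth.py and generate an image.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "path-to-save-model",            # the output_dir used during fine-tuning
    torch_dtype=torch.float16,
).to("cuda")

image = pipe("A photo of sks dog in a bucket",
             num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("dog-bucket.png")
```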
https://github.com/huggingface/diffusers/issues/6067
closed
[]
2023-12-06T01:01:56Z
2025-04-28T10:32:33Z
null
alireza18878
huggingface/text-generation-inference
1,314
What is the default tokenizer behaviour?
### System Info N/A ### Information - [ ] Docker - [X] The CLI directly ### Tasks - [X] An officially supported command - [ ] My own modifications ### Reproduction I'm trying to understand whether special tokens (i.e. BOS and EOS) are added and suppressed on tokenization and decoding. Encoding: - I searched for add_special_tokens in the repo and I don't see it being set to true anywhere when tokenizing. So it seems that no EOS tokens are automatically added. Decoding: - I searched for skip_special_tokens and it seems [here](https://github.com/huggingface/text-generation-inference/blob/3238c49121b02432bf2938c6ebfd44f06c5adc2f/server/text_generation_server/models/causal_lm.py#L525) on line 541 that BOS and EOS are indeed being suppressed. Is this understanding correct? ### Expected behavior If possible, could the default tokenization strategy be described in the README so users know what to expect?
https://github.com/huggingface/text-generation-inference/issues/1314
closed
[]
2023-12-05T17:35:05Z
2024-01-19T13:14:13Z
null
RonanKMcGovern
huggingface/chat-ui
609
[Feature Request] Uploading PDFS/Text Files/Images?
I love the search function and it makes the chat feel so much more accurate! I use it mainly as a direct ChatGPT replacement, using code models when needed or normal models for chat. Can we have the option to upload images/PDFs/other files to the chat? The images could be integrated via CLIP/BLIP, and the PDF or text files could just be added to the context, or summarized and then added. It would be awesome to have! Thank you for all the work put into this project.
https://github.com/huggingface/chat-ui/issues/609
open
[]
2023-12-05T12:20:39Z
2024-10-04T01:13:18Z
3
iChristGit
huggingface/trl
1,059
How can I have the evaluation pass only the response to a prompted/instructed generation into the metric?
I have created the following metric: ```py class MyCustomMetric(Metric): def _info(self): # Returns the MetricInfo that defines the name, description, etc. return datasets.MetricInfo( # This should be a short description of your metric. description="_DESCRIPTION", # You can cite papers, GitHub repositories, etc. citation="_CITATION", # The inputs and outputs your metric expects. # These are used to validate the inputs and outputs of _compute inputs_description="_KWARGS_DESCRIPTION", features=datasets.Features({ 'predictions': datasets.Value('string'), 'references': datasets.Value('string') }) ) def _compute(self, predictions, references): # Here is where you should put your main metric computation logic # Adapt your existing code to fit in here fc_results = [] for idx, example in enumerate(predictions): print(f"Example {idx}: ", end="") post_message = "" # Custom Function Calling metric prompts = None try: generated_arguments, expected_arguments, prompts = json_arguments_from_prompt( references[idx], predictions[idx], INSTRUCTION # {"idx": idx, "epoch": epoch} ) fc_result = fc_metric.run(generated_arguments, expected_arguments) fc_results.append(fc_result) # if save_prompts_path: # # add prompts to dpo_data.json # dpo_data.append({ # "fc_result": fc_result, # **prompts # }) # with open(save_prompts_path, "w") as f: # json.dump(dpo_data, f) except Exception as e: print(f"Error function calling: {e}\n") fc_results.append(0) return fc_results ``` This metric expects the prediction to be generated after passing the instruction. For example I have my prompts in the following format: `<s> [INST] {message} [/INST] {response}` I want the evaluation to receive the `predictions` for response and then compare those with my `references`. To reiterate, the predictions should be generated from the model being passed `<s> [INST] {message} [/INST]`. Currently it seems as if the logits are just generated without any prompt resulting in responses like: ``` predicted_strings: ['Unterscheidung Unterscheidung![: What<<NOP What favorite is to help the patterns climate a following is is a to a topic you\nineited by the >>_> in returnFUNCTIONS>\n the is related, return program should be " the format formatname format format. functionFUNCTION_CALL>FORM>( <</OFIGNCIATED_WITH_USER_USERUNCTION</FUNCTION_CALL_NAME>brUNCTIONSCALL_NAMEGSUMENTS>\nGUMENTS_ASS_THE_FIED_FORM_FORMAT</FUNCTION_CALL_ARGUMENTS> If, respond " " response.\nFUNCTIONS>username": "get",meanalth",",function_ "description": "Get health "input": [root": "string", "properties": {" "}] {"name": "leaf_Results", "description": "Search search list of searchists", on a search query", "parameters": {"type": "array", "properties": {"query": {"type": {"query": {"type": "string" "required": "Search"}} "type": "array" "title": ["query"] "description": "Searchphy Search"}}}, {"name": "getUserending",", "description": "Get a list of trifs that on the tr trending", "parameters": {"type": "object", "properties": {"}}},}</FUNCTIONS>\nUSERFS>\n me the ofif from a cat cat doing</users FUNCTION_CALL_NAME>rootSearchResults</FUNCTION_CALL_NAME>FUNCTION_CALL_ARGUMENTS>{"json": {"query": "cool cat"}}</FUNCTION_CALL_ARGUMENTS></s>��'] ``` after looking through the source code it seems like modifying the `prediction_step` method inside `Trainer` is the way to go.
https://github.com/huggingface/trl/issues/1059
closed
[]
2023-12-04T19:01:34Z
2024-01-12T15:05:10Z
null
CakeCrusher
huggingface/distil-whisper
49
How to make training data?
I have a folder like this: audio_1 transcript_1.txt audio_2 transcript_2.txt How can I make this folder into a Hugging Face dataset?
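A minimal sketch (assuming 16 kHz audio, `.wav` files, and the `audio`/`text` column names most Whisper-style training scripts expect; adjust to the actual extensions and naming):

```python
# Sketch: build a Hugging Face audio dataset from paired audio/transcript files.
from pathlib import Path
from datasets import Dataset, Audio

root = Path("my_folder")
audio_files = sorted(root.glob("audio_*.wav"))
records = {
    "audio": [str(p) for p in audio_files],
    "text": [
        (root / f"transcript_{p.stem.split('_')[-1]}.txt").read_text().strip()
        for p in audio_files
    ],
}
ds = Dataset.from_dict(records).cast_column("audio", Audio(sampling_rate=16_000))
ds.save_to_disk("my_asr_dataset")   # or ds.push_to_hub("username/my-asr-dataset")
```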
https://github.com/huggingface/distil-whisper/issues/49
open
[]
2023-12-04T18:44:40Z
2023-12-12T16:51:48Z
null
satani99
pytorch/audio
3,711
_pickle.UnpicklingError: invalid load key, 'v'.
### 🐛 Describe the bug ### ISSUE When I run `python preprocess_lrs3.py --data-dir=D:/BaiduNetdiskDownload/LRS3 --detector=retinaface --dataset=lrs3 --root-dir=D:/pycharmProject/audio_vision/audio-main/examples/avsr/predata --subset=test --seg-duration=16 --groups=4 --job-index=0` The following appears `D:\anaconda3\envs\davsr\lib\site-packages\torchaudio\backend\utils.py:62: UserWarning: No audio backend is available. warnings.warn("No audio backend is available.") Traceback (most recent call last): File "preprocess_lrs3.py", line 68, in <module> vid_dataloader = AVSRDataLoader(modality="video", detector=args.detector, resize=(96, 96)) File "D:\pycharmProject\audio_vision\audio-main\examples\avsr\data_prep\data\data_module.py", line 19, in __init__ self.landmarks_detector = LandmarksDetector(device="cuda:0") File "D:\pycharmProject\audio_vision\audio-main\examples\avsr\data_prep\detectors\retinaface\detector.py", line 17, in __init__ self.face_detector = RetinaFacePredictor( File "D:\pycharmProject\audio_vision\audio-main\examples\avsr\data_prep\face_detection\ibug\face_detection\retina_face\retina_face_predictor.py", line 28, in __init__ pretrained_dict = torch.load(model.weights, map_location=self.device) File "D:\anaconda3\envs\davsr\lib\site-packages\torch\serialization.py", line 795, in load return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args) File "D:\anaconda3\envs\davsr\lib\site-packages\torch\serialization.py", line 1002, in _legacy_load magic_number = pickle_module.load(f, **pickle_load_args) _pickle.UnpicklingError: invalid load key, 'v'. ` May I ask why this problem occurs? How to solve it ### Versions Collecting environment information... PyTorch version: 1.13.1 Is debug build: False CUDA used to build PyTorch: 11.7 ROCM used to build PyTorch: N/A OS: Microsoft Windows 11 Home China GCC version: Could not collect Clang version: Could not collect CMake version: version 3.27.7 Libc version: N/A Python version: 3.8.18 (default, Sep 11 2023, 13:39:12) [MSC v.1916 64 bit (AMD64)] (64-bit runtime) Python platform: Windows-10-10.0.22621-SP0 Is CUDA available: True CUDA runtime version: Could not collect CUDA_MODULE_LOADING set to: LAZY GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3050 Laptop GPU Nvidia driver version: 517.18 cuDNN version: Could not collect HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Architecture=9 CurrentClockSpeed=1992 DeviceID=CPU0 Family=198 L2CacheSize=2048 L2CacheSpeed= Manufacturer=GenuineIntel MaxClockSpeed=1992 Name=Intel(R) Core(TM) i7-10700T CPU @ 2.00GHz ProcessorType=3 Revision= Versions of relevant libraries: [pip3] numpy==1.24.3 [pip3] torch==1.13.1 [pip3] torchaudio==0.13.1 [pip3] torchvision==0.14.1 [conda] blas 1.0 mkl https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main [conda] mkl 2023.1.0 h6b88ed4_46358 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main [conda] mkl-service 2.4.0 py38h2bbff1b_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main [conda] mkl_fft 1.3.8 py38h2bbff1b_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main [conda] mkl_random 1.2.4 py38h59b6b97_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main [conda] numpy 1.24.3 py38h79a8e48_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main [conda] numpy-base 1.24.3 py38h8a87ada_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main [conda] pytorch 1.13.1 py3.8_cuda11.7_cudnn8_0 pytorch [conda] pytorch-cuda 11.7 h16d0643_5 pytorch [conda] pytorch-mutex 1.0 cuda pytorch 
[conda] torchaudio 0.13.1 pypi_0 pypi [conda] torchvision 0.14.1 pypi_0 pypi
https://github.com/pytorch/audio/issues/3711
open
[]
2023-12-04T15:32:55Z
2024-11-12T15:06:54Z
1
YuQing2000
pytorch/xla
6,015
Kaggle TPU Finetuning Roberta Help
## ❓ Questions and Help I have pretrained roberta-base on dna promoter sequences of plants (working on a project). I am currently trying to finetune it on a downstream task of predicting gene expression values, basically a list of 8 values (corresponding to various tissues) from a single promoter sequence. This wasn't possible on kaggle's gpu (due to memory restrictions), so I tried to do the same on TPU using pytorch-xla (figured that was the best option). The link to the notebook as well as the datasets used are as follows: 1. [Main Kaggle Notebook](https://www.kaggle.com/code/gurveersinghvirk/florabert-2/) 2. [Dataset containing code and data](https://www.kaggle.com/datasets/gurveersinghvirk/florabert-base) 3. [Dataset on github](https://github.com/gurveervirk/florabert/) (contains old code but has the correct structure) Version 43 is the one using the pytorch-xla code (as far as I could figure out). The data's format is as follows: sequence \t labels dna_promoter_seq_here list_of_8_values_here eg: CTCAAGCTGAGCAGTGGGTTTGCTCTGGAGGGGAAGCTCAACGGTGGCGACAAGGAAGAATCTGCTTGCGAGGCGAGCCCTGACGCCGCTGATAGCGACCAAAGGTGGATTAAACAACCCATTTCATCATTCTTCTTCCTTGTTAGTTATGATTCCCACGCTTGCCTTTCATGAATCATGATCCTATATGTATATTGATATTAATCAGTTCTAGAAAGTTCAACAACATTTGAGCATGTCAAAACCTGATCGTTGCCTGTTCCATGTCAACAGTGGATTATAACACGTGCAAATGTAGCTATTTGTGTGAGAAGACGTGTGATCGACTCTTTTTTTATATAGATAGCATTGAGATCAACTGTTTGTATATATCTTGTCATAACATTTTTACTTCGTAGCAACGTACGAGCGTTCACCTATTTGTATATAAGTTATCATGATATTTATAAGTTACCGTTGCAACGCACGGACACTCACCTAGTATAGTTTATGTATTACAGTACTAGGAGCCCTAGGCTTCCAATAACTAGAAAAAGTCCTGGTCAGTCGAACCAAACCACAATCCGACGTATACATTCTGGTTCCCCCACGCCCCCATCCGTTCGATTCA [54.679647, 60.646678, 54.9113, 78.878474, 21.326259, 27.973276, 17.419968, 40.465529] There's 7,22,000 examples of this kind, ~722 mb in total divided into ~400 mb train, 200 mb test and 100 mb eval. When running the code "finetune.py", all goes well till the training starts (datasets are loaded, processed, etc). But, the latest run took 3+ hrs to get to the next step and the RAM usage kept on increasing. It looked the TPU run was very slow and the run then crashed as it ran out of memory. I have tried accelerate and trainer but those efforts were in vain. Few questions: 1. Is my approach correct? 2. What changes should I make? 3. Can I run this code using HuggingFace Trainer (was originally used in the code)? If so, how? 4. Is the RAM usage normal? 5. Should it take this long? If I pass the model as an arg to xmp.spawn, I end up seeing either of "Check failed: data()->tensor_data" or "RuntimeError: Function AddcmulBackward0 returned an invalid gradient at index 1 - expected device xla:1 but got xla:0". Why? Kindly guide.
https://github.com/pytorch/xla/issues/6015
open
[ "question", "performance", "xla:tpu" ]
2023-12-04T14:07:43Z
2025-04-24T14:56:25Z
null
gurveervirk
pytorch/xla
6,014
How to add a new third-party Backend
## ❓ Questions and Help 1. We see that PyTorch/XLA now pulls XLA from OpenXLA; does that mean we just need to adapt OpenXLA to add a new backend? 2. Will collective operations work with a third-party backend?
https://github.com/pytorch/xla/issues/6014
closed
[]
2023-12-04T10:10:18Z
2023-12-28T22:31:11Z
null
dinghaodhd
huggingface/computer-vision-course
77
Issue with rendering the course
If we try to render the course to preview how our added content looks like, it throws the following error ```bash sarthak@kde:~/Desktop/computer-vision-course$ doc-builder preview computer-vision-course chapters/ --not_python_module Initial build docs for computer-vision-course chapters/ /tmp/tmp0uqdjoxf/computer-vision-course/main/en Building the MDX files: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 29/29 [00:00<00:00, 1288.27it/s] Traceback (most recent call last): File "/home/sarthak/anaconda3/bin/doc-builder", line 8, in <module> sys.exit(main()) File "/home/sarthak/anaconda3/lib/python3.9/site-packages/doc_builder/commands/doc_builder_cli.py", line 47, in main args.func(args) File "/home/sarthak/anaconda3/lib/python3.9/site-packages/doc_builder/commands/preview.py", line 171, in preview_command source_files_mapping = build_doc( File "/home/sarthak/anaconda3/lib/python3.9/site-packages/doc_builder/build_doc.py", line 405, in build_doc sphinx_refs = check_toc_integrity(doc_folder, output_dir) File "/home/sarthak/anaconda3/lib/python3.9/site-packages/doc_builder/build_doc.py", line 460, in check_toc_integrity raise RuntimeError( RuntimeError: The following files are not present in the table of contents: - en/Unit 5 - Generative Models/variational_autoencoders - en/Unit 5 - Generative Models/README - en/Unit 11 - Zero Shot Computer Vision/README - en/Unit 2 - Convolutional Neural Networks/README - en/Unit 1 - Fundamentals/README - en/Unit 8 - 3D Vision, Scene Rendering and Reconstruction/README - en/Unit 4 - Mulitmodal Models/README - en/Unit 9 - Model Optimization/README - en/Unit 6 - Basic CV Tasks/README - en/Unit 7 - Video and Video Processing/README - en/Unit 13 - Outlook/README - en/Unit 3 - Vision Transformers/README - en/Unit 12 - Ethics and Biases/README - en/Unit 10 - Synthetic Data Creation/README Add them to chapters/_toctree.yml. ``` **Explanation:** This is because there have been README files added to each chapter. However, these README files are not present in the `_toctree.yml`. **Why it's important:** Being able to render the course locally is important as it can give us a rough overview of how the content looks like. **Possible solutions could be:** * Remove the README files for the time being * Add them to the toctree and also making sure that if anyone adds any chapter contents they also update the toctree making it easier for others to render the course Open for discussion from other members :v:
https://github.com/huggingface/computer-vision-course/issues/77
open
[ "question" ]
2023-12-04T01:02:22Z
2023-12-08T18:17:19Z
null
sarthak247
huggingface/sentence-transformers
2,363
How to retrieve the epoch of the saved model from model.save ?
Hi, Thank you for the repo. Can anyone help me with retrieving the epoch of the saved model, in both cases where save_best_model=True and save_best_model=False? Thank you ``` model.fit(train_objectives=[(train_dataloader, train_loss)], evaluator=evaluator, epochs=num_epochs, evaluation_steps=1000, warmup_steps=warmup_steps, save_best_model=True, output_path=output_path) model.save(path)```
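One option (a sketch extending the snippet above; `fit()` accepts a `callback` that is invoked after each evaluation with `score, epoch, steps`) is to record the epoch of the best score yourself:

```python
# Sketch: remember which epoch produced the best evaluator score during model.fit().
best = {"score": float("-inf"), "epoch": None}

def track_best(score, epoch, steps):
    if score > best["score"]:
        best.update(score=score, epoch=epoch)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    evaluator=evaluator,
    epochs=num_epochs,
    evaluation_steps=1000,
    warmup_steps=warmup_steps,
    save_best_model=True,
    output_path=output_path,
    callback=track_best,
)
print("Best (saved) model came from epoch", best["epoch"])
```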
https://github.com/huggingface/sentence-transformers/issues/2363
closed
[]
2023-12-02T15:25:52Z
2024-01-09T22:16:20Z
null
gowrijsuria
huggingface/transformers.js
426
[Question] feature-extraction discrepancies across different platforms
I'm observing discrepancies in feature-extraction results across different platforms. Here's the code: ```js import { pipeline, env } from '@xenova/transformers' const extractor = await pipeline('feature-extraction', 'Xenova/gte-small', { quantized: false, cache_dir: './.cache', local_files_only: false, }) const text = 'hello' const embedding = await extractor(text, { pooling: 'mean', normalize: true }) const response = Array.from(embedding.data) console.log(JSON.stringify(response, null, 2)) // Node v20 // "@xenova/transformers": "^2.9.0" ``` The results differ between macOS 13 (Apple Silicon/Arm) and Ubuntu 23.1 (Raspberry Pi/Arm). I've tried various configurations (e.g., pooling, normalize, with and without Array.from) and still observe different results. It's worth noting that sequential calls on the same platform produce consistent results. I have a few questions: 1. Is this discrepancy expected due to the nature of float32 precision and rounding, even though the calculations are performed on ARM architecture? 2. Given that the difference is extremely small, could it still impact accuracy in any significant way? [mean-nonorm-mac-01.json](https://github.com/xenova/transformers.js/files/13530082/mean-nonorm-mac-01.json) [mean-nonorm-pi-01.json](https://github.com/xenova/transformers.js/files/13530083/mean-nonorm-pi-01.json) [mean-norm-mac-01.json](https://github.com/xenova/transformers.js/files/13530084/mean-norm-mac-01.json) [mean-norm-pi-01.json](https://github.com/xenova/transformers.js/files/13530086/mean-norm-pi-01.json)
https://github.com/huggingface/transformers.js/issues/426
closed
[ "question" ]
2023-12-01T17:12:04Z
2023-12-05T18:51:03Z
null
devfacet
pytorch/xla
5,959
How to convert PyTorch NCHW to XLA HWOI format? Help
## ❓ Questions and Help I have a requirement: the PyTorch input model stays in NCHW format by default, and is converted to HWOI format during the training process, which suits how our hardware processes data. I wonder if there is a way to uniformly convert the model to HWOI format when it is sent to XLA. In addition, when sending data back from torch_xla to torch, should the HWOI format be converted back to torch's default NCHW format, and is there an existing method for this?
https://github.com/pytorch/xla/issues/5959
open
[ "question" ]
2023-12-01T06:18:20Z
2025-04-28T11:44:59Z
null
ckfgihub
pytorch/serve
2,814
[question] How to properly handle client request cancelation during inference?
Hey all, My model's inference is quite long-running (around 50 seconds per request), so it would be great if closed client connections are handled properly by interrupting the inference that's currently in progress. I'm currently implementing `initialize`, `preprocess`, `inference` and `postprocess` methods in my custom handler class. What's the proper place for detecting closed connection, if possible? Thanks, Miro
https://github.com/pytorch/serve/issues/2814
closed
[]
2023-11-30T18:34:49Z
2024-03-20T22:14:27Z
null
miroslavLalev
huggingface/chat-ui
604
"Invalid State: Controller is already closed" error when trying to use chat-ui locally with llama.cpp
HELP NEEDED **What is the issue?** Not able to use chat-ui locally to get the response back when using the llama.cpp as a server. I can load the chat-ui after installing it via npm install and npm run dev. The env.local file is also configured and UI allows to send the request. However, the response never comes back in UI, and 'Sorry, something went wrong. Please try again' is shown. On checking the logs in chat-ui, the error shown is: TypeError [ERR_INVALID_STATE]: Invalid state: Controller is already closed at new NodeError (node:internal/errors:399:5) at ReadableStreamDefaultController.enqueue (node:internal/webstreams/readablestream:1036:13) at update (/home/devuser/development/chat-ui-main/src/routes/conversation/[id]/+server.ts:158:20) at eval (/home/devuser/development/chat-ui-main/src/routes/conversation/[id]/+server.ts:168:13) at process.processTicksAndRejections (node:internal/process/task_queues:95:5) at async Object.start (/home/devuser/development/chat-ui-main/src/routes/conversation/[id]/+server.ts:260:7) { code: 'ERR_INVALID_STATE' I also tested the llama.cpp server response via curl and the response came back correctly, so it's not an issue with llama.cpp. Versions: chat-ui code is latest from master. llama.cpp code is latest from master and build locally. Tried with Node 20 and then with Node 19, but issue still remains. env.local: MONGODB_URL=mongodb://localhost:27017 MONGODB_DB_NAME=chat-ui MONGODB_DIRECT_CONNECTION=false USE_LOCAL_WEBSEARCH=true HF_ACCESS_TOKEN=test MODELS=`[ { "name": "Zephyr", "chatPromptTemplate": "<|system|>\n{{preprompt}}</s>\n{{#each messages}}{{#ifUser}}<|user|>\n{{content}}</s>\n<|assistant|>\n{{/ifUser}}{{#ifAssistant}}{{content}}</s>\n{{/ifAssistant}}{{/each}}", "parameters": { "temperature": 0.7, "top_p": 0.95, "repetition_penalty": 1.1, "top_k": 50, "truncate": 1000, "max_new_tokens": 2048, "stop": ["</s>"] }, "endpoints": [ { "url": "http://localhost:8080", "type": "llamacpp" } ] } ]` Am I missing anything in terms of installation steps? Any help here will be appreciated.
https://github.com/huggingface/chat-ui/issues/604
closed
[]
2023-11-30T16:42:06Z
2023-11-30T17:41:19Z
1
ManasInd
huggingface/optimum
1,556
RuntimeError: Cannot infer the task from a local directory yet, please specify the task manually.
### System Info windows 10 - ryzen 3600x - 16 gb ddr4-3000 - python 3.10 - latest optimum inside a venv ### Who can help? _No response_ ### Information When I try to convert a model to openvino using optimum-cli export openvino -m "d:\sdxl\LCMphoton" "d:\sdxl\LCMphotonov" I have this error : RuntimeError: Cannot infer the task from a local directory yet, please specify the task manually. I am converting standard sd1.5 models to lcm with lora locally and want to convert that to openvino. I have local models which are not present on huggingface and it takes forever for me to upload there (only 1-2 megabytes max) Can we somehow use local models that have the same directory structure as hf ? ``` ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction (minimal, reproducible, runnable) optimum-cli export openvino -m "d:\sdxl\LCMphoton" "d:\sdxl\LCMphotonov" ### Expected behavior I want to be able to convert local models without having to download from huggingface.
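For what it's worth, the CLI also accepts a `--task` flag (e.g. `--task stable-diffusion`), which may be the more direct fix. A possible workaround sketch (assuming the local folder has the standard diffusers layout and that `optimum-intel` with the OpenVINO extra is installed): exporting through the Python API lets you pick the pipeline class explicitly, so no task has to be inferred. The paths are the ones from the report.

```python
# Hypothetical sketch: export a local SD 1.5 (LCM-merged) folder to OpenVINO via the Python API.
from optimum.intel import OVStableDiffusionPipeline

pipe = OVStableDiffusionPipeline.from_pretrained("d:/sdxl/LCMphoton", export=True)
pipe.save_pretrained("d:/sdxl/LCMphotonov")
```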
https://github.com/huggingface/optimum/issues/1556
closed
[ "bug" ]
2023-11-30T16:09:24Z
2023-12-09T22:37:44Z
2
patientx
pytorch/xla
5,953
xla NCHW to HWOI
## ❓ Questions and Help Is there a simple way to modify the tensor layout (NCHW) in the entire XLA computation graph to convert it to HWOI format, and to convert it back to NCHW format when it is returned to torch? If there is no simple, unified way to do this, how can we change it? For example, modifying each operator one by one is one option. How should we achieve this goal?
https://github.com/pytorch/xla/issues/5953
closed
[ "duplicate", "question" ]
2023-11-30T06:59:00Z
2025-04-28T11:53:34Z
null
ckfgihub
pytorch/executorch
1,313
How to run the pte model on GPU
Hello, I would like to konw if ExecuTorch supports GPU. Now I could export model into pte format and execute runtime for xnnpack backend in Intel device. The device has GPU. But when I check GPU usage while running the application, GPU wasn't utilized. If ExecuTorch supports GPU, can you please share me how to use GPU? ### Environment ``` $ lscpu Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Address sizes: 39 bits physical, 48 bits virtual Byte Order: Little Endian CPU(s): 4 On-line CPU(s) list: 0-3 Vendor ID: GenuineIntel Model name: Intel(R) Core(TM) i5-7360U CPU @ 2.30GHz CPU family: 6 Model: 142 Thread(s) per core: 2 Core(s) per socket: 2 Socket(s): 1 Stepping: 9 CPU max MHz: 3600.0000 CPU min MHz: 400.0000 BogoMIPS: 4599.93 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe p opcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp md_clear flush_l1d arch_capabilities ``` Thanks,
https://github.com/pytorch/executorch/issues/1313
closed
[ "need-user-input" ]
2023-11-30T05:19:12Z
2023-12-14T23:54:25Z
null
EarthMu
huggingface/safetensors
396
[Feature request] How about supporting async save to disk?
### Feature request How about supporting async save to disk? ### Motivation The weights and optimizer state are very large for LLMs, so it wastes a lot of time moving tensors from CPU to disk. If we can support async save to disk, it will be very helpful. ### Your contribution .
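As a stopgap until/unless this lands in the library, a workaround sketch (not a safetensors feature): copy the tensors to CPU, then push the actual disk write onto a background thread.

```python
# Sketch: asynchronous-ish checkpointing on top of safetensors' synchronous save_file.
from concurrent.futures import ThreadPoolExecutor

import torch
from safetensors.torch import save_file

_executor = ThreadPoolExecutor(max_workers=1)

def async_save(state_dict, path):
    # Snapshot on CPU first so training can keep mutating the originals.
    cpu_state = {k: v.detach().to("cpu", copy=True) for k, v in state_dict.items()}
    return _executor.submit(save_file, cpu_state, path)

future = async_save({"w": torch.randn(1024, 1024)}, "checkpoint.safetensors")
# ... keep training ...
future.result()   # block only when the file must be fully on disk
```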
https://github.com/huggingface/safetensors/issues/396
closed
[ "Stale" ]
2023-11-30T02:55:25Z
2024-02-13T01:46:40Z
null
ZHUI
pytorch/pytorch
114,822
Converted to ONNX with dynamic shapes and converted the ONNX to TensorRT, but couldn't get a dynamic TensorRT engine (dims.d[0]==1). What is wrong with the model? Please give me some help, thanks
### 🐛 Describe the bug - convert my model to onnx with the dynamic shape and onnx convert to tensorrt, but could't get the dynamic engine of tensorrt. dims.d[0]==1 !!! but when i converted yolov8 model to onnx, then onnx convert to tensorrt, i got dims.d[0] == -1 and it worked well. what is wrong with the model??? ### pth to onnx ```python torch.onnx.export( model, dummy_input, args.output, verbose=False, export_params=True, input_names=input_names, output_names=output_names, keep_initializers_as_inputs=False, opset_version=13, dynamic_axes = { "input_image":{0:"batch"}, "bases":{0:"batch"}, "pred":{0:"batch"} } if args.dynamic else None ) ``` ![image](https://github.com/pytorch/pytorch/assets/71381036/4ac882a2-ab38-4b30-9125-927d145ca040) ### onnx to tensorrt ```bash ./trtexec --onnx=model_0364999-dy-op13.onnx \ --saveEngine=model_0364999-dy-op13 \ --minShapes=input_image:1x1x2048x2048 \ --optShapes=input_image:10x1x2048x2048 \ --maxShapes=input_image:10x1x2048x2048 \ --fp16 \ --device=0 \ --workspace=10240 \ --preview=+fasterDynamicShapes0805 \ ``` ![image](https://github.com/pytorch/pytorch/assets/71381036/4874b949-6847-4e74-96a1-113beaa81d83) ### Versions ubuntu:20.04 cuda:11.1 cudnn:8.2 tensorrt:8.5.2 python: 3.6 pytorch:1.7.1
https://github.com/pytorch/pytorch/issues/114822
closed
[]
2023-11-30T02:21:23Z
2023-11-30T03:22:52Z
null
tianlan6767
huggingface/transformers.js
424
[Question] Batch inference for vit
It seems like all the tests in the repository related to processors and image models use one image per input. 1. Do the models support feeding a batch of images as input during inference? Is there a speed benefit from this? 2. Are there any other optimization/parallelization tools in transformers.js that I can use to process a set of images? Used model: vit base (google/vit-base-patch16-224-in21k), tiny and small distillations (WinKawaks/vit-tiny-patch16-224), exported in onnx format with optimum
https://github.com/huggingface/transformers.js/issues/424
closed
[ "question" ]
2023-11-29T09:52:16Z
2023-12-05T14:49:36Z
null
arseniymerkulov
huggingface/transformers
27,755
How to run inference on the model with a 200k context length
### Model description I want to test Yi-34B-200k. Although I can run the model, OOM appears as the context length increases, and I wonder how I could test up to a 200k context length with sufficient GPU resources. ### Open source status - [X] The model implementation is available - [X] The model weights are available ### Provide useful links for the implementation _No response_
https://github.com/huggingface/transformers/issues/27755
closed
[]
2023-11-29T07:37:06Z
2024-05-24T07:24:56Z
null
taishan1994
huggingface/transformers.js
423
Not able to load local classification onnx model
Was trying to follow the instruction of this page to load local custom model, but failed to find local path https://huggingface.co/docs/transformers.js/custom_usage the code snippet ` import { env, AutoTokenizer, AutoModelForSequenceClassification } from '@xenova/transformers'; env.useFS = true; env.localModelPath = '/path/to/local/file' env.allowRemoteModels = false; let tokenizer = await AutoTokenizer.from_pretrained('tinybert'); let model = await AutoModelForSequenceClassification.from_pretrained('tinybert'); let inputs = await tokenizer('I love transformers!'); let { logits } = await model(inputs); ` here is the file structure: models └── tinybert ├── config.json ├── onnx │ ├── model.onnx │ └── model_quantized.onnx ├── ort_config.json ├── special_tokens_map.json ├── tokenizer.json ├── tokenizer_config.json └── vocab.txt error: (node:36959) ExperimentalWarning: stream/web is an experimental feature. This feature could change at any time (Use `node --trace-warnings ...` to show where the warning was created) Unable to load from local path "/Users/hzhang14/pete/2023_H1_spam/models/tinybert/tokenizer.json": "ReferenceError: Headers is not defined" Unable to load from local path "/Users/hzhang14/pete/2023_H1_spam/models/tinybert/tokenizer_config.json": "ReferenceError: Headers is not defined" file:///Users/hzhang14/pete/2023_H1_spam/node_modules/@xenova/transformers/src/utils/hub.js:462 throw Error(`\`local_files_only=true\` or \`env.allowRemoteModels=false\` and file was not found locally at "${localPath}".`); ^ Error: `local_files_only=true` or `env.allowRemoteModels=false` and file was not found locally at "/Users/hzhang14/pete/2023_H1_spam/models/tinybert/tokenizer.json". at getModelFile (file:///Users/hzhang14/pete/2023_H1_spam/node_modules/@xenova/transformers/src/utils/hub.js:462:27) at async getModelJSON (file:///Users/hzhang14/pete/2023_H1_spam/node_modules/@xenova/transformers/src/utils/hub.js:575:18) at async Promise.all (index 0) at async loadTokenizer (file:///Users/hzhang14/pete/2023_H1_spam/node_modules/@xenova/transformers/src/tokenizers.js:52:16) at async Function.from_pretrained (file:///Users/hzhang14/pete/2023_H1_spam/node_modules/@xenova/transformers/src/tokenizers.js:3890:48) at async file:///Users/hzhang14/pete/2023_H1_spam/js/test.mjs:9:17
https://github.com/huggingface/transformers.js/issues/423
closed
[ "question" ]
2023-11-29T06:40:09Z
2023-11-30T07:27:27Z
null
purezhanghan
huggingface/chat-ui
594
TypeError [ERR_INVALID_STATE]: Invalid state: Controller is already closed
I use the latest main version and I get an error when making a chat; in the GUI it shows "Sorry, something went wrong. Please try again." TypeError [ERR_INVALID_STATE]: Invalid state: Controller is already closed at new NodeError (node:internal/errors:405:5) at ReadableStreamDefaultController.enqueue (node:internal/webstreams/readablestream:1040:13) at update (file:////chat-ui-main/build/server/chunks/_server.ts-38ce6e8d.js:480:22) at file:////chat-ui-main/build/server/chunks/_server.ts-38ce6e8d.js:492:15 at process.processTicksAndRejections (node:internal/process/task_queues:95:5) at async Object.start (file:////chat-ui-main/build/server/chunks/_server.ts-38ce6e8d.js:585:9) { code: 'ERR_INVALID_STATE' Can anyone help me fix this problem?
https://github.com/huggingface/chat-ui/issues/594
closed
[ "support" ]
2023-11-29T04:28:27Z
2024-06-17T12:48:45Z
18
AlexBlack2202
huggingface/chat-ui
593
Show image in chat box
Can I show an image via an HTTP link in the chat box?
https://github.com/huggingface/chat-ui/issues/593
open
[ "support" ]
2023-11-29T03:17:17Z
2023-11-30T17:57:32Z
3
ntqnhanguyen
pytorch/text
2,217
how to run this code
## how to run this code I need a `pip list` (the required packages) to run this code.
https://github.com/pytorch/text/issues/2217
open
[]
2023-11-29T02:15:16Z
2024-08-05T12:51:43Z
null
ygqrc
huggingface/optimum
1,554
ORT Models Failing because of the latest fsdp changes on transformers Trainer.
### System Info ```shell optimum from source transformers from source ``` ### Who can help? @JingyaHuang ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction (minimal, reproducible, runnable) When trying to run training using ORTModule, all models fail due to the latest changes to the transformers Trainer: fsdp was removed as an attribute, among other changes. I can work on the fix if you don't have the bandwidth. @JingyaHuang We have also been getting a lot of these types of errors; can we work on a CI pipeline to spot these failures so we can fix them fast? Thanks. ### Expected behavior `AttributeError: 'ORTTrainer' object has no attribute 'fsdp' `
https://github.com/huggingface/optimum/issues/1554
closed
[ "bug" ]
2023-11-28T20:22:40Z
2023-12-26T18:15:02Z
6
AdamLouly
huggingface/chat-ui
592
Authentication Doc and Code may be out-of-date/not working
## Description Hello, Following the doc in the `README`: https://github.com/huggingface/chat-ui#basic-and-bearer. The UI should support (if setup in the `.env.local` file) `Basic` and `Bearer` authentication, however, what I noticed since the requests have been moved to the `huggingface` module is that the authorization flow has changed. In the module: ```js #huggingface/inference/dist/index.mjs [...] const { accessToken, model: _model, ...otherArgs } = args; let { model } = args; const { forceTask: task, includeCredentials, taskHint, ...otherOptions } = options ?? {}; const headers = {}; if (accessToken) { headers["Authorization"] = `Bearer ${accessToken}`; } [...] ``` If I define a custom chat endpoint in this way: ``` "endpoints": [{"url": "URL/generate_stream", "type" : "tgi", "accessToken": "<bearer-token-only>"}] ``` then the `accessToken` is properly propagated, but the suggested `"authorization": "Bearer/Basic <string>"` does not work. If this is intended: 1. I would be happy to open a quick PR to change the README to something like: ```suggestion #### Bearer Custom endpoints may require authorization, depending on how you configure them. Chat-UI support `Bearer` authentication. You can use a token, which can be grabbed from [here](https://huggingface.co/settings/tokens). You can then add the generated information and the `accessToken` parameter to your `.env.local`. ```env "endpoints": [ { "url": "https://HOST:PORT", "accessToken": "<bearer-token>", } ] **NOTE**: currently, `Basic` authentication is not supported ``` Please let me know what do you think, and if I am missing something. Thanks, Guido
https://github.com/huggingface/chat-ui/issues/592
open
[ "bug", "documentation", "back" ]
2023-11-28T18:50:15Z
2023-11-29T13:29:22Z
1
muscionig
huggingface/transformers.js
421
[Question] FeatureExtractionPipeline input length
@xenova : First of all thank you so much for your amazing work with this open source library. It opens up many possibilities. One thing that caught my attention which is [FeatureExtractionPipeline](https://huggingface.co/docs/transformers.js/api/pipelines#module_pipelines.FeatureExtractionPipeline) can accept any amount of input regardless of the models' [sequence lengths](https://huggingface.co/spaces/mteb/leaderboard). Does it truncate or tokenize the data internally before applying it to the model? Is there documentation or an explanation about the implementation details?
https://github.com/huggingface/transformers.js/issues/421
closed
[ "question" ]
2023-11-28T17:28:28Z
2023-12-02T11:20:52Z
null
devfacet
huggingface/sentence-transformers
2,361
How to divide long texts into chunks using sentence-transformers?
Hello, I encounter the issue of my texts exceeding the maximum lengths allowed by pretrained models. So I intend to divide my texts into smaller chunks and then calculate the average embeddings over them. However, I find this process is not as straightforward as I initially thought. In order to properly chunk the texts, I need to obtain the tokenized version of each text to determine the exact number of tokens. Unfortunately, it seems that the tokenizers in sentence-transformers are not standalone, meaning they can not tokenize long texts. So what is the best way to solve this problem?
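A minimal sketch of the chunk-and-average approach (assumptions: the model's own `tokenizer` and `max_seq_length` attributes, a 2-token margin for special tokens, and plain mean pooling over chunks):

```python
# Sketch: split a long text into token-sized chunks, embed each chunk, and average.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

def embed_long_text(text: str) -> np.ndarray:
    max_tokens = model.max_seq_length - 2          # leave room for special tokens
    ids = model.tokenizer(text, add_special_tokens=False)["input_ids"]
    chunks = [
        model.tokenizer.decode(ids[i : i + max_tokens])
        for i in range(0, len(ids), max_tokens)
    ]
    embeddings = model.encode(chunks, normalize_embeddings=True)
    return embeddings.mean(axis=0)
```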
https://github.com/huggingface/sentence-transformers/issues/2361
closed
[]
2023-11-28T16:35:44Z
2023-12-25T12:38:42Z
null
srhouyu
huggingface/alignment-handbook
56
Why does the alignment-handbook account for user & system Inputs in loss calculation
I noticed that the alignment-handbook doesn't ignore the loss calculated on the user and system inputs. Based on my knowledge, many SFT setups choose to ignore these. I'm curious about the reasoning behind this difference.
https://github.com/huggingface/alignment-handbook/issues/56
open
[]
2023-11-28T06:03:53Z
2024-05-30T07:45:29Z
3
xffxff
huggingface/transformers
27,737
How to save the generated output of BarkModel to an npz file?
Hello there! I'm using the BarkModel from Hugging Face Transformers and I'm wondering how to save the generated results to an npz file. I'd like to use these saved results as history prompts for the next generation. In the [suno-ai/bark](https://github.com/suno-ai/bark) , when using the [`semantic_to_waveform`](https://github.com/suno-ai/bark/blob/main/bark/api.py#L35) method, I can pass `output_full = True`. This allows me to save the output to an npz file using `numpy.savez`. However, as I transition to using the BarkModel within the transformers framework, I am uncertain about the equivalent process. Could you kindly provide guidance on how to save the generated results of the BarkModel to an npz file in the Transformers library? Any assistance or code examples you could offer would be greatly appreciated. Thank you for your time and support.
https://github.com/huggingface/transformers/issues/27737
closed
[]
2023-11-28T03:55:19Z
2024-01-10T08:03:57Z
null
chet-chen
huggingface/alignment-handbook
55
Running on single GPU(16GB)
Hi, What is the best way to run this on my high performance laptop? Should this somehow work? Can i calculate how many days/weeks it will run? Thanks in advance Specs: > OS: Win 11 (WSL2) > CPU: Intel Core i7 12850HX > Make: Lenovo Thinkpad P16 gen 1 > Memory: 128GB DDR5-4800 (2400MHz) > GPU: Nvidia RTX A5500 16GB I found that this command would work on my laptop it seems: `ACCELERATE_LOG_LEVEL=info accelerate launch --config_file recipes/accelerate_configs/multi_gpu.yaml --num_processes=1 scripts/run_sft.py recipes/zephyr-7b-beta/sft/config_lora.yaml --load_in_4bit=true --gradient_accumulation_steps=1024 --per_device_eval_batch_size=1 --per_device_train_batch_size=1` how now run it for 1-2 hours ish: > ACCELERATE_LOG_LEVEL=info accelerate launch --config_file recipes/accelerate_configs/multi_gpu.yaml --num_processes=1 scripts/run_sft.py recipes/zephyr-7b-beta/sft/config_lora.yaml --load_in_4bit=true --gradient_accumulation_steps=1024 --per_device_eval_batch_size=1 --per_device_train_batch_size=1 > INFO:root:Using nproc_per_node=1. > 2023-11-27 15:41:33.914308: I tensorflow/core/util/port.cc:110] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`. > 2023-11-27 15:41:33.941565: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations. > To enable the following instructions: AVX2 AVX_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags. > 2023-11-27 15:41:34.582753: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT > [2023-11-27 15:41:35,164] [INFO] [real_accelerator.py:158:get_accelerator] Setting ds_accelerator to cuda (auto detect) > /usr/local/lib/python3.11/dist-packages/trl/trainer/ppo_config.py:141: UserWarning: The `optimize_cuda_cache` arguement will be deprecated soon, please use `optimize_device_cache` instead. 
> warnings.warn( > 2023-11-27 15:41:35 - WARNING - __main__ - Process rank: 0, device: cuda:0, n_gpu: 1 distributed training: True, 16-bits training: False > 2023-11-27 15:41:35 - INFO - __main__ - Model parameters ModelArguments(base_model_revision=None, model_name_or_path='mistralai/Mistral-7B-v0.1', model_revision='main', model_code_revision=None, torch_dtype='auto', trust_remote_code=False, use_flash_attention_2=True, use_peft=True, lora_r=64, lora_alpha=16, lora_dropout=0.1, lora_target_modules=['q_proj', 'k_proj', 'v_proj', 'o_proj'], lora_modules_to_save=None, load_in_8bit=False, load_in_4bit=True, bnb_4bit_quant_type='nf4', use_bnb_nested_quant=False) > 2023-11-27 15:41:35 - INFO - __main__ - Data parameters DataArguments(chat_template=None, dataset_mixer={'HuggingFaceH4/ultrachat_200k': 1.0}, dataset_splits=['train_sft', 'test_sft'], max_train_samples=None, max_eval_samples=None, preprocessing_num_workers=12, truncation_side=None) > 2023-11-27 15:41:35 - INFO - __main__ - Training/evaluation parameters SFTConfig( > _n_gpu=1, > adafactor=False, > adam_beta1=0.9, > adam_beta2=0.999, > adam_epsilon=1e-08, > auto_find_batch_size=False, > bf16=True, > bf16_full_eval=False, > data_seed=None, > dataloader_drop_last=False, > dataloader_num_workers=0, > dataloader_pin_memory=True, > ddp_backend=None, > ddp_broadcast_buffers=None, > ddp_bucket_cap_mb=None, > ddp_find_unused_parameters=None, > ddp_timeout=1800, > debug=[], > deepspeed=None, > disable_tqdm=False, > dispatch_batches=None, > do_eval=True, > do_predict=False, > do_train=False, > eval_accumulation_steps=None, > eval_delay=0, > eval_steps=None, > evaluation_strategy=IntervalStrategy.EPOCH, > fp16=False, > fp16_backend=auto, > fp16_full_eval=False, > fp16_opt_level=O1, > fsdp=[], > fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_grad_ckpt': False}, > fsdp_min_num_params=0, > fsdp_transformer_layer_cls_to_wrap=None, > full_determinism=False, > gradient_accumulation_steps=1024, > gradient_checkpointing=True, > gradient_checkpointing_kwargs={'use_reentrant': False}, > greater_is_better=None, > group_by_length=False, > half_precision_backend=auto, > hub_always_push=False, > hub_model_id=zephyr-7b-sft-lora, > hub_private_repo=False, > hub_strategy=HubStrategy.EVERY_SAVE, > hub_token=<HUB_TOKEN>, > ignore_data_skip=False, > include_inputs_for_metrics=False, > include_tokens_per_second=False, > jit_mode_eval=False, > label_names=None, > label_smoothing_factor=0.0, > learning_rate=2e-05, > length_column_name=length, > load_best_model_at_end=False, > local_rank=0, > log_level=info, > log_level_replica=warning, > log_on_each_node=True, > logging_dir=data/zephyr-7b-sft-lora/runs/Nov27_15-41-35, > logging_first_step=True, > logging_nan_inf_filter=True, > logging_steps=5, > logg
https://github.com/huggingface/alignment-handbook/issues/55
open
[]
2023-11-27T19:50:12Z
2023-12-13T14:58:31Z
1
patchie
huggingface/chat-ui
588
Hallucinations when using web search
I have tried to run a mistral model with the search api but the web results don't seem to be making it to the model. I'm hosting the model through text-gen-webui and encountering the exact same issue as #571. I've given it a go with [openhermes-2.5-mistral-7b.Q5_K_M.gguf](https://imgur.com/a/HQV1lGD), [it seems to use the search tool just fine](https://imgur.com/a/GN9ycZY) but fails to incorporate the results into its answer. Any idea how to fix this issue or at least how I could help with debugging.
https://github.com/huggingface/chat-ui/issues/588
open
[ "support", "websearch" ]
2023-11-27T17:12:22Z
2023-12-27T21:25:42Z
2
NasonZ
huggingface/chat-ui
587
How do I format the ChatPromptTemplate ?
I currently have a working setup with llamacpp + Mistral 7B Instruct with the following `.env.local`: ``` MODELS=`[ { "name": "Mistral", "chatPromptTemplate": "<s>{{#each messages}}{{#ifUser}}[INST] {{#if @first}}{{#if @root.preprompt}}{{@root.preprompt}}\n{{/if}}{{/if}} {{content}} [/INST]{{/ifUser}}{{#ifAssistant}}{{content}}</s> {{/ifAssistant}}{{/each}}", "parameters": { "temperature": 0.1, "top_p": 0.95, "repetition_penalty": 1.2, "top_k": 50, "truncate": 4096, "max_new_tokens": 4096, "stop": ["</s>"] }, "endpoints": [{ "url": "http://127.0.0.1:8080", "type": "llamacpp" } / ] } ]` ``` I am trying to set up the "Neural Chat" model by Intel, and its template is: ### System: {system_message} ### User: {prompt} ### Assistant: How can I set the chatPromptTemplate to match it, so that it can summarize and search the web correctly? I'm having some trouble understanding how to format it and where to put ### User, etc. Thanks
https://github.com/huggingface/chat-ui/issues/587
open
[ "support", "models" ]
2023-11-27T15:21:17Z
2023-12-19T07:21:50Z
5
iChristGit
huggingface/candle
1,379
Help request: How to compile CUDA kernels with `cc-rs`?
Hello everybody, In the process of adding PagedAttention to candle-vllm, I need to compile some CUDA kernels. I am currently trying to use `cc-rs` in a `build.rs` to automatically build the kernels. However, I am not making much progress as I have run into issues that seem to be tied to the build stage. I would really appreciate some pointers on how to use either `nvcc` or `cc-rs` to build these CUDA kernels. I have opened an issue with vllm: vllm-project/vllm#1793. Thanks, Eric
https://github.com/huggingface/candle/issues/1379
closed
[]
2023-11-27T14:32:10Z
2023-11-27T20:57:11Z
null
EricLBuehler
huggingface/transformers
27,726
How to load PixArtAlphaPipeline in 8bit?
I know there is example but I couldn't make it work. I am trying to make an auto installer and gradio interface for Pix Art Alpha Pipeline so common people can install and use on their Windows PCs Currently my below code working and I want to make it load in 8 bit is that possible? ``` if torch.cuda.is_available(): pipe = PixArtAlphaPipeline.from_pretrained( "PixArt-alpha/PixArt-XL-2-1024-MS", torch_dtype=torch.float16, use_safetensors=True, ) if ENABLE_CPU_OFFLOAD: pipe.enable_model_cpu_offload() else: pipe.to(device) print("Loaded on Device!") # speed-up T5 pipe.text_encoder.to_bettertransformer() if USE_TORCH_COMPILE: pipe.transformer = torch.compile(pipe.transformer, mode="reduce-overhead", fullgraph=True) print("Model Compiled!") ``` ``` seed = int(randomize_seed_fn(seed, randomize_seed)) generator = torch.Generator().manual_seed(seed) if schedule == 'DPM-Solver': if not isinstance(pipe.scheduler, DPMSolverMultistepScheduler): pipe.scheduler = DPMSolverMultistepScheduler() num_inference_steps = dpms_inference_steps guidance_scale = dpms_guidance_scale elif schedule == "SA-Solver": if not isinstance(pipe.scheduler, SASolverScheduler): pipe.scheduler = SASolverScheduler.from_config(pipe.scheduler.config, algorithm_type='data_prediction', tau_func=lambda t: 1 if 200 <= t <= 800 else 0, predictor_order=2, corrector_order=2) num_inference_steps = sas_inference_steps guidance_scale = sas_guidance_scale else: raise ValueError(f"Unknown schedule: {schedule}") if not use_negative_prompt: negative_prompt = None # type: ignore prompt, negative_prompt = apply_style(style, prompt, negative_prompt) images = pipe( prompt=prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps, generator=generator, num_images_per_prompt=NUM_IMAGES_PER_PROMPT, use_resolution_binning=use_resolution_binning, output_type="pil", ).images ``` ### Who can help? @sayakpaul @Narsil @SunMarc @younesbelkada @gante I tried below but it broken the app ``` text_encoder = T5EncoderModel.from_pretrained( "PixArt-alpha/PixArt-XL-2-1024-MS", subfolder="text_encoder", load_in_8bit=True, device_map="auto", ) pipe = PixArtAlphaPipeline.from_pretrained( "PixArt-alpha/PixArt-XL-2-1024-MS", text_encoder=text_encoder, transformer=None, device_map="auto" ) ``` The error I am getting is like below ``` Downloading shards: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<?, ?it/s] bin G:\pixArt installer\PixArt-alpha\venv\lib\site-packages\bitsandbytes\libbitsandbytes_cuda118.dll Loading checkpoint shards: 100%|█████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:06<00:00, 3.09s/it] Loading pipeline components...: 0%| | 0/4 [00:00<?, ?it/s]Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained. Loading pipeline components...: 100%|████████████████████████████████████████████████████████████████████████████████| 4/4 [00:00<00:00, 9.50it/s] Running on local URL: http://127.0.0.1:7860 To create a public link, set `share=True` in `launch()`. 
batch_count 1 Traceback (most recent call last): File "G:\pixArt installer\PixArt-alpha\venv\lib\site-packages\gradio\queueing.py", line 427, in call_prediction output = await route_utils.call_process_api( File "G:\pixArt installer\PixArt-alpha\venv\lib\site-packages\gradio\route_utils.py", line 232, in call_process_api output = await app.get_blocks().process_api( File "G:\pixArt installer\PixArt-alpha\venv\lib\site-packages\gradio\blocks.py", line 1484, in process_api result = await self.call_function( File "G:\pixArt installer\PixArt-alpha\venv\lib\site-packages\gradio\blocks.py", line 1106, in call_function prediction = await anyio.to_thread.run_sync( File "G:\pixArt installer\PixArt-alpha\venv\lib\site-packages\anyio\to_thread.py", line 33, in run_sync return await get_asynclib().run_sync_in_worker_thread( File "G:\pixArt installer\PixArt-alpha\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread return await future File "G:\pixArt installer\PixArt-alpha\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run result = context.run(func, *args) File "G:\pixArt installer\P
https://github.com/huggingface/transformers/issues/27726
closed
[]
2023-11-27T11:36:44Z
2024-01-05T08:03:56Z
null
FurkanGozukara
huggingface/diffusers
5,942
How to prepare dataset for text-guided image to image generation
As the title suggests, I want to fine-tune Stable Diffusion on my own dataset. How should I build the dataset? I have tried: --input_image --xx.jpg --xx.jpg --output_image --yy.jpg --yy.jpg metadata.csv but it didn't work. Can anybody help?
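A minimal sketch of one way to build a paired image-to-image dataset with 🤗 Datasets, assuming folders of input/output images with matching filenames; the column names (`input_image`, `target_image`, `text`) are illustrative and should match whatever your training script expects:

```python
from pathlib import Path
from datasets import Dataset, Image

input_dir = Path("input_image")    # assumed local folders with matching filenames
output_dir = Path("output_image")

records = {"input_image": [], "target_image": [], "text": []}
for input_path in sorted(input_dir.glob("*.jpg")):
    target_path = output_dir / input_path.name
    records["input_image"].append(str(input_path))
    records["target_image"].append(str(target_path))
    records["text"].append("a caption describing the edit")  # placeholder caption

dataset = Dataset.from_dict(records)
# Cast the path columns so they are decoded as PIL images when accessed.
dataset = dataset.cast_column("input_image", Image())
dataset = dataset.cast_column("target_image", Image())
dataset.save_to_disk("paired_dataset")
```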
https://github.com/huggingface/diffusers/issues/5942
closed
[ "stale" ]
2023-11-27T06:58:57Z
2024-01-09T15:06:12Z
null
feelme0461
huggingface/alignment-handbook
52
What about the system prompt?
It seems that the system prompt is left as `\n`, or rather blank. Inspecting UltraChat (https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k?row=5), it seems that no system prompt is added to the dataset. There must be something I missed regarding the addition of system prompts to the dataset for training, especially since the officially deployed model is able to adhere to system prompt intent (like 'You are a pirate', etc.).
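For illustration, a minimal sketch of prepending a system turn to an UltraChat-style example before applying the chat template (this shows the mechanics only, not necessarily how the handbook scripts handle it; the checkpoint name is just an example):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("HuggingFaceH4/zephyr-7b-beta")

example = {
    "messages": [
        {"role": "user", "content": "Who are you?"},
        {"role": "assistant", "content": "Arr, I be a humble language model."},
    ]
}

# Prepend a system turn; the chat template decides how it is rendered.
messages = [{"role": "system", "content": "You are a pirate."}] + example["messages"]
text = tokenizer.apply_chat_template(messages, tokenize=False)
print(text)
```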
https://github.com/huggingface/alignment-handbook/issues/52
open
[]
2023-11-27T02:55:38Z
2023-11-27T02:55:38Z
0
timothylimyl
huggingface/alignment-handbook
50
What is the expected "global batch size"?
In the recipes README there is this statement: > If you scale up/down the number of GPUs, we recommend also scaling up the per-device batch size or number of gradient accumulation steps to keep the global batch size constant (and thus replicate our results). Q: What is the expected "global batch size"? For example, I'm trying to run this on 2x3090s and need to know what the expected global batch size is so I can adjust the accumulation steps and per device train batch size. Thanks much!
https://github.com/huggingface/alignment-handbook/issues/50
closed
[]
2023-11-26T21:47:41Z
2023-11-27T04:14:22Z
null
ohmeow
huggingface/transformers.js
417
[Question] Any examples of processing video frames of a user uploaded video (specifically for depth estimation)?
Hi there, I'm wondering if there are any examples of processing video frames of a user uploaded video? I'm specifically looking to run depth estimation on each frame of a short video, but any similar example would be useful. If not, does this approach seem correct? * Use one of the approaches described [here](https://stackoverflow.com/questions/32699721/javascript-extract-video-frames-reliably) to draw each frame of the video to a canvas * Call `HTMLCanvasElement.toBlob()` on the canvas to get a `Blob` * Pass N (10?) of those Blobs to a worker at a time * For each of those Blobs call `const image = await RawImage.fromBlob(blob)` to get a `RawImage` * Run depth estimation on the list of images with `await classifier([rawImage1, rawImage2, etc.])` Thanks for any help!
https://github.com/huggingface/transformers.js/issues/417
open
[ "question" ]
2023-11-26T09:18:04Z
2023-12-10T22:51:18Z
null
jparismorgan
huggingface/chat-ui
583
Option to share the web interface locally/online ?
I wish we could make the ui available on phone/mac or even outside the local network. For example in SillyTavern (https://github.com/SillyTavern/SillyTavern) You can either open it up to all devices in the local network or open a cloudflare tunnel to access it through a link. Is that possible to add?
https://github.com/huggingface/chat-ui/issues/583
open
[ "enhancement", "back" ]
2023-11-26T00:44:08Z
2024-04-22T16:45:44Z
2
iChristGit
huggingface/candle
1,375
Question: How to interface a C++ API `torch::Tensor` with `candle_core::Tensor`?
I was wondering if there is a way to use a C++ API that accepts a Pytorch `torch::Tensor` with a Candle `candle_core::Tensor`? For reference, I want to use [this](https://github.com/vllm-project/vllm/blob/main/csrc/ops.h) C++ API. Can I convert between tensor types? @LaurentMazare, would it be possible to use [tch-rs](https://github.com/LaurentMazare/tch-rs) to make this conversion? Thanks for any help!
https://github.com/huggingface/candle/issues/1375
closed
[]
2023-11-25T19:05:27Z
2023-11-25T23:04:03Z
null
EricLBuehler
pytorch/TensorRT
2,486
❓ [Question] Using dynamic shapes with FX frontend
I tried to use dynamic shapes in FX path with the following codes. It seems that the `input_specs` argument passed to `LowerSetting` has no effect and TRT gives an error message. ```python import torch import torch.nn as nn from torch_tensorrt.fx import InputTensorSpec, LowerSetting from torch_tensorrt.fx.lower import Lowerer from torch_tensorrt.fx.utils import LowerPrecision class MyModule(nn.Module): def __init__(self): super(MyModule, self).__init__() self.conv = nn.Sequential(nn.Conv2d(1, 20, 5), nn.PReLU()) def forward(self, input): return self.conv(input) with torch.inference_mode(): device = torch.device("cuda") mod = MyModule().eval().to(device).half() lower_setting = LowerSetting( lower_precision=LowerPrecision.FP16, min_acc_module_size=1, input_specs=[ InputTensorSpec( shape=(1, 1, -1, -1), dtype=torch.half, device=device, shape_ranges=[((1, 1, 16, 16), (1, 1, 32, 32), (1, 1, 64, 64))], ) ], dynamic_batch=False, ) lowerer = Lowerer.create(lower_setting=lower_setting) mod_trt = lowerer(mod, [torch.rand((1, 1, 16, 16), dtype=torch.half, device=device)]) print(mod_trt(torch.rand((1, 1, 16, 16), dtype=torch.half, device=device)).shape) print(mod_trt(torch.rand((1, 1, 32, 32), dtype=torch.half, device=device)).shape) ``` ``` WARNING:torch_tensorrt.fx.tracer.acc_tracer.acc_tracer:MyModule__AccRewrittenModule does not have attribute _compiled_call_impl WARNING:torch_tensorrt.fx.tracer.acc_tracer.acc_tracer:Sequential__AccRewrittenModule does not have attribute _compiled_call_impl WARNING:torch_tensorrt.fx.tracer.acc_tracer.acc_tracer:Conv2d__AccRewrittenModule does not have attribute _compiled_call_impl WARNING:torch_tensorrt.fx.tracer.acc_tracer.acc_tracer:PReLU__AccRewrittenModule does not have attribute _compiled_call_impl C:\Python311\Lib\site-packages\torch\overrides.py:110: UserWarning: 'has_cuda' is deprecated, please use 'torch.backends.cuda.is_built()' torch.has_cuda, C:\Python311\Lib\site-packages\torch\overrides.py:111: UserWarning: 'has_cudnn' is deprecated, please use 'torch.backends.cudnn.is_available()' torch.has_cudnn, C:\Python311\Lib\site-packages\torch\overrides.py:117: UserWarning: 'has_mps' is deprecated, please use 'torch.backends.mps.is_built()' torch.has_mps, C:\Python311\Lib\site-packages\torch\overrides.py:118: UserWarning: 'has_mkldnn' is deprecated, please use 'torch.backends.mkldnn.is_available()' torch.has_mkldnn, WARNING:torch_tensorrt.fx.tracer.acc_tracer.acc_tracer:GraphModule.__new__.<locals>.GraphModuleImpl__AccRewrittenModule does not have attribute _compiled_call_impl WARNING:torch_tensorrt.fx.tracer.acc_tracer.acc_tracer:Module__AccRewrittenModule does not have attribute _compiled_call_impl WARNING:torch_tensorrt.fx.tracer.acc_tracer.acc_tracer:Module__AccRewrittenModule does not have attribute _compiled_call_impl WARNING:torch_tensorrt.fx.tracer.acc_tracer.acc_tracer:Module__AccRewrittenModule does not have attribute _compiled_call_impl INFO:torch_tensorrt.fx.passes.pass_utils:== Log pass <function fuse_permute_matmul at 0x000001E8B08F0E00> before/after graph to C:\Users\HOLYWU~1\AppData\Local\Temp\tmpgbz4qw6c, before/after are the same = True, time elapsed = 0:00:00.026858 INFO:torch_tensorrt.fx.passes.pass_utils:== Log pass <function fuse_permute_linear at 0x000001E8B08F0B80> before/after graph to C:\Users\HOLYWU~1\AppData\Local\Temp\tmpp8c1a1dw, before/after are the same = True, time elapsed = 0:00:00.000981 INFO:torch_tensorrt.fx.passes.pass_utils:== Log pass <function fix_clamp_numerical_limits_to_fp16 at 0x000001E8B08F1440> before/after graph 
to C:\Users\HOLYWU~1\AppData\Local\Temp\tmp43sia5pv, before/after are the same = True, time elapsed = 0:00:00 Supported node types in the model: acc_ops.conv2d: ((), {'input': torch.float16, 'weight': torch.float16, 'bias': torch.float16}) Unsupported node types in the model: acc_ops.prelu: ((), {'input': torch.float16, 'weight': torch.float16}) Got 1 acc subgraphs and 1 non-acc subgraphs INFO:torch_tensorrt.fx.passes.lower_pass_manager_builder:Now lowering submodule _run_on_acc_0 INFO:torch_tensorrt.fx.lower:split_name=_run_on_acc_0, input_specs=[InputTensorSpec(shape=torch.Size([1, 1, 16, 16]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)] INFO:torch_tensorrt.fx.lower:Timing cache is used! INFO:torch_tensorrt.fx.fx2trt:TRT INetwork construction elapsed time: 0:00:00.001014 INFO:torch_tensorrt.fx.fx2trt:Build TRT engine elapsed time: 0:00:00.993050 INFO:torch_tensorrt.fx.passes.lower_pass_manager_builder:Lowering submodule _run_on_acc_0 elapsed time 0:00:05.996300 torch.Size([1, 20, 12, 12]) [11/25/2023-13:55:00] [TRT] [E] 3: [executionContext.cpp::nvinfer1::rt::ExecutionContext::validat
https://github.com/pytorch/TensorRT/issues/2486
closed
[ "question" ]
2023-11-25T06:52:12Z
2024-02-22T13:30:13Z
null
HolyWu
pytorch/TensorRT
2,485
How may I install torch_tensorrt with my own local version of torch?
## ❓ Question How may I install `torch_tensorrt` with my own local version of torch? ## What you have already tried pip install torch-tensorrt --no-deps resulted in ``` ImportError: /home/jonch/.local/lib/python3.10/site-packages/torch_tensorrt/lib/libtorchtrt.so: undefined symbol: _ZN3c106detail23torchInternalAssertFailEPKcS2_jS2_RKSs ``` Seems like it tries to link to torch shared library but fails. I guess I can't configure it to point to my existing installation of torch. For instance, what if I want to use torch_tensorrt with torch nightly? ## Environment > Build information about Torch-TensorRT can be found by turning on debug messages - PyTorch Version (e.g., 1.0): - CPU Architecture: - OS (e.g., Linux): - How you installed PyTorch (`conda`, `pip`, `libtorch`, source): - Build command you used (if compiling from source): - Are you using local sources or building from archives: - Python version: - CUDA version: - GPU models and configuration: - Any other relevant information: ## Additional context <!-- Add any other context about the problem here. -->
https://github.com/pytorch/TensorRT/issues/2485
open
[ "question" ]
2023-11-25T05:25:35Z
2023-11-28T19:50:29Z
null
jon-chuang
huggingface/accelerate
2,187
How to collect outputs (not tensor dtype) on multiple GPUs
Take the toy example below: ``` val_dataset = ['a', 'b', 'c', 'd', 'e'] val_dataloader = DataLoader( val_dataset, batch_size=2 ) accelerator = Accelerator() val_dataloader = accelerator.prepare(val_dataloader) for step, batch in enumerate(val_dataloader): print(batch, accelerator.device) ``` When I run this script with `CUDA_VISIBLE_DEVICES="0,1" accelerate launch --config_file="./configs/acc_mgpu_config.yaml" test_batch.py`, I get the results below. How can I get ['a', 'b', 'c', 'd', 'e'] in the main process after gathering the batches from all processes? ``` ['a', 'b'] cuda:0 ['e', 'a'] cuda:0 ['c', 'd'] cuda:1 ['b', 'c'] cuda:1 ``` I know that accelerate has `gather_for_metrics`, which gathers the input and potentially **drops duplicates** in the last batch on a distributed system. But this function seems to only work for tensor-type data; in this example, my data is strings. Is there any way to achieve this? (If I use `print(accelerator.gather_for_metrics((batch)), accelerator.device)`, it raises an error like the one below.) ``` TypeError: Unsupported types (<class 'str'>) passed to `_gpu_gather_one`. Only nested list/tuple/dicts of objects that are valid for `is_torch_tensor` should be passed. ``` Thanks for any potential answers!
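A minimal sketch using `gather_object` from `accelerate.utils`, which gathers arbitrary picklable Python objects such as strings (available in recent accelerate versions); note you may still need to drop the duplicates that padding adds to the last uneven batch:

```python
from accelerate import Accelerator
from accelerate.utils import gather_object
from torch.utils.data import DataLoader

val_dataset = ["a", "b", "c", "d", "e"]
val_dataloader = DataLoader(val_dataset, batch_size=2)

accelerator = Accelerator()
val_dataloader = accelerator.prepare(val_dataloader)

local_outputs = []
for batch in val_dataloader:
    local_outputs.extend(batch)              # batch is a list of strings here

all_outputs = gather_object(local_outputs)   # gathered across all processes
if accelerator.is_main_process:
    # For this toy example the items are unique, so de-duplicate the padded extras.
    print(list(dict.fromkeys(all_outputs)))
```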
https://github.com/huggingface/accelerate/issues/2187
closed
[]
2023-11-25T02:51:21Z
2023-11-27T06:07:19Z
null
shliu0
huggingface/chat-ui
581
Trying to set up with TGI
I have installed TGI using Docker and I can see the API docs at http://127.0.0.1:8080/docs/, but I still cannot set up the env.local file. I have tried to set it up following the example, but it always fails. ![image](https://github.com/huggingface/chat-ui/assets/20077386/032a02c0-9d3b-473e-9c1b-a3c948eb06d3) ![image](https://github.com/huggingface/chat-ui/assets/20077386/3cd0a46d-0334-448e-bad8-2124045abc42) Can someone who set it up correctly give me a rough idea of how to write the file? I have tried a lot of combinations, and it always fails with either an internal error or the screenshot above.
https://github.com/huggingface/chat-ui/issues/581
open
[ "support" ]
2023-11-24T19:20:27Z
2023-12-19T06:02:25Z
2
iChristGit
huggingface/transformers.js
412
[Question] Does any version support Node 14
Hi, I have tried downgrading the library to version 2, and even to 1, but that one was missing types. Is there some way to be able to use it with Node 14? I have seen that mostly the issues are with nullish coalescing characters, so wanted to make sure if there could be other issues that tie it to Node 18+, and also if there have been any security and vulnerability issues from said version (that could work with Node 14). Thanks
https://github.com/huggingface/transformers.js/issues/412
closed
[ "question" ]
2023-11-24T16:01:54Z
2023-12-04T13:16:26Z
null
Ncifra
huggingface/hf_transfer
20
[Usage] How to enable the progress bar?
I've installed `hf_transfer-0.1.4`. But when I use `huggingface-cli download`, the progress bar mentioned [here](https://huggingface.co/docs/huggingface_hub/guides/download#faster-downloads) seems to be disabled at default. And I failed to figure out how to enable it. Could anyone be kind enough to provide some guidance?
https://github.com/huggingface/hf_transfer/issues/20
closed
[]
2023-11-24T08:13:00Z
2023-11-27T12:15:10Z
null
tongyx361
huggingface/gsplat.js
39
How to implement point cloud rendering?
Hi, great work! I see that this library is built upon [antimatter15/splat](https://github.com/antimatter15/splat), but this library does not have the same point-cloud-like rendering mode that that lib has. I want to know how to implement this feature based on your gsplat library. By the way, do you have any documentation about the config options, so I can set some render options?
https://github.com/huggingface/gsplat.js/issues/39
open
[]
2023-11-24T07:27:33Z
2024-01-22T21:12:06Z
null
xinnai
huggingface/alignment-handbook
46
Weird DPO loss
Hi, I would like to raise some attention to issue #38. It seems that the DPO-LoRA training loss (red line) drops abruptly at the beginning of each epoch, which seems weird. (I tried the LoRA model with global batch size 64, multi_gpu acceleration, 8 GPUs, learning rate 1e-4, and the other settings as suggested.) Meanwhile, full-parameter fine-tuning has no such problem (official settings). ![image](https://github.com/huggingface/alignment-handbook/assets/40993476/5ffa7fd5-c93b-44e5-a150-2a133371ab13) I don't know if this is normal and **assume this is a bug associated with the LoRA model**. Is there any explanation? Has anyone encountered the same issue? If your rerun loss is normal, can you share your configs?
https://github.com/huggingface/alignment-handbook/issues/46
open
[]
2023-11-24T03:07:46Z
2024-05-28T07:09:10Z
1
ChenDRAG
huggingface/diffusers
5,912
How to set config in VaeImageProcessor?
I created a `StableDiffusionControlNetImg2ImgPipeline` and I want to manually set the `do_normalize` config in `VaeImageProcessor`. How can I set it? I looked for it in `pipe.vae.config` and see nothing about it.
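A minimal sketch of one way to do this — `do_normalize` lives on the pipeline's image processor rather than on the VAE config, so you can replace `pipe.image_processor` with one built the way you want (treat the exact construction as an assumption to check against your diffusers version):

```python
from diffusers import ControlNetModel, StableDiffusionControlNetImg2ImgPipeline
from diffusers.image_processor import VaeImageProcessor

controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny")
pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # example checkpoint
    controlnet=controlnet,
)

# Rebuild the image processor with the flag you want to control.
vae_scale_factor = 2 ** (len(pipe.vae.config.block_out_channels) - 1)
pipe.image_processor = VaeImageProcessor(
    vae_scale_factor=vae_scale_factor,
    do_normalize=False,
)
```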
https://github.com/huggingface/diffusers/issues/5912
closed
[ "stale" ]
2023-11-23T12:54:22Z
2023-12-26T21:29:17Z
null
youyuge34
huggingface/chat-ui
576
Cannot build using latest Chat UI Space template
Using the Dockerfile created from the ChatUI-Space template, but cloning it to a local machine and trying to build it fails at `npm run build` > #18 [chatui-builder 12/12] RUN npm run build #0 0.673 #0 0.673 > chat-ui@0.6.0 build #0 0.673 > vite build #0 0.673 #0 1.678 vite v4.3.9 building SSR bundle for production... #0 1.678 #0 1.707 transforming... #0 4.381 "BaseClient" and "TokenSet" are imported from external module "openid-client" but never used in "src/lib/server/auth.ts". #0 4.381 ✓ 210 modules transformed. #0 4.473 rendering chunks... #0 5.665 #0 5.665 node:internal/event_target:1036 #0 5.665 process.nextTick(() => { throw err; }); #0 5.665 ^ #0 5.666 SyntaxError [Error]: Bad control character in string literal in JSON at position 157 #0 5.666 at JSON.parse (<anonymous>) #0 5.666 at file:///app/chat-ui/.svelte-kit/output/server/chunks/models.js:512:51 #0 5.666 at ModuleJob.run (node:internal/modules/esm/module_job:193:25) #0 5.666 Emitted 'error' event on Worker instance at: #0 5.666 at [kOnErrorMessage] (node:internal/worker:309:10) #0 5.666 at [kOnMessage] (node:internal/worker:320:37) #0 5.666 at MessagePort.<anonymous> (node:internal/worker:216:57) #0 5.666 at [nodejs.internal.kHybridDispatch] (node:internal/event_target:761:20) #0 5.666 at exports.emitMessage (node:internal/per_context/messageport:23:28) #0 5.666 #0 5.666 Node.js v19.9.0 #0 5.751 npm notice #0 5.751 npm notice New major version of npm available! 9.6.3 -> 10.2.4 #0 5.751 npm notice Changelog: <https://github.com/npm/cli/releases/tag/v10.2.4> #0 5.751 npm notice Run `npm install -g npm@10.2.4` to update! #0 5.751 npm notice #18 ERROR: process "/bin/sh -c npm run build" did not complete successfully: exit code: 1 #------ #> [chatui-builder 12/12] RUN npm run build: #0 5.666 at MessagePort.<anonymous> (node:internal/worker:216:57) #0 5.666 at [nodejs.internal.kHybridDispatch] (node:internal/event_target:761:20) #0 5.666 at exports.emitMessage (node:internal/per_context/messageport:23:28) #0 5.666 #0 5.666 Node.js v19.9.0 #0 5.751 npm notice #0 5.751 npm notice New major version of npm available! 9.6.3 -> 10.2.4 #0 5.751 npm notice Changelog: <https://github.com/npm/cli/releases/tag/v10.2.4> #0 5.751 npm notice Run `npm install -g npm@10.2.4` to update! #0 5.751 npm notice #------ #Dockerfile:49 #-------------------- #47 | npm ci #48 | #49 | >>> RUN npm run build #50 | #51 | FROM ghcr.io/huggingface/text-generation-inference:latest #-------------------- #ERROR: failed to solve: process "/bin/sh -c npm run build" did not complete successfully: exit code: 1
https://github.com/huggingface/chat-ui/issues/576
open
[ "support", "spaces" ]
2023-11-23T12:23:06Z
2023-11-30T14:11:32Z
1
simon376
huggingface/transformers
27,666
How to remove punctuation marks
### System Info I trained t5-large for translation. The training result was good, but when I input some sentences, the output looks like "What are you doing now?.??....." [?.??......] <- how can I delete those punctuation marks? I set some parameters like max_length, but I could not solve the problem. ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction c ### Expected behavior cfdvf
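If the fine-tuned model keeps emitting runs of trailing punctuation, one pragmatic workaround is to post-process the decoded text; a minimal sketch (a simple regex heuristic, not a fix for the underlying training issue):

```python
import re

def collapse_trailing_punctuation(text: str) -> str:
    # Replace any run of mixed '?', '.' and '!' characters with its first character.
    return re.sub(r"([?.!])[?.!]+", r"\1", text).strip()

print(collapse_trailing_punctuation("What are you doing now?.??....."))
# -> "What are you doing now?"
```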
https://github.com/huggingface/transformers/issues/27666
closed
[]
2023-11-23T07:21:33Z
2023-12-31T08:03:43Z
null
chanyong-owl
huggingface/blog
1,655
How to scale fine-tuning Whisper in English?
I'm attempting to fine-tune whisper using the excellent hugging face tut: https://huggingface.co/blog/fine-tune-whisper. The delta between the tut's case and my case is that I am using English which has 1M more test cases (and also I'm using big GPUs so I am using `whisper-large-v3`). No matter how much compute I throw at the core data preparation step (e.g. take a look at `num_proc`): `common_voice = common_voice.map(prepare_dataset, remove_columns=common_voice.column_names["train"], num_proc=108)` I still only prepare the data at about 30 examples / s. For 1M examples this doesn't scale. My last test was on an 8 GPU 112 vCPU instance and still there was no change. Indeed `htop` shows that all 112 of my vCPUs are engaged, but the actual prep speed remains flat across all compute types. The only thing I haven't tried is crazy fast storage like NVMe, which I'm going to do, but I have a feeling it has to do with either the `datasets` library configuration or something else. I've never had problems with GPUs or whisper previously so I'm a bit baffled as to what the issue could. I've followed the tutorial to a 't' except for changing the language to `en`, whisper to `whisper-large-v3` and `num_proc` to higher parallels. Any insight would be greatly appreciated!
https://github.com/huggingface/blog/issues/1655
open
[]
2023-11-22T22:45:29Z
2024-03-10T06:55:47Z
null
jsteinberg-rbi
huggingface/datasets
6,446
Speech Commands v2 dataset doesn't match AST-v2 config
### Describe the bug [According](https://huggingface.co/MIT/ast-finetuned-speech-commands-v2) to `MIT/ast-finetuned-speech-commands-v2`, the model was trained on the Speech Commands v2 dataset. However, while the model config says the model should have 35 class labels, the dataset itself has 36 class labels. Moreover, the class labels themselves don't match between the model config and the dataset. It is difficult to reproduce the data used to fine tune `MIT/ast-finetuned-speech-commands-v2`. ### Steps to reproduce the bug ``` >>> model = ASTForAudioClassification.from_pretrained("MIT/ast-finetuned-speech-commands-v2") >>> model.config.id2label {0: 'backward', 1: 'follow', 2: 'five', 3: 'bed', 4: 'zero', 5: 'on', 6: 'learn', 7: 'two', 8: 'house', 9: 'tree', 10: 'dog', 11: 'stop', 12: 'seven', 13: 'eight', 14: 'down', 15: 'six', 16: 'forward', 17: 'cat', 18: 'right', 19: 'visual', 20: 'four', 21: 'wow', 22: 'no', 23: 'nine', 24: 'off', 25: 'three', 26: 'left', 27: 'marvin', 28: 'yes', 29: 'up', 30: 'sheila', 31: 'happy', 32: 'bird', 33: 'go', 34: 'one'} >>> dataset = load_dataset("speech_commands", "v0.02", split="test") >>> torch.unique(torch.Tensor(dataset['label'])) tensor([ 0., 1., 2., 3., 4., 5., 6., 7., 8., 9., 10., 11., 12., 13., 14., 15., 16., 17., 18., 19., 20., 21., 22., 23., 24., 25., 26., 27., 28., 29., 30., 31., 32., 33., 34., 35.]) ``` If you try to explore the [dataset itself](https://huggingface.co/datasets/speech_commands/viewer/v0.02/test), you can see that the id to label does not match what is provided by `model.config.id2label`. ### Expected behavior The labels should match completely and there should be the same number of label classes between the model config and the dataset itself. ### Environment info datasets = 2.14.6, transformers = 4.33.3
https://github.com/huggingface/datasets/issues/6446
closed
[]
2023-11-22T20:46:36Z
2023-11-28T14:46:08Z
3
vymao
pytorch/rl
1,708
[Question] What is ESS in PPO?
Here [ppo.py](https://github.com/pytorch/rl/blob/main/torchrl/objectives/ppo.py#L649) from PPO source code is the definition. <img width="983" alt="Screenshot 2023-11-22 at 1 21 12 AM" src="https://github.com/pytorch/rl/assets/22335780/3ec3663e-7140-4353-a65a-8b13f761fab2"> Does ESS stand for **Effective Sample Size** or something else? What is the purpose logging this info? A reference for 'ESS' would be helpful. Thank you.
https://github.com/pytorch/rl/issues/1708
closed
[]
2023-11-22T06:13:37Z
2023-11-23T03:07:41Z
null
gitfourteen
huggingface/alignment-handbook
45
Reproducing of Lora Model Result on MT-Bench
Recently, I attempted to fit the DPO on my own dataset. Initially, I tried to reproduce the results of your LORA model( 7.43 on MT-Bench). However, I encountered some issues. Despite using all your parameters and data, here are my results on MT-Bench: | Model | MT-Bench | |--------|--------| | Zephyr-SFT-Lora-Own | 6.37 | | Zephyr-DPO-Lora-Own | 6.95 | Then, I downloaded your models from [here](https://huggingface.co/alignment-handbook), and the results were nearly the same as mine. | Model | MT-Bench | |--------|--------| | Zephyr-SFT-Lora| 6.4| | Zephyr-DPO-Lora| 6.93 | DPO does help improve performance on MT-Bench, but I can't achieve a score of **7.43**. Is there any difference between the model described in your paper and the model available on your homepage? Or could it be the difference between the full and LORA? By the way, I truly love the "yaml style" argument parser; it's clear and elegant! @edbeeching @lewtun
https://github.com/huggingface/alignment-handbook/issues/45
open
[]
2023-11-22T03:42:32Z
2023-12-11T17:09:32Z
27
wlhgtc
huggingface/optimum
1,551
Running llama-2-13b resulted in `Killed`
### System Info ```shell This is my run.py code: import torch import transformers import requests print(torch.cuda.is_available()) device = torch.device("cuda" if torch.cuda.is_available() else "cpu") # Load model and adapter weights from local directory model = transformers.AutoModelForCausalLM.from_pretrained("/home/maxloo/src/pastoring/llama/llama-2-13b") model.to(device) adapter = transformers.AutoModelForCausalLM.from_pretrained("/home/maxloo/src/pastoring/adapter", config=transformers.configuration.AdapterConfig.from_json_file("adapter_config.json")) model.load_state_dict(adapter.state_dict()) adapter.load_state_dict(model.state_dict()) # Define prompt prompt = "Hello, I am a chatbot." # Perform inference response = model.generate(prompt, max_length=50) # Print response print(response) This is my adapter_config.json code: { "base_model_name_or_path": "../llama/llama-2-13b/", "bias": "none", "enable_lora": null, "fan_in_fan_out": false, "inference_mode": true, "init_lora_weights": true, "lora_alpha": 16, "lora_dropout": 0.05, "merge_weights": false, "modules_to_save": null, "peft_type": "LORA", "r": 16, "target_modules": [ "q_proj", "k_proj", "v_proj", "o_proj" ], "task_type": "CAUSAL_LM", "task": "question_answering", "domain": "general" } These are my hardware specs: Intel Core i7-13700HX, NVIDIA RTX 4060, 32GB DDR5, 1TB SSD I'm using Windows 11 WSL2 Bash to run this command: python3 run.py I have set my .wslconfig file as follows: [wsl2] memory=24GB processors=24 I expect a chat message to be displayed and a prompt for my chat input, but this is the actual output: Killed How do I resolve this? Should I be testing llama-13b first before llama-2-13b? ``` ### Who can help? @echarlaix, @philschmid ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction (minimal, reproducible, runnable) python3 run.py ![Screenshot 2023-11-21 211036](https://github.com/huggingface/optimum/assets/71763812/fc5b7e1c-1e57-41e5-a986-130681eba41d) ### Expected behavior I expect a chat message to be displayed and a prompt for my chat input, but this is the actual output: Killed How do I resolve this? Should I be testing llama-13b first before llama-2-13b?
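`Killed` on WSL2 usually means the process ran out of system RAM while loading the full-precision weights. A minimal sketch of loading the checkpoint quantized with automatic device placement (requires `bitsandbytes` and `accelerate`; whether a 13B model then fits comfortably on the GPU in this setup is still not guaranteed), reusing the local path from the issue:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "/home/maxloo/src/pastoring/llama/llama-2-13b"  # path from the issue

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    load_in_4bit=True,          # quantize weights with bitsandbytes
    torch_dtype=torch.float16,
    device_map="auto",          # spread layers across GPU/CPU as needed
)

inputs = tokenizer("Hello, I am a chatbot.", return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_length=50)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```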
https://github.com/huggingface/optimum/issues/1551
closed
[ "bug" ]
2023-11-21T13:11:40Z
2024-01-09T15:58:09Z
1
maxloopinmok
huggingface/optimum-quanto
32
Are there some examples showing how to export an ONNX model? torch.onnx.export
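For reference, a generic `torch.onnx.export` call on a plain PyTorch module is sketched below; whether a quanto-quantized model traces cleanly through this path is exactly the open question here, so treat this only as a starting point:

```python
import torch
import torch.nn as nn

class TinyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(16, 4)

    def forward(self, x):
        return self.linear(x)

model = TinyModel().eval()
dummy_input = torch.randn(1, 16)

torch.onnx.export(
    model,
    dummy_input,
    "tiny_model.onnx",
    input_names=["input"],
    output_names=["output"],
    dynamic_axes={"input": {0: "batch"}, "output": {0: "batch"}},
)
```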
https://github.com/huggingface/optimum-quanto/issues/32
closed
[]
2023-11-21T11:33:37Z
2024-03-13T08:15:51Z
null
youkiwang
pytorch/executorch
1,252
What is the codegen really doing in the ExecuTorch flow?
Hi, although I have studied the codegen part of https://pytorch.org/executorch/stable/concepts.html#codegen, I still do not understand it very well. ![Screenshot from 2023-11-21 16-38-38](https://github.com/pytorch/executorch/assets/87454575/669a120d-714a-4861-9b5c-1d822bfd29dd) In the concepts map above, after I export the model.pte binary file, can I directly select the kernel ops to run the model with the ExecuTorch runtime library? There is also another branch from the model.pte file which does codegen to generate the Kernel Registration Library, and I do not understand that part very well. My question is: if I can run the model.pte file with the kernel op runtime library, why is codegen needed again? What is the codegen output in the real flow? Is it C code describing the graph of the model with its ops and weights?
https://github.com/pytorch/executorch/issues/1252
closed
[ "need-user-input", "module: kernels", "triaged" ]
2023-11-21T08:38:57Z
2024-02-14T00:53:21Z
null
kris-himax
huggingface/transformers
27,615
How to get the number of trainable parameters for a hf model
### Feature request ' peft_parameters = LoraConfig( lora_alpha=16, lora_dropout=0.1, r=8, bias="none", task_type="CAUSAL_LM" ) train_params = TrainingArguments( output_dir="./results_modified", num_train_epochs=1, per_device_train_batch_size=4, gradient_accumulation_steps=1, optim="paged_adamw_32bit", save_steps=25, logging_steps=25, learning_rate=2e-4, weight_decay=0.001, fp16=False, bf16=False, max_grad_norm=0.3, max_steps=-1, warmup_ratio=0.03, group_by_length=True, lr_scheduler_type="constant", report_to="tensorboard" ) fine_tuning = SFTTrainer( model=base_model, train_dataset=training_data, peft_config=peft_parameters, dataset_text_field="text", tokenizer=llama_tokenizer, args=train_params ) fine_tuning.train() I am using the above code for model training with Lora. I wonder after applying to Lora. How could I check the number of trainable parameters of the model before and after? ### Motivation Understand the training process well ### Your contribution I'd love to
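A minimal sketch of counting trainable vs. total parameters, reusing `base_model` and `fine_tuning` from the snippet above; if the wrapped model is a PEFT model you can also call its `print_trainable_parameters()` method:

```python
def count_parameters(model):
    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    total = sum(p.numel() for p in model.parameters())
    print(f"trainable: {trainable:,} / total: {total:,} ({100 * trainable / total:.4f}%)")

count_parameters(base_model)         # before the LoRA adapters are attached
count_parameters(fine_tuning.model)  # after SFTTrainer applied the peft_config
```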
https://github.com/huggingface/transformers/issues/27615
closed
[]
2023-11-21T00:37:01Z
2023-11-21T19:28:32Z
null
mathmax12
huggingface/chat-ui
571
trying to replicate the api search with the local search option
When I try searching for information on the site (huggingface.co/chat) it works fine and gives correct information, but when doing the same thing using the same model I get hallucinations.. Ive tried all sorts of temperature settings and models. This is the result locally: ![image](https://github.com/huggingface/chat-ui/assets/20077386/cee5a762-3004-4953-9a9b-c6dc2291c569) This is with the site: ![image](https://github.com/huggingface/chat-ui/assets/20077386/0f1001bf-6c16-4dc0-84b5-b668d135c1d6) The sources look the smae on both but the actual response is always not even real information.. This is my current config: MONGODB_URL=mongodb://localhost:27017 PUBLIC_APP_NAME=PrivateGPT MODELS=`[ { "name": "text-generation-webui", "id": "text-generation-webui", "parameters": { "temperature": 0.1, "top_p": 0.95, "repetition_penalty": 1.2, "top_k": 12, "truncate": 1000, "max_new_tokens": 1024, "stop": [] }, "endpoints": [{ "type" : "openai", "baseURL": "http://127.0.0.1:5000/v1/" }] } ]` TypeError [ERR_INVALID_STATE]: Invalid state: Controller is already closed at new NodeError (node:internal/errors:405:5) at ReadableStreamDefaultController.enqueue (node:internal/webstreams/readablestream:1040:13) at update (C:/ChatUI/src/routes/conversation/[id]/+server.ts:155:20) at Object.start (C:/ChatUI/src/routes/conversation/[id]/+server.ts:189:15) at process.processTicksAndRejections (node:internal/process/task_queues:95:5) { code: 'ERR_INVALID_STATE' } TypeError [ERR_INVALID_STATE]: Invalid state: Controller is already closed at new NodeError (node:internal/errors:405:5) at ReadableStreamDefaultController.enqueue (node:internal/webstreams/readablestream:1040:13) at update (C:/ChatUI/src/routes/conversation/[id]/+server.ts:155:20) at Object.start (C:/ChatUI/src/routes/conversation/[id]/+server.ts:189:15) at process.processTicksAndRejections (node:internal/process/task_queues:95:5) {
https://github.com/huggingface/chat-ui/issues/571
closed
[ "support" ]
2023-11-20T20:57:23Z
2023-12-05T15:19:49Z
29
iChristGit
huggingface/trl
1,014
How to avoid training randomness?
I’m using the `trl.SFTTrainer` to fine-tune Vicuna, and I’m using the same data and parameters for fine-tuning. However, I’ve noticed that even after setting: ``` def set_seed(seed=42): # set seed for all possible avenues of stochasticity numpy.random.seed(seed=seed) random.seed(seed) torch.manual_seed(seed) torch.cuda.manual_seed(seed) torch.cuda.manual_seed_all(seed) torch.backends.cudnn.benchmark = False torch.backends.cudnn.deterministic = True training_args = TrainingArguments( report_to="none", output_dir=str(ckpt_path), do_eval=False, save_strategy="epoch", evaluation_strategy="no", num_train_epochs=training_epochs, seed=42, ) ``` the fine-tuned checkpoint’s evaluation remains unstable. Every time I fine-tune with the same dataset, I get significantly different results. How can I ensure the stability of my fine-tuning? I also tried this: https://discuss.huggingface.co/t/fixing-the-random-seed-in-the-trainer-does-not-produce-the-same-results-across-runs/3442 But I was wrong even with this codes: ``` def model_init(): return AutoModelForCausalLM.from_pretrained( "/data/ckpts/huggingface/models/models--lmsys--vicuna-7b-v1.5/snapshots/de56c35b1763eaae20f4d60efd64af0a9091ebe5", device_map="auto", torch_dtype=torch.bfloat16, use_flash_attention_2=True, ) training_args = TrainingArguments( report_to="none", output_dir=str(ckpt_path), do_eval=False, save_strategy="epoch", evaluation_strategy="no", num_train_epochs=training_epochs, seed=42, ) trainer = SFTTrainer( model_init=model_init, args=training_args, train_dataset=mapped_dataset, dataset_text_field="text", data_collator=data_collator, max_seq_length=1500, ) ``` This would end in errors. ``` Traceback (most recent call last): File "/home/cyzhao/miniconda3/envs/prompt/lib/python3.11/site-packages/huggingface_hub/utils/_errors.py", line 261, in hf_raise_for_status response.raise_for_status() File "/home/cyzhao/miniconda3/envs/prompt/lib/python3.11/site-packages/requests/models.py", line 1021, in raise_for_status raise HTTPError(http_error_msg, response=self) requests.exceptions.HTTPError: 401 Client Error: Unauthorized for url: https://huggingface.co/None/resolve/main/config.json The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/home/cyzhao/miniconda3/envs/prompt/lib/python3.11/site-packages/transformers/utils/hub.py", line 429, in cached_file resolved_file = hf_hub_download( ^^^^^^^^^^^^^^^^ File "/home/cyzhao/miniconda3/envs/prompt/lib/python3.11/site-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn return fn(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^ File "/home/cyzhao/miniconda3/envs/prompt/lib/python3.11/site-packages/huggingface_hub/file_download.py", line 1346, in hf_hub_download raise head_call_error File "/home/cyzhao/miniconda3/envs/prompt/lib/python3.11/site-packages/huggingface_hub/file_download.py", line 1232, in hf_hub_download metadata = get_hf_file_metadata( ^^^^^^^^^^^^^^^^^^^^^ File "/home/cyzhao/miniconda3/envs/prompt/lib/python3.11/site-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn return fn(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^ File "/home/cyzhao/miniconda3/envs/prompt/lib/python3.11/site-packages/huggingface_hub/file_download.py", line 1608, in get_hf_file_metadata hf_raise_for_status(r) File "/home/cyzhao/miniconda3/envs/prompt/lib/python3.11/site-packages/huggingface_hub/utils/_errors.py", line 293, in hf_raise_for_status raise RepositoryNotFoundError(message, response) from e 
huggingface_hub.utils._errors.RepositoryNotFoundError: 401 Client Error. (Request ID: Root=1-655b8b21-096243713e568c65194e1a69;8e4415fe-8069-43e1-8412-fdd028a8ebcd) Repository Not Found for url: https://huggingface.co/None/resolve/main/config.json. Please make sure you specified the correct `repo_id` and `repo_type`. If you are trying to access a private or gated repo, make sure you are authenticated. Invalid username or password. The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/home/cyzhao/main/test_scripts/main.py", line 402, in <module> finetune_vicuna( File "/home/cyzhao/main/test_scripts/main.py", line 207, in finetune_vicuna trainer = SFTTrainer( ^^^^^^^^^^^ File "/home/cyzhao/miniconda3/envs/prompt/lib/python3.11/site-packages/trl/trainer/sft_trainer.py", line 162, in __init__ model = AutoModelForCausalLM.from_pretrained(model) ^^^^^^^
https://github.com/huggingface/trl/issues/1014
closed
[]
2023-11-20T16:47:28Z
2024-01-03T15:05:11Z
null
zhaochenyang20
huggingface/candle
1,349
How to pass bounding box instead of points in the segment-anything example?
Is it possible to pass a bounding box instead of points when using the segment-anything model? Is this just 4 points?
https://github.com/huggingface/candle/issues/1349
open
[]
2023-11-20T15:44:22Z
2023-11-20T15:44:22Z
null
svelterust
huggingface/alignment-handbook
43
Did you use RMSprop or AdamW as the optimizer?
Hi to whoever is reading this 🤗 ## Question After reading the Zephyr pre-printed paper https://arxiv.org/pdf/2310.16944.pdf and going through the configuration files here, I saw that there was a mismatch between the optimizer used in https://github.com/huggingface/alignment-handbook/blob/main/recipes/zephyr-7b-beta/dpo/config_full.yaml, and the one reported in the paper, AdamW. So the question is, did you use RMSprop to run the full DPO fine-tuning or AdamW with no weight decay as stated in the paper? Thanks in advance!
https://github.com/huggingface/alignment-handbook/issues/43
closed
[]
2023-11-20T15:23:03Z
2024-03-07T06:55:07Z
3
alvarobartt
huggingface/sentence-transformers
2,359
How to evaluate the results on a dataset that does not have any labels
Hi, I was trying to look at the different evaluation metrics that are provided to SentenceTransformers. I have a column of text in my dataset that I compare against a query and get the top k similarity using cosine similarity. I do not know if there is any method to evaluate the result. Should I consider the cosine similarity score as my evaluation metric as well? By evaluation, I mean, how can I show that the result I got is good? Is reasonable? from sentence_transformers import SentenceTransformer, util import pandas as pd # Load a pre-trained model model = SentenceTransformer('msmarco-distilbert-cos-v5') # Example query query = "Semantic search example query" # Example corpus corpus = ["Example sentence 1", "Example sentence 2", "Example sentence 3", ...] # Add more sentences to your corpus # Encode the query and corpus into embeddings query_embedding = model.encode(query, convert_to_tensor=True) corpus_embeddings = model.encode(corpus, convert_to_tensor=True) # Compute cosine similarities cosine_similarities = util.pytorch_cos_sim(query_embedding, corpus_embeddings)[0] # Get indices of the 3 nearest neighbors indices_nearest_neighbors = pd.Series(cosine_similarities).nlargest(3).index # Retrieve the 3 nearest neighbors nearest_neighbors = [corpus[i] for i in indices_nearest_neighbors] # Print the results print(f"Query: {query}") print("3 Nearest Neighbors:") for neighbor in nearest_neighbors: print("-", neighbor)
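Cosine similarity only ranks candidates; to argue the results are "good" you generally need at least a small set of human-judged (query, relevant document) pairs. A minimal sketch of computing recall@k against such a hand-labelled spot-check set — the `relevant` dict here is an assumption you would build yourself:

```python
import torch
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("msmarco-distilbert-cos-v5")

corpus = ["Example sentence 1", "Example sentence 2", "Example sentence 3"]
queries = ["Semantic search example query"]
# Hand-labelled relevance judgements: query index -> set of relevant corpus indices.
relevant = {0: {1}}  # assumption: you judge a handful of queries yourself

corpus_emb = model.encode(corpus, convert_to_tensor=True)
k = 2
hits = 0
for qi, query in enumerate(queries):
    query_emb = model.encode(query, convert_to_tensor=True)
    scores = util.cos_sim(query_emb, corpus_emb)[0]
    top_k = set(torch.topk(scores, k=k).indices.tolist())
    hits += bool(top_k & relevant[qi])

print(f"recall@{k}: {hits / len(queries):.2f}")
```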
https://github.com/huggingface/sentence-transformers/issues/2359
open
[]
2023-11-20T14:52:21Z
2023-11-20T14:52:21Z
null
Yarmohamadshr
huggingface/alignment-handbook
42
How to QLoRA training with ZeRO-3 on two or more GPUs?
I added a 4-bit load flag to the command for LoRA training with ZeRO-3 on two or more GPUs, to achieve a mix of QLoRA and ZeRO-3. But the program encountered the following error: RuntimeError: expected there to be only one unique element in <generator object Init._convert_to_deepspeed_param.<locals>.all_gather_coalesced.<locals>.<genexpr> at 0x7f2ec8daf900> The command is: ACCELERATE_LOG_LEVEL=info accelerate launch --config_file recipes/accelerate_configs/deepspeed_zero3.yaml --num_processes=2 scripts/run_sft.py recipes/zephyr-7b-beta/sft/config_lora.yaml --load_in_4bit=true
https://github.com/huggingface/alignment-handbook/issues/42
open
[]
2023-11-20T14:13:36Z
2024-05-17T00:27:27Z
null
Di-Zayn
huggingface/transformers
27,600
How to get input sentence embedding from Llama or Llama2?
I'm trying to get the embedding of the sentence I input. I checked some common practices for doing it, but I'm not sure I'm doing it right. Who might be able to help? @gante Thanks for any help. My code is below: ``` model = LlamaForCausalLM.from_pretrained( args.pretrained_name_or_path, torch_dtype=torch.float16, device_map=device, ) tokenizer = LlamaTokenizer.from_pretrained(args.pretrained_name_or_path, fast_tokenizer=True) model.to(device) model.eval() tokenizer.pad_token_id = 0 tokenizer.padding_side = "left" for i in range(0, len(sentences), batch_size): batch_sentences = sentences[i: i+batch_size] inputs = tokenizer(batch_sentences, padding=True, truncation=False, return_tensors='pt') inputs = inputs.to(device) with torch.no_grad(): outputs = model(**inputs, output_hidden_states=True) hidden_states = outputs.hidden_states[-1] sentence_embeddings = hidden_states[:, -1, :] # # here is using the **last token's** last layer hidden states as sentence embeddings, # or sentence_embeddings = outputs.hidden_states[-1].mean(dim=1) # here use average sentence embedding. # and I'm not sure which one is better. embeddings.append(sentence_embeddings.cpu()) embeddings = torch.cat(embeddings, dim=0) ```
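For comparison, a minimal sketch of attention-mask-aware mean pooling over the last hidden state (reusing `model` and `inputs` from the snippet above); whether this or the last-token embedding works better is an empirical question for your downstream task:

```python
import torch

def mean_pool(last_hidden_state, attention_mask):
    # Zero out padding positions, then average over the real tokens only.
    mask = attention_mask.unsqueeze(-1).to(last_hidden_state.dtype)
    summed = (last_hidden_state * mask).sum(dim=1)
    counts = mask.sum(dim=1).clamp(min=1e-9)
    return summed / counts

with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True)
sentence_embeddings = mean_pool(outputs.hidden_states[-1], inputs["attention_mask"])
```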
https://github.com/huggingface/transformers/issues/27600
closed
[]
2023-11-20T13:18:08Z
2023-11-22T14:32:26Z
null
waterluck
pytorch/serve
2,801
When is initialize method called?
### 📚 The doc issue I've created a custom handler with the following initialize method ```python class CustomHandler(VisionHandler): def initialize(self, context): print("Got here 000!") time.sleep(20) print("Got here 111!") super(VisionHandler, self).__init__() ``` I spin up the server using a single runner by running `torchserve --start --ncs --ts-config model-store/config.properties`, where config.properties looks like: ```python inference_address=http://127.0.0.1:8080 management_address=http://127.0.0.1:8081 metrics_address=http://127.0.0.1:8082 model_store=/home/inaki/code/animal_classifier/model-store load_models=animal.mar min_workers=1 max_workers=1 default_workers_per_model=1 model_snapshot={"name":"startup.cfg", "modelCount":1, "models":{"animal":{"1.0":{"defaultVersion":true, "marName":"animal.mar", "minWorkers":1, "maxWorkers":1, "batchSize":2, "maxBatchDelay":2000, "responseTimeout":30000}}}} ``` I notice the "Got here" logs don't show up during the initial phase, where I assumed the model was loaded. Instead, they show up when I submit the first request to the server (`curl -X POST http://localhost:8080/predictions/animal -T ./data/cats_and_dogs/frames/2.png`), but not for subsequent requests. And there's no sleep time in between the two prints. My assumption is that printing the logs is somehow cached? I'd like to know if there's a diagram to better understand the flow. I noticed too that in the model_service_worker, there seem to be two routes for handling incoming requests based on this [branching](https://github.com/pytorch/serve/blob/aa96cf60c044087e75a1472f3bd090422d4d349c/ts/model_service_worker.py#L180-L195). Can somebody explain what is the distinction between cmd == b"I" and cmd == b"L"? ### Suggest a potential alternative/fix Including a diagram/explanation with the spin-up flow in the documentation
https://github.com/pytorch/serve/issues/2801
closed
[]
2023-11-20T12:03:07Z
2023-11-23T20:47:00Z
4
InakiRaba91
pytorch/serve
2,800
When is initialize method called?
### 📚 The doc issue I've created a custom handler with the following initialize method ```python class CustomHandler(VisionHandler): def initialize(self, context): print("Got here 000!") time.sleep(20) print("Got here 111!") super(VisionHandler, self).__init__() ``` I spin up the server using a single runner by running `torchserve --start --ncs --ts-config model-store/config.properties`, where config.properties looks like: ```python inference_address=http://127.0.0.1:8080 management_address=http://127.0.0.1:8081 metrics_address=http://127.0.0.1:8082 model_store=/home/inaki/code/animal_classifier/model-store load_models=animal.mar min_workers=1 max_workers=1 default_workers_per_model=1 model_snapshot={"name":"startup.cfg", "modelCount":1, "models":{"animal":{"1.0":{"defaultVersion":true, "marName":"animal.mar", "minWorkers":1, "maxWorkers":1, "batchSize":2, "maxBatchDelay":2000, "responseTimeout":30000}}}} ``` I notice the "Got here" logs don't show up during the initial phase, where I assumed the model was loaded. Instead, they show up when I submit the first request to the server (`curl -X POST http://localhost:8080/predictions/animal -T ./data/cats_and_dogs/frames/2.png`), but not for subsequent requests. And there's no sleep time in between the two prints. My assumption is that printing the logs is somehow cached? I'd like to know if there's a diagram to better understand the flow. I noticed too that in the model_service_worker, there seem to be two routes for handling incoming requests based on this [branching](https://github.com/pytorch/serve/blob/aa96cf60c044087e75a1472f3bd090422d4d349c/ts/model_service_worker.py#L180-L195). Can somebody explain what is the distinction between cmd == b"I" and cmd == b"L"? ### Suggest a potential alternative/fix Including a diagram/explanation with the spin-up flow in the documentation
https://github.com/pytorch/serve/issues/2800
closed
[]
2023-11-20T11:46:16Z
2023-11-20T12:02:53Z
0
irabanillo91
pytorch/executorch
1,239
How to access the result tensor after inference
Hi, I am implementing executorch by following step. 1. Exporting resnet18 including softmax layer. 2. Implementing executor_runner.cpp to access to result of tensor after inference. I expected that I could get each classes' result like [0,0,0,0.1,0.9] after inference(including softmax). But when I try to access to each result and stdout, the outputs were like following: ``` OutputTensor 0 1: <Unknown EValue tag 1915577445> OutputTensor 0 2: <Unknown EValue tag -284942848> . . OutputTensor 0 14: None ``` I expected that I could get each classes' probability like following. ``` OutputTensor 0 1: 0 OutputTensor 0 2: 0.5 . . OutputTensor 0 14: 0.25 OutputTensor 0 15: 0.25 ``` ### model export file `export-model-resnet18.py` ```py import torch import torchvision.models as models import torch.nn.functional as F from torchvision.models.resnet import ResNet18_Weights from torch._export import capture_pre_autograd_graph from torch.export import export, ExportedProgram import executorch.exir as exir # ========== resnet18 + softmax layer ============ resnet18 = models.resnet18(weights=ResNet18_Weights.DEFAULT).eval() resnet18.fc = torch.nn.Sequential( resnet18.fc, torch.nn.Softmax(dim=1) ) example_args = (torch.randn(1, 3, 224, 224), ) # ==================================== ## export to exir pre_autograd_aten_dialect = capture_pre_autograd_graph(resnet18, example_args) ## export to aten dialect aten_dialect: ExportedProgram = export(pre_autograd_aten_dialect, example_args) ## export to edge edge_program: exir.EdgeProgramManager = exir.to_edge(aten_dialect) ## export to executorch from executorch.exir import ExecutorchBackendConfig, ExecutorchProgramManager executorch_program: exir.ExecutorchProgramManager = edge_program.to_executorch( ExecutorchBackendConfig( passes=[], # User-defined passes ) ) ## save pte model print("save pte file") with open("exported-resnet18.pte", "wb") as file: file.write(executorch_program.buffer) ``` ### executor_runner.cpp ```cpp for (int i = 0; i < outputs.size(); ++i) { std::cout << "Output " << i << ": " << outputs[i] << std::endl; printTypeName<decltype(outputs[i])>(); for (int j = 0; j < 1001; ++j) { // address //std::cout << "OutputTensor 0 " << j << ": " << &outputs[0,j] << std::endl; // value std::cout << "OutputTensor 0 " << j << ": " << outputs[j] << std::endl; } ``` ### output while running executor_runner.cpp ```sh (executorch) root@c2aef39cb16e:~/test/executorch# ./cmake-out/executor_runner --model_path ./exported-resnet18.pte --img_path test.jpg Number of arguments: 5 Argument 0: ./cmake-out/executor_runner Argument 1: --model_path Argument 2: ./exported-resnet18.pte Argument 3: --img_path Argument 4: test.jpg I 00:00:00.356738 executorch:executor_runner.cpp:139] Model file ./exported-resnet18.pte is loaded. I 00:00:00.356793 executorch:executor_runner.cpp:148] Using method forward I 00:00:00.356799 executorch:executor_runner.cpp:196] Setting up planned buffer 0, size 64348896. I 00:00:00.406562 executorch:executor_runner.cpp:219] Method loaded. I 00:00:00.406674 executorch:executor_runner.cpp:225] Inputs prepared. I 00:02:09.169815 executorch:executor_runner.cpp:234] Model executed successfully. 
I 00:02:09.169871 executorch:executor_runner.cpp:238] 1 outputs: OutputTensor 0 0: tensor(sizes=[1, 1000], [ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., ..., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., ]) OutputTensor 0 1: <Unknown EValue tag 1915577445> OutputTensor 0 2: <Unknown EValue tag -284942848> OutputTensor 0 3: <Unknown EValue tag 64348896> OutputTensor 0 4: <Unknown EValue tag -284943136> OutputTensor 0 5: <Unknown EValue tag -284942944> OutputTensor 0 6: <Unknown EValue tag 939732227> OutputTensor 0 7: <Unknown EValue tag -117183485> OutputTensor 0 8: <Unknown EValue tag -754662396> OutputTensor 0 9: <Unknown EValue tag -989481723> OutputTensor 0 10: <Unknown EValue tag -788086778> OutputTensor 0 11: <Unknown EValue tag 176> OutputTensor 0 12: <Unknown EValue tag -287441008> OutputTensor 0 1
https://github.com/pytorch/executorch/issues/1239
closed
[ "need-user-input" ]
2023-11-20T09:03:14Z
2023-11-22T19:24:01Z
null
EarthMu
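A hedged reading of the ExecuTorch issue above: the runner reports `1 outputs:`, so `outputs` holds a single EValue whose tensor has shape [1, 1000]; looping `outputs[j]` for j up to 1000 reads past the end of the output list, which is where the `<Unknown EValue tag …>` values come from. On the C++ side the per-class probabilities would likely be read via something like `outputs[0].toTensor().const_data_ptr<float>()` over `numel()` elements — method names to be verified against the ExecuTorch headers. The sketch below stays in plain PyTorch and only confirms that the wrapped model produces a single [1, 1000] probability tensor, so the values the user wants live inside `outputs[0]`.

```python
# Hedged sketch (no ExecuTorch runtime): the wrapped resnet18 has one output tensor of
# shape [1, 1000] whose entries are the per-class probabilities after the softmax layer.
import torch
import torchvision.models as models
from torchvision.models.resnet import ResNet18_Weights

model = models.resnet18(weights=ResNet18_Weights.DEFAULT).eval()
model.fc = torch.nn.Sequential(model.fc, torch.nn.Softmax(dim=1))

with torch.no_grad():
    probs = model(torch.randn(1, 3, 224, 224))

print(probs.shape)         # torch.Size([1, 1000]) -> a single output tensor
print(probs.sum().item())  # ~1.0, because of the softmax layer
print(probs[0, :5])        # first five class probabilities
```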
huggingface/transformers
27,592
How to always use the initial prompt in Whisper?
I checked this PR (#22496) but still can't figure out how to always use the initial prompt. Is it possible to provide a usage example?
https://github.com/huggingface/transformers/issues/27592
closed
[]
2023-11-19T18:35:23Z
2023-11-20T08:29:41Z
null
GanymedeNil
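A hedged sketch for the Whisper question above, based on the `prompt_ids` API added in the referenced PR. Whether the prompt is applied to every chunk of a long-form transcription depends on the installed transformers version (newer releases expose a `prompt_condition_type` option for that), so treat the exact keyword arguments as assumptions to verify; the prompt text and the silent dummy audio are purely illustrative.

```python
# Hedged sketch: pass an initial prompt to Whisper generation via prompt_ids.
import torch
from transformers import WhisperForConditionalGeneration, WhisperProcessor

processor = WhisperProcessor.from_pretrained("openai/whisper-tiny")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny")

# 30 s of silence stands in for real audio here.
inputs = processor(torch.zeros(16000 * 30).numpy(), sampling_rate=16000, return_tensors="pt")

prompt_ids = processor.get_prompt_ids("Glossary: PEFT, LoRA, Zephyr", return_tensors="pt")
generated = model.generate(inputs.input_features, prompt_ids=prompt_ids)
print(processor.batch_decode(generated, skip_special_tokens=True))
```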
huggingface/pytorch-image-models
2,038
how to run the efficientmit.py
**Is your feature request related to a problem? Please describe.** A clear and concise description of what the problem is. **Describe the solution you'd like** A clear and concise description of what you want to happen. **Describe alternatives you've considered** A clear and concise description of any alternative solutions or features you've considered. **Additional context** Add any other context or screenshots about the feature request here.
https://github.com/huggingface/pytorch-image-models/issues/2038
closed
[ "enhancement" ]
2023-11-19T02:50:59Z
2023-11-19T17:16:48Z
null
1377534928
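A hedged sketch for the timm question above: the file referenced in the title appears to belong to the EfficientViT family, and timm models are normally run through `create_model` rather than by executing the model file directly. The concrete model name below is an assumption — pick one from the `list_models` output of your installed timm version.

```python
# Hedged sketch: discover the EfficientViT variants in the installed timm and run one
# on a dummy input. "efficientvit_b0" is an assumed name; use list_models() to confirm.
import timm
import torch

print(timm.list_models("efficientvit*"))  # exact names depend on the timm version

model = timm.create_model("efficientvit_b0", pretrained=False, num_classes=1000)
model.eval()

with torch.no_grad():
    logits = model(torch.randn(1, 3, 224, 224))
print(logits.shape)  # expected: torch.Size([1, 1000])
```

Setting `pretrained=True` would additionally download the released weights, assuming they are published for the chosen variant.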
huggingface/chat-ui
566
Is Chat-UI going to support the new Assistants API?
The Assistants API stores the threads, and there's also multi-modal support
https://github.com/huggingface/chat-ui/issues/566
open
[ "enhancement", "models" ]
2023-11-19T02:06:44Z
2023-11-20T08:42:49Z
1
wayliums
huggingface/alignment-handbook
40
How do I get the training scripts to utilize all my GPUs?
Hello there, I'm running this script: ``` ACCELERATE_LOG_LEVEL=info accelerate launch --config_file recipes/accelerate_configs/multi_gpu.yaml --num_processes=1 scripts/run_sft.py recipes/zephyr-7b-beta/sft/config_lora.yaml ``` ...but on my machine with two 3090s, only GPU 0 is being utilized. What do I need to change to utilize both of my 3090s for the training run? Thanks
https://github.com/huggingface/alignment-handbook/issues/40
closed
[]
2023-11-19T00:11:24Z
2023-11-19T01:20:21Z
null
ohmeow
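A hedged note on the issue above: the quoted command passes `--num_processes=1`, which launches a single process and therefore uses a single GPU; with Accelerate's data parallelism the usual setup is one process per GPU, so passing `--num_processes=2` on the command line (or setting `num_processes: 2` in multi_gpu.yaml) is the likely fix. The snippet below is a quick sanity check — save it as, say, `check_gpus.py` (an illustrative filename) and run it with `accelerate launch --num_processes=2 check_gpus.py` to confirm that a process lands on each GPU.

```python
# Hedged sketch: verify that both GPUs are visible and that each Accelerate process
# is placed on its own device when launched with --num_processes=2.
import torch
from accelerate import Accelerator

print("visible GPUs:", torch.cuda.device_count())  # expect 2 for 2x 3090

accelerator = Accelerator()
print(f"process {accelerator.process_index}/{accelerator.num_processes} "
      f"on device {accelerator.device}")
```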
huggingface/transformers.js
401
[Question | Bug] What am I doing wrong while using the `question-answering` model?
## The Problem I'm trying to use `question-answering` model to answer simple questions in a given context. But I always get a TypeError about floats. I guess that's an internal issue, because at top level of code I am not using floating point numbers. But maybe I am doing something wrong. By the way, I'm using TypeScript and I was following the [docs for this model](https://huggingface.co/docs/transformers.js/api/pipelines#module_pipelines.QuestionAnsweringPipeline). ## Code ```ts /** THIS CODE IS WRAPPED BY AN ASYNC FUNCTION */ const { pipeline } = await import("@xenova/transformers"); const answerer = await pipeline( "question-answering", "Xenova/distilbert-base-uncased-distilled-squad" ); const results = await answerer( "Who is Dominic Toretto?", "Dominic Toretto is part of the family." ); ``` ## Error TypeError: A float32 tensor's data must be type of function Float32Array() ![image](https://github.com/xenova/transformers.js/assets/53703706/a248457f-e47a-4f42-8604-622bf8fe49ed) ![image](https://github.com/xenova/transformers.js/assets/53703706/a9f50b80-c9d6-4a83-aea7-908afd684759)
https://github.com/huggingface/transformers.js/issues/401
closed
[ "question" ]
2023-11-18T12:58:50Z
2023-11-19T12:44:00Z
null
AyresMonteiro
huggingface/transformers.js
399
[Question] Is it possible to encode and decode with `AutoTokenizer.from_pretrained` and keep spaces?
I'm trying to build a pure JS online tokenizer, visually similar to https://github.com/1rgs/tokenwiz (but without the Python backend) I'm doing something like: ```js const model = await AutoTokenizer.from_pretrained('mistralai/Mistral-7B-v0.1') const textInput = `[INST] <<SYS>> You are a friendly Llama. <</SYS>> Do you spit at people? [/INST]` const tokens = model.encode(textInput) const tokenizedText = model.batch_decode( tokens.map((token) => [token]), { clean_up_tokenization_spaces: false } ) console.log(tokenizedText) ``` And get: ```js 0: "<s>" 1: "[" 2: "INST" 3: "]" 4: "<<" 5: "SYS" 6: ">>" 7: "\n" 8: "You" 9: "are" 10: "a" 11: "friendly" 12: "L" 13: "l" 14: "ama" 15: "." 16: "\n" 17: "<" 18: "</" 19: "SYS" 20: ">>" 21: "\n" 22: "\n" 23: "Do" 24: "you" 25: "sp" 26: "it" 27: "at" 28: "people" 29: "?" 30: "[" 31: "/" 32: "INST" 33: "]" ``` So while newlines are there, all the spaces are gone. Is there any way to get the original text back but with token boundaries for visualisation?
https://github.com/huggingface/transformers.js/issues/399
closed
[ "question" ]
2023-11-17T18:46:05Z
2023-11-17T20:18:02Z
null
daaain
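For comparison with the transformers.js question above, the Python tokenizer keeps the SentencePiece "▁" space markers when ids are mapped back to token strings (rather than decoded), which is usually what a token-boundary visualiser needs. Whether transformers.js exposes an equivalent of `convert_ids_to_tokens` should be checked against its documentation; the sketch below only shows the Python-side approach, and the example token split in the comment is indicative.

```python
# Hedged sketch (Python side): id-to-token mapping preserves the "▁" markers that
# encode leading spaces, so the original spacing can be reconstructed for display.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
ids = tok.encode("[INST] Do you spit at people? [/INST]")
tokens = tok.convert_ids_to_tokens(ids)
print(tokens)  # e.g. ['<s>', '▁[', 'INST', ']', '▁Do', '▁you', ...] — '▁' marks a leading space
```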
huggingface/alignment-handbook
39
Why is zephyr-7b-dpo-lora fine-tuned from mistralai/Mistral-7B-v0.1 instead of the zephyr-7b-sft model?
There is a misalignment between zephyr-7b-dpo-lora and zephyr-7b-dpo-full. The former is fine-tuned from mistralai/Mistral-7B-v0.1. The latter is fine-tuned from zephyr-7b-sft-full. I wonder what causes this misalignment? Also, have you benchmarked the performance improvement of the LoRA fine-tuning script? In my experiments, LoRA fine-tuning does not seem to provide any performance improvement over the base model on MT-Bench. I think maybe some parameters are incorrect.
https://github.com/huggingface/alignment-handbook/issues/39
open
[]
2023-11-17T18:11:59Z
2024-03-21T19:18:08Z
2
ChenDRAG
huggingface/optimum
1,545
Add support for exporting facebook encodec models to ONNX
### Feature request When I try to use optimum-cli to export the facebook/encodec_32khz model I get this error: ``` % optimum-cli export onnx --model facebook/encodec_32khz encodec.onnx Framework not specified. Using pt to export to ONNX. /Users/micchig/micromamba/envs/music-representation/lib/python3.11/site-packages/torch/nn/utils/weight_norm.py:30: UserWarning: torch.nn.utils.weight_norm is deprecated in favor of torch.nn.utils.parametrizations.weight_norm. warnings.warn("torch.nn.utils.weight_norm is deprecated in favor of torch.nn.utils.parametrizations.weight_norm.") Traceback (most recent call last): File "/Users/micchig/micromamba/envs/music-representation/bin/optimum-cli", line 10, in <module> sys.exit(main()) ^^^^^^ File "/Users/micchig/micromamba/envs/music-representation/lib/python3.11/site-packages/optimum/commands/optimum_cli.py", line 163, in main service.run() File "/Users/micchig/micromamba/envs/music-representation/lib/python3.11/site-packages/optimum/commands/export/onnx.py", line 246, in run main_export( File "/Users/micchig/micromamba/envs/music-representation/lib/python3.11/site-packages/optimum/exporters/onnx/__main__.py", line 408, in main_export raise ValueError( ValueError: Trying to export a encodec model, that is a custom or unsupported architecture for the task feature-extraction, but no custom onnx configuration was passed as `custom_onnx_configs`. Please refer to https://huggingface.co/docs/optimum/main/en/exporters/onnx/usage_guides/export_a_model#custom-export-of-transformers-models for an example on how to export custom models. Please open an issue at https://github.com/huggingface/optimum/issues if you would like the model type encodec to be supported natively in the ONNX export. ``` I am following the advice in the message and opening an issue here. :) ### Motivation I want to use the encodec model for inference and I'd much rather use ONNX than importing the pretrained model from transformers every time and run it in pytorch as ONNX is much faster. ### Your contribution I'm afraid I can't contribute to this personally
https://github.com/huggingface/optimum/issues/1545
open
[ "feature-request", "onnx" ]
2023-11-17T11:16:01Z
2025-12-12T06:23:33Z
6
giamic
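A hedged, stopgap sketch for the EnCodec export request above: it does not add native Optimum support (the error message's `custom_onnx_configs` path would be the proper route), but the encoder can sometimes be exported with plain `torch.onnx.export` by wrapping `EncodecModel.encode`. The fixed mono input length, the opset, and whether the graph traces cleanly on a given transformers/torch version are all assumptions; treat this purely as a starting point.

```python
# Hedged sketch: export just the EnCodec encoder by wrapping model.encode() so it
# returns a single tensor of audio codes. Dynamic axes and streaming are not handled.
import torch
from transformers import EncodecModel


class EncodecEncoder(torch.nn.Module):
    def __init__(self, model):
        super().__init__()
        self.model = model

    def forward(self, input_values):
        # encode() returns an output object; audio_codes holds the quantized codes.
        return self.model.encode(input_values).audio_codes


model = EncodecModel.from_pretrained("facebook/encodec_32khz").eval()
wrapper = EncodecEncoder(model)

dummy = torch.randn(1, 1, 32000)  # (batch, channels, samples) — 1 s at 32 kHz, assumed mono
torch.onnx.export(
    wrapper,
    (dummy,),
    "encodec_32khz_encoder.onnx",
    input_names=["input_values"],
    output_names=["audio_codes"],
    opset_version=17,
)
```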
pytorch/audio
3,704
Random cropping for variable length sequences
### 🚀 The feature I am proposing to add a `torch.nn.Module` transform that automatically crops/pads signals (with different options for padding such as constant/mirroring). I already have the implementation locally, so I would push it myself if that is alright. The interface would look as follows: ```python class RandomCrop(torch.nn.Module): def __init__( self, output_size, # number of samples to be enforced on output signal axis=-1, # axis over which to crop pad="silence", # a string controlling the behavior of padding (constant vs reflection) ) def forward(self, signal): # signal of arbitrary size signal = ... return signal # signal now has a fixed size of `output_size` at `axis` ``` I am looking for feedback to see if this is also needed/desired by others and whether I should open a PR to add it. ### Motivation, pitch This feature is needed for datasets with variable lengths (a common occurrence for audio). By default, this mismatch in lengths now needs to be handled in the collate function of the dataloader. With the proposed transform, the user can add it directly to their transform pipeline and/or make it part of their model if they so wish. Moreover, they could simply utilize it in their `collate_fn` if they want to crop based on the particular batch statistics (e.g. crop/pad to the shortest/longest sample in the batch). ### Alternatives _No response_ ### Additional context A reference implementation and interface can be seen [here](https://github.com/audeering/audtorch/blob/d7144a4b5a6cd7da1c5b570a8e86f047a2170890/audtorch/transforms/transforms.py#L113). As it is implemented with `numpy`, I would update it to `torch`.
https://github.com/pytorch/audio/issues/3704
open
[]
2023-11-17T10:37:24Z
2024-05-23T06:24:00Z
4
ATriantafyllopoulos
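A minimal sketch of the transform proposed in the issue above, assuming constant (zero) padding for the "silence" mode and a uniformly random crop position; the reflection-padding option from the proposal is left out. This is not the author's implementation, just an illustration of the interface.

```python
# Hedged sketch of the proposed RandomCrop: pad with zeros when the signal is too short,
# otherwise take a uniformly random window of output_size along `axis`.
import torch


class RandomCrop(torch.nn.Module):
    def __init__(self, output_size: int, axis: int = -1):
        super().__init__()
        self.output_size = output_size
        self.axis = axis

    def forward(self, signal: torch.Tensor) -> torch.Tensor:
        axis = self.axis % signal.dim()
        length = signal.size(axis)
        if length < self.output_size:
            # F.pad orders pads from the last dimension backwards: (left, right) per dim.
            pad = [0, 0] * signal.dim()
            pad[(signal.dim() - 1 - axis) * 2 + 1] = self.output_size - length
            return torch.nn.functional.pad(signal, pad)
        start = int(torch.randint(0, length - self.output_size + 1, ()))
        return signal.narrow(axis, start, self.output_size)


crop = RandomCrop(16000)
print(crop(torch.randn(2, 12345)).shape)  # torch.Size([2, 16000]) — padded
print(crop(torch.randn(2, 20000)).shape)  # torch.Size([2, 16000]) — cropped
```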
huggingface/peft
1,142
How to do Gradient Checkpointing + LoRA
### System Info <img width="570" alt="image" src="https://github.com/huggingface/peft/assets/18441985/9b3ae040-d78a-477b-a9ec-6ab26b687a68"> ### Who can help? I need help with using LoRA + gradient checkpointing. Using the reentrant option appears to be the solution, but it slows down training a lot, for LLama-7b it's more than 2x the training time of a full fine-tune on the same hardware (A100). <img width="817" alt="image" src="https://github.com/huggingface/peft/assets/18441985/6c58b8b2-eb3c-472a-8643-dcec6193dfe6"> We should be able to just use vanilla gradient checkpoint. ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder - [ ] My own task or dataset (give details below) ### Reproduction ```python import torch from transformers import AutoModelForCausalLM from peft import LoraConfig, get_peft_model # model_id, vocab = 'meta-llama/Llama-2-7b-hf', 32000 model_id, vocab = "stas/tiny-random-llama-2", 3000 seq_len = 1024 bs=8 use_lora=True model_config = dict( pretrained_model_name_or_path=model_id, device_map=0, trust_remote_code=True, low_cpu_mem_usage=True, torch_dtype=torch.bfloat16, use_cache=False, ) model = AutoModelForCausalLM.from_pretrained(**model_config) # Just freeze embeddings for small memory decrease model.model.embed_tokens.weight.requires_grad_(False); if use_lora: lora_config = LoraConfig( r=2, # the rank of the LoRA matrices lora_alpha=16, # the weight lora_dropout=0.1, # dropout to add to the LoRA layers bias="none", # add bias to the nn.Linear layers? task_type="CAUSAL_LM", target_modules=["q_proj", "k_proj","v_proj","o_proj"], # the name of the layers to add LoRA ) model = get_peft_model(model, lora_config) example = {"input_ids": torch.randint(0, vocab, size=(bs,seq_len), device="cuda:0"), "labels":torch.randint(0, vocab, size=(bs,seq_len), device="cuda:0")} import torch, peft, accelerate, transformers for lib in [torch, peft, accelerate, transformers]: print(f"{lib.__name__}: {lib.__version__}") model.train() def call_forward(): with torch.amp.autocast("cuda", dtype=torch.bfloat16): out = model(**example) loss = out.loss return loss %timeit loss=call_forward() loss=call_forward() loss.requires_grad # 5.48 ms ± 31.1 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) # True model.gradient_checkpointing_enable() %timeit loss=call_forward() loss=call_forward() loss.requires_grad # 5.13 ms ± 33.6 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) # False model.gradient_checkpointing_enable(dict(use_reentrant=False)) %timeit loss=call_forward() loss=call_forward() loss.requires_grad # 7.23 ms ± 40.1 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) # True ``` ### Expected behavior Nothing to add here.
https://github.com/huggingface/peft/issues/1142
closed
[]
2023-11-17T09:34:16Z
2025-10-06T10:22:58Z
null
tcapelle
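A hedged sketch related to the PEFT issue above: when the embeddings are frozen and only LoRA weights train, reentrant gradient checkpointing is commonly reported to yield a loss with `requires_grad=False`, and the usual workaround is to call `enable_input_require_grads()` before enabling checkpointing so gradients can flow from the checkpointed inputs into the LoRA layers. Model id and LoRA settings mirror the issue; whether this fully recovers the reentrant speed path is an assumption to verify.

```python
# Hedged sketch: force embedding outputs to require grad, then enable checkpointing.
import torch
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("stas/tiny-random-llama-2", use_cache=False)
model = get_peft_model(
    model,
    LoraConfig(r=2, lora_alpha=16, task_type="CAUSAL_LM",
               target_modules=["q_proj", "k_proj", "v_proj", "o_proj"]),
)

model.enable_input_require_grads()     # make embedding outputs require grad
model.gradient_checkpointing_enable()  # reentrant checkpointing can now backprop into LoRA

model.train()
batch = {"input_ids": torch.randint(0, 3000, (2, 16)),
         "labels": torch.randint(0, 3000, (2, 16))}
loss = model(**batch).loss
print(loss.requires_grad)  # expected: True
```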
pytorch/pytorch
113,933
How to re-use torch.compile results in different python processes?
### 🚀 The feature, motivation and pitch I'm trying to compile my custom vision-transformer-based model. The compiled version is indeed faster than the eager one. However, as scaled_dot_product_attention does not support dynamic shapes, the program compiles the transformer block for every input size. Thus, the test program takes ~15-20 minutes to compile the model and then processes hundreds to thousands of pictures, which is ~10 times slower than eager mode overall. I wonder if there's some API to save the intermediate states, so that when I run the same code again, I can reuse the compilation results in /tmp/torchinductor_$user and skip the boring compilation stage? ### Alternatives _No response_ ### Additional context _No response_ cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @bdhirsh @anijain2305 @chauhang @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @wconstab @aakhundov
https://github.com/pytorch/pytorch/issues/113933
closed
[ "high priority", "feature", "triaged", "months", "oncall: pt2", "module: dynamic shapes", "module: dynamo" ]
2023-11-17T08:22:11Z
2024-08-30T06:47:28Z
null
flishwang
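A hedged sketch related to the torch.compile issue above: two knobs that can help reuse compilation work across runs are the on-disk Inductor FX graph cache (`TORCHINDUCTOR_FX_GRAPH_CACHE`) and a persistent cache directory (`TORCHINDUCTOR_CACHE_DIR`), and `torch._dynamo.mark_dynamic` can reduce per-shape recompiles. Availability and exact behaviour of these knobs depend on the PyTorch release, so treat them as things to verify against the docs of your version; the cache path below is illustrative.

```python
# Hedged sketch: set the cache knobs before torch is imported, mark the varying
# dimension as dynamic, and run a compiled model once to populate the cache.
import os

os.environ["TORCHINDUCTOR_CACHE_DIR"] = "/tmp/my_inductor_cache"  # illustrative path
os.environ["TORCHINDUCTOR_FX_GRAPH_CACHE"] = "1"                  # cache compiled FX graphs on disk

import torch

model = torch.nn.Sequential(torch.nn.Linear(64, 64), torch.nn.GELU()).eval()
compiled = torch.compile(model)

x = torch.randn(8, 64)
torch._dynamo.mark_dynamic(x, 0)  # hint that the batch dimension is dynamic
with torch.no_grad():
    print(compiled(x).shape)
```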