Dataset schema (column types and value/length ranges recovered from the dataset-viewer header):

| Column | Type | Range |
|---|---|---|
| `repo` | string | 1 distinct value |
| `number` | int64 | 1 to 25.3k |
| `state` | string | 2 distinct values |
| `title` | string | 1 to 487 characters |
| `body` | string | 0 to 234k characters |
| `created_at` | string | 19 characters |
| `closed_at` | string | 19 characters |
| `comments` | string | 0 to 293k characters |
transformers
25,205
open
Using Trainer with torch.compile() and use_orig_params=True produces model checkpoints that cannot be loaded
### System Info - `transformers` version: 4.32.0.dev0 - Platform: Linux-4.18.0-477.10.1.el8_8.x86_64-x86_64-with-glibc2.28 - Python version: 3.9.7 - Huggingface_hub version: 0.14.1 - Safetensors version: 0.3.1 - Accelerate version: 0.21.0 - Accelerate config: - compute_environment: LOCAL_MACHINE - distributed_type: NO - mixed_precision: fp16 - use_cpu: False - num_processes: 1 - machine_rank: 0 - num_machines: 1 - rdzv_backend: static - same_network: False - main_training_function: main - downcast_bf16: False - tpu_use_cluster: False - tpu_use_sudo: False - PyTorch version (GPU?): 2.1.0.dev20230523+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: Yes ### Who can help? @sgugger ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I am using the Huggingface Trainer to fine-tune an LLaMA2 model with FSDP. I launch the script with the following command. I set `--fsdp_use_orig_params true` because without it I cannot get `torch.compile()` + FSDP to work (https://github.com/huggingface/transformers/pull/23481) ```bash accelerate launch --num_processes=4 \ --use_fsdp \ --mixed_precision=bf16 \ --fsdp_auto_wrap_policy=TRANSFORMER_BASED_WRAP \ --fsdp_transformer_layer_cls_to_wrap="LlamaDecoderLayer" \ --fsdp_sharding_strategy=1 \ --fsdp_state_dict_type=FULL_STATE_DICT \ --fsdp_use_orig_params true \ src/run.py ${CONFIGS_FOLDER}/LLaMa2_FSDP.yaml ``` In the trainer configuration, I set the following parameters that are relevant to the issue. ```yalm torch_dtype: float32 gradient_checkpointing: true fsdp: full_shard auto_wrap bf16: true fp16: false torch_compile: true ``` The model is compiled correctly by PyTorch, the training is fast, and the loss looks good. However, when the training ends, I use the following line to save the model: ``` trainer.save_model() ``` The first problem is that the trainer seems to save two copies of the model: one is split into multiple parts, and the other one contains the same model in a single .bin file. ```bash ls output_path checkpoint-10322 config.json pytorch_model-00002-of-00003.bin pytorch_model.bin.index.json tokenizer.json checkpoint-15483 generation_config.json pytorch_model-00003-of-00003.bin special_tokens_map.json tokenizer.model checkpoint-5161 pytorch_model-00001-of-00003.bin pytorch_model.bin tokenizer_config.json training_args.bin ``` The second problem, which is related to torch.compile(), is that the model weights are saved with the _orig_mod prefix. ```json { "metadata": { "total_size": 26953670656 }, "weight_map": { "_orig_mod.lm_head.weight": "pytorch_model-00003-of-00003.bin", "_orig_mod.model.embed_tokens.weight": "pytorch_model-00001-of-00003.bin", "_orig_mod.model.layers.0.input_layernorm.weight": "pytorch_model-00001-of-00003.bin", "_orig_mod.model.layers.0.mlp.down_proj.weight": "pytorch_model-00001-of-00003.bin", "_orig_mod.model.layers.0.mlp.gate_proj.weight": "pytorch_model-00001-of-00003.bin", "..." 
} } ``` So, when I try to load the model for inference using model: PreTrainedModel = AutoModelForCausalLM.from_pretrained(), I get a huge warning that says all the weights in the .bin file are not used, and all the LLaMA2 weights have been randomly initialized. ### Expected behavior The trainer should save the uncompiled model or correctly handle the `_orig_mod` prefixes.
07-31-2023 10:42:35
07-31-2023 10:42:35
I think this comes from a bad interaction between `torch.compile` and FSDP, cc @pacman100
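A common workaround until the Trainer handles compiled models here is to strip the `_orig_mod.` prefix that `torch.compile()` adds before loading the checkpoint. The sketch below is not part of the issue or of the Trainer; it assumes the single consolidated `pytorch_model.bin` shown in the directory listing above.

```python
import torch
from transformers import AutoModelForCausalLM

state_dict = torch.load("output_path/pytorch_model.bin", map_location="cpu")
# torch.compile() wraps the module and stores parameters under an "_orig_mod." prefix; drop it.
state_dict = {k.removeprefix("_orig_mod."): v for k, v in state_dict.items()}

model = AutoModelForCausalLM.from_pretrained("output_path", state_dict=state_dict)
```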
transformers
25,204
closed
auto move input to the device of the first layer if necessary
# What does this PR do? This PR adds a feature to model.generate() that checks whether input_ids is on the same device as the input embeddings. If not, the input is moved to that device rather than simply raising an error or warning. It should be helpful when device_map="auto" or when there are multiple GPUs to choose from, in which case we only need to worry about where to place the model. Hope it is acceptable; if not, I will close it.
07-31-2023 10:22:33
07-31-2023 10:22:33
_The documentation is not available anymore as the PR was closed or merged._<|||||>Hi @ranchlai, thanks to opening this PR! Could you share a code snippet that we can run which would currently fail on main and which runs with this update? cc @gante <|||||>@amyeroberts We don't automatically move tensors in simple forward passes, with or without ` device_map="auto"`, so I don't see why we should do it in `generate` 🤗 (And, if we do decide do move the tensors in the forward pass, `generate` would automatically benefit from it :D)<|||||>sure. Before the PR, ```python from transformers import AutoTokenizer, AutoModelForCausalLM model_name = "mosaicml/mpt-7b-chat" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained( model_name, llm_int8_enable_fp32_cpu_offload=True, load_in_8bit=True, device_map="auto") text = "Write a python program to find the largest prime number below 1000." input_ids = tokenizer.encode(text, return_tensors="pt") output = model.generate(input_ids, max_length=100, do_sample=True) response = tokenizer.decode(output[0]) print(response) ``` Error trace: ``` Loading checkpoint shards: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:05<00:00, 2.84s/it] [INFO|/home/ranch/models/transformers/src/transformers/modeling_utils.py:3383] 2023-07-31 23:53:40,770 >> All model checkpoint weights were used when initializing MptForCausalLM. [INFO|/home/ranch/models/transformers/src/transformers/modeling_utils.py:3391] 2023-07-31 23:53:40,770 >> All the weights of MptForCausalLM were initialized from the model checkpoint at mosaicml/mpt-7b-chat. If your task is similar to the task the model of the checkpoint was trained on, you can already use MptForCausalLM for predictions without further training. [INFO|/home/ranch/models/transformers/src/transformers/generation/configuration_utils.py:576] 2023-07-31 23:53:40,771 >> loading configuration file mosaicml/mpt-7b-chat/generation_config.json [INFO|/home/ranch/models/transformers/src/transformers/generation/configuration_utils.py:616] 2023-07-31 23:53:40,771 >> Generate config GenerationConfig { "_from_model_config": true, "eos_token_id": [ 0, 50278 ], "transformers_version": "4.32.0.dev0", "use_cache": false } [INFO|/home/ranch/models/transformers/src/transformers/generation/configuration_utils.py:616] 2023-07-31 23:53:40,786 >> Generate config GenerationConfig { "_from_model_config": true, "transformers_version": "4.32.0.dev0", "use_cache": false } /home/ranch/models/transformers/src/transformers/generation/utils.py:1296: UserWarning: You have modified the pretrained model configuration to control generation. This is a deprecated strategy to control generation and will be removed soon, in a future version. Please use a generation configuration file (see https://huggingface.co/docs/transformers/main_classes/text_generation ) warnings.warn( /home/ranch/models/transformers/src/transformers/generation/utils.py:1501: UserWarning: You are calling .generate() with the `input_ids` being on a device type different than your model's device. `input_ids` is on cpu, whereas the model is on cuda. You may experience unexpected behaviors or slower generation. Please make sure that you have put `input_ids` to the correct device by calling for example input_ids = input_ids.to('cuda') before running `.generate()`. 
warnings.warn( Traceback (most recent call last): File "/home/ranch/models/auto_move_input_ids_to_devcie/mpt.py", line 13, in <module> output = model.generate(input_ids, max_length=100, do_sample=True) File "/media/ranch/sda1/anaconda3/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context return func(*args, **kwargs) File "/home/ranch/models/transformers/src/transformers/generation/utils.py", line 1622, in generate return self.sample( File "/home/ranch/models/transformers/src/transformers/generation/utils.py", line 2765, in sample next_token_scores = logits_warper(input_ids, next_token_scores) File "/home/ranch/models/transformers/src/transformers/generation/logits_process.py", line 97, in __call__ scores = processor(input_ids, scores) File "/home/ranch/models/transformers/src/transformers/generation/logits_process.py", line 388, in __call__ indices_to_remove = scores < torch.topk(scores, top_k)[0][..., -1, None] RuntimeError: "topk_cpu" not implemented for 'Half' ``` After PR: ``` Loading checkpoint shards: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:06<00:00, 3.01s/it] [INFO|/home/ranch/models/transformers/src/transformers/modeling_utils.py:3383] 2023-07-31 23:55:37,080 >> All model checkpoint weights were used when initializing MptForCausalLM. [INFO|/home/ranch/models/transformers/src/transformers/modeling_utils.py:3391] 2023-07-31 23:55:37,080 >> All the weights of MptForCausalLM were initialized from the model checkpoint at mosaicml/mpt-7b-chat. If your task is similar to the task the model of the checkpoint was trained on, you can already use MptForCausalLM for predictions without further training. [INFO|/home/ranch/models/transformers/src/transformers/generation/configuration_utils.py:576] 2023-07-31 23:55:37,081 >> loading configuration file mosaicml/mpt-7b-chat/generation_config.json [INFO|/home/ranch/models/transformers/src/transformers/generation/configuration_utils.py:616] 2023-07-31 23:55:37,081 >> Generate config GenerationConfig { "_from_model_config": true, "eos_token_id": [ 0, 50278 ], "transformers_version": "4.32.0.dev0", "use_cache": false } [INFO|/home/ranch/models/transformers/src/transformers/generation/configuration_utils.py:616] 2023-07-31 23:55:37,097 >> Generate config GenerationConfig { "_from_model_config": true, "transformers_version": "4.32.0.dev0", "use_cache": false } /home/ranch/models/transformers/src/transformers/generation/utils.py:1296: UserWarning: You have modified the pretrained model configuration to control generation. This is a deprecated strategy to control generation and will be removed soon, in a future version. Please use a generation configuration file (see https://huggingface.co/docs/transformers/main_classes/text_generation ) warnings.warn( [INFO|/home/ranch/models/transformers/src/transformers/generation/utils.py:1339] 2023-07-31 23:55:37,097 >> Moving input tensor from device `cpu` to `cuda:0` Write a python program to find the largest prime number below 1000. We are going to learn how to create a program in python to find largest prime number in O(logN) time using Sieve of Eratosthenes. Sieve of Eratosthenes is an efficient algorithm for making the list of prime numbers. We will learn how to create a python program to find the largest prime number (sieve of eratosthenes). 
Eratosthenes is ````<|||||>> @amyeroberts We don't automatically move tensors in simple forward passes, with or without ` device_map="auto"`, so I don't see why we should do it in `generate` 🤗 > > (And, if we do decide do move the tensors in the forward pass, `generate` would automatically benefit from it :D) Or think of it another way? Please let me know if I am not thinking clearly. ^_^ Advanced functions/classes such as `.generate()` or `Pipeline()` receive more information, such as user inputs and device information, than `.forward()`. Hence, these functions/classes should know better how to make use of resources. If we know that we can move the input safely (and I also need comments on whether it is safe to move or not), why not just move it instead of raising an error? <|||||>@ranchlai our `transformers` [philosophy](https://huggingface.co/docs/transformers/philosophy) dictates that lower-level interfaces like `.forward()` or `.generate()` avoid hidden/implicit transformations (like setting the right device), but higher-level interfaces like the `pipeline()` may do it :)<|||||>Thank you for the comments @gante, I thought generate was high-level. Will close. <|||||>@ranchlai no worries, new proposals are always welcome 🤗
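Since the PR was closed in favour of users moving tensors explicitly, the usual pattern is to put the inputs on the model's device before calling `generate`; a sketch using the same model as the snippet above:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "mosaicml/mpt-7b-chat"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, load_in_8bit=True, device_map="auto")

text = "Write a python program to find the largest prime number below 1000."
# Move the tokenized inputs to the model's device before generating, instead of relying on generate() to do it.
inputs = tokenizer(text, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_length=100, do_sample=True)
print(tokenizer.decode(output[0]))
```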
transformers
25,203
closed
add pathname and line number to logging formatter in debug mode
# What does this PR do? This PR adds the pathname and line number to the logging formatter in debug mode. It makes debugging much easier when setting `export TRANSFORMERS_VERBOSITY=debug`. It has no effect at other logging levels (info, warning, etc.). Hope this is acceptable; it's ok if not. Before, there is no way to know where the logging comes from. ``` loading file vocab.json loading file merges.txt loading file tokenizer.json loading file added_tokens.json loading file special_tokens_map.json loading file tokenizer_config.json ``` After: ``` [INFO|/home/ranch/models/transformers/src/transformers/tokenization_utils_base.py:1842] 2023-07-31 17:59:45,311 >> loading file vocab.json [INFO|/home/ranch/models/transformers/src/transformers/tokenization_utils_base.py:1842] 2023-07-31 17:59:45,311 >> loading file merges.txt [INFO|/home/ranch/models/transformers/src/transformers/tokenization_utils_base.py:1842] 2023-07-31 17:59:45,311 >> loading file tokenizer.json [INFO|/home/ranch/models/transformers/src/transformers/tokenization_utils_base.py:1842] 2023-07-31 17:59:45,311 >> loading file added_tokens.json [INFO|/home/ranch/models/transformers/src/transformers/tokenization_utils_base.py:1842] 2023-07-31 17:59:45,311 >> loading file special_tokens_map.json [INFO|/home/ranch/models/transformers/src/transformers/tokenization_utils_base.py:1842] 2023-07-31 17:59:45,311 >> loading file tokenizer_config.json ```
07-31-2023 10:08:20
07-31-2023 10:08:20
_The documentation is not available anymore as the PR was closed or merged._<|||||>hi, @amyeroberts thanks very much for commenting! I have added a "detail" level. It's the same as debug but will also print the pathname and line-number for easy debugging. I don't know if that looks good ? Thanks!
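For anyone who wants the same information without waiting for the patch, a standard-library handler can be attached to the `transformers` logger; the format string below is modelled on the PR's example output rather than taken from its diff:

```python
import logging

import transformers

transformers.utils.logging.set_verbosity_debug()
handler = logging.StreamHandler()
handler.setFormatter(
    logging.Formatter("[%(levelname)s|%(pathname)s:%(lineno)d] %(asctime)s >> %(message)s")
)
# Note: this adds a second handler, so records are printed twice unless the default handler is removed.
logging.getLogger("transformers").addHandler(handler)
```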
transformers
25,202
closed
Better error message in `_prepare_output_docstrings`
# What does this PR do? Currently, if an output type has no docstring, or its docstring doesn't have `Args` or `Parameters`, we get an error ```bash File "/transformers/src/transformers/utils/doc.py", line 137, in _prepare_output_docstrings full_output_type = f"{output_type.__module__}.{output_type.__name__}" UnboundLocalError: local variable 'params_docstring' referenced before assignment ``` when `_prepare_output_docstrings` is called (for example, running a script that uses the relevant model). This is not very informative about what's wrong and how to fix it. This PR adds an error message to explain what's going on.
07-31-2023 09:35:42
07-31-2023 09:35:42
_The documentation is not available anymore as the PR was closed or merged._
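The kind of guard the PR describes could look like the sketch below; `check_output_docstring` is a hypothetical helper written for illustration, not the code added in `utils/doc.py`:

```python
def check_output_docstring(output_type):
    """Raise an informative error when an output class cannot be used to build return docs."""
    docstring = output_type.__doc__ or ""
    has_params_section = any(line.strip() in ("Args:", "Parameters:") for line in docstring.split("\n"))
    if not has_params_section:
        raise ValueError(
            f"No `Args` or `Parameters` section found in the docstring of `{output_type.__name__}`. "
            "Make sure the class has a docstring documenting its attributes."
        )
```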
transformers
25,201
closed
[`MPT`] Add `require_bitsandbytes` on MPT integration tests
# What does this PR do? As per the title, and as discussed offline with @ydshieh, adding `require_bitsandbytes` is needed to avoid issues with the past-torch CI, which doesn't have bnb installed on its Docker images. cc @ydshieh
07-31-2023 08:51:40
07-31-2023 08:51:40
_The documentation is not available anymore as the PR was closed or merged._<|||||>Just out of curiosity, does the `tooslow` decorator lead to tests still being run? ```bash Slow tests are skipped while they're in the process of being fixed. No test should stay tagged as "tooslow" as these will not be tested by the CI. ``` Currently running MPT-7B is the only way to check if we are in sync with trust remote code weights, as it is the smallest model available<|||||>We also have `Salesforce/instructblip-vicuna-7b` but I haven't checked how long it takes to run on CI. So far it doesn't seem too problematic (those 7b models)<|||||>@ydshieh we also load that model using bnb: https://github.com/huggingface/transformers/blob/main/tests/models/instructblip/test_modeling_instructblip.py#L526 perhaps I can also add `require_bitsandbytes` there too<|||||>Yes, please. I missed that in the Past CI report. Thanks a lot!<|||||>No, the `tooslow` tests are only run manually, not on a runner.<|||||>OK, so I would say maybe we should keep testing mpt-7b so that we're aware of any potential issues through the daily CI
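For context, the decorator lives in `transformers.testing_utils` and skips the test when bitsandbytes is not installed; a minimal usage sketch (the test body and checkpoint are illustrative, not the PR's actual diff):

```python
from transformers import AutoModelForCausalLM
from transformers.testing_utils import require_bitsandbytes, require_torch_gpu, slow


@slow
@require_torch_gpu
@require_bitsandbytes
def test_mpt_8bit_generation():
    # Skipped automatically on CI machines without bitsandbytes, e.g. the past-torch Docker images.
    model = AutoModelForCausalLM.from_pretrained("mosaicml/mpt-7b", load_in_8bit=True, device_map="auto")
    assert model is not None
```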
transformers
25,200
closed
[`Pix2Struct`] Fix pix2struct cross attention
# What does this PR do? Fixes https://github.com/huggingface/transformers/issues/25175 As pointed out by @leitro on the issue, I can confirm the cross-attention should be in `layer_outputs[5]`. Also fixes the attention output index, which should be `3`, as index `2` is the `position_bias` (they have the same shape, so we didn't notice the silent bug in the CI tests). To repro: ```python import requests import torch from PIL import Image from transformers import Pix2StructForConditionalGeneration, Pix2StructProcessor url = "https://www.ilankelman.org/stopsigns/australia.jpg" image = Image.open(requests.get(url, stream=True).raw) model = Pix2StructForConditionalGeneration.from_pretrained("google/pix2struct-textcaps-base") processor = Pix2StructProcessor.from_pretrained("google/pix2struct-textcaps-base") input_ids = torch.LongTensor([[0, 2, 3, 4]]) # image only inputs = processor(images=image, return_tensors="pt") outputs = model.forward(**inputs, decoder_input_ids=input_ids, output_attentions=True) print(outputs.cross_attentions[0].shape) >>> should be torch.Size([1, 12, 4, 2048]) ``` cc @amyeroberts
07-31-2023 08:48:55
07-31-2023 08:48:55
_The documentation is not available anymore as the PR was closed or merged._<|||||>Fixed a slow test of torchscript that was failing, however the test: ```bash tests/models/pix2struct/test_modeling_pix2struct.py::Pix2StructIntegrationTest::test_batched_inference_image_captioning_conditioned ``` is failing but can confirm is also failing on main, I think it is unrelated to this PR (env issues on my VM probably) so I am merging
transformers
25,199
closed
[LLaMA] Rotary positional embedding differs with official implementation
`transformers` implements the LLaMA model's Rotary Positional Embedding (RoPE) as follows: https://github.com/huggingface/transformers/blob/e42587f596181396e1c4b63660abf0c736b10dae/src/transformers/models/llama/modeling_llama.py#L173-L188 This is **GPT-NeoX style** RoPE. But in Meta's official model implementation, the model adopts **GPT-J style** RoPE, which processes query and key vectors in an **interleaved way** instead of splitting them into two halves (as in the `rotate_half` method). Meta's official repo implements RoPE as ([full code link](https://github.com/facebookresearch/llama/blob/6c7fe276574e78057f917549435a2554000a876d/llama/model.py#L64-L74)): ```python def apply_rotary_emb( xq: torch.Tensor, xk: torch.Tensor, freqs_cis: torch.Tensor, ) -> Tuple[torch.Tensor, torch.Tensor]: xq_ = torch.view_as_complex(xq.float().reshape(*xq.shape[:-1], -1, 2)) xk_ = torch.view_as_complex(xk.float().reshape(*xk.shape[:-1], -1, 2)) freqs_cis = reshape_for_broadcast(freqs_cis, xq_) xq_out = torch.view_as_real(xq_ * freqs_cis).flatten(3) xk_out = torch.view_as_real(xk_ * freqs_cis).flatten(3) return xq_out.type_as(xq), xk_out.type_as(xk) ``` I'm confused by this difference: since `transformers.LlamaModel` can directly load weights converted from the officially released checkpoint, won't this lead to inconsistency in inference results? Is this difference expected?
07-31-2023 08:35:57
07-31-2023 08:35:57
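For readers with the same question: the two formulations produce the same attention outputs because the Hugging Face conversion script permutes the rows of the q/k projection weights when converting Meta's checkpoint, so the half-split rotation acts on the same dimension pairs as the interleaved one. Below is a sketch of the half-split (GPT-NeoX style) rotation mirroring the linked `modeling_llama.py` code; the permutation claim is best verified against `convert_llama_weights_to_hf.py`.

```python
import torch


def rotate_half(x):
    # Split the head dimension down the middle and rotate: (x1, x2) -> (-x2, x1).
    x1 = x[..., : x.shape[-1] // 2]
    x2 = x[..., x.shape[-1] // 2 :]
    return torch.cat((-x2, x1), dim=-1)


def apply_rotary_pos_emb(q, k, cos, sin):
    # GPT-NeoX style RoPE: equivalent to Meta's interleaved version once the q/k weights are permuted.
    q_embed = (q * cos) + (rotate_half(q) * sin)
    k_embed = (k * cos) + (rotate_half(k) * sin)
    return q_embed, k_embed
```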
transformers
25,198
open
Save tokenizer and model config when training with FSDP
# What does this PR do? Currently, when training models with FSDP, the tokenizer and model config are not saved (at least with the standard configs I have). This is especially bad when running a custom train.py that modifies the tokenizer and model before training; in that scenario there is no record of the new model config or tokenizer. I have altered trainer.py to have a `_save_tokenizer_and_configs` method and added a call to this method in the model-saving logic when FSDP is enabled. If the team feels there could be a better refactoring to handle this, I would be happy to discuss improvements to this PR! One other note: to the best of my knowledge, this happens because when the logic flows to the FSDP-enabled branch it only uses the custom FSDP model saving, and there is no logic for saving the tokenizer and config info. If there is already a known config setting I'm missing that would fix this, please let me know. ## Before submitting I have not added any new tests. ## Who can review? This involves modifications to the trainer, so maybe @sgugger would be interested in reviewing?
07-31-2023 08:07:25
07-31-2023 08:07:25
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25198). All of your documentation changes will be reflected on that endpoint.
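Until a change like this PR lands, a user-side workaround is to write the tokenizer and config out explicitly after saving the FSDP model; a sketch, assuming `tokenizer`, `config` and `training_args` are the objects already created in the training script:

```python
trainer.save_model(training_args.output_dir)

# The FSDP saving path may skip these, so persist them explicitly on the process that is allowed to save.
if trainer.args.should_save:
    tokenizer.save_pretrained(training_args.output_dir)
    config.save_pretrained(training_args.output_dir)
```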
transformers
25,197
open
Multi-threaded parallel inference problem
### System Info accelerate == 0.19.0 bitsandbytes == 0.37.1 - `transformers` version: 4.29.2 - Platform: Linux-3.10.0-1160.92.1.el7.x86_64-x86_64-with-glibc2.29 - Python version: 3.8.10 - Huggingface_hub version: 0.14.1 - Safetensors version: not installed - PyTorch version (GPU?): 2.0.0+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction When I use chatglm2/bloomz-7b for multithreaded parallel inference: ``` import os os.environ['CUDA_VISIBLE_DEVICES'] = '0' from transformers import AutoTokenizer, AutoModelForCausalLM, LlamaForCausalLM, TextIteratorStreamer import transformers import torch from threading import Thread, currentThread import time model = "/workspace/model-files/chatglm2" tokenizer = AutoTokenizer.from_pretrained(model, trust_remote_code=True) model = AutoModelForCausalLM.from_pretrained(model, device_map='auto', trust_remote_code=True, load_in_8bit=True) def infer(prompt): inputs = tokenizer(prompt, return_tensors='pt') inputs = inputs.to(model.device) print('before generate') # out = model.generate(**inputs) # print('after generate') # out_text = tokenizer.decode(out[0]) # print('out_text is:', out_text) t = currentThread() streamer = TextIteratorStreamer(tokenizer) generation_kwargs = dict(inputs, streamer=streamer, max_length=2048) thread = Thread(target=model.generate, kwargs=generation_kwargs) thread.start() print("------******-------") for new_text in streamer: print("thread id:", t.ident ,"new text:",new_text) print("------******-------") if __name__ == '__main__': prompt1 = '写一篇关于黄鹤楼的800字作文' prompt2 = 'Describe each state in the United States in detail' t1 = Thread(target=infer, args=(prompt1,)) t2 = Thread(target=infer, args=(prompt2,)) t1.start() time.sleep(5) t2.start() t1.join() t2.join() ``` I get below error info: ``` Traceback (most recent call last): File "/usr/lib/python3.8/threading.py", line 932, in _bootstrap_inner self.run() File "/usr/lib/python3.8/threading.py", line 870, in run self._target(*self._args, **self._kwargs) File "/usr/local/lib/python3.8/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context return func(*args, **kwargs) File "/usr/local/lib/python3.8/dist-packages/transformers/generation/utils.py", line 1515, in generate return self.greedy_search( File "/usr/local/lib/python3.8/dist-packages/transformers/generation/utils.py", line 2332, in greedy_search outputs = self( File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "/usr/local/lib/python3.8/dist-packages/accelerate/hooks.py", line 165, in new_forward output = old_forward(*args, **kwargs) File "/root/.cache/huggingface/modules/transformers_modules/modeling_chatglm.py", line 932, in forward transformer_outputs = self.transformer( File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "/usr/local/lib/python3.8/dist-packages/accelerate/hooks.py", line 165, in new_forward output = old_forward(*args, **kwargs) File 
"/root/.cache/huggingface/modules/transformers_modules/modeling_chatglm.py", line 828, in forward hidden_states, presents, all_hidden_states, all_self_attentions = self.encoder( File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "/usr/local/lib/python3.8/dist-packages/accelerate/hooks.py", line 165, in new_forward output = old_forward(*args, **kwargs) File "/root/.cache/huggingface/modules/transformers_modules/modeling_chatglm.py", line 638, in forward layer_ret = layer( File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "/usr/local/lib/python3.8/dist-packages/accelerate/hooks.py", line 165, in new_forward output = old_forward(*args, **kwargs) File "/root/.cache/huggingface/modules/transformers_modules/modeling_chatglm.py", line 563, in forward mlp_output = self.mlp(layernorm_output) File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "/usr/local/lib/python3.8/dist-packages/accelerate/hooks.py", line 165, in new_forward output = old_forward(*args, **kwargs) File "/root/.cache/huggingface/modules/transformers_modules/modeling_chatglm.py", line 499, in forward output = self.dense_4h_to_h(intermediate_parallel) File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "/usr/local/lib/python3.8/dist-packages/accelerate/hooks.py", line 165, in new_forward output = old_forward(*args, **kwargs) File "/usr/local/lib/python3.8/dist-packages/bitsandbytes/nn/modules.py", line 242, in forward out = bnb.matmul(x, self.weight, bias=self.bias, state=self.state) File "/usr/local/lib/python3.8/dist-packages/bitsandbytes/autograd/_functions.py", line 488, in matmul return MatMul8bitLt.apply(A, B, out, bias, state) File "/usr/local/lib/python3.8/dist-packages/torch/autograd/function.py", line 506, in apply return super().apply(*args, **kwargs) # type: ignore[misc] File "/usr/local/lib/python3.8/dist-packages/bitsandbytes/autograd/_functions.py", line 397, in forward output += torch.matmul(subA, state.subB) RuntimeError: mat1 and mat2 shapes cannot be multiplied (1x4 and 2x4096) ``` ### Expected behavior able to inference normally
07-31-2023 07:05:51
07-31-2023 07:05:51
Hi @zhaotyer, thanks for opening this issue! Could you edit the issue details to make sure that the code and traceback are properly formatted? This will make it easier for us to read and understand what's going on. The code examples should be wrapped around three backticks. You can also specify the language to get color coded formatting :) i.e. ` ```python CODE GOES HERE ``` ` Same for the traceback - it should go between a pair of three tickbacks ` ``` error message ``` ` cc @gante as this seems to be related to generate :) <|||||>@zhaotyer that seems to be a bitsandbytes issue -- would you be able to update this library to its latest version and confirm whether the issue persists? :)<|||||>> Collaborator Thanks for the reminder, I have edited it <|||||>> @zhaotyer that seems to be a bitsandbytes issue -- would you be able to update this library to its latest version and confirm whether the issue persists? :) Should have nothing to do with bitsandbytes The following are the test results of different transformers/accelerate versions ``` 1.transformers==4.31.0 accelerate==0.21.0 bitsandbytes==0.37.1 1.1 chatglm2 load_in_8bit=true singlethread normal, multithread normal 1.2 chatglm2 load_in_8bit=false singlethread normal, multithread normal 2.transformers==4.29.2 accelerate==0.19.0 bitsandbytes==0.37.1 1.1 chatglm2 load_in_8bit=true singlethread normal, multithread have error(RuntimeError: mat1 and mat2 shapes cannot be multiplied) 1.2 chatglm2 load_in_8bit=false singlethread normal, multithread normal ``` @gante <|||||>@zhaotyer OK, thanks for running with updated versions and reporting. If I've understood correctly, it looks like everything is running as expected on the most recent versions of accelerate and transformers and so the issue has been resolved in the most recent releases. <|||||>> @zhaotyer OK, thanks for running with updated versions and reporting. If I've understood correctly, it looks like everything is running as expected on the most recent versions of accelerate and transformers and so the issue has been resolved in the most recent releases. Can you explain what changes have been made between the two releases to solve this problem? thks<|||||>@zhaotyer The best way to find this is running `git bisect` to identify the commit and PR which resolved this for your example script.
transformers
25,196
open
[DOCS] Add descriptive docstring to MinNewTokensLength
# What does this PR do? It addresses one of the arguments in https://github.com/huggingface/transformers/issues/24783 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). ## Who can review? @gante
07-31-2023 06:22:57
07-31-2023 06:22:57
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25196). All of your documentation changes will be reflected on that endpoint.<|||||>cc @gante <|||||>> Thank you for iterating :raised_hands: Always a pleasure, thanks for the opportunity :hugs:
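For background, `min_new_tokens` is the `generate()` argument that exercises `MinNewTokensLengthLogitsProcessor`; a small usage sketch with an arbitrary model:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("The tallest mountain on Earth is", return_tensors="pt")
# Force at least 16 newly generated tokens before the EOS token is allowed to end the sequence.
outputs = model.generate(**inputs, min_new_tokens=16, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```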
transformers
25,195
open
Incorrect segmentation results on float input in 4.31.0
### System Info python-3.9.10 transformers-4.31.0 pytorch-2.0.1 ### Who can help? @amyeroberts ### Information - [X] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction The following example (based on the examples from the docs) gives consistent results with transformers-4.27.2 whether or not `image` is kept as `uint8` or converted to `float32`. But with 4.31.0, the result is wrong when using the `float32` input: ``` import torch import numpy as np from transformers import AutoImageProcessor, UperNetForSemanticSegmentation from PIL import Image from huggingface_hub import hf_hub_download image_processor = AutoImageProcessor.from_pretrained("openmmlab/upernet-convnext-tiny") model = UperNetForSemanticSegmentation.from_pretrained("openmmlab/upernet-convnext-tiny") filepath = hf_hub_download( repo_id="hf-internal-testing/fixtures_ade20k", filename="ADE_val_00000001.jpg", repo_type="dataset" ) image = Image.open(filepath).convert("RGB") image = np.array(image) # Comment the line below to get the right result in 4.31.0 image = image.astype(np.float32)/255.0 inputs = image_processor(images=image, return_tensors="pt").pixel_values outputs = model(inputs) sizes = [np.array(image).shape[:2]] seg = torch.stack(image_processor.post_process_semantic_segmentation(outputs, target_sizes=sizes)) torch.unique(seg) ``` ### Expected behavior Expected result (observerd behaviour in 4.27.2 regardless of whether the float conversion is commented out): ``` tensor([ 0, 1, 2, 4, 6, 9, 17, 25, 52, 53]) ``` Actual result in 4.31.0 (unless the float conversion is commented out and the input image is kept as `uint8`): ``` tensor([2]) ``` (i.e., the whole image is perceived as one class)
07-31-2023 04:26:57
07-31-2023 04:26:57
cc @amyeroberts this seems to be the same issue as discussed in https://github.com/huggingface/transformers/issues/24857. Basically there's no need to rescale the images yourself before passing them to the image processor. The image processor already handles the rescaling for you. Alternatively, if you want to handle the rescaling yourself, just instantiate the image processor as follows: ``` from transformers import AutoImageProcessor image_processor = AutoImageProcessor.from_pretrained("openmmlab/upernet-convnext-tiny", do_rescale=False) ``` I'm curious to know why you expected to handle the rescaling yourself :) maybe we can improve documentation on this.<|||||>Thanks for the quick reply. I'm not sure I 100% understand what you mean with "rescaling" here. Do you mean dividing the values by 255 to go from a `(0,255)` range to a `(0,1)` range? If that is what you mean, then the answer to your question is we never do "rescaling", because we never use 8-bit representations in the first place. The example I posted here converts from `uint8` to `float32` and "rescales" because I wanted to start from a basic example that was as close to the docs as possible. But in our actual applications, we always work in floating point representations, and therefore in the `(0,1)` range. I don't know of any rationale to have floating point representations use an arbitrary range like `(0, 255)`, and as far as I have seen, any code dealing with floating-point image representations treat the usable range as `(0,1)`. Images in `uint8` use the `(0, 255)` range simply because that's the only values this finite integer can represent. If an image was in, say, a `uint16` representation, they I'd expect it to use the `(0, 65536)` range. So from my point of view, it seems odd to have to "rescale" my values by multiplying them by 255 in order for them to be processed correctly. From #24857, I'm also under the impression that `transformers` might be resizing my inputs by converting them into uint8 in order to let PIL do the resizing. If that is the case, it is also very problematic for us. The whole reason we are using floating point image representations is so we can work on high-precision, high-dynamic-range images. Converting floating point values to `uint8` completely destroys this ability. Given that both pytorch and tensorflow can resize image tensors natively, I don't really understand the need to go through a third library that will decimate my data. Hope it helps, happy to provide more information if that was unclear.<|||||>Hi @antoche, thanks for raising this issue and providing so much detail. Yes, by rescaling we're referring to scaling the pixel values between `[0, 1]` (or sometimes `[-1, 1]`. > it seems odd to have to "rescale" my values by multiplying them by 255 in order for them to be processed correctly. I think we're on the same page: you shouldn't have to rescale them. You're right in understanding that we do convert any input images into `uint8` when resizing and so your input images would be rescaled by 255, resized and then rescaled back again. This is for historical reasons as Pillow was first used for resizing images and we've kept it mainly for backwards compatibility. Part of the reason Pillow was used is that the processing classes should (as much as possible) be framework independent i.e. a TensorFlow user and PyTorch user should be able to use the same class. That all being said, this issue has cropped up a few times. For the linked issues e.g. 
#24857, we're thinking about adding a warning when we detect images with float values being passed in to prevent double rescaling. For this particular issue, the change to remove rescaling to convert to `PIL.Image.Image` is more involved and not something I have immediate bandwidth for unfortunately, but will add to my longer term to-do list. For preprocessing the images, I would suggest using torchvision's transforms (this will also likely be a lot faster!). Their recent transforms v2 is v. good for simultaneously handling images and masks. If you have any other feedback please do let us know. <|||||>Ok, I think I understand what transformers is doing now. It looks like passing `do_rescale=False` should work around this issue for now at the moment.<|||||>I just ran into another instance of this issue in `diffusers`' `StableDiffusionDepth2ImgPipeline`. The `StableDiffusionDepth2ImgPipeline` passes its input image (which can be either a PIL image containing 8-bit values, a numpy array containing floating point values between 0 and 1, or a torch float tensor in the range 0 and 1) to `transformers`' `DPTFeatureExtractor`. With transformers-4.31.0, `DPTFeatureExtractor` rescales the input by dividing it by 255 no matter what representation it's in, which mangles the tensors and results in an invalid, unusable output. Because it is `StableDiffusionDepth2ImgPipeline` doing the call, I can't just change my code to pass `do_rescale=False` to work around it, the call has to be changed in the `diffusers` codebase. I strongly advise against applying such arbitrary blind rescaling of the input data. Rescaling should only be required when converting from an integer type to a floating point type, and depends on the bit depth of the integer type (e.g., if the input values were int16, and the model expects floating point values, the input should be divided by 65535, not 255).
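Building on the `do_rescale=False` suggestion above, the flag can also be set per call from the input dtype, which avoids double rescaling for float inputs while keeping the default behaviour for `uint8` images; a sketch reusing `image_processor`, `image` and `model` from the reproduction script:

```python
import numpy as np

# Only let the image processor divide by 255 when the input is still an integer image.
rescale = np.issubdtype(np.asarray(image).dtype, np.integer)
inputs = image_processor(images=image, do_rescale=rescale, return_tensors="pt").pixel_values
outputs = model(inputs)
```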
transformers
25,194
open
`AutoTokenizer.from_pretrained` raises an error when another tokenizer with the same filename is imported
### System Info - `transformers` version: 4.28.1 - Platform: Linux-4.19.91-009.ali4000.alios7.x86_64-x86_64-with-glibc2.29 - Python version: 3.8.10 - Huggingface_hub version: 0.16.2 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help? @sgugger ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction 1. bug reproduction ```python from transformers import AutoTokenizer # import one tokenizer from local (/home/admin/notebook/THUDM/chatglm2-6b/tokenization_chatglm.py) tokenizer = AutoTokenizer.from_pretrained("/home/admin/notebook/THUDM/chatglm2-6b/", trust_remote_code=True) # import another version tokenizer from another directory, but same filename (/home/admin/notebook/THUDM/chatglm-6b/tokenization_chatglm.py) tokenizer = AutoTokenizer.from_pretrained("/home/admin/notebook/THUDM/chatglm-6b/", trust_remote_code=True) ``` raise exception ```text --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) Cell In[1], line 10 6 from transformers import AutoTokenizer 8 tokenizer = AutoTokenizer.from_pretrained("/home/admin/notebook/THUDM/chatglm2-6b/", trust_remote_code=True) ---> 10 tokenizer = AutoTokenizer.from_pretrained("/home/admin/notebook/THUDM/chatglm-6b/", trust_remote_code=True) File /usr/local/lib/python3.8/dist-packages/transformers/models/auto/tokenization_auto.py:702, in AutoTokenizer.from_pretrained(cls, pretrained_model_name_or_path, *inputs, **kwargs) 698 if tokenizer_class is None: 699 raise ValueError( 700 f"Tokenizer class {tokenizer_class_candidate} does not exist or is not currently imported." 701 ) --> 702 return tokenizer_class.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs) 704 # Otherwise we have to be creative. 705 # if model is an encoder decoder, the encoder tokenizer class is used by default 706 if isinstance(config, EncoderDecoderConfig): File /usr/local/lib/python3.8/dist-packages/transformers/tokenization_utils_base.py:1811, in PreTrainedTokenizerBase.from_pretrained(cls, pretrained_model_name_or_path, *init_inputs, **kwargs) 1808 else: 1809 logger.info(f"loading file {file_path} from cache at {resolved_vocab_files[file_id]}") -> 1811 return cls._from_pretrained( 1812 resolved_vocab_files, 1813 pretrained_model_name_or_path, 1814 init_configuration, 1815 *init_inputs, 1816 use_auth_token=use_auth_token, 1817 cache_dir=cache_dir, 1818 local_files_only=local_files_only, 1819 _commit_hash=commit_hash, 1820 **kwargs, 1821 ) File /usr/local/lib/python3.8/dist-packages/transformers/tokenization_utils_base.py:1965, in PreTrainedTokenizerBase._from_pretrained(cls, resolved_vocab_files, pretrained_model_name_or_path, init_configuration, use_auth_token, cache_dir, local_files_only, _commit_hash, *init_inputs, **kwargs) 1963 # Instantiate tokenizer. 1964 try: -> 1965 tokenizer = cls(*init_inputs, **init_kwargs) 1966 except OSError: 1967 raise OSError( 1968 "Unable to load vocabulary from file. " 1969 "Please check that the provided vocabulary is accessible and not corrupted." 
1970 ) File ~/.cache/huggingface/modules/transformers_modules/tokenization_chatglm.py:69, in __init__(self, vocab_file, padding_side, **kwargs) 66 def _get_text_tokenizer(self): 67 return self.text_tokenizer ---> 69 @staticmethod 70 def get_blank_token(length: int): 71 assert length >= 2 72 return f"<|blank_{length}|>" File /usr/local/lib/python3.8/dist-packages/transformers/tokenization_utils.py:347, in PreTrainedTokenizer.__init__(self, **kwargs) 346 def __init__(self, **kwargs): --> 347 super().__init__(**kwargs) 349 # Added tokens - We store this for both slow and fast tokenizers 350 # until the serialization of Fast tokenizers is updated 351 self.added_tokens_encoder: Dict[str, int] = {} File /usr/local/lib/python3.8/dist-packages/transformers/tokenization_utils_base.py:1534, in PreTrainedTokenizerBase.__init__(self, **kwargs) 1530 self.deprecation_warnings = ( 1531 {} 1532 ) # Use to store when we have already noticed a deprecation warning (avoid overlogging). 1533 self._in_target_context_manager = False -> 1534 super().__init__(**kwargs) File /usr/local/lib/python3.8/dist-packages/transformers/tokenization_utils_base.py:828, in SpecialTokensMixin.__init__(self, verbose, **kwargs) 826 setattr(self, key, value) 827 elif isinstance(value, (str, AddedToken)): --> 828 setattr(self, key, value) 829 else: 830 raise TypeError(f"special token {key} has to be either str or AddedToken but got: {type(value)}") AttributeError: can't set attribute ``` In this traceback, I found the exception raised from a abnormal line 69 `@staticmethod`, which pointed to the correct file `/home/admin/notebook/THUDM/chatglm-6b/tokenization_chatglm.py` ![image](https://github.com/huggingface/transformers/assets/25120867/1762dd64-8d92-49f4-963e-5ad26dece4d5) but the exception was raised from the wrong file (/home/admin/notebook/THUDM/chatglm2-6b/tokenization_chatglm.py) ![image](https://github.com/huggingface/transformers/assets/25120867/dbaf37e0-3a1a-4795-8613-4016332818bb) 2. a hard fix After edit one filename of these two tokenizers, the bug disappear. ### Expected behavior import the tokenizer with no exception
07-31-2023 02:52:01
07-31-2023 02:52:01
I think you should report the issue in the repositories where this code comes from, as the fix you suggest is in that code (and not in the Transformers library).<|||||>hmmm that's an ad-hoc solution to validate the problem, and I'm not sure whether this import error is expected by Transformers?<|||||>I don't see an import error in the code and traceback you pasted. I see the custom code of this tokenizer failing to execute.
transformers
25,193
closed
make build_mpt_alibi_tensor a method of MptModel so that deepspeed co…
…uld override it to make autoTP work # What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) enable autoTP for mpt in huggingface model. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada
07-31-2023 02:23:00
07-31-2023 02:23:00
@sgugger @ArthurZucker please review. thanks<|||||>should work with https://github.com/microsoft/DeepSpeed/pull/4062<|||||>_The documentation is not available anymore as the PR was closed or merged._
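The pattern the PR follows, wrapping a module-level helper in an instance method so that frameworks such as DeepSpeed AutoTP can override it per model, looks roughly like the sketch below; the class and helper here are toy stand-ins, not the actual `modeling_mpt.py` code:

```python
import torch
from torch import nn


def build_alibi_bias(num_heads, sequence_length, device=None):
    # Stand-in for the module-level helper; the real one builds the ALiBi slopes and bias tensor.
    return torch.zeros(1, num_heads, 1, sequence_length, device=device)


class ToyMptModel(nn.Module):
    def build_mpt_alibi_tensor(self, num_heads, sequence_length, device=None):
        # Thin method wrapper: an override can build only the tensor-parallel slice of the bias.
        return build_alibi_bias(num_heads, sequence_length, device=device)

    def forward(self, num_heads=8, sequence_length=16):
        return self.build_mpt_alibi_tensor(num_heads, sequence_length)


# An external framework can now swap the method on the class or instance:
# ToyMptModel.build_mpt_alibi_tensor = tensor_parallel_build_fn
```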
transformers
25,192
open
Unable to upload/load a tool.
### System Info - transformers version: 4.31.0 - Platform: Windows-10-10.0.22621-SP0 - Python version: 3.11.3 - Huggingface_hub version: 0.16.4 - Safetensors version: 0.3.1 - Accelerate version: 0.21.0 - Accelerate config: not found - PyTorch version (GPU?): 2.0.1+cu118 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction When pushing my tool to the hub with the following function, I'm getting an error. ```python tool.push_to_hub("romainlg/hf-sql") ``` ``` Traceback (most recent call last): File "d:\Devs\python\ai\tools\sql\main.py", line 4, in <module> tool.push_to_hub("romainlg/hf-sql") File "C:\Python311\Lib\site-packages\transformers\tools\base.py", line 315, in push_to_hub metadata_update(repo_id, {"tags": ["tool"]}, repo_type="space") File "C:\Python311\Lib\site-packages\huggingface_hub\utils\_validators.py", line 118, in _inner_fn return fn(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^ File "C:\Python311\Lib\site-packages\huggingface_hub\repocard.py", line 810, in metadata_update return card.push_to_hub( ^^^^^^^^^^^^^^^^^ File "C:\Python311\Lib\site-packages\huggingface_hub\repocard.py", line 275, in push_to_hub tmp_path.write_text(str(self)) File "C:\Python311\Lib\pathlib.py", line 1079, in write_text return f.write(data) ^^^^^^^^^^^^^ File "C:\Python311\Lib\encodings\cp1252.py", line 19, in encode return codecs.charmap_encode(input,self.errors,encoding_table)[0] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ UnicodeEncodeError: 'charmap' codec can't encode character '\U0001f525' in position 27: character maps to <undefined> ``` I bypassed it by saving the tool and then uploading it to the hub but when trying to import it later, I'm getting: ```python tool.save('./hf-sql-tmp') ``` ```python from transformers import load_tool sql = load_tool("romainlg/hf-sql") ``` ``` Traceback (most recent call last): File "d:\Devs\python\ai\tools\sql\test.py", line 5, in <module> sql = load_tool("romainlg/hf-sql") ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Python311\Lib\site-packages\transformers\tools\base.py", line 690, in load_tool return Tool.from_hub(task_or_repo_id, model_repo_id=model_repo_id, token=token, remote=remote, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Python311\Lib\site-packages\transformers\tools\base.py", line 245, in from_hub raise EnvironmentError( OSError: romainlg/hf-sql does not appear to provide a valid configuration in `tool_config.json` or `config.json`. ``` However, by looking at the other working tools, i've the same config base... I don't know what's the issue here... The full code is available at [https://huggingface.co/romainlg/hf-sql/tree/main](https://huggingface.co/romainlg/hf-sql/tree/main) Thank you ! ### Expected behavior Be able to push the tool to the hub and the ability to load the tool from the hub.
07-30-2023 20:54:38
07-30-2023 20:54:38
Hi @Romainlg29, thanks for reporting this issue! Could you update the issue information with the full running environment info: run `transformers-cli env` in the terminal and copy-paste the output? cc @LysandreJik <|||||>Hi, I just updated the config, which is: ``` transformers version: 4.31.0 Platform: Windows-10-10.0.22621-SP0 Python version: 3.11.3 Huggingface_hub version: 0.16.4 Safetensors version: 0.3.1 Accelerate version: 0.21.0 Accelerate config: not found PyTorch version (GPU?): 2.0.1+cu118 (True) Tensorflow version (GPU?): not installed (NA) Flax version (CPU?/GPU?/TPU?): not installed (NA) Jax version: not installed JaxLib version: not installed ``` Thank you for your reply.
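Judging from the traceback, the push fails when `huggingface_hub` writes the model card with Windows' default cp1252 codec and hits the 🔥 emoji; a quick check plus the usual UTF-8-mode workaround is sketched below (a general Windows/Python remedy, not verified against this exact setup):

```python
import locale
import sys

# If this prints "cp1252" with utf8_mode == 0, text files default to a codec that cannot
# encode characters such as "\U0001f525".
print(locale.getpreferredencoding(False), "utf8_mode =", sys.flags.utf8_mode)

# Workaround: restart the interpreter in UTF-8 mode, e.g. `python -X utf8 main.py`,
# or set PYTHONUTF8=1 in the environment before launching Python, then retry
# tool.push_to_hub("romainlg/hf-sql").
```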
transformers
25,191
closed
_forward_unimplemented() got an unexpected keyword argument 'input_ids'
### System Info I am training on Google Colab Pro+ with following info: - `transformers` version: 4.31.0 - Platform: Linux-5.15.109+-x86_64-with-glibc2.35 - Python version: 3.10.6 - Huggingface_hub version: 0.16.4 - Safetensors version: 0.3.1 - Accelerate version: 0.21.0 - Accelerate config: not found - PyTorch version (GPU?): 2.0.1+cu118 (False) - Tensorflow version (GPU?): 2.12.0 (False) - Flax version (CPU?/GPU?/TPU?): 0.7.0 (cpu) - Jax version: 0.4.13 - JaxLib version: 0.4.13 - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @sgugger @ArthurZucker @younesbelkada ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Hi, I am trying to build a Multi-tasking Model following this [article](https://towardsdatascience.com/how-to-create-and-train-a-multi-task-transformer-model-18c54a146240) for two task: **zero-shot classification** **and sentiment analysis**. **1. Creates the encoder and an output head for each task.** I have try three different loaders based on the suggestion from the similar issues [here](https://github.com/huggingface/transformers/issues/21335): `AutoModel`, `AutoModelForSequenceClassification`, `RobertaForSequenceClassification ` ``` import torch.nn as nn from typing import List from transformers import AutoModel, AutoModelForSequenceClassification, RobertaForSequenceClassification class SequenceClassificationHead(nn.Module): def __init__(self, hidden_size, num_labels, dropout_p=0.1): super().__init__() self.num_labels = num_labels self.dropout = nn.Dropout(dropout_p) self.classifier = nn.Linear(hidden_size, num_labels) self._init_weights() def forward(self, sequence_output, pooled_output, labels=None, **kwargs): pooled_output = self.dropout(pooled_output) logits = self.classifier(pooled_output) loss = None if labels is not None: if labels.dim() != 1: # Remove padding labels = labels[:, 0] loss_fct = nn.CrossEntropyLoss() loss = loss_fct( logits.view(-1, self.num_labels), labels.long().view(-1) ) return logits, loss def _init_weights(self): self.classifier.weight.data.normal_(mean=0.0, std=0.02) if self.classifier.bias is not None: self.classifier.bias.data.zero_() class MultiTaskModel(nn.Module): def __init__(self, encoder_name_or_path, tasks: List): super().__init__() self.encoder = AutoModel.from_pretrained(encoder_name_or_path) self.output_heads = nn.ModuleDict() for task in tasks: decoder = self._create_output_head(self.encoder.config.hidden_size, task) # ModuleDict requires keys to be strings self.output_heads[str(task.task_id)] = decoder @staticmethod def _create_output_head(encoder_hidden_size: int, task): if task.task_type == "seq_classification": return SequenceClassificationHead(encoder_hidden_size, task.num_labels) else: raise NotImplementedError() ``` **2. 
Define the metrics** ``` from transformers import EvalPrediction import numpy as np from datasets import load_metric import evaluate accuracy_metric = evaluate.load("accuracy") f1_metric = evaluate.load("f1") precision_metric = evaluate.load("precision") recall_metric = evaluate.load("recall") def compute_metrics(eval_preds: EvalPrediction): preds_dim = (eval_preds.predictions[0] if isinstance(eval_preds.predictions, tuple) else eval_preds.predictions).ndim if preds_dim == 2: # Sentiment analysis average="binary" elif preds_dim == 3: # Sequence classification average="macro" else: raise NotImplementedError() logits, labels = eval_preds predictions = np.argmax(logits, axis=-1) accuracy = accuracy_metric.compute(predictions=predictions, references=labels)["accuracy"] precision = precision_metric.compute(predictions=predictions, references=labels, average=average)["precision"] recall = recall_metric.compute(predictions=predictions, references=labels, average=average)["recall"] f1 = f1_metric.compute(predictions=predictions, references=labels, average=average)["f1"] return {"accuracy": accuracy, "precision": precision, "recall": recall, "f1": f1} ``` **3. Fine-tuning pre-trained model `vinai/phobert-base-v2` which based on `RoBERTa`** ``` from transformers import DataCollatorWithPadding, IntervalStrategy from transformers.trainer_utils import get_last_checkpoint import os import random transformers.logging.set_verbosity_info() set_seed(42) model_args = ModelArguments(encoder_name_or_path="vinai/phobert-base") training_args = TrainingArguments( do_train=True, do_eval=True, output_dir="./mtl_zsl_sa_model", evaluation_strategy = IntervalStrategy.STEPS, per_device_train_batch_size=32, per_device_eval_batch_size=32, eval_steps = 500, save_steps = 2000, logging_steps = 500, learning_rate=5e-5, label_smoothing_factor=0.1, # fp16=True, num_train_epochs=3, weight_decay=0.01, save_strategy=IntervalStrategy.STEPS, load_best_model_at_end = True, metric_for_best_model = 'f1', optim="adamw_torch", resume_from_checkpoint=True, remove_unused_columns=False ) data_args = DataTrainingArguments(max_seq_length=128, max_train_samples=100) tokenizer = AutoTokenizer.from_pretrained( model_args.encoder_name_or_path, cache_dir=model_args.cache_dir, use_fast=model_args.use_fast_tokenizer, revision=model_args.model_revision, use_auth_token=True if model_args.use_auth_token else None, ) tasks, raw_datasets = load_datasets(tokenizer, data_args, training_args) model = MultiTaskModel(model_args.encoder_name_or_path, tasks) train_dataset = raw_datasets["train"] eval_datasets = raw_datasets["validation"] data_collator = DataCollatorWithPadding( tokenizer, pad_to_multiple_of=8 ) # Initialize our Trainer trainer = Trainer( model=model, args=training_args, train_dataset=train_dataset compute_metrics=compute_metrics, tokenizer=tokenizer, data_collator=data_collator, ) trainer.train() ``` This is what my merged tokenized datasets look like ``` DatasetDict({ train: Dataset({ features: ['input_ids', 'token_type_ids', 'attention_mask', 'task_ids'], num_rows: 29250 }) validation: [Dataset({ features: ['input_ids', 'token_type_ids', 'attention_mask', 'task_ids'], num_rows: 1510 }), Dataset({ features: ['input_ids', 'token_type_ids', 'attention_mask', 'task_ids'], num_rows: 2262 })] }) ``` This is my loaded model: ``` MultiTaskModel( (encoder): RobertaForSequenceClassification( (roberta): RobertaModel( (embeddings): RobertaEmbeddings( (word_embeddings): Embedding(64001, 768, padding_idx=1) (position_embeddings): Embedding(258, 768, 
padding_idx=1) (token_type_embeddings): Embedding(1, 768) (LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) (encoder): RobertaEncoder( (layer): ModuleList( (0-11): 12 x RobertaLayer( (attention): RobertaAttention( (self): RobertaSelfAttention( (query): Linear(in_features=768, out_features=768, bias=True) (key): Linear(in_features=768, out_features=768, bias=True) (value): Linear(in_features=768, out_features=768, bias=True) (dropout): Dropout(p=0.1, inplace=False) ) (output): RobertaSelfOutput( (dense): Linear(in_features=768, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) (intermediate): RobertaIntermediate( (dense): Linear(in_features=768, out_features=3072, bias=True) (intermediate_act_fn): GELUActivation() ) (output): RobertaOutput( (dense): Linear(in_features=3072, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) ) ) ) (classifier): RobertaClassificationHead( (dense): Linear(in_features=768, out_features=768, bias=True) (dropout): Dropout(p=0.1, inplace=False) (out_proj): Linear(in_features=768, out_features=2, bias=True) ) ) (output_heads): ModuleDict( (0): SequenceClassificationHead( (dropout): Dropout(p=0.1, inplace=False) (classifier): Linear(in_features=768, out_features=2, bias=True) ) (1): SequenceClassificationHead( (dropout): Dropout(p=0.1, inplace=False) (classifier): Linear(in_features=768, out_features=3, bias=True) ) ) ) ``` And I run into the following error: ``` [/usr/local/lib/python3.10/dist-packages/transformers/trainer.py](https://localhost:8080/#) in train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs) 1537 self._inner_training_loop, self._train_batch_size, args.auto_find_batch_size 1538 ) -> 1539 return inner_training_loop( 1540 args=args, 1541 resume_from_checkpoint=resume_from_checkpoint, [/usr/local/lib/python3.10/dist-packages/transformers/trainer.py](https://localhost:8080/#) in _inner_training_loop(self, batch_size, args, resume_from_checkpoint, trial, ignore_keys_for_eval) 1807 1808 with self.accelerator.accumulate(model): -> 1809 tr_loss_step = self.training_step(model, inputs) 1810 1811 if ( [/usr/local/lib/python3.10/dist-packages/transformers/trainer.py](https://localhost:8080/#) in training_step(self, model, inputs) 2652 2653 with self.compute_loss_context_manager(): -> 2654 loss = self.compute_loss(model, inputs) 2655 2656 if self.args.n_gpu > 1: [/usr/local/lib/python3.10/dist-packages/transformers/trainer.py](https://localhost:8080/#) in compute_loss(self, model, inputs, return_outputs) 2677 else: 2678 labels = None -> 2679 outputs = model(**inputs) 2680 # Save past state if it exists 2681 # TODO: this needs to be fixed and made cleaner later. [/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py](https://localhost:8080/#) in _call_impl(self, *args, **kwargs) 1499 or _global_backward_pre_hooks or _global_backward_hooks 1500 or _global_forward_hooks or _global_forward_pre_hooks): -> 1501 return forward_call(*args, **kwargs) 1502 # Do not call functions when jit is used 1503 full_backward_hooks, non_full_backward_hooks = [], [] TypeError: _forward_unimplemented() got an unexpected keyword argument 'input_ids' ``` ### Expected behavior The trainer run smoothly without errors
07-30-2023 18:26:17
07-30-2023 18:26:17
I have added a `forward` method to my `MultiTaskModel` and that error is gone, even though a new one appears.
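For reference, a minimal sketch of what such a `forward` method could look like, in the spirit of the linked article. It assumes the encoder is loaded with `AutoModel` (so it returns hidden states rather than classification logits); the routing by `task_ids`, the first-token pooling, and the loss averaging are illustrative assumptions, not the article's exact code:

```python
import torch

# Meant to be added to the MultiTaskModel class defined in the snippet above.
def forward(self, input_ids=None, attention_mask=None, token_type_ids=None,
            task_ids=None, labels=None, **kwargs):
    outputs = self.encoder(
        input_ids=input_ids,
        attention_mask=attention_mask,
        token_type_ids=token_type_ids,
    )
    sequence_output = outputs[0]           # (batch, seq_len, hidden_size)
    pooled_output = sequence_output[:, 0]  # first-token embedding as a simple pooled output

    logits_per_task, losses = [], []
    for task_id in task_ids.unique():
        rows = task_ids == task_id
        head = self.output_heads[str(int(task_id))]
        task_labels = labels[rows] if labels is not None else None
        logits, loss = head(sequence_output[rows], pooled_output[rows], labels=task_labels)
        logits_per_task.append(logits)
        if loss is not None:
            losses.append(loss)

    loss = torch.stack(losses).mean() if losses else None
    # Trainer expects the loss as the first element of the returned tuple
    return (loss, logits_per_task) if loss is not None else (logits_per_task,)
```

Note that when a batch mixes tasks with different label counts, the per-task logits cannot be concatenated into a single tensor, which is why they are returned as a list here.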
transformers
25,190
closed
[quantization.md] fix
Fix typos (use case is 2 words)
07-30-2023 16:27:22
07-30-2023 16:27:22
_The documentation is not available anymore as the PR was closed or merged._
transformers
25,189
closed
Can BlipForImageTextRetrieval be used to generate captions?
### System Info `transformers` version: 4.31.0.dev0 Platform: Linux-5.15.0-76-generic-x86_64-with-debian-bullseye-sid Python version: 3.7.15 Huggingface_hub version: 0.15.1 Safetensors version: 0.3.1 PyTorch version (GPU?): 1.13.1+cu117 (True) Tensorflow version (GPU?): 2.11.0 (False) Flax version (CPU?/GPU?/TPU?): not installed (NA) Jax version: not installed JaxLib version: not installed Using GPU in script?: no Using distributed or parallel set-up in script?: no ### Who can help? @younesbelkada @ArthurZucker @amyeroberts ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I am seeking a Blip model that can serve two purposes: predicting the similarity between an input image and text and generating a caption for an input image. I am aware that `BlipForImageTextRetrieval` is suitable for predicting the similarity between an image and text, while `BlipForConditionalGeneration` can generate captions for images. However, I was wondering whether either of these models can be employed to perform the alternate task as well. A bit more context: I have a fine-tuned `BlipForImageTextRetrieval` model that I would like to use for generating captions. ### Expected behavior Any guidance on obtaining a Blip model that can do both the tasks mentioned above would be extremely helpful. Thanks.
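For context, the standard captioning path with `BlipForConditionalGeneration` that the issue mentions looks like the snippet below (the checkpoint name and image URL are just the usual public examples); whether the fine-tuned retrieval weights can be reused for this head is the open question here:

```python
import requests
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# Unconditional captioning: only the image is passed to the processor
inputs = processor(images=image, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=20)
print(processor.decode(out[0], skip_special_tokens=True))
```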
07-30-2023 14:49:22
07-30-2023 14:49:22
Hey! Since you are not reporting a bug, could you open a discussion on [the forum](https://discuss.huggingface.co/) for this kind of questions?<|||||>Sure, apologies. <|||||>No worries! 🤗 Feel free to ping @younesbelkada there
transformers
25,188
open
Loosen output shape restrictions on GPT-style models
# What does this PR do? This PR loosens checks in the model classes of a couple of GPT-style models that enforce the output shape of the model to be identical to the input shape. This aligns the changed model classes to most other model classes which don't enforce the shapes to be identical. I might not be aware of some legitimate reasons why these restrictions are in place specifically for these models. Please let me know if there are any and I'll close this PR :) ## Motivation We're building a library on top of Transformers that leverages various model implementations. Among others, some features in our library will result in the batch size to change dynamically during one model forward pass. While implementing these features, we didn't find this to be an issue for most model classes as they don't require the input and output shapes to be identical. However, some GPT-style models do enforce this. To avoid copying and keeping in sync the full model classes, it would be super helpful to us if these restrictions could also be loosened for GPT models. ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. - text models: @ArthurZucker and @younesbelkada
07-30-2023 11:30:04
07-30-2023 11:30:04
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25188). All of your documentation changes will be reflected on that endpoint.<|||||>In principle, it should be OK to loosen this restriction as it doesn't introduce any breaking changes. However, it is making the code more likely to have a silent bug. Let's get a second opinion from @sgugger. With regard to the changes, I'd prefer that, instead of slicing with `input_shape[1:]`, we name each of the dimensions explicitly so it's clearer what `output_shape` is, e.g. `output_shape = (-1, sequence_length, hidden_size)`.
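A tiny illustration of that suggestion with dummy tensors (all names here are just for the example): naming the dimensions makes the intent explicit and leaves the batch dimension free to change during the forward pass, which is exactly what the PR needs.

```python
import torch

batch_size, sequence_length, hidden_size = 2, 5, 8
# Suppose the batch dimension grew inside the forward pass (here it doubled):
hidden_states = torch.randn(batch_size * 2, sequence_length, hidden_size)

# Instead of reusing the original input shape (which pins the old batch size),
# spell out the dimensions and let -1 absorb whatever the batch size now is.
output_shape = (-1, sequence_length, hidden_size)
print(hidden_states.view(output_shape).shape)  # torch.Size([4, 5, 8])
```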
transformers
25,187
closed
Loading LLaMA model does not use GPU memory neither offload folder
### System Info - `transformers` version: 4.32.0.dev0 - Platform: Linux-5.15.90.1-microsoft-standard-WSL2-x86_64-with-glibc2.35 - Python version: 3.10.6 - Huggingface_hub version: 0.16.4 - Safetensors version: 0.3.1 - Accelerate version: 0.21.0 - Accelerate config: not found - PyTorch version (GPU?): 2.0.1+cu117 (True) - Tensorflow version (GPU?): Tensorflow version (GPU?): 2.13.0 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes (**this is part of the issue**) - Using distributed or parallel set-up in script?: no ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ![2023-07-30 105328](https://github.com/huggingface/transformers/assets/31779190/11d18954-ad61-43e0-a12e-42b95ebd75b1) code used: ``` from pynvml import * nvmlInit() h = nvmlDeviceGetHandleByIndex(0) info = nvmlDeviceGetMemoryInfo(h) print(f'total : {info.total}') print(f'free : {info.free}') print(f'used : {info.used}') from transformers import LlamaForCausalLM, LlamaTokenizer model_id="./converted-llama-2-7b-chat" tokenizer = LlamaTokenizer.from_pretrained(model_id) model =LlamaForCausalLM.from_pretrained(model_id, device_map="auto", offload_folder="offload") ``` The memory tops and script finish with **killed** message without usage of offload folder and usage of 618 MiB of GPU RAM only ### Expected behavior - Usage of available GPU RAM - Usage of offload folder before finishing with **killed** message
07-30-2023 09:17:57
07-30-2023 09:17:57
cc @sgugger <|||||>Hi @not4fame , can you print the `device_map` of the model by printing `model.hf_device.map` ? What do you mean that it is not using the `offloaded folder` ? To me, if you were able to load your model without error, it means that you are indeed using disk offload as your GPU RAM + RAM < model size. As for the gpu, we need to leave some space to bring back the layers that were offloaded to the cpu to the gpu during inference. To make better use of your gpus, you should probably load the model in fp16. Check out this [colab](https://colab.research.google.com/drive/11HJsgGJl8eK57FEPVmHmfnxzbfod55yM?usp=sharing) where load the model in fp16. <|||||>Also, you might need to use `offload_state_dict=True` to avoid getting out of CPU RAM while loading your model.<|||||>Thank you @sgugger `offload_state_dict=True` flag was exactly what was missing. Now my script is properly using offload folder. ![2023-08-01 070505](https://github.com/huggingface/transformers/assets/31779190/cdad57f4-928b-403d-960d-b256c6b58e9b) thank you @SunMarc for your comments, I wasn't able to dump the device_map because the model didn't load, but when trying to access this configuration, after some initial research, I managed to find a way to have even more control when loading the model which resulted in better performance ``` from pynvml import * nvmlInit() h = nvmlDeviceGetHandleByIndex(0) info = nvmlDeviceGetMemoryInfo(h) print(f'total : {info.total}') print(f'free : {info.free}') print(f'used : {info.used}') from transformers import LlamaForCausalLM, LlamaTokenizer, AutoConfig, AutoModelForCausalLM import accelerate import json model_id="./converted-llama-2-7b-chat" config = AutoConfig.from_pretrained(model_id) with accelerate.init_empty_weights(): dummy_model = AutoModelForCausalLM.from_config(config) device_map = accelerate.infer_auto_device_map(dummy_model, max_memory={0: "4GiB", "cpu": "10GiB"}) tokenizer = LlamaTokenizer.from_pretrained(model_id) model =LlamaForCausalLM.from_pretrained( model_id, device_map=device_map, load_in_8bit=True, llm_int8_enable_fp32_cpu_offload=True, offload_folder="offload", offload_state_dict=True) ```
transformers
25,186
open
[DOCS] Add `NoRepeatNGramLogitsProcessor` Example for `LogitsProcessor` class
# What does this PR do? This PR adds an example to the docstring of `NoRepeatNGramLogitsProcessor` and edits the docstring description of the same. This is with reference to #24783 and is part of the #24575 Fixes # (issue) #24783 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). ## Who can review? @gante @sgugger ### Additional Notes: In comparison with [TFNoRepeatNGramLogitsProcessor](https://github.com/huggingface/transformers/blob/05cda5df3405e6a2ee4ecf8f7e1b2300ebda472e/src/transformers/generation/tf_logits_process.py#L388), I noticed that the required functions `_get_ngrams`, `_get_generated_ngrams`, and `_calc_banned_ngram_tokens` were outside the class. Is this expected?
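For readers following along, the behaviour the new docstring example documents is exposed through the `no_repeat_ngram_size` argument of `generate` (the checkpoint below is just a small public one, not necessarily the one used in the PR):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
inputs = tokenizer("A sequence: one, two, one, two, one,", return_tensors="pt")

# Plain greedy decoding happily repeats itself...
plain = model.generate(**inputs, max_new_tokens=20)
# ...while no_repeat_ngram_size=2 adds a NoRepeatNGramLogitsProcessor that bans
# any 2-gram from appearing twice in the generated sequence.
no_repeat = model.generate(**inputs, max_new_tokens=20, no_repeat_ngram_size=2)

print(tokenizer.decode(plain[0], skip_special_tokens=True))
print(tokenizer.decode(no_repeat[0], skip_special_tokens=True))
```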
07-29-2023 22:08:42
07-29-2023 22:08:42
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25186). All of your documentation changes will be reflected on that endpoint.<|||||>Hey @gante Thanks for your feedback. I've made the requested changes. Let me know what you think. <|||||>@gante I've made the recommended changes. Does this look better?
transformers
25,185
open
conversational + text-generation pipelines fail to read max_length from GenerationConfig
### System Info - `transformers` version: 4.32.0.dev0 - Platform: macOS-13.3.1-arm64-arm-64bit - Python version: 3.10.5 - Huggingface_hub version: 0.16.4 - Safetensors version: 0.3.1 - Accelerate version: not installed - Accelerate config: not found - PyTorch version (GPU?): 2.0.1 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: false - Using distributed or parallel set-up in script?: false ### Who can help? @Narsil @gante ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ~~~ from transformers import pipeline, Conversation, GenerationConfig pipe = pipeline("conversational", model="facebook/blenderbot_small-90M") conv = Conversation("Does money buy happiness?" * 80) gc = GenerationConfig(max_length=512) pipe(conv, generation_config=gc) ~~~ This trims the input and logs this message: `Conversation input is to long (401), trimming it to (128 - 10)` even though I asked for `max_length=512`, because it ignores the `GenerationConfig` and takes the model's default. ### Expected behavior I expect the pipeline code to consider the `GenerationConfig` max_length, but I see in the [code](https://github.com/huggingface/transformers/blob/main/src/transformers/pipelines/conversational.py#L269) that it just looks for the (deprecated) `max_length`. There is a similar issue with the `text-generation` pipeline. I'll be happy to open a PR and fix this. As I understand it, I just need to check the `generate_kwargs` for a `generation_config` and check whether it has a `max_length` and/or `max_new_tokens`.
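A rough sketch of the check proposed above (a standalone helper with hypothetical names, not the actual pipeline code): before trimming the conversation, the pipeline would resolve the effective length budget from a user-supplied `GenerationConfig` as well as from the legacy kwarg.

```python
from transformers import GenerationConfig

def resolve_max_length(generate_kwargs: dict, model_default: int) -> int:
    """Pick the max_length the user actually asked for, falling back to the model default."""
    generation_config = generate_kwargs.get("generation_config")
    if generation_config is not None and generation_config.max_length is not None:
        return generation_config.max_length
    return generate_kwargs.get("max_length") or model_default

# With the values from the report above:
gc = GenerationConfig(max_length=512)
print(resolve_max_length({"generation_config": gc}, model_default=128))  # 512
print(resolve_max_length({}, model_default=128))                         # 128
```

A real fix would also have to handle `max_new_tokens` and the fact that `GenerationConfig` carries its own defaults, which is part of why this ends up being a pipeline-level change.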
07-29-2023 17:36:54
07-29-2023 17:36:54
Hey @yonigottesman 👋 We are aware of this issue, where `GenerationConfig` is not being piped correctly at input verification time. This requires a pipeline-level change (and not simply a `Conversation`-level change), and we are working on it :)
transformers
25,184
closed
Add timeout parameter to load_image function
# What does this PR do? This PR adds a timeout parameter to all pipelines that can fetch images from remote URLs. Without a timeout, the request can hang indefinitely. Fixes #25168 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x ] Did you write any new necessary tests? ## Who can review? @amyeroberts
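Hypothetical usage once the parameter is exposed on the image pipelines (the task and URL below are placeholders): a hanging image host then fails fast with a `requests` timeout instead of blocking the call forever.

```python
from transformers import pipeline

classifier = pipeline("image-classification")
preds = classifier(
    "http://images.cocodataset.org/val2017/000000039769.jpg",
    timeout=5.0,  # seconds, forwarded down to load_image -> requests.get
)
print(preds)
```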
07-29-2023 15:27:11
07-29-2023 15:27:11
_The documentation is not available anymore as the PR was closed or merged._<|||||>@amyeroberts I've done the requested changes
transformers
25,183
open
audio pipeline utility ffmpeg_microphone_live doesn't work in Google Colab
### System Info Googel Colab 2023/07/21 / Chrome 115.0.5790.114 / macOS 13.5 Default MacBook microphone ### Who can help? @sanchit-gandhi ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction 1. make sure microphone works and access to microphone is enabled in Colab 2. follow https://huggingface.co/learn/audio-course/chapter7/voice-assistant ### Expected behavior Colab cell `launch_fn(debug=True)` should output scores while listening for the wake word as described in the tutorial. Instead, nothing happens. No error is shown and no audio is recorded. This makes the `transcribe` cell in the tutorial crash.
07-29-2023 14:02:10
07-29-2023 14:02:10
Hey @crcdng! Thanks for flagging this - could you share details of how you achieved step 1 (make sure microphone works and access to microphone is enabled in Colab)? Once I can reproduce I can take a look into how we can get this working!
transformers
25,182
open
Knowledge distillation tutorial initial commit
This is a documentation PR for task guide to show how to distil an image classification model into another, using `Trainer`. I also tried to explain my intuition about distillation process so let me know if there's anything else to clarify.
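Since the guide itself is not shown here, below is a minimal sketch of the response-based distillation setup with `Trainer` that such a tutorial typically builds on; the class name, the temperature/alpha hyperparameters, and the exact loss weighting are assumptions, not necessarily what the tutorial uses.

```python
import torch
import torch.nn.functional as F
from transformers import Trainer

class ImageDistilTrainer(Trainer):
    def __init__(self, teacher_model=None, temperature=2.0, alpha=0.5, **kwargs):
        super().__init__(**kwargs)
        # Freeze the teacher and keep it on the same device as the student
        self.teacher = teacher_model.to(self.args.device).eval()
        self.temperature = temperature
        self.alpha = alpha

    def compute_loss(self, model, inputs, return_outputs=False):
        outputs = model(**inputs)
        student_loss = outputs.loss  # regular cross-entropy against the hard labels
        with torch.no_grad():
            teacher_logits = self.teacher(**inputs).logits
        # KL divergence between the softened teacher and student distributions
        kd_loss = F.kl_div(
            F.log_softmax(outputs.logits / self.temperature, dim=-1),
            F.softmax(teacher_logits / self.temperature, dim=-1),
            reduction="batchmean",
        ) * (self.temperature**2)
        loss = self.alpha * student_loss + (1.0 - self.alpha) * kd_loss
        return (loss, outputs) if return_outputs else loss
```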
07-29-2023 09:09:53
07-29-2023 09:09:53
transformers
25,181
open
RagGenerator
### System Info import os from transformers import RagTokenizer, RagTokenForGeneration from datasets import load_dataset dataset_path = '/Volumes/WD_BLACK/Pycharm/dataset/downloads' # 修改为外部硬盘的路径 dataset = load_dataset(dataset_path, "psgs_w100.nq.compressed") model_name = 'facebook/rag-token-base' tokenizer = RagTokenizer.from_pretrained(model_name) model = RagTokenForGeneration.from_pretrained(model_name) # 定义查询 query = "What is the capital of France?" # 生成答案 inputs = tokenizer(query, return_tensors='pt') generated = model.generate(input_ids=inputs.input_ids, attention_mask=inputs.attention_mask, max_length=20) # 解码生成的答案 answer = tokenizer.decode(generated[0], skip_special_tokens=True) print("Answer:", answer) bug:Traceback (most recent call last): File "/Users/maskxman/PycharmProjects/AutoKG/AutoKG/RAG.py", line 18, in <module> generated = model.generate(input_ids=inputs.input_ids, attention_mask=inputs.attention_mask, max_length=20) File "/Users/maskxman/anaconda3/envs/camel/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context return func(*args, **kwargs) File "/Users/maskxman/anaconda3/envs/camel/lib/python3.10/site-packages/transformers/models/rag/modeling_rag.py", line 1491, in generate assert (context_input_ids.shape[0] % n_docs) == 0, ( AttributeError: 'NoneType' object has no attribute 'shape' Could you pls help me? Thank you !!!! ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction import os from transformers import RagTokenizer, RagTokenForGeneration from datasets import load_dataset dataset_path = '/Volumes/WD_BLACK/Pycharm/dataset/downloads' # 修改为外部硬盘的路径 dataset = load_dataset(dataset_path, "psgs_w100.nq.compressed") model_name = 'facebook/rag-token-base' tokenizer = RagTokenizer.from_pretrained(model_name) model = RagTokenForGeneration.from_pretrained(model_name) # 定义查询 query = "What is the capital of France?" # 生成答案 inputs = tokenizer(query, return_tensors='pt') generated = model.generate(input_ids=inputs.input_ids, attention_mask=inputs.attention_mask, max_length=20) # 解码生成的答案 answer = tokenizer.decode(generated[0], skip_special_tokens=True) print("Answer:", answer) ### Expected behavior /Users/maskxman/anaconda3/envs/camel/bin/python3.10 /Users/maskxman/PycharmProjects/AutoKG/AutoKG/RAG.py 2023-07-29 15:06:31.519470: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 AVX512F AVX512_VNNI FMA To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags. Resolving data files: 100%|██████████| 51/51 [00:00<00:00, 229763.16it/s] The tokenizer class you load from this checkpoint is not the same type as the class this function is called from. It may result in unexpected tokenization. The tokenizer class you load from this checkpoint is 'RagTokenizer'. The class this function is called from is 'DPRQuestionEncoderTokenizer'. The tokenizer class you load from this checkpoint is not the same type as the class this function is called from. It may result in unexpected tokenization. The tokenizer class you load from this checkpoint is 'RagTokenizer'. The class this function is called from is 'DPRQuestionEncoderTokenizerFast'. 
The tokenizer class you load from this checkpoint is not the same type as the class this function is called from. It may result in unexpected tokenization. The tokenizer class you load from this checkpoint is 'RagTokenizer'. The class this function is called from is 'BartTokenizer'. The tokenizer class you load from this checkpoint is not the same type as the class this function is called from. It may result in unexpected tokenization. The tokenizer class you load from this checkpoint is 'RagTokenizer'. The class this function is called from is 'BartTokenizerFast'. Some weights of the model checkpoint at facebook/rag-token-base were not used when initializing RagTokenForGeneration: ['rag.question_encoder.question_encoder.bert_model.pooler.dense.bias', 'rag.question_encoder.question_encoder.bert_model.pooler.dense.weight'] - This IS expected if you are initializing RagTokenForGeneration from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing RagTokenForGeneration from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). Traceback (most recent call last): File "/Users/maskxman/PycharmProjects/AutoKG/AutoKG/RAG.py", line 18, in <module> generated = model.generate(input_ids=inputs.input_ids, attention_mask=inputs.attention_mask, max_length=20) File "/Users/maskxman/anaconda3/envs/camel/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context return func(*args, **kwargs) File "/Users/maskxman/anaconda3/envs/camel/lib/python3.10/site-packages/transformers/models/rag/modeling_rag.py", line 1491, in generate assert (context_input_ids.shape[0] % n_docs) == 0, ( AttributeError: 'NoneType' object has no attribute 'shape' Process finished with exit code 1
07-29-2023 07:10:11
07-29-2023 07:10:11
Hi @MaskXman, thanks for raising this issue! So that we can best help you, could you please: * Make sure there is a minimal reproducible code snippet. We don't have access to the dataset used: `/Volumes/WD_BLACK/Pycharm/dataset/downloads'`. Either a dummy dataset can be created in the script or a public dataset on the hub used. * Format the code and errors so they're easier to read in markdown code formatting - between a pair of three backticks: ` ``` code goes here ``` ` * Provide information about the running environment: run `transformers-cli env` in the terminal and copy-paste the output Are you able to run the example [in the docs](https://huggingface.co/docs/transformers/v4.31.0/en/model_doc/rag#transformers.RagSequenceForGeneration.forward.example)? (This will help us pinpoint the issue)
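For reference, the docs example linked in the comment wires a `RagRetriever` into the model; the retriever is what produces the `context_input_ids` that are `None` in the traceback above. A condensed version (using the public `facebook/rag-token-nq` checkpoint and a dummy index to keep it small) looks like:

```python
from transformers import RagRetriever, RagTokenForGeneration, RagTokenizer

tokenizer = RagTokenizer.from_pretrained("facebook/rag-token-nq")
retriever = RagRetriever.from_pretrained(
    "facebook/rag-token-nq", index_name="exact", use_dummy_dataset=True
)
model = RagTokenForGeneration.from_pretrained("facebook/rag-token-nq", retriever=retriever)

inputs = tokenizer("What is the capital of France?", return_tensors="pt")
generated = model.generate(input_ids=inputs["input_ids"], max_length=20)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```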
transformers
25,180
open
Accelerator FSDP state does not reflect the arguments fsdp_config
### System Info - `transformers` version: 4.31.0 - Platform: Linux-4.18.0-477.15.1.el8_8.x86_64-x86_64-with-glibc2.28 - Python version: 3.9.17 - Huggingface_hub version: 0.16.4 - Safetensors version: 0.3.1 - Accelerate version: 0.21.0 - Accelerate config: not found - PyTorch version (GPU?): 2.0.1 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: yes, running in SLURM with multi-node and multi-GPU ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Using the official `run_clm.py` script with FSDP enabled: ``` python run_clm.py \ --model_name_or_path facebook/opt-350m \ --dataset_name wikitext \ --dataset_config_name wikitext-2-raw-v1 \ --per_device_train_batch_size 8 \ --per_device_eval_batch_size 8 \ --do_train \ --do_eval \ --output_dir output \ --fsdp "shard_grad_op auto_wrap" --fsdp_config fsdp_config.json ``` where `fsdp_config.json` looks like: ``` { "sharding_strategy": "shard_grad_op auto_wrap", "fsdp_transformer_layer_cls_to_wrap": "OPTDecoderLayer", "sync_module_states": true } ``` ### Expected behavior We expect to use the sharding strategy `shard_grad_op`, but the accelerator is not instantiated with the fsdp config in `create_accelerator_and_postprocess()`. As a result, if we print out `self.accelerator.state.fsdp_plugin.sharding_strategy` at the end of `__init__`, we get the default sharding strategy `full_shard`, even though `self.fsdp == shard_grad_op`. I did not set the sharding strategy using `accelerate config` since I'm experimenting with different strategy and I believe it would make sense to overwrite the default strategy with the input config. I'm not completely sure if this would be the correct fix, but I found the following to work with the intended behavior: ``` if FSDPOption.FULL_SHARD in args.fsdp: self.fsdp = ShardingStrategy.FULL_SHARD elif FSDPOption.SHARD_GRAD_OP in args.fsdp: self.fsdp = ShardingStrategy.SHARD_GRAD_OP elif FSDPOption.NO_SHARD in args.fsdp: self.fsdp = ShardingStrategy.NO_SHARD if self.is_fsdp_enabled: self.accelerator.state.fsdp_plugin.sharding_strategy=self.fsdp ``` where we update the sharding_strategy after determining the strategy used for fsdp from args in `__init__()`.
07-29-2023 02:41:34
07-29-2023 02:41:34
cc @pacman100
transformers
25,179
closed
Sudden random bug
### System Info Here is the bug ``` File "/home/suryahari/Vornoi/QA.py", line 5, in <module> model = AutoModelForQuestionAnswering.from_pretrained("deepset/roberta-base-squad2") ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/suryahari/miniconda3/envs/diffusers/lib/python3.11/site-packages/transformers/models/auto/auto_factory.py", line 493, in from_pretrained return model_class.from_pretrained( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/suryahari/miniconda3/envs/diffusers/lib/python3.11/site-packages/transformers/modeling_utils.py", line 2629, in from_pretrained state_dict = load_state_dict(resolved_archive_file) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/suryahari/miniconda3/envs/diffusers/lib/python3.11/site-packages/transformers/modeling_utils.py", line 447, in load_state_dict with safe_open(checkpoint_file, framework="pt") as f: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ OSError: No such device (os error 19) ``` - `transformers` version: 4.31.0 - Platform: Linux-5.15.0-69-generic-x86_64-with-glibc2.35 - Python version: 3.11.4 - Huggingface_hub version: 0.16.4 - Safetensors version: 0.3.1 - Accelerate version: not installed - Accelerate config: not found - PyTorch version (GPU?): 2.0.1 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes but can avoid - Using distributed or parallel set-up in script?: not really ### Who can help? @Narsil ? @younesbelkada @ArthurZucker @amyeroberts ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Create a new env and run the following code ``` # Load model directly from transformers import AutoTokenizer, AutoModelForQuestionAnswering tokenizer = AutoTokenizer.from_pretrained("deepset/roberta-base-squad2") model = AutoModelForQuestionAnswering.from_pretrained("deepset/roberta-base-squad2") ``` Also happened to me while running diffusers code, just posting QA code for now. ### Expected behavior should be able to load a model
07-28-2023 23:29:59
07-28-2023 23:29:59
I can't really reproduce this and have not seen this anywhere else. The OS error suggests that the interface is not available, meaning that most probably the path to your Hugging Face cache cannot be reached (not mounted, not right, etc.). A [simple reproducer](https://colab.research.google.com/drive/1S7CRnNIUPnmDWcTuFY4BI0H8ASu221-w?usp=sharing) is available here.<|||||>I've seen that happen when using network-mounted disks. If the network is flaky, the read might fail even though the rest went fine. The error should be transient though. Could that be it?<|||||>Not sure - the program fails even in a new env on my computer but works in Google Colab. @ArthurZucker the link you sent has permission issues.
transformers
25,178
open
BERT: TensorFlow Model Garden Conversion scripts
### Feature request Hi, after working some time with the [TensorFlow Model Garden Repository](https://github.com/tensorflow/models) and training BERT models, I found out the following things that could be changed in Transformers library: I added the Token Dropping BERT Conversion script a while ago, see #17142. Now I found out, that latest BERT models pretrained with Model Garden Repository repository can also be converted with this script. For this reason I would propose to rename the script `convert_bert_token_dropping_original_tf2_checkpoint_to_pytorch.py` just to `convert_bert_original_tf2_checkpoint_to_pytorch.py`. However, this script also exists, but it is no longer working, as this was deprecated in Model Garden Repository a while ago, I added this notice in #16171. I see now two possibilities to proceed with the different conversion scripts: * Rename the current `convert_bert_original_tf2_checkpoint_to_pytorch.py` to something like `convert_deprecated_bert_original_tf2_checkpoint_to_pytorch.py` so that this name is free for the "new" conversion script that supports Token Dropping BERT und latest BERT models from Model Garden Repository. * Delete the old script completely ### Motivation More recent BERT and Token Dropping BERT models can be pretrained with TensorFlow Model Garden repository. There should be one script that does these conversions, the old one that is only working with deprecated models from Model Garden repo should be renamed or deleted. ### Your contribution I can take care of renaming/deletion and extending the conversion script to have better documentation.
07-28-2023 23:08:17
07-28-2023 23:08:17
cc @Rocketknight1 <|||||>This isn't really my area either! A lot of this code goes back to the earliest code in `transformers` when it was a port of TF code for BERT to PyTorch. Pinging @LysandreJik - do you know if the code is intended to support ports from recent versions of the TF Model Garden?<|||||>As long as everything is correctly documented, I'm all for having up to date scripts that work with the most recent BERT releases.<|||||>In that case @stefan-it I think it's okay to delete the old script entirely and replace it with a modern one, since it's no longer usable and people who need it for historical purposes can always find it in past release branches.
transformers
25,177
open
TEAMS: Add TensorFlow 2 Model Garden Conversion Script
Hi, with this PR a pretrained TEAMS model with TensorFlow Models Garden can be converted to an ELECTRA compatible model. The TEAMS model was proposed in the "[Training ELECTRA Augmented with Multi-word Selection](https://aclanthology.org/2021.findings-acl.219.pdf) paper and accepted at ACL 2021: > A new text encoder pre-training method is presented that improves ELECTRA based on multi-task learning and develops two techniques to effectively combine all pre- training tasks: using attention-based networks for task-specific heads, and sharing bottom layers of the generator and the discriminator. The [TEAMS](https://github.com/tensorflow/models/tree/master/official/projects/teams) implementation can be found in the TensorFlow Models Garden repository. Unfortunately, the authors did not release any pretrained models. However, I pretrained a TEAMS model on [German Wikipedia](https://huggingface.co/gwlms/teams-base-dewiki-v1-generator) and release all checkpoints on the Hugging Face Model Hub. Additionally, the conversion script to integrate pretrained TEAMS into Transformers is included in this PR. Closes #16466. ### Implementation Details TEAMS use the same architecture as ELECTRA (just pretraining approach is different). ELECTRA in Transformers comes with two models: Generator and Discriminator. In contrast to ELECTRA, the TEAMS generator use shared layers with discriminator: ``` Our study confirms this observation and finds that sharing some transformer layers of the generator and discriminator and can further boost the model performance. More specifically, we design the generator to have the same “width” (i.e., hidden size, intermediate size and number of heads) as the discriminator and share the bottom half of all transformer layers between the generator and the discriminator. ``` More precisely, the sharing of layers can be seen in the reference implementation: https://github.com/tensorflow/models/blob/master/official/projects/teams/teams_task.py#L48 This shows, that the generator uses the first n layers from discriminator first (which is usually half size of specified total layers). <img width="543" alt="Bildschirmfoto 2023-07-29 um 00 36 22" src="https://github.com/huggingface/transformers/assets/20651387/4ba96b79-0afe-4bc5-905a-b1941a4670b0"> ### Retrieving TensorFlow 2 Checkpoints In order to test the conversion script, the original TensorFlow 2 checkpoints need to be downloaded from Model Hub: ```bash $ wget https://huggingface.co/gwlms/teams-base-dewiki-v1-generator/resolve/main/ckpt-1000000.data-00000-of-00001 $ wget https://huggingface.co/gwlms/teams-base-dewiki-v1-generator/resolve/main/ckpt-1000000.index ``` Additionally, to test the model locally, we need to download tokenizer: ```bash $ wget https://huggingface.co/gwlms/teams-base-dewiki-v1-generator/resolve/main/tokenizer_config.json $ wget https://huggingface.co/gwlms/teams-base-dewiki-v1-generator/resolve/main/vocab.txt ``` ### Converting TEAMS Generator After retrieving the original checkpoints, the generator configuration must be downloaded: ```bash $ mkdir generator && cd $_ $ wget https://huggingface.co/gwlms/teams-base-dewiki-v1-generator/resolve/main/config.json $ cd .. 
``` After that, the conversion script can be run to convert TEAMS (generator part) into an ELECTRA generator: ```bash $ python3 convert_teams_original_tf2_checkpoint_to_pytorch.py \ --tf_checkpoint_path ckpt-1000000 \ --config_file ./generator/config.json \ --pytorch_dump_path ./exported-generator \ --discriminator_or_generator generator $ cp tokenizer_config.json exported-generator $ cp vocab.txt exported-generator ``` The generator can be tested with the fill-mask pipeline to predict the masked word: ```python3 from transformers import pipeline predictor = pipeline("fill-mask", model="./exported-generator", tokenizer="./exported-generator") predictor("Die Hauptstadt von Finnland ist [MASK].") ``` The German example should predict the capital city of Finland, which is Helsinki: ```python [{'score': 0.971819281578064, 'token': 16014, 'token_str': 'Helsinki', 'sequence': 'Die Hauptstadt von Finnland ist Helsinki.'}, {'score': 0.006745012942701578, 'token': 12388, 'token_str': 'Stockholm', 'sequence': 'Die Hauptstadt von Finnland ist Stockholm.'}, {'score': 0.003258457174524665, 'token': 12227, 'token_str': 'Finnland', 'sequence': 'Die Hauptstadt von Finnland ist Finnland.'}, {'score': 0.0025941277854144573, 'token': 23596, 'token_str': 'Tallinn', 'sequence': 'Die Hauptstadt von Finnland ist Tallinn.'}, {'score': 0.0014661155873909593, 'token': 17408, 'token_str': 'Riga', 'sequence': 'Die Hauptstadt von Finnland ist Riga.'}] ``` ### Converting TEAMS Discriminator After retrieving the original checkpoints, the discriminator configuration must be downloaded: ```bash $ mkdir discriminator && cd $_ $ wget https://huggingface.co/gwlms/teams-base-dewiki-v1-discriminator/resolve/main/config.json $ cd .. ``` After that, the conversion script can be run to convert TEAMS (discriminator part) into an ELECTRA discriminator: ```bash $ python3 convert_teams_original_tf2_checkpoint_to_pytorch.py \ --tf_checkpoint_path ckpt-1000000 \ --config_file ./discriminator/config.json \ --pytorch_dump_path ./exported-discriminator \ --discriminator_or_generator discriminator ``` I ran experiments on downstream tasks (such as NER and text classification) and the results are superior to the compared BERT models (original BERT and Token Dropping BERT). Made with 🥨 and ❤️.
07-28-2023 22:29:24
07-28-2023 22:29:24
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25177). All of your documentation changes will be reflected on that endpoint.<|||||>cc @Rocketknight1
transformers
25,176
open
Llama Tokenizer Unexpectedly Producing Unknown Token
### System Info - `transformers` version: 4.29.2 - Platform: Linux-5.19.0-1027-aws-x86_64-with-glibc2.31 - Python version: 3.10.8 - Huggingface_hub version: 0.14.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.1+cu118 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) ### Who can help? @ArthurZucker @younesbelkada I am trying to use special tokens with the LlamaTokenizer in Transformers 4.31.0, and with certain input configurations the tokenizer returns a token id of 0, corresponding to the unknown token. For example, I have added the special token "<REPR_END>", and if I pass it through the tokenizer I get [1, 32003], which is good. Additionally, if I pass the word "inform" through the tokenizer, I get [1, 1871], which is also good. However, if I pass "<REPR_END>inform" through the tokenizer, I get [1, 32003, 0], which does not make sense. If I try this exact same input in Transformers 4.29.2, I get [1, 32003, 1871], which is correct. ### Information - [ ] The official example scripts - [x] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [x] My own task or dataset (give details below) ### Reproduction ```python from transformers import AutoTokenizer from transformers.models.llama.tokenization_llama import LlamaTokenizer tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-13b-hf", use_auth_token=...) tokenizer.pad_token = tokenizer.eos_token tokenizer.add_tokens(['<TARGET_BEGIN>', '<TARGET_END>', '<REPR_BEGIN>', '<REPR_END>'], special_tokens=True) print(tokenizer("<REPR_END>inform")) ``` ### Expected behavior I expect to get the output [1, 32003, 1871], but I do not. Instead, I get [1, 32003, 0].
07-28-2023 20:19:14
07-28-2023 20:19:14
Hey! 👋🏻 Thanks for providing a reproduction script. I suspect that you do not have `tokenizers` installed, since when I use `use_fast = True` (which is the default if you have tokenizers) the issue is not present. Now, this behaviour is expected: - Quick fix, use `legacy=True` when initialising the tokenizer: `tokenizer = LlamaTokenizer.from_pretrained("meta-llama/Llama-2-13b-hf", legacy = True)` - Other quick fix, use the fast tokenizer (`pip install tokenizers`) This is a very nice catch otherwise! The issue is that [`in`, `form`] should be the tokenization of `inform`, but when we use the hack around sentencepiece, we actually just output [`inform`], which is not recognised as a token. Also, if you pass `"<REPR_END> inform"`, the extra space is automatically stripped by default. This is also going to be fixed. cc @Narsil I think I'll implement handling the `add_dummy_prefix = False` parameter. As pointed out somewhere else, our decoding function is also broken for Llama (it adds extra spaces).
transformers
25,175
closed
Pix2Struct -- mismatched output of cross attention weights
### System Info Hi huggingface team! The output of cross attention weights is mismatched as shown in [https://github.com/huggingface/transformers/blob/05cda5df3405e6a2ee4ecf8f7e1b2300ebda472e/src/transformers/models/pix2struct/modeling_pix2struct.py#L1551C22-L1551C22](https://github.com/huggingface/transformers/blob/05cda5df3405e6a2ee4ecf8f7e1b2300ebda472e/src/transformers/models/pix2struct/modeling_pix2struct.py#L1551C22-L1551C22). In the code: `all_cross_attentions = all_cross_attentions + (layer_outputs[3],)` where `layer_outputs[3]` is still the self attention weights, the REAL cross attention weights should be `layer_outputs[5]`. Please correct me if I made some mistakes. Looking forward to the updated version. Thank you! @amyeroberts @ArthurZucker @younesbelkada ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Hightlight of the training code: ``` model = Pix2StructForConditionalGeneration.from_pretrained('google/pix2struct-docvqa-base') outputs = model.forward(**inputs, labels=labels, output_attentions=True) ``` Turn on the attention output button by `output_attentions=True`, and then get the cross attention weights by `outputs.cross_attentions` where the bug exists. ### Expected behavior Change the index from `3` to `5` for selecting the correct cross attention weights, then everything's done hopefully.
07-28-2023 20:11:25
07-28-2023 20:11:25
Nice catch @leitro ! I can confirm this is correct, just made https://github.com/huggingface/transformers/pull/25200 to fix the issue on the main branch.<|||||>Cheers!!
transformers
25,174
closed
🚨🚨🚨 Fix rescale ViVit Efficientnet
# What does this PR do? Fixes the rescaling logic for both EfficientNet and ViVit. EfficientNet: the values were being rescaled to [-0.5, 0.5]. ViVit: values were being rescaled to [-7.689350249903882e-06, 0.9999923106497501], as `scale` was being treated as having a value of 255 in the `rescale` method, rather than 1/255. **This is a breaking change** and will affect the model outputs for both these models. However, it is a bug fix and should improve model predictions. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests?
07-28-2023 15:50:07
07-28-2023 15:50:07
_The documentation is not available anymore as the PR was closed or merged._<|||||>If I'm not mistaken this PR is problematic as it breaks the `rescale` function. If I rebase my PR https://github.com/huggingface/transformers/pull/24796 to `main` the torchvision transforms vs transformers image transforms equivalency test fails: `tests/models/idefics/test_image_processing_idefics.py::IdeficsImageProcessingTest::test_torchvision_numpy_transforms_equivalency`
transformers
25,173
closed
Musicgen: CFG is manually added
# What does this PR do? This PR exists to keep `musicgen`'s current functionalities considering the changes in #24654. In a nutshell, the #24654 has a more flexible version of CFG (allows negative prompting, is compatible with existing generation methods, doesn't need to expand the batch size by 2 before the fwd pass, lower memory requirements), but would mean an execution time regression on `musicgen` (because it needs 2x forward passes).
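For context, the guidance step both implementations share boils down to one line over the conditional and unconditional scores (the tensors below are dummies and the scale value is arbitrary):

```python
import torch

guidance_scale = 3.0
cond_logits = torch.randn(1, 2048)    # scores from the prompted forward pass
uncond_logits = torch.randn(1, 2048)  # scores from the unconditional / negative-prompt pass

# Classifier-free guidance: push the distribution away from the unconditional
# scores and toward the conditional ones.
guided = uncond_logits + guidance_scale * (cond_logits - uncond_logits)
```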
07-28-2023 15:22:19
07-28-2023 15:22:19
@sanchit-gandhi there is a potential change due to order of operations, depending on the processors commonly used. Would you be able to confirm whether it would be okay like this?<|||||>_The documentation is not available anymore as the PR was closed or merged._
transformers
25,172
closed
Add `token` arugment in example scripts
# What does this PR do? If the change is good, I will apply the same to other files. Let me know if you have opinion on the `False` vs `None` thing here.
07-28-2023 15:21:36
07-28-2023 15:21:36
_The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger Ready to go 🚀 <|||||>Thank you for the review, changed it to ```python token: str = field( default=None, metadata={ "help": ( "The token to use as HTTP bearer authorization for remote files. If not specified, will use the token " "generated when running `huggingface-cli login` (stored in `~/.huggingface`)." ) }, ) use_auth_token: bool = field( default=None, metadata={ "help": "The `use_auth_token` argument is deprecated and will be removed in v4.34. Please use `token`." }, ) ```
transformers
25,171
closed
[`InstructBlip`] Fix instructblip slow test
# What does this PR do? Fixes the current failing daily CI test: https://github.com/huggingface/transformers/actions/runs/5675853423/job/15381807774 let's make the daily CI happy! Ran the test on the latest docker image and the test now pass with these values. cc @sgugger @ydshieh
07-28-2023 13:57:05
07-28-2023 13:57:05
_The documentation is not available anymore as the PR was closed or merged._<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25171). All of your documentation changes will be reflected on that endpoint.
transformers
25,170
closed
[`Mpt`] Fix mpt slow test
# What does this PR do? As per title, failure report here: https://github.com/huggingface/transformers/actions/runs/5675853423/job/15381788713 Probably an issue with libs I had when designing the tests (had torch +cu117 instead of cu118) - ran the tests on the latest docker image and they all pass cc @sgugger @ydshieh
07-28-2023 13:40:53
07-28-2023 13:40:53
_The documentation is not available anymore as the PR was closed or merged._
transformers
25,169
closed
[MusicGen] Fix integration tests
# What does this PR do? Fixes the integration tests for MusicGen: 1. Places all input tensors on the correct device 2. Updates expected values with those obtained on cuda 3. Fixes for fp16 generation cc @ydshieh
07-28-2023 12:09:32
07-28-2023 12:09:32
_The documentation is not available anymore as the PR was closed or merged._
transformers
25,168
closed
Add support for timeout parameter for load_image
### Feature request Add a parameter timeout to the `image_utils.py:load_image` function, which would enable setting the timeout for the requests call. This parameter should be plumbed in through all the ways to call that function (so add support for it in all the image related pipelines). Alternatively, you should add a `requests_params` parameter, which should be a dictionary, to enable passing any parameters to requests.get. ### Motivation When using requests, the default timeout is None, which means that the request will wait (hang) until the connection is closed. Some servers for whatever reason don't return anything, but also don't close the connection. It would be useful to be able to set a timeout for these cases. ### Your contribution I can contribute a PR for this.
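A minimal sketch of the proposed behaviour (a standalone helper, not the actual `image_utils.load_image` code): the `timeout` is simply forwarded to `requests.get`, so a server that never closes the connection raises instead of hanging.

```python
from typing import Optional

import requests
from PIL import Image

def load_image_from_url(url: str, timeout: Optional[float] = None) -> Image.Image:
    # timeout=None keeps today's behaviour (wait indefinitely); any float makes
    # requests raise requests.exceptions.Timeout after that many seconds.
    response = requests.get(url, stream=True, timeout=timeout)
    response.raise_for_status()
    return Image.open(response.raw)

image = load_image_from_url("http://images.cocodataset.org/val2017/000000039769.jpg", timeout=5.0)
print(image.size)
```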
07-28-2023 09:33:35
07-28-2023 09:33:35
cc @amyeroberts Sounds like a good idea, so if you want to open a PR, please go ahead!<|||||>Do you have any preferences, keeping it simple, add just the timeout parameter, or adding a more general `requests_params`?<|||||>I think keeping it simple is probably for the best.
transformers
25,167
closed
Update `use_auth_token` -> `token` in example scripts
# What does this PR do? Update example scripts to use `token`. We have `datasets!=2.5.0` in transformers, and I see `datasets=2.7.0` still only uses `use_auth_token`, so I don't touch the usage in `load_dataset`. Let me know if we should change to `token` + pin a higher minimum `datasets` version. The files under `examples/research_projects` are not touched.
07-28-2023 09:19:33
07-28-2023 09:19:33
`use_auth_token` has been deprecated in favor of `token` in the latest release of `datasets` 🙂. <|||||>> `use_auth_token` has been deprecated in favor of `token` in the latest release of `datasets` 🙂. Yes, I got to check it and update my comment :-) But thanks a lot for the information. It's very nice.<|||||>_The documentation is not available anymore as the PR was closed or merged._
transformers
25,166
closed
override .cuda() to check if model is already quantized
# What does this PR do? This PR is a quick fix by adding .cuda() to prevent device casting after 8-bit quantization. Same spirit as #20409. @younesbelkada @sgugger Would you please have a quick look . It fixes the following unexpected error: ### For reberta-large, output `nan` without raising an error ```python from transformers import AutoTokenizer, AutoModelForMaskedLM from transformers import AutoConfig from transformers import pipeline model_name = "roberta-large" # or any other models tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True) config = AutoConfig.from_pretrained( model_name, ) model = AutoModelForMaskedLM.from_pretrained( model_name, trust_remote_code=True, load_in_8bit=True, device_map="auto" ) model.cuda() unmasker = pipeline("fill-mask", model=model, tokenizer=tokenizer) print(unmasker("Hello I'm a <mask> model.")) >>> [{'score': nan, 'token': 3, 'token_str': '<unk>', 'sequence': "Hello I'm a model."}, {'score': nan, 'token': 4, 'token_str': '.', 'sequence': "Hello I'm a. model."}, {'score': nan, 'token': 1, 'token_str': '<pad>', 'sequence': "Hello I'm a model."}, {'score': nan, 'token': 0, 'token_str': '<s>', 'sequence': "Hello I'm a model."}, {'score': nan, 'token': 2, 'token_str': '</s>', 'sequence': "Hello I'm a model."}] ``` ### for mpt, RuntimeError as follows ```python from transformers import AutoTokenizer, AutoModelForCausalLM from transformers import AutoConfig model_name = "mosaicml/mpt-7b" tokenizer = AutoTokenizer.from_pretrained(model_name) config = AutoConfig.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained( model_name, load_in_4bit=True, device_map="auto" ) model.cuda() text = "Here is a recipe for vegan banana bread:\n" input_ids = tokenizer.encode(text, return_tensors="pt").to("cuda:0") output = model.generate(input_ids, max_length=100, do_sample=True) response = tokenizer.decode(output[0]) print(response) >>> output = torch.nn.functional.linear(A, F.dequantize_4bit(B, state).to(A.dtype).t(), bias) >>> RuntimeError: mat1 and mat2 shapes cannot be multiplied (10x4096 and 1x25165824) ```
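A rough sketch of the guard this PR describes (the real change lives on `PreTrainedModel`, and the attribute and error message below are assumptions): once a model has been loaded in 8-bit/4-bit, an explicit device cast is rejected instead of silently breaking the quantized weights.

```python
import torch.nn as nn

class QuantizationAwareModel(nn.Module):
    # in the real code this flag is set by from_pretrained(..., load_in_8bit=True)
    is_loaded_in_8bit = False

    def cuda(self, *args, **kwargs):
        if self.is_loaded_in_8bit:
            raise ValueError(
                "Calling `.cuda()` is not supported for 8-bit quantized models; "
                "the model is already dispatched to the correct devices."
            )
        return super().cuda(*args, **kwargs)
```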
07-28-2023 08:53:25
07-28-2023 08:53:25
_The documentation is not available anymore as the PR was closed or merged._
transformers
25,165
open
cached_file() got an unexpected keyword argument 'token'
### System Info transformers-4.32.0.dev0 ### Who can help? _No response_ ### Information - [ ] The official example scripts - [x] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [x] My own task or dataset (give details below) ### Reproduction `AutoConfig.from_pretrained(model, trust_remote_code=trust_remote_code)` Throws this error ### Expected behavior Not throw error
07-28-2023 08:49:59
07-28-2023 08:49:59
Hi, can you provide a full code snippet with which I can reproduce the error? <|||||>Hi I'm just loading the llama 2 model but from my local machine. I have an update however. I tested it on the latest pip install version and it works fine. So I think the code broke in between the pip version and the current status of the repo.<|||||>And it should be resolved now, but can you provide us your exact snippet of code yielding to the bug? cc @ydshieh who is working on the migration `use_auth_token` -> `token`<|||||>@nivibilla You can try a version after this commit on `main` 0c790ddbd1c91250b26bab4308acbf271df063a7 (which is #25146 being merged) Let me know 🙏
transformers
25,164
closed
Represent query_length in a different way to solve jit issue
Hi @ArthurZucker @younesbelkada @sgugger Thanks for contributing to the MPT model. I found a jit issue with this model. ```python from transformers import AutoModelForCausalLM, AutoConfig from optimum.intel.generation.modeling import jit_trace model = AutoModelForCausalLM.from_pretrained("mosaicml/mpt-7b") jit_model = jit_trace(model=model, task="text-generation", use_cache=True) ``` When I tried to trace the MPT model, I got this error ![image](https://github.com/huggingface/transformers/assets/107918818/67244f76-625a-4080-aa54-34b8b032545a) This is because float values like seq_length and query_length are detected as tensors in trace mode. When we set `query_length = seq_length` and then `seq_length += past_key_value[0].shape[2]`, the `query_length` is changed too, which is unexpected. ![MicrosoftTeams-image (1)](https://github.com/huggingface/transformers/assets/107918818/8a6cff61-3a07-4f1a-96d6-d17a7efe9292) So I use a cleaner way to set `query_length`, which also avoids the jit issue: `query_length = seq_length if past_key_value is None else seq_length + past_key_value[0].shape[2]` Would you please help me review it? Thanks!
07-28-2023 08:28:16
07-28-2023 08:28:16
_The documentation is not available anymore as the PR was closed or merged._
transformers
25,163
open
torch.jit._trace.TracingCheckError: Tracing failed sanity checks! ERROR: Graphs differed across invocations!
### System Info - `transformers` version: 4.31.0 - Platform: Linux-4.18.0-2.4.3.3.kwai.x86_64-x86_64-with-glibc2.17 - Python version: 3.11.4 - Huggingface_hub version: 0.16.4 - Safetensors version: 0.3.1 - Accelerate version: not installed - Accelerate config: not found - PyTorch version (GPU?): 2.0.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ```python import os import torch from transformers import SwinForImageClassification, TrainingArguments, Trainer label2id ={'bad': 1, 'good': 0} id2label = {1:'bad', 0:'good'} model_name = 'microsoft/swin-base-patch4-window12-384-in22k' model = SwinForImageClassification.from_pretrained( model_name, label2id=label2id, id2label=id2label, ignore_mismatched_sizes=True, torchscript=True ) ''' ckpt = "./pytorch_model_swin.bin" checkpoint = torch.load(ckpt) model.load_state_dict(checkpoint) # my weight can be successfully loaded into the model ''' t = torch.randn(1,3,384,384) t_model = torch.jit.trace(model, t) # error occurred here ``` **Log Message:** ``` --------------------------------------------------------------------------- TracingCheckError Traceback (most recent call last) Cell In[24], line 4 2 out = model(t) 3 print(out[0].shape) ----> 4 t_model = torch.jit.trace(model, [t]) 5 print("ok") File /home/web_server/anaconda3/envs/ram/lib/python3.11/site-packages/torch/jit/_trace.py:794, in trace(func, example_inputs, optimize, check_trace, check_inputs, check_tolerance, strict, _force_outplace, _module_class, _compilation_unit, example_kwarg_inputs, _store_inputs) 792 else: 793 raise RuntimeError("example_kwarg_inputs should be a dict") --> 794 return trace_module( 795 func, 796 {"forward": example_inputs}, 797 None, 798 check_trace, 799 wrap_check_inputs(check_inputs), 800 check_tolerance, 801 strict, 802 _force_outplace, 803 _module_class, 804 example_inputs_is_kwarg=isinstance(example_kwarg_inputs, dict), 805 _store_inputs=_store_inputs 806 ) 807 if ( 808 hasattr(func, "__self__") 809 and isinstance(func.__self__, torch.nn.Module) 810 and func.__name__ == "forward" 811 ): 812 if example_inputs is None: File /home/web_server/anaconda3/envs/ram/lib/python3.11/site-packages/torch/jit/_trace.py:1084, in trace_module(mod, inputs, optimize, check_trace, check_inputs, check_tolerance, strict, _force_outplace, _module_class, _compilation_unit, example_inputs_is_kwarg, _store_inputs) 1072 _check_trace( 1073 check_inputs, 1074 func, (...) 
1081 example_inputs_is_kwarg=example_inputs_is_kwarg, 1082 ) 1083 else: -> 1084 _check_trace( 1085 [inputs], 1086 func, 1087 check_trace_method, 1088 check_tolerance, 1089 strict, 1090 _force_outplace, 1091 True, 1092 _module_class, 1093 example_inputs_is_kwarg=example_inputs_is_kwarg, 1094 ) 1095 finally: 1096 torch.jit._trace._trace_module_map = old_module_map File /home/web_server/anaconda3/envs/ram/lib/python3.11/site-packages/torch/utils/_contextlib.py:115, in context_decorator.<locals>.decorate_context(*args, **kwargs) 112 @functools.wraps(func) 113 def decorate_context(*args, **kwargs): 114 with ctx_factory(): --> 115 return func(*args, **kwargs) File /home/web_server/anaconda3/envs/ram/lib/python3.11/site-packages/torch/jit/_trace.py:562, in _check_trace(check_inputs, func, traced_func, check_tolerance, strict, force_outplace, is_trace_module, _module_class, example_inputs_is_kwarg) 560 diag_info = graph_diagnostic_info() 561 if any(info is not None for info in diag_info): --> 562 raise TracingCheckError(*diag_info) TracingCheckError: Tracing failed sanity checks! ERROR: Graphs differed across invocations! Graph diff: graph(%self.1 : __torch__.transformers.models.swin.modeling_swin.SwinForImageClassification, %pixel_values : Tensor): %classifier : __torch__.torch.nn.modules.linear.Linear = prim::GetAttr[name="classifier"](%self.1) %swin : __torch__.transformers.models.swin.modeling_swin.SwinModel = prim::GetAttr[name="swin"](%self.1) ... ``` ### Expected behavior Hello, an error occured in the following code when I was using torch.jit.trace to transfer the Swin Transformer model to TorchScript. What should I do to fix it?
07-28-2023 08:25:14
07-28-2023 08:25:14
Hi it seems you code snippet is somehow different than what has been shown in the log where it has ```Cell In[24], line 4 2 out = model(t) 3 print(out[0].shape) ----> 4 t_model = torch.jit.trace(model, [t]) 5 print("ok") ``` Also the code snippet won't work for us as we don't have ckpt = "./pytorch_model_swin.bin" and `t_model` is defined. Could you update the code snippet so we can reproduce the error directly? Thanks in advance! <|||||>Thanks for reply! The code and log message are update as follows. I didn't paste the log message after the 'graph diff' because it's very long. **Code** ```python import os os.environ['CUDA_VISIBLE_DEVICES'] = '1' import torch from transformers import SwinForImageClassification, TrainingArguments, Trainer label2id ={'bad': 1, 'good': 0} id2label = {1:'bad', 0:'good'} model_name = 'microsoft/swin-base-patch4-window12-384-in22k' model = SwinForImageClassification.from_pretrained( model_name, label2id=label2id, id2label=id2label, ignore_mismatched_sizes=True, # provide this in case you're planning to fine-tune an already fine-tuned checkpoint torchscript=True ) dummy_input = torch.randn(1,3,384,384) traced_model = torch.jit.trace(model, dummy_input) print("ok") ``` **Log Message** ```python '(MaxRetryError("HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /microsoft/swin-base-patch4-window12-384-in22k/resolve/main/config.json (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x7f117efe3690>, 'Connection to huggingface.co timed out. (connect timeout=10)'))"), '(Request ID: 46579f44-3f58-4627-8466-22306063fb52)')' thrown while requesting HEAD https://huggingface.co/microsoft/swin-base-patch4-window12-384-in22k/resolve/main/config.json Some weights of SwinForImageClassification were not initialized from the model checkpoint at microsoft/swin-base-patch4-window12-384-in22k and are newly initialized because the shapes did not match: - classifier.weight: found shape torch.Size([21841, 1024]) in the checkpoint and torch.Size([2, 1024]) in the model instantiated - classifier.bias: found shape torch.Size([21841]) in the checkpoint and torch.Size([2]) in the model instantiated You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. --------------------------------------------------------------------------- TracingCheckError Traceback (most recent call last) Cell In[2], line 18 9 model = SwinForImageClassification.from_pretrained( 10 model_name, 11 label2id=label2id, (...) 
14 torchscript=True 15 ) 17 dummy_input = torch.randn(1,3,384,384) ---> 18 traced_model = torch.jit.trace(model, dummy_input) 19 print("ok") File /home/web_server/anaconda3/envs/ram/lib/python3.11/site-packages/torch/jit/_trace.py:794, in trace(func, example_inputs, optimize, check_trace, check_inputs, check_tolerance, strict, _force_outplace, _module_class, _compilation_unit, example_kwarg_inputs, _store_inputs) 792 else: 793 raise RuntimeError("example_kwarg_inputs should be a dict") --> 794 return trace_module( 795 func, 796 {"forward": example_inputs}, 797 None, 798 check_trace, 799 wrap_check_inputs(check_inputs), 800 check_tolerance, 801 strict, 802 _force_outplace, 803 _module_class, 804 example_inputs_is_kwarg=isinstance(example_kwarg_inputs, dict), 805 _store_inputs=_store_inputs 806 ) 807 if ( 808 hasattr(func, "__self__") 809 and isinstance(func.__self__, torch.nn.Module) 810 and func.__name__ == "forward" 811 ): 812 if example_inputs is None: File /home/web_server/anaconda3/envs/ram/lib/python3.11/site-packages/torch/jit/_trace.py:1084, in trace_module(mod, inputs, optimize, check_trace, check_inputs, check_tolerance, strict, _force_outplace, _module_class, _compilation_unit, example_inputs_is_kwarg, _store_inputs) 1072 _check_trace( 1073 check_inputs, 1074 func, (...) 1081 example_inputs_is_kwarg=example_inputs_is_kwarg, 1082 ) 1083 else: -> 1084 _check_trace( 1085 [inputs], 1086 func, 1087 check_trace_method, 1088 check_tolerance, 1089 strict, 1090 _force_outplace, 1091 True, 1092 _module_class, 1093 example_inputs_is_kwarg=example_inputs_is_kwarg, 1094 ) 1095 finally: 1096 torch.jit._trace._trace_module_map = old_module_map File /home/web_server/anaconda3/envs/ram/lib/python3.11/site-packages/torch/utils/_contextlib.py:115, in context_decorator.<locals>.decorate_context(*args, **kwargs) 112 @functools.wraps(func) 113 def decorate_context(*args, **kwargs): 114 with ctx_factory(): --> 115 return func(*args, **kwargs) File /home/web_server/anaconda3/envs/ram/lib/python3.11/site-packages/torch/jit/_trace.py:562, in _check_trace(check_inputs, func, traced_func, check_tolerance, strict, force_outplace, is_trace_module, _module_class, example_inputs_is_kwarg) 560 diag_info = graph_diagnostic_info() 561 if any(info is not None for info in diag_info): --> 562 raise TracingCheckError(*diag_info) TracingCheckError: Tracing failed sanity checks! ERROR: Graphs differed across invocations! Graph diff: ... ```<|||||>Thank you for updating, very nice 🤗 <|||||>Confirmed the reproduction (and yes the log after Graph diff is super super long 😅 <|||||>@fxmarty Could you help on this? You can check the following code snippet. Basically, it will pass or fail depending on different config values. There is some data flow (tensor bool values) issue in the modeling code, but I am really bad on identifying where is the root cause and how to fix things here. If possible, that would be great if you can share how you debug this kind of thing with us 🙏 Thank you in advance. 
```python import os os.environ['CUDA_VISIBLE_DEVICES'] = '1' import torch from transformers import SwinForImageClassification, TrainingArguments, Trainer, SwinConfig label2id ={'bad': 1, 'good': 0} id2label = {1:'bad', 0:'good'} # this fails USE_SMALL_CONFIG = False # this works # USE_SMALL_CONFIG = True model_name = 'microsoft/swin-base-patch4-window12-384-in22k' config = SwinConfig.from_pretrained(model_name) config.torchscript = True config.label2id=label2id config.id2label=id2label if USE_SMALL_CONFIG: config.image_size = 32 config.patch_size = 2 config.depths=[1, 2, 1] config.num_heads=[2, 2, 4] config.window_size=2 model = SwinForImageClassification(config) dummy_input = torch.randn(1,3, config.image_size, config.image_size) # make sure it can run in normal mode o = model(dummy_input) print("model forward ok") # trace it traced_model = torch.jit.trace(model, dummy_input) print("trace ok") ```
transformers
25,162
closed
torch.jit._trace.TracingCheckError: Tracing failed sanity checks! ERROR: Graphs differed across invocations!
### System Info - `transformers` version: 4.31.0 - Platform: Linux-4.18.0-2.4.3.3.kwai.x86_64-x86_64-with-glibc2.17 - Python version: 3.11.4 - Huggingface_hub version: 0.16.4 - Safetensors version: 0.3.1 - Accelerate version: not installed - Accelerate config: not found - PyTorch version (GPU?): 2.0.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ```python import os import torch from transformers import SwinForImageClassification, TrainingArguments, Trainer label2id ={'bad': 1, 'good': 0} id2label = {1:'bad', 0:'good'} model_name = 'microsoft/swin-base-patch4-window12-384-in22k' model = SwinForImageClassification.from_pretrained( model_name, label2id=label2id, id2label=id2label, ignore_mismatched_sizes=True, torchscript=True ) ''' ckpt = "./pytorch_model_swin.bin" checkpoint = torch.load(ckpt) model.load_state_dict(checkpoint) # my weight can be successfully loaded into the model ''' t = torch.randn(1,3,384,384) t_model = torch.jit.trace(model, t) # error occurred here ### Expected behavior Hello, an error occured when I was using torch.jit.trace to transfer the Swin Transformer model to TorchScript. What should I do to fix it? **Log Message:** ``` --------------------------------------------------------------------------- TracingCheckError Traceback (most recent call last) Cell In[24], line 4 2 out = model(t) 3 print(out[0].shape) ----> 4 t_model = torch.jit.trace(model, [t]) 5 print("ok") File /home/web_server/anaconda3/envs/ram/lib/python3.11/site-packages/torch/jit/_trace.py:794, in trace(func, example_inputs, optimize, check_trace, check_inputs, check_tolerance, strict, _force_outplace, _module_class, _compilation_unit, example_kwarg_inputs, _store_inputs) 792 else: 793 raise RuntimeError("example_kwarg_inputs should be a dict") --> 794 return trace_module( 795 func, 796 {"forward": example_inputs}, 797 None, 798 check_trace, 799 wrap_check_inputs(check_inputs), 800 check_tolerance, 801 strict, 802 _force_outplace, 803 _module_class, 804 example_inputs_is_kwarg=isinstance(example_kwarg_inputs, dict), 805 _store_inputs=_store_inputs 806 ) 807 if ( 808 hasattr(func, "__self__") 809 and isinstance(func.__self__, torch.nn.Module) 810 and func.__name__ == "forward" 811 ): 812 if example_inputs is None: File /home/web_server/anaconda3/envs/ram/lib/python3.11/site-packages/torch/jit/_trace.py:1084, in trace_module(mod, inputs, optimize, check_trace, check_inputs, check_tolerance, strict, _force_outplace, _module_class, _compilation_unit, example_inputs_is_kwarg, _store_inputs) 1072 _check_trace( 1073 check_inputs, 1074 func, (...) 
1081 example_inputs_is_kwarg=example_inputs_is_kwarg, 1082 ) 1083 else: -> 1084 _check_trace( 1085 [inputs], 1086 func, 1087 check_trace_method, 1088 check_tolerance, 1089 strict, 1090 _force_outplace, 1091 True, 1092 _module_class, 1093 example_inputs_is_kwarg=example_inputs_is_kwarg, 1094 ) 1095 finally: 1096 torch.jit._trace._trace_module_map = old_module_map File /home/web_server/anaconda3/envs/ram/lib/python3.11/site-packages/torch/utils/_contextlib.py:115, in context_decorator.<locals>.decorate_context(*args, **kwargs) 112 @functools.wraps(func) 113 def decorate_context(*args, **kwargs): 114 with ctx_factory(): --> 115 return func(*args, **kwargs) File /home/web_server/anaconda3/envs/ram/lib/python3.11/site-packages/torch/jit/_trace.py:562, in _check_trace(check_inputs, func, traced_func, check_tolerance, strict, force_outplace, is_trace_module, _module_class, example_inputs_is_kwarg) 560 diag_info = graph_diagnostic_info() 561 if any(info is not None for info in diag_info): --> 562 raise TracingCheckError(*diag_info) TracingCheckError: Tracing failed sanity checks! ERROR: Graphs differed across invocations! Graph diff: graph(%self.1 : __torch__.transformers.models.swin.modeling_swin.SwinForImageClassification, %pixel_values : Tensor): %classifier : __torch__.torch.nn.modules.linear.Linear = prim::GetAttr[name="classifier"](%self.1) %swin : __torch__.transformers.models.swin.modeling_swin.SwinModel = prim::GetAttr[name="swin"](%self.1) ...
07-28-2023 08:17:25
07-28-2023 08:17:25
transformers
25,161
open
Update configuration_glpn.py
# What does this PR do? Fixes # (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
07-28-2023 04:47:22
07-28-2023 04:47:22
C
transformers
25,160
open
"RuntimeError: expected scalar type Half but found Char" on LLaMa-2 () inference stage
### System Info Error when loading LLM with 8 bit quantization. **Versions:** tokenizers 0.13.3 transformers 4.31.0 **Error message:** ``` File "/opt/conda/lib/python3.10/site-packages/accelerate/hooks.py", line 165, in new_forward output = old_forward(*args, **kwargs) File "/opt/conda/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py", line 408, in forward hidden_states, self_attn_weights, present_key_value = self.self_attn( File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "/opt/conda/lib/python3.10/site-packages/accelerate/hooks.py", line 165, in new_forward output = old_forward(*args, **kwargs) File "/opt/conda/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py", line 295, in forward query_states = [F.linear(hidden_states, query_slices[i]) for i in range(self.pretraining_tp)] File "/opt/conda/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py", line 295, in <listcomp> query_states = [F.linear(hidden_states, query_slices[i]) for i in range(self.pretraining_tp)] RuntimeError: expected scalar type Half but found Char ``` ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction **To reproduce the issue:** ``` import torch from transformers import LlamaTokenizer, LlamaForCausalLM, GenerationConfig, LlamaConfig model_id="WizardLM/WizardLM-13B-V1.2" tokenizer = LlamaTokenizer.from_pretrained(model_id) model = LlamaForCausalLM.from_pretrained( model_id, load_in_8bit=True, torch_dtype=torch.float16, device_map="auto", ) model.config.pad_token_id = tokenizer.pad_token_id = 0 # unk model.config.bos_token_id = 1 model.config.eos_token_id = 2 model.eval() ``` **Inference:** ``` prompt_ = "What is the difference between fusion and fission?" prompts = f"""A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {prompt_} ASSISTANT:""" inputs = tokenizer(prompts, return_tensors="pt") device = "cuda" input_ids = inputs["input_ids"].to(device) max_new_tokens= 2048 with torch.no_grad(): generation_output = model.generate( input_ids=input_ids, return_dict_in_generate=True, output_scores=True, max_new_tokens=max_new_tokens ) ``` ### Expected behavior Reply to the prompt.
07-28-2023 04:19:57
07-28-2023 04:19:57
cc @ArthurZucker <|||||>This is a duplicate of #25144. Make sure to check the `pretraining_tp` value in the config.json
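For anyone hitting this before the duplicate is resolved, a sketch of the workaround implied above (illustrative; the checkpoint name is taken from the report): overriding `pretraining_tp=1` at load time skips the tensor-parallel slicing branch in the Llama attention/MLP, which is what mixes int8 weight slices with fp16 activations under 8-bit quantization.

```python
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer

model_id = "WizardLM/WizardLM-13B-V1.2"
tokenizer = LlamaTokenizer.from_pretrained(model_id)
model = LlamaForCausalLM.from_pretrained(
    model_id,
    load_in_8bit=True,
    torch_dtype=torch.float16,
    device_map="auto",
    pretraining_tp=1,  # overrides the value coming from config.json
)
```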
transformers
25,159
open
Add GeoLM
# What does this PR do? Add a new model called **GeoLM** into the Transformer library. GeoLM is a language model based on BERT that facilitates **geospatial understanding** in NL documents. It is pretrained on world-wide OpenStreetMap (OSM), WikiData and Wikipedia data, and can be adapted to various geospatial related downstream tasks such as **toponym recognition** and **toponym linking**. Paper not published yet. ## Model Weights: * Pretrained GeoLM (ready-to-use for zero-shot toponym linking): [zekun-li/geolm-base-cased](https://huggingface.co/zekun-li/geolm-base-cased) * Fine-tuned GeoLM for toponym recognition: [zekun-li/geolm-base-toponym-recognition](https://huggingface.co/zekun-li/geolm-base-toponym-recognition) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Open Source Status: - [x] The model implementation is available in this PR - [x] The model weights are available in HuggingFace model hub ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR: @sgugger , @ArthurZucker and @younesbelkada
07-28-2023 02:56:50
07-28-2023 02:56:50
Hi @zekun-li, thanks for opening this PR! The easiest and recommended way to make a model available in transformers is to add the modeling code directly on the hub: https://huggingface.co/docs/transformers/custom_models This means, once working, the model can be found and used immediately without having to go through the PR process. We find this is a lot quicker as the bar for adding code into the library is high due to the maintenance cost of every new model, and so reviews take quite a while. Let us know if you have any questions about how to add a model using this process. Looking forward to seeing this model in action!<|||||>> Hi @zekun-li, thanks for opening this PR! > > The easiest and recommended way to make a model available in transformers is to add the modeling code directly on the hub: https://huggingface.co/docs/transformers/custom_models > > This means, once working, the model can be found and used immediately without having to go through the PR process. We find this is a lot quicker as the bar for adding code into the library is high due to the maintenance cost of every new model, and so reviews take quite a while. > > Let us know if you have any questions about how to add a model using this process. Looking forward to seeing this model in action! Hi @amyeroberts Thanks a lot for the suggestion! Although this model is built upon BERT, it has a customized embedding layer and the model input can be different from BERT. This model can take geocoordinates as additional inputs. So unlike the changes of using different values for `num_layers` or `num_hidden_units`, these changes require a different model structure, which is not supported in the existing models in Transformer. I wonder are these supported in the "Sharing custom models" approach? Thanks for your time! <|||||>@zekun-li Yes - you can add any model architecture directly onto the hub and share the model that way! In fact, it's an even more flexible way to define models as you don't have to be as strict about following certain library patterns.
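For reference, a minimal sketch of the custom-code-on-the-Hub workflow being suggested (class names, the embedding layer and the repo id are placeholders, not the actual GeoLM implementation):

```python
import torch.nn as nn
from transformers import PretrainedConfig, PreTrainedModel

class GeoLMSketchConfig(PretrainedConfig):
    model_type = "geolm-sketch"

    def __init__(self, vocab_size=30522, hidden_size=768, **kwargs):
        self.vocab_size = vocab_size
        self.hidden_size = hidden_size
        super().__init__(**kwargs)

class GeoLMSketchModel(PreTrainedModel):
    config_class = GeoLMSketchConfig

    def __init__(self, config):
        super().__init__(config)
        # a custom embedding layer (e.g. mixing token ids with geocoordinates) can live here
        self.embeddings = nn.Embedding(config.vocab_size, config.hidden_size)

    def forward(self, input_ids):
        return self.embeddings(input_ids)

# register the classes so AutoConfig / AutoModel pick them up with trust_remote_code=True
GeoLMSketchConfig.register_for_auto_class()
GeoLMSketchModel.register_for_auto_class("AutoModel")

# model = GeoLMSketchModel(GeoLMSketchConfig())
# model.push_to_hub("your-username/geolm-sketch")  # uploads config + weights (and the defining .py file)
```

The classes need to live in their own `.py` file (not a notebook cell) for the code to be copied to the Hub repo.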
transformers
25,158
open
Transformers not working with the new Jax 0.4.14 due to API deprecation
### System Info - `transformers` version: 4.31.0 - Platform: Linux-5.15.90.1-microsoft-standard-WSL2-x86_64-with-glibc2.35 - Python version: 3.10.6 - Huggingface_hub version: 0.14.1 - Safetensors version: 0.3.1 - Accelerate version: 0.21.0 - Accelerate config: not found - PyTorch version (GPU?): 2.0.0+cu117 (True) - Tensorflow version (GPU?): 2.11.0 (True) - Flax version (CPU?/GPU?/TPU?): 0.7.0 (gpu) - Jax version: 0.4.14 - JaxLib version: 0.4.14 - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help? @ArthurZucker @younesbelkada @sanchit-gandhi ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction For example, loading a Bart model from Flax using msgpack_restore (modeling_flax_bart.py) raises `AttributeError: module 'jax.numpy' has no attribute 'DeviceArray'` ### Expected behavior Load the state dict correctly
07-28-2023 02:43:51
07-28-2023 02:43:51
Hi @SystemPanic `transformers` currently only supports `jax<=0.4.13` and `jaxlib<=0.4.13`. You can see that in https://github.com/huggingface/transformers/blob/400e76ef11d94a12c255fe1a598966e1d6021511/setup.py#L127-L128
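Until that changes, a small guard along these lines makes the failure explicit (a sketch; the pinned versions come from the `setup.py` lines linked above, and `packaging` ships as a transformers dependency):

```python
import jax
from packaging import version

# transformers 4.31 pins jax/jaxlib to <= 0.4.13, the last releases where jnp.DeviceArray still exists
if version.parse(jax.__version__) > version.parse("0.4.13"):
    raise RuntimeError(
        f"Found jax {jax.__version__}; downgrade with `pip install \"jax==0.4.13\" \"jaxlib==0.4.13\"` "
        "until transformers supports the jax.Array-only API."
    )
```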
transformers
25,157
closed
Unexpected GPU requests during training
### System Info - `transformers` version: 4.29.2 - Platform: Linux-5.19.0-46-generic-x86_64-with-glibc2.31 - Python version: 3.11.4 - Huggingface_hub version: 0.15.1 - Safetensors version: not installed - PyTorch version (GPU?): 2.0.1 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No **When CUDA_VISIBLE_DEVICES=1 is specified, the program suddenly starts requesting memory on GPU:0 after executing thousands of steps.** I executed the command below on a two-card 3090 server; when I launched the program, the GPU:0 card was already occupied by another job. `CUDA_VISIBLE_DEVICES=1 nohup python train.py &` In the beginning everything seemed normal, since the run had already completed more than 10,000 steps ![image](https://github.com/huggingface/transformers/assets/88258534/90fa40f3-d34d-4f14-8a33-94614cbce155) However, very suddenly, the program started to allocate memory on GPU:0 during the eval stage (although it had gone through many eval stages before) ![image](https://github.com/huggingface/transformers/assets/88258534/79d16b30-92d7-4aac-bc22-54e9c450f65b) The error report shows that the failing entry point is model.generate ![image](https://github.com/huggingface/transformers/assets/88258534/901567df-5672-493b-9486-0b3077049e60) This is document-retrieval code running on t5-large; I'm not sure whether the rest of the code will help or not. ![image](https://github.com/huggingface/transformers/assets/88258534/2133aae7-d92a-474d-83ec-265845f503ca) ![image](https://github.com/huggingface/transformers/assets/88258534/4dbec245-d923-449e-b5e4-7ade02f54f11) ![image](https://github.com/huggingface/transformers/assets/88258534/7cf39de4-dd3d-4699-9e7b-0b57cea7c17a) ### Who can help? @gan @ArthurZucker @younesbelkada Since it's an NLP project and the error happens in the generation stage, I would appreciate your help. ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction It's a public project on github, [DSI-transformers](https://github.com/ArvinZhuang/DSI-transformers); just run ``CUDA_VISIBLE_DEVICES=1 nohup python train.py &`` to reproduce this problem. ### Expected behavior The program should correctly recognize that it may only use the GPU specified in the command.
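For reference, a tiny sanity check that can be dropped at the top of `train.py` (illustrative): with `CUDA_VISIBLE_DEVICES=1`, the process should see exactly one device, and `cuda:0` inside the process maps to physical GPU 1, so any later allocation on physical GPU 0 has to come from something that bypasses or resets that mask.

```python
import os
import torch

print(os.environ.get("CUDA_VISIBLE_DEVICES"))  # expected: "1"
print(torch.cuda.device_count())               # expected: 1 (only the masked-in card)
print(torch.cuda.current_device())             # expected: 0 -- which is physical GPU 1 under the mask
```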
07-28-2023 02:34:22
07-28-2023 02:34:22
solved
transformers
25,156
open
Mask2Former Model Doesn't Move to GPU
### System Info transformers version: 4.31.0 (same bug occurs until version 4.27.0) pytorch 2.0.1+cu118 (same bug occurs with cu117) python: python 3.10 systems: NVIDIA RTX 3090 CUDA 12.0 ### Who can help? @amyeroberts @sgugger ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [x] My own task or dataset (give details below) ### Reproduction First of all, my full error traceback is this: Traceback (most recent call last): File "C:\Users\labuser\hubmap\mask2former\train.py", line 428, in <module> model, history = run_training(model, optimizer, scheduler,train_dataloader = train_dataloader, val_dataloader= val_dataloader, File "C:\Users\labuser\hubmap\mask2former\train.py", line 255, in run_training train_loss = epoch_train(model, optimizer, scheduler, File "C:\Users\labuser\hubmap\mask2former\train.py", line 166, in epoch_train outputs = model( File "C:\ProgramData\Anaconda3\envs\hubmap\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "C:\ProgramData\Anaconda3\envs\hubmap\lib\site-packages\transformers\models\mask2former\modeling_mask2former.py", line 2496, in forward outputs = self.model( File "C:\ProgramData\Anaconda3\envs\hubmap\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "C:\ProgramData\Anaconda3\envs\hubmap\lib\site-packages\transformers\models\mask2former\modeling_mask2former.py", line 2271, in forward transformer_module_output = self.transformer_module( File "C:\ProgramData\Anaconda3\envs\hubmap\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "C:\ProgramData\Anaconda3\envs\hubmap\lib\site-packages\transformers\models\mask2former\modeling_mask2former.py", line 2066, in forward self.input_projections[i](multi_scale_features[i]).flatten(2) File "C:\ProgramData\Anaconda3\envs\hubmap\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "C:\ProgramData\Anaconda3\envs\hubmap\lib\site-packages\torch\nn\modules\conv.py", line 463, in forward return self._conv_forward(input, self.weight, self.bias) File "C:\ProgramData\Anaconda3\envs\hubmap\lib\site-packages\torch\nn\modules\conv.py", line 459, in _conv_forward return F.conv2d(input, weight, bias, self.stride, RuntimeError: Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same Steps to reproduce error: 1. Use any dataset of choice, doesn't matter since the inputs to the model is in cuda already, the model is the issue. (And I've made sure there is the line "model = model.to(device)", and there is only one model loaded. 2. Now write any training code (dummy code) and make sure to run below two lines when loading empty model for Mask2FormerForUniversalSegmentation: config = config = Mask2FormerConfig(feature_size=512, mask_feature_size=512) model = Mask2FormerForUniversalSegmentation(config) 3. Then run dummy training code and you get this error. I'm really not sure how to resolve this issue- I've moved my model to my device, and by doing nvidia-smi I can confirm that the inputs are being transferred over to my GPU memory, I just cannot understand why the model weights are not being transferred when literally writing the code "model = model.to(device)". 
This also only happens with transformers; other torch models work perfectly fine in the exact same environment, so I doubt it's a bug with torch. Thank you! Also, the issue is discussed, without any solutions, in this [forum link](https://discuss.huggingface.co/t/mask2former-cuda-training/47072) ### Expected behavior As discussed above, the expected behavior is that the model is also moved to cuda (GPU).
07-28-2023 00:57:33
07-28-2023 00:57:33
Hi @chokevin8 Thank you for reporting. Could you provide a self-contained code snippet so that the error can be reproduced directly? Also, you can enclose the error log in a block like the following \`\`\`bash error log ... \`\`\` to make it easier to read. Thank you in advance 🤗
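For illustration, a bare-bones, self-contained version of the steps described in the report (random inputs and an untrained model; treat it as a sketch rather than a confirmed reproduction):

```python
import torch
from transformers import Mask2FormerConfig, Mask2FormerForUniversalSegmentation

device = torch.device("cuda")
config = Mask2FormerConfig(feature_size=512, mask_feature_size=512)
model = Mask2FormerForUniversalSegmentation(config).to(device)
model.train()

pixel_values = torch.randn(2, 3, 384, 384, device=device)
outputs = model(pixel_values=pixel_values)  # the report says this raises the cuda-input / cpu-weight mismatch
print(outputs.class_queries_logits.shape)
```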
transformers
25,155
open
torch compile changes model output in half precision
### System Info - `transformers` version: 4.31.0 - Platform: Linux-5.15.0-1034-oracle-x86_64-with-glibc2.29 - Python version: 3.8.10 - Huggingface_hub version: 0.15.1 - Safetensors version: 0.3.1 - Accelerate version: 0.20.3 - Accelerate config: not found - PyTorch version (GPU?): 2.0.0+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: does not matter (provide GPU results, same on CPU) - Using distributed or parallel set-up in script?: No ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ``` import torch import transformers if __name__ == "__main__": device = torch.device('cuda') model = transformers.AutoModelForTokenClassification.from_pretrained( "Jean-Baptiste/roberta-large-ner-english").to(device) model.eval() a = torch.randint(100, 2000, (128, 256), device=device) with torch.no_grad(), torch.cuda.amp.autocast(): out_not_compiled = model(input_ids=a, attention_mask=torch.ones_like(a)).logits model = torch.compile(model) with torch.no_grad(), torch.cuda.amp.autocast(): out_compiled = model(input_ids=a, attention_mask=torch.ones_like(a)).logits print( torch.sum(torch.abs(out_compiled.to(torch.float32) - out_not_compiled)) / (128 * 256)) >> tensor(0.0120, device='cuda:0') # note that actual difference is > 410. ``` Autocast to both `float16` and `bfloat16` produces the same difference. (Commenting out model compilation results into the same output) ### Expected behavior Small difference in output vectors.
07-27-2023 21:58:52
07-27-2023 21:58:52
That seems like an issue for PyTorch more than Transformers :-) Also note that there is a special order for context manager autocast and compiled model to respect (can't remember right now) which also may be the cause.
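For what it's worth, one ordering to try (a sketch, not an authoritative recommendation): compile a function whose body contains the autocast region, so the cast is captured together with the model; `model` and `a` below refer to the snippet from the issue.

```python
import torch

def run(model, input_ids, attention_mask):
    # autocast lives inside the compiled region instead of wrapping the compiled call
    with torch.autocast(device_type="cuda", dtype=torch.float16):
        return model(input_ids=input_ids, attention_mask=attention_mask).logits

compiled_run = torch.compile(run)

with torch.no_grad():
    out_compiled = compiled_run(model, a, torch.ones_like(a))
```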
transformers
25,154
closed
`Pipeline.forward()` possibility to place `model_outputs` on GPU
### Feature request In `transformers.pipelines.base.py` (line 1035): `model_outputs = self._ensure_tensor_on_device(model_outputs, device=torch.device("cpu"))` Is it possible to add a new argument that decides whether `model_outputs` should be placed on `self.device` instead of `torch.device("cpu")`? ### Motivation The variable `model_outputs` is always placed on CPU, which can cause a slowdown if I perform additional operations on the `Pipeline.postprocess()` function. For example, if I were to pass `logits` to `model_outputs`, the whole tensor would be transferred from GPU to CPU. If I do this extensively, I will face a severe slowdown in my pipeline. ### Your contribution Right now, I have to override the method to remove that particular line, but I can submit a PR.
07-27-2023 20:39:29
07-27-2023 20:39:29
`pipeline` is only a wrapper around the model and preprocessing class for quick demos. To customize things more to your needs, you should use those classes independently as you need :-)<|||||>Thanks for the feedback @sgugger! I will keep customizing it then 😄
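For reference, a minimal sketch of what using the classes independently can look like while keeping everything on the GPU (`model_id` and `texts` below are placeholders):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "distilbert-base-uncased-finetuned-sst-2-english"  # placeholder checkpoint
texts = ["a great movie", "a terrible movie"]                 # placeholder inputs

device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id).to(device).eval()

inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt").to(device)
with torch.no_grad():
    logits = model(**inputs).logits  # stays on `device`, never copied back to CPU
probs = logits.softmax(dim=-1)       # post-processing also runs on the GPU
```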
transformers
25,153
closed
Add new model: GeoLM
# What does this PR do? Add a new model called **GeoLM** into the Transformer library. GeoLM is a language model based on BERT that facilitates **geospatial understanding** in NL documents. It is pretrained on world-wide OpenStreetMap (OSM), WikiData and Wikipedia data, and can be adapted to various geospatial related downstream tasks such as **toponym recognition** and **toponym linking**. ## Model Weights: * Pretrained GeoLM (ready-to-use for zero-shot toponym linking): [zekun-li/geolm-base-cased](https://huggingface.co/zekun-li/geolm-base-cased) * Fine-tuned GeoLM for toponym recognition: [zekun-li/geolm-base-toponym-recognition](https://huggingface.co/zekun-li/geolm-base-toponym-recognition) ## Open source status - [x] The model implementation is available in this PR - [x] The model weights are available in HuggingFace model hub ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR: @ArthurZucker and @younesbelkada
07-27-2023 19:09:54
07-27-2023 19:09:54
This PR contains redundant commits and failed test cases. I will fix them and create a new PR later.
transformers
25,152
open
Model is not compiled when using `torch_compile=True` on a machine with multiple GPUs
### System Info - `transformers` version: 4.31.0 - Platform: Linux-4.14.318-241.531.amzn2.x86_64-x86_64-with-glibc2.31 - Python version: 3.10.12 - Huggingface_hub version: 0.16.4 - Safetensors version: 0.3.1 - Accelerate version: 0.21.0 - Accelerate config: not found - PyTorch version (GPU?): 2.0.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: yes ### Who can help? @sgugger ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Run this code: ```python import torch import evaluate import numpy as np from datasets import load_dataset, DatasetDict from transformers import AutoModelForSequenceClassification, AutoTokenizer, Trainer, TrainingArguments def preprocess_function(examples): return tokenizer(examples["text"], truncation=True, padding=True, return_tensors='pt').to(device="cuda:0") def compute_metrics(eval_pred): predictions, labels = eval_pred predictions = np.argmax(predictions, axis=1) return metric.compute(predictions=predictions, references=labels, average="weighted") model_id = "bert-base-uncased" tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=True, model_max_length=512) dataset = load_dataset('banking77', split=['train[:2048]', 'test[:512]']) dataset = DatasetDict({'train': dataset[0], 'test': dataset[1]}) dataset = dataset.map(preprocess_function, batched=True) labels = dataset["train"].features["label"].names num_labels = len(labels) label2id, id2label = dict(), dict() for i, label in enumerate(labels): label2id[label] = str(i) id2label[str(i)] = label metric = evaluate.load("f1") model = AutoModelForSequenceClassification.from_pretrained( model_id, num_labels=num_labels, label2id=label2id, id2label=id2label ) training_args = TrainingArguments( output_dir="./temp", per_device_train_batch_size=128, per_device_eval_batch_size=128, learning_rate=5e-5, num_train_epochs=3, torch_compile=True, optim="adamw_torch_fused", logging_steps=1, logging_strategy="steps", evaluation_strategy="epoch", save_strategy="epoch", save_total_limit=2, load_best_model_at_end=True, metric_for_best_model="f1", ) trainer = Trainer( model=model, args=training_args, train_dataset=dataset["train"], eval_dataset=dataset["test"], tokenizer=tokenizer, compute_metrics=compute_metrics, ) trainer.train() ``` ### Expected behavior This code is running as expected on a machine with a single GPU. The model is compiled (there is an output that says layers are optimized and stuff), and training speeds up significantly (well, not for this specific example model/data combination, but for the production one). Compilation-related output: ``` [2023-07-27 16:50:43,003] torch._inductor.utils: [WARNING] using triton random, expect difference from eager ``` But if I run the very same code on a machine with multiple GPUs - there are no signs of model compilation (no additional output in the logs) and the training speed does not improve. 
`nvidia-smi` output: ``` +-----------------------------------------------------------------------------+ | NVIDIA-SMI 470.57.02 Driver Version: 470.57.02 CUDA Version: 11.4 | |-------------------------------+----------------------+----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | | | | MIG M. | |===============================+======================+======================| | 0 NVIDIA A10G Off | 00000000:00:1B.0 Off | 0 | | 0% 30C P8 16W / 300W | 0MiB / 22731MiB | 0% Default | | | | N/A | +-------------------------------+----------------------+----------------------+ | 1 NVIDIA A10G Off | 00000000:00:1C.0 Off | 0 | | 0% 33C P8 16W / 300W | 0MiB / 22731MiB | 0% Default | | | | N/A | +-------------------------------+----------------------+----------------------+ | 2 NVIDIA A10G Off | 00000000:00:1D.0 Off | 0 | | 0% 30C P8 15W / 300W | 0MiB / 22731MiB | 0% Default | | | | N/A | +-------------------------------+----------------------+----------------------+ | 3 NVIDIA A10G Off | 00000000:00:1E.0 Off | 0 | | 0% 31C P8 16W / 300W | 0MiB / 22731MiB | 0% Default | | | | N/A | +-------------------------------+----------------------+----------------------+ +-----------------------------------------------------------------------------+ | Processes: | | GPU GI CI PID Type Process name GPU Memory | | ID ID Usage | |=============================================================================| | No running processes found | +-----------------------------------------------------------------------------+ ```
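One way to check whether compilation actually kicks in on the multi-GPU run (a debugging sketch for torch 2.0.x, not an official diagnostic): turn on dynamo's verbose logging before building the `Trainer` and look for graph-compilation output in the logs.

```python
import torch._dynamo as dynamo

dynamo.reset()                 # drop any previously cached graphs
dynamo.config.verbose = True   # print compilation / graph-break information (torch 2.0.x flag)
```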
07-27-2023 18:56:20
07-27-2023 18:56:20
cc @muellerzr
transformers
25,151
open
Correct Falcon code in github does not match Falcon's checkpoint
### System Info Transformers version: 4.31.0 ### Who can help? @ArthurZucker @younesbelkada @Rocketknight1 ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction I see there's a falcon implementation added by the HF team in the transformer github repo. What's the intent for this code? what model in the hub does actually uses it? The official model as far as I can tell in the hub (https://huggingface.co/tiiuae/falcon-7b) uses different (outdated?) code that is included in the checkpoint itself (`modeling_RW.py`). I've found `modeling_RW.py` has a number of problems to key/value caching (either bad model outputs, or key/value caching might not get used at all) that has been fixed already in the current code in github, this has been observed by others in the [model discussions](https://huggingface.co/tiiuae/falcon-7b/discussions/17), unfortunately without an official response from the Falcon team. That said, I am not fully sure the falcon model in the hub is compatible with the code in github, I do get warnings if I try to use it (`You are using a model of type RefinedWebModel to instantiate a model of type falcon. This is not supported for all configurations of models and can yield errors.`). **Concrete question**: what is the intended usage of the current falcon code (`src/transformers/models/falcon`) in the transformers repo? Is it compatible with the official falcon models? Steps to reproduce - loading with FalconForCausalLM ``` # Load model directly from transformers import FalconForCausalLM DEVICE = 'cuda' # This gives me: You are using a model of type RefinedWebModel to instantiate a model of type falcon. This is not supported for all configurations of models and can yield errors. model = FalconForCausalLM.from_pretrained(<path of the tiiuae/falcon-7b in the hub downloaded locally>).to(DEVICE) ``` Steps to reproduce - loading with Auto ``` from transformers import AutoModelForCausalLM, AutoTokenizer model = AutoModelForCausalLM.from_pretrained("tiiuae/falcon-7b", trust_remote_code=True).to('cuda') tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-7b") input_text_tokens = tokenizer("Hello world, this is the story of Bob, a", return_tensors="pt").input_ids.to('cuda') # This actually *does not* use KV caching, due to a name bug from "past_key_values" to "past" in `prepare_inputs_for_generation` # If one attempts to fix this, shape errors might occur. If those are fixed, the output is gibberish due to position ids not correctly # into the RoPE embeddings. See this for more details: https://huggingface.co/tiiuae/falcon-7b/discussions/17 with torch.no_grad(): model.eval() generate_fn_output = model.generate(input_text_tokens, max_length=64, num_beams=1, do_sample=False) print("###".join(tokenizer.batch_decode(generate_fn_output, skip_special_symbols=True))) ``` ### Expected behavior Loading the falcon model should get the weights correctly read and calling the `generate` method should perform and correct efficient inference with KV caching.
07-27-2023 17:15:30
07-27-2023 17:15:30
The Falcon model inside Transformers is not ready to be used yet and is not compatible with the online checkpoint. To make the online checkpoints compatible with Transformers, we need to do some changes in the model repo that will break its integration with text-generation-inference. We are waiting for the new version of text-generation-inference to be deployed to be able to do those changes, and once this is done, the model will work with the code in Transformers. So TL;DR: be patient and use `trust_remote_code=True` for the time being.<|||||>Thanks @sgugger - would appreciate it if this issue gets tagged once those changes are in the repo :) <|||||>Sure thing! Pinging @Rocketknight1 for when he does the migration.<|||||>Hi @afcruzs - you're correct on all counts here. Falcon-7B uses a different model architecture to Falcon-40B. When we ported Falcon to `transformers`, I added some config variables to handle the different code paths taken by the two models. The main variable is `config.new_decoder_architecture` - you can see it [in the repo code](https://github.com/huggingface/transformers/blob/main/src/transformers/models/falcon/modeling_falcon.py#L211). Unfortunately, because I added these config variables and standardized the names of some others, the `config.json` in the current Falcon checkpoints is not compatible with our library code right now. This is the cause of the `RefinedWebModel` errors you saw. We intend to update the Falcon checkpoints to move them from custom code to library code very soon, which should resolve these errors, as well as fixing the issues with the generation cache. However, we're waiting to give users and other libraries a chance to prepare, since the change will affect the existing custom code checkpoints!<|||||>Ohh I didn't catch the differences between 40b and 7b before, good to know; thanks @Rocketknight1
transformers
25,150
open
Update modeling_gpt2.py
Changed the declaration order in `__init__` so it is aligned with the operational order. As a consequence, the `__repr__` method is now also aligned with the operational order. # What does this PR do? Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
07-27-2023 16:25:12
07-27-2023 16:25:12
cc @ArthurZucker <|||||>Can you just run `make style`, maybe it will put it back? <|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25150). All of your documentation changes will be reflected on that endpoint.
transformers
25,149
closed
[`IDEFICS`] Fix idefics config refactor
# What does this PR do? Refactors the `IdeficsConfig` to match the configuration composition patterns of multimodal models in transformers. Original PR: https://github.com/huggingface/transformers/pull/24796 Summary of the changes - Removed the copy of `CLIPTextConfig`, `CLIPConfig` in `clip.py` as they were used for type hints only - Retrieve the correct attributes in `modeling_idefics.py` (i.e. attributes from `perceiver_config` & `vision_config`) - Adapted CI tests accordingly - Make the `utils/check_config_attributes.py` pass - since there is a duplicated CLIPVisionConfig (one in clip itself and the other in `configuration_idefics.py`), that script checks the unused attributes of that config for some reason (didn't investigate further) For compatibility with weights on the Hub, changes similar to https://huggingface.co/HuggingFaceM4/tiny-random-idefics/discussions/3 need to be applied. The docstrings of the new config objects need to be cleaned up, but that can be done on the main PR. cc @stas00
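For context, a rough sketch of the composition pattern being referred to (class and attribute names here are simplified placeholders, not the final `IdeficsConfig` API):

```python
from transformers import PretrainedConfig

class VisionSubConfig(PretrainedConfig):
    model_type = "vision-sub-sketch"

    def __init__(self, hidden_size=768, **kwargs):
        self.hidden_size = hidden_size
        super().__init__(**kwargs)

class ComposedConfig(PretrainedConfig):
    model_type = "composed-sketch"

    def __init__(self, vision_config=None, perceiver_config=None, **kwargs):
        # sub-configs are stored as config objects; the modeling code then reads
        # e.g. config.vision_config.hidden_size instead of a flat config.vision_hidden_size
        self.vision_config = VisionSubConfig(**(vision_config or {}))
        self.perceiver_config = PretrainedConfig(**(perceiver_config or {}))
        super().__init__(**kwargs)
```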
07-27-2023 15:41:03
07-27-2023 15:41:03
_The documentation is not available anymore as the PR was closed or merged._
transformers
25,148
closed
Add new model in doc table of content
# What does this PR do? As requested by @stas00, this PR makes sure that the `add-new-model-like` command adds the model to the doc table of contents. Since we are using another model as a reference, we can simply add it to the same section as that base model.
07-27-2023 15:31:17
07-27-2023 15:31:17
_The documentation is not available anymore as the PR was closed or merged._
transformers
25,147
open
Add PromptTemplate and allow for default PromptTemplate in model configuration
### Feature request As a user, I want to be able to load a model and feed it my input in such a way that it matches the prompt template that it saw during training. I want to be able to load the default prompt with few lines of code and without having to look up how the model was trained. Additionally, I want to be able to modify the prompt to be different from the default prompt. The specific implementation is up for discussion. I imagine something like this: ``` from transformers import AutoModelForCausalLM, AutoTokenizer, AutoPromptTemplate model_id = "meta-llama/Llama-2-xb-chat-hf" model = AutoModelForCausalLM.from_pretrained(model_id) tokenizer = AutoTokenizer.from_pretrained(model_id) prompt_template = AutoPromptTemplate.from_pretrained(model_id) inputs = { "system_prompt":"You are a helpful assistant", "interactions":[ {"user":"What is the fastest sea mammal?"}, {"assistant":"The fastest sea mammal is the peregrine falcon"}, {"user":"the peregrine falcon is not a mammal"} ] } output = model(**tokenizer(prompt_template(inputs))) ``` ### Motivation The huggingface hub is accumulating many finetuned models, which have been trained with a specific prompt template in mind. However, this prompt template is often difficult to find, and even more often the prompt template is missing entirely from the model card. If the model is invoked with a different template, the model performance can be severely affected. The community would benefit from a PromptTemplate class that can be loaded from the model configuration that handles the prompt templating for the end user. At this very moment, there are likely many users that are using the `meta-llama/Llama-2-xb-chat-hf` models with a prompting style that differs from how the model is intended to be used. ### Your contribution I am happy to be a part of the discussion for implementation and testing.
07-27-2023 15:06:12
07-27-2023 15:06:12
cc @ArthurZucker <|||||>This is 100% needed!<|||||>Hey! Thanks for opening this. Not sure if you have seen this, but we have the [`ConversationalPipeline`](https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.Conversation) along with the `Conversation` object, which can pretty easily handle conversations. You just need to override the `_build_conversation_input_ids` of the `tokenizer` that you are using. This allows anyone to properly build their inputs and share the modeling code on the Hub. Having an entirely new `Auto` module just for that is overkill, and not really the intent of `transformers`. However, adding support for `system_prompts` in the `Conversation` object or the `ConversationalPipeline` can be done. We were not entirely sure whether it would be highly requested or not. <|||||>Hi @ArthurZucker, thanks for your reply. I was unaware of the ConversationalPipeline, so thanks for putting it on my radar. However, neither the ConversationalPipeline nor the Conversation class handles the templating that is really the core of this feature request. Perhaps an illustration with some examples will be helpful. The `Llama-2-xb-chat` models use a very specific format [of the following type](https://huggingface.co/spaces/ysharma/Explore_llamav2_with_TGI/blob/main/app.py): ``` input_prompt = f"[INST] <<SYS>>\n{system_message}\n<</SYS>>\n\n " for interaction in chatbot: input_prompt = input_prompt + str(interaction[0]) + " [/INST] " + str(interaction[1]) + " </s><s> [INST] " ``` Instead, `oasst1` models often use a format of [the following type](https://huggingface.co/OpenAssistant/llama2-13b-orca-8k-3319): ``` input_prompt = f"""<|system|>{system_message}</s><|prompter|>{user_prompt}</s><|assistant|>""" ``` Even models that are not chat models can have very specific prompt templates, such as [this SQL model](https://huggingface.co/juierror/text-to-sql-with-table-schema): ``` table_prefix = "table:" question_prefix = "question:" join_table = ",".join(table) input_prompt = f"{question_prefix} {question} {table_prefix} {join_table}" ``` I hope this illustrates that many models (not just chat models) on the Hugging Face Hub come with an implicit, specific prompt template. However, there is currently no way (that I know of) to instruct users to follow that specific prompt template, other than to describe the template on the model card. With this feature request, I am suggesting creating a more standardised way for model creators to add a prompt template to their model page. Note that [llama-2-70b-chat-hf](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf) has no mention of the expected prompt template. I think it is therefore likely that a significant portion of users are currently using the model with a different prompt template and are observing reduced model performance as a consequence. If `transformers` provided a standardised way to add prompt templates, I believe this would create an incentive for model creators to add their prompt template. This, combined with an easy way to use said template, would make it easier for users to get the best out of models on the Hugging Face Hub. For the implementation it is probably not necessary to have an entirely new `Auto` module. I'll let the developers be the judge of how best to implement this.<|||||>Hi @vincentmin! We did some internal discussion and we decided this was a great idea. We're still discussing the specifics, but our current plan is to add a `prompt` field to `tokenizer_config.json`.
The method that formats conversational prompts is `Tokenizer._build_conversation_input_ids()`, which is called by `ConversationPipeline`. Therefore, we think the `tokenizer_config.json` is the right place to add fields that override the behaviour of the underlying `Tokenizer`. The specific fields in `prompt` would be class-specific, but for conversational models they would be e.g. `system_message_start`, `system_message_end`, etc. We think breaking up the prompt into string fields will work, and avoids the need to store full templates in the config files. These fields will be read by the tokenizer and used in `_build_conversation_input_ids()` to customize input prompts correctly. Since `_build_conversation_input_ids()` is currently a private method that we mostly use internally in the `Pipeline` code, we may also look at ways to expose the prompt information through other properties or methods. WDYT? The details are still flexible, but we're planning to finalize a concrete plan soon!<|||||>> Hi @vincentmin! We did some internal discussion and we decided this was a great idea. We're still discussing the specifics, but our current plan is to add a `prompt` field to `tokenizer_config.json`. The method that formats conversational prompts is `Tokenizer._build_conversation_input_ids()`, which is called by `ConversationPipeline`. Therefore, we think the `tokenizer_config.json` is the right place to add fields that override the behaviour of the underlying `Tokenizer`. > > The specific fields in `prompt` would be class-specific, but for conversational models they would be e.g. `system_message_start`, `system_message_end`, etc. We think breaking up the prompt into string fields will work, and avoids the need to store full templates in the config files. These fields will be read by the tokenizer and used in `_build_conversation_input_ids()` to customize input prompts correctly. > > Since `_build_conversation_input_ids()` is currently a private method that we mostly use internally in the `Pipeline` code, we may also look at ways to expose the prompt information through other properties or methods. > > WDYT? The details are still flexible, but we're planning to finalize a concrete plan soon! @Rocketknight1 How can I use `ConversationPipeline` for Llama-2 chat? I want to do multi-turn chat. Could you show an example? My code example: ``` from transformers import AutoTokenizer, LlamaTokenizerFast from transformers import pipeline, Conversation import torch model = "/home/model_zoo/LLM/llama2/Llama-2-7b-chat-hf" tokenizer = LlamaTokenizerFast.from_pretrained(model) pipeline = pipeline( "conversational", model=model, tokenizer=tokenizer, torch_dtype=torch.float16, device_map="auto", ) conversation_1 = Conversation("Going to the movies tonight - any suggestions?") conversation_2 = Conversation("What's the last book you have read?") print(pipeline([conversation_1, conversation_2])) ``` However, it does not return a normal response.
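As a minimal illustration of the templating being discussed (the helper name and argument layout are hypothetical, not an existing transformers API), a prompt builder for the Llama-2 chat format quoted earlier could look like this:

```python
# Hypothetical helper reproducing the Llama-2 chat template quoted above.
# This is only a sketch of the kind of templating the request is about,
# not part of any released transformers API.
def build_llama2_chat_prompt(system_message, turns):
    prompt = f"[INST] <<SYS>>\n{system_message}\n<</SYS>>\n\n"
    for user_msg, assistant_msg in turns:
        prompt += f"{user_msg} [/INST] "
        if assistant_msg is not None:
            prompt += f"{assistant_msg} </s><s> [INST] "
    return prompt


print(build_llama2_chat_prompt(
    "You are a helpful assistant",
    [("What is the fastest sea mammal?", None)],
))
```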
transformers
25,146
closed
More `token` things
# What does this PR do? Fixes #25141. A few places were missed in #25083 (I haven't worked on the training example scripts).
07-27-2023 15:05:32
07-27-2023 15:05:32
_The documentation is not available anymore as the PR was closed or merged._
transformers
25,145
closed
LLAMA 2 Distributed Training Support
### Feature request LLAMA 2 support for `device_map=True` ### Motivation The current LLAMA 2 does not include support for `device_map=True`. ``` Traceback (most recent call last): File "/u/haob2/saliency4alce/salience_llama_ecco.py", line 38, in <module> output = lm.generate(text, generate=3, beam_size=1, do_sample=True, attribution=['ig']) File "/u/haob2/saliency4alce/ecco/src/ecco/lm.py", line 221, in generate output = self.model.generate( File "/u/haob2/miniconda3/envs/salience/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context return func(*args, **kwargs) File "/u/haob2/.local/lib/python3.9/site-packages/transformers/generation/utils.py", line 1588, in generate return self.sample( File "/u/haob2/.local/lib/python3.9/site-packages/transformers/generation/utils.py", line 2642, in sample outputs = self( File "/u/haob2/miniconda3/envs/salience/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "/u/haob2/miniconda3/envs/salience/lib/python3.9/site-packages/accelerate/hooks.py", line 165, in new_forward output = old_forward(*args, **kwargs) File "/u/haob2/.local/lib/python3.9/site-packages/transformers/models/llama/modeling_llama.py", line 806, in forward outputs = self.model( File "/u/haob2/miniconda3/envs/salience/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "/u/haob2/.local/lib/python3.9/site-packages/transformers/models/llama/modeling_llama.py", line 693, in forward layer_outputs = decoder_layer( File "/u/haob2/miniconda3/envs/salience/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "/u/haob2/miniconda3/envs/salience/lib/python3.9/site-packages/accelerate/hooks.py", line 165, in new_forward output = old_forward(*args, **kwargs) File "/u/haob2/.local/lib/python3.9/site-packages/transformers/models/llama/modeling_llama.py", line 405, in forward hidden_states = self.input_layernorm(hidden_states) File "/u/haob2/miniconda3/envs/salience/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "/u/haob2/miniconda3/envs/salience/lib/python3.9/site-packages/accelerate/hooks.py", line 165, in new_forward output = old_forward(*args, **kwargs) File "/u/haob2/.local/lib/python3.9/site-packages/transformers/models/llama/modeling_llama.py", line 89, in forward return self.weight * hidden_states.to(input_dtype) RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cuda:1! ``` ### Your contribution I'm looking for suggestions and possible help from distributed training.
07-27-2023 14:56:22
07-27-2023 14:56:22
Without a code reproducer of the error you encounter, there is little we will be able to do to help.<|||||>Just figured it out. After we set `device_map=True`, we can't move the model to a specific device using `model = model.to(device)`, because the model is already dynamically allocated across all available devices. Sorry for bothering you @sgugger, and thanks a lot for your prompt reply! I'll give detailed reproduction steps next time (it also helps me identify the bug).
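For reference, a minimal sketch of the pattern described above (the checkpoint is a placeholder): with `device_map="auto"`, only the inputs are moved, never the model.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-hf"  # placeholder checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
# accelerate dispatches the weights across all visible GPUs.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype=torch.float16)

# Do not call model.to("cuda:0") afterwards; move only the inputs.
inputs = tokenizer("Hello", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```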
transformers
25,144
open
Having "RuntimeError: expected scalar type Half but found Char" on LLaMa-2 inference stage
### System Info Working in a Jupyter notebook on a Docker Linux instance with A100 GPUs, Ubuntu, x86_64 ![image](https://github.com/huggingface/transformers/assets/55791584/67b33111-748a-49a4-aa28-2f2009884c4d) ### Who can help? @sgugger @muellerzr ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I encountered the error while trying to run an 8-bit quantized LLaMA-2-70B model on two 40GB A100 GPUs. ![WhatsApp Image 2023-07-27 at 18 07 56](https://github.com/huggingface/transformers/assets/55791584/8d23bab6-25ea-486e-b921-39ad2ad17af1) To reproduce the issue: 1. Load the model from a local path ``` tokenizer = AutoTokenizer.from_pretrained(model_path) model = AutoModelForCausalLM.from_pretrained(model_path, load_in_8bit = True, device_map = "auto") ``` 2. Run the inference code ``` question = f""" Human: xxxxxxxxxxxxx Assistant: """ question = tokenizer(question, return_tensors = "pt") question = question.to(0) output = model.generate(question["input_ids"], max_new_tokens = 120) ``` --- # Investigations & Attempts to solve this bug I noticed that the error was raised from the part that was recently updated to support LLaMA-2; to be specific, it's the implementation of the Grouped-Query Attention (GQA) architecture. ![image](https://github.com/huggingface/transformers/assets/55791584/7e490446-910a-4ecb-a25c-ceebca8410c8) Looking further into the code, I think the bug is caused by a missing dtype-handling step for the new Grouped-Query Attention (GQA) architecture when F.linear is used instead of a forward call. This is because by setting the ```load_in_8bit``` argument to True, the ```nn.Linear``` layer is replaced by an equivalent ```bnb.nn.Linear8bitLt```. The dtypes of hidden_states and query_slices[i] are float16 and int64 respectively. Usually, the forward function of Linear8bitLt handles this. However, F.linear does not work the same way, and thus it raises this error because the two tensors have different dtypes. Apart from disabling tensor parallelism by setting pretraining_tp = 1 in the model config while loading the model (so it will use the LLaMA-1 code path), I have a small workaround for this issue, which is only tested at the **inference** stage, not the **training** stage. To align the dtypes of hidden_states and query_slices[i], I tried to manually dequantize the Linear8bitLt by adding a small code snippet like this: ``` key_value_slicing = (self.num_key_value_heads * self.head_dim) // self.pretraining_tp if isinstance(self.q_proj, bnb.nn.Linear8bitLt): q_w = self.q_proj.weight q_w = (q_w.CB * q_w.SCB.unsqueeze(1) / 127).to(torch.float16) query_slices = q_w.split((self.num_heads * self.head_dim) // self.pretraining_tp, dim=0) if isinstance(self.k_proj, bnb.nn.Linear8bitLt): k_w = self.k_proj.weight k_w = (k_w.CB * k_w.SCB.unsqueeze(1) / 127).to(torch.float16) key_slices = k_w.split(key_value_slicing, dim=0)
if isinstance(self.v_proj, bnb.nn.Linear8bitLt): v_w = self.v_proj.weight v_w = (v_w.CB * v_w.SCB.unsqueeze(1) / 127).to(torch.float16) value_slices = v_w.split(key_value_slicing, dim=0) # query_slices = self.q_proj.weight.split((self.num_heads * self.head_dim) // self.pretraining_tp, dim=0) # key_slices = self.k_proj.weight.split(key_value_slicing, dim=0) # value_slices = self.v_proj.weight.split(key_value_slicing, dim=0) ``` Similar code has to be added at: - line 202 (LlamaMLP forward function -> gate_proj, up_proj, down_proj) - line 293 (LlamaAttention forward function -> q_proj, k_proj, v_proj) - line 364 (LlamaAttention forward function -> o_proj) After using this workaround, I was able to run the model and get results as expected. Hopefully, there will be a better solution for this issue so everyone can easily load a quantized model and run inference. ### Expected behavior Generate a completion that answers the human input
07-27-2023 14:46:20
07-27-2023 14:46:20
cc @younesbelkada <|||||>Hmm I really think we should set that value (`config.pretraining_tp`) to 1 (at least when the model is quantized) for all models as it can introduce unexpected behaviour to users. We saw it introduced bugs with PEFT (that users currently overcome by forcing `config.pretraining_tp` to be equal to 1) and now with quantization. I also don't think this is the right fix as de-quantizing the layers like that on the fly can introduce a lot of rounding errors. Not sure also how this will work with nested quantization in 4bit. TLDR; I think that it will be too much of a pain for a little gain - I am pretty sure the generation quality will remain pretty much the same if `pretraining_tp` is equal to 1 (from my experience with bloom). This will certainly create issues with the new quantization technique that is going to be added here: https://github.com/huggingface/transformers/pull/25062 and we can't patch the linear layer like that for each case (bnb 4bit, bnb 8bit, GPTQ). @sgugger @ArthurZucker what do you think about forcing `config.pretraining_tp` to be equal to 1 at least for the quantized models? <|||||>It is 1 on all checkpoints online and the provided code does not change it.<|||||>I checked and found that my config.json is not the most updated version. The latest version of config online is having ``` config.pretraining_tp``` as ```1``` Thanks for the reply ![image](https://github.com/huggingface/transformers/assets/55791584/53e61640-f382-44e5-9efa-999850812c6f) <|||||>@sgugger sorry for the confusion, I thought all models still had `pretraining_tp > 1`. @kenchanLOL thanks for confirming, setting that value to 1 should fix your issue I believe!
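A minimal sketch of the recommendation above (the path is a placeholder): load the quantized checkpoint and force `pretraining_tp` to 1 so the slicing branch that calls `F.linear` directly is never taken.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "path/to/Llama-2-70b-chat-hf"  # placeholder local path
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path, load_in_8bit=True, device_map="auto")

# With pretraining_tp == 1 the regular Linear8bitLt forward is used for the
# q/k/v/o projections and the MLP, so no manual dequantization is needed.
model.config.pretraining_tp = 1
```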
transformers
25,143
open
run_generation.py script does not work for most models
### System Info - `transformers` version: 4.32.0.dev0 - Platform: Linux-6.1.0-10-amd64-x86_64-with-glibc2.36 - Python version: 3.11.2 - Huggingface_hub version: 0.16.4 - Safetensors version: 0.3.1 - Accelerate version: not installed - Accelerate config: not found - PyTorch version (GPU?): 2.0.1+cu117 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help? @sgugger ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ``` % python run_generation.py --model_type=xlnet --model_name_or_path=xlnet 07/27/2023 16:10:16 - WARNING - __main__ - device: cpu, n_gpu: 0, 16-bits training: False Traceback (most recent call last): File "/home/bortzmeyer/.local/lib/python3.11/site-packages/huggingface_hub/utils/_errors.py", line 261, in hf_raise_for_status response.raise_for_status() File "/usr/lib/python3/dist-packages/requests/models.py", line 1021, in raise_for_status raise HTTPError(http_error_msg, response=self) requests.exceptions.HTTPError: 404 Client Error: Not Found for url: https://huggingface.co/xlnet/resolve/main/spiece.model The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/home/bortzmeyer/.local/lib/python3.11/site-packages/transformers/utils/hub.py", line 418, in cached_file resolved_file = hf_hub_download( ^^^^^^^^^^^^^^^^ File "/home/bortzmeyer/.local/lib/python3.11/site-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn return fn(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^ File "/home/bortzmeyer/.local/lib/python3.11/site-packages/huggingface_hub/file_download.py", line 1195, in hf_hub_download metadata = get_hf_file_metadata( ^^^^^^^^^^^^^^^^^^^^^ File "/home/bortzmeyer/.local/lib/python3.11/site-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn return fn(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^ File "/home/bortzmeyer/.local/lib/python3.11/site-packages/huggingface_hub/file_download.py", line 1541, in get_hf_file_metadata hf_raise_for_status(r) File "/home/bortzmeyer/.local/lib/python3.11/site-packages/huggingface_hub/utils/_errors.py", line 293, in hf_raise_for_status raise RepositoryNotFoundError(message, response) from e huggingface_hub.utils._errors.RepositoryNotFoundError: 404 Client Error. (Request ID: Root=1-64c27ac9-32554ee53bfea1b506174ea7;b2323fa4-9a04-4657-a548-5ceed2fb666e) Repository Not Found for url: https://huggingface.co/xlnet/resolve/main/spiece.model. Please make sure you specified the correct `repo_id` and `repo_type`. If you are trying to access a private or gated repo, make sure you are authenticated. 
The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/home/bortzmeyer/Programmation/Python/HuggingFace/essais/run_generation.py", line 448, in <module> main() File "/home/bortzmeyer/Programmation/Python/HuggingFace/essais/run_generation.py", line 354, in main tokenizer = tokenizer_class.from_pretrained(args.model_name_or_path) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/bortzmeyer/.local/lib/python3.11/site-packages/transformers/tokenization_utils_base.py", line 1800, in from_pretrained resolved_vocab_files[file_id] = cached_file( ^^^^^^^^^^^^ File "/home/bortzmeyer/.local/lib/python3.11/site-packages/transformers/utils/hub.py", line 439, in cached_file raise EnvironmentError( OSError: xlnet is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models' If this is a private repository, make sure to pass a token having permission to this repo either by logging in with `huggingface-cli login` or by passing `token=<your_token>` ``` We have a similar error message with most models listed in the output of `python run_generation.py --help` Only gpt2 and ctrl seems to work. ### Expected behavior I expected all models listed in the help to actually work.
07-27-2023 14:12:48
07-27-2023 14:12:48
I'm not sure why you expect this command to work: you are passing `--model_name_or_path=xlnet` which is not a valid model identifier on the Hub (as the error clearly says). You need to pick an actual model, all xlnet variants are listed [here](https://huggingface.co/models?sort=trending&search=xlnet).<|||||>> I'm not sure why you expect this command to work: Because this is the output of the `--help` option? OK, with a full name, it works better, thanks.
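For reference, a working invocation simply uses a concrete checkpoint id from the Hub, e.g. `xlnet-base-cased`:

```
python run_generation.py --model_type=xlnet --model_name_or_path=xlnet-base-cased
```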
transformers
25,142
closed
Using Trainer with custom model caused dimension error
### System Info ``` - `transformers` version: 4.31.0 - Platform: Windows-10-10.0.19045-SP0 - Python version: 3.11.4 - Huggingface_hub version: 0.16.4 - Safetensors version: 0.3.1 - Accelerate version: 0.21.0 - Accelerate config: not found - PyTorch version (GPU?): 2.0.1+cu118 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ``` Not sure if it's the intended way of using the Trainer class, but what I did was: - Created a custom image+text classifier where the image and text encoders are Hugging Face models (i.e., BERT, ViT); I extracted the last hidden state from each encoder, concatenated them, and added a linear layer for binary classification. - I modified the `compute_loss` function by subclassing the Trainer class but didn't do anything else. The issue was that, since the model class only outputs logits and nothing else (not a dict, not a tuple), this part of the code: https://github.com/huggingface/transformers/blob/main/src/transformers/trainer.py#L3344-L3347 assumed the output is a tuple (because it isn't a dict) and trimmed the first sample's logits in the batch, causing a dimension error when computing the loss since the labels and logits dimensions don't match (off by the number of batches). This is probably solvable by modifying the model class to return what the Trainer is expecting, but it's not communicated clearly either; maybe it's because the Trainer isn't fully suitable for custom model training? I have created a commit in my branch to fix this on my side so that the training can continue: https://github.com/huggingface/transformers/compare/main...zhangyilun:transformers:allow-logits-only-outputs. Not sure if it's worth merging into the repo. I think the change shouldn't break anything else. If you think I'm doing things wrong or I missed anything, please correct me! ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Create a custom binary classification model where the forward method only returns the logits. Use the Trainer class for training. ### Expected behavior Dimension mismatch in the compute_loss method.
07-27-2023 14:00:32
07-27-2023 14:00:32
cc our trainer master @sgugger, but I think the best approach is to follow the standard output format (i.e. either a tuple or a dict)<|||||>This is communicated clearly on the [Trainer doc page](https://huggingface.co/docs/transformers/main_classes/trainer); scroll a bit to the big warning.<|||||>Thank you for pointing me to the doc!
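A minimal sketch of the expected output format (the encoders are omitted and the module is a toy stand-in for the custom classifier): returning a `ModelOutput` or dict that carries the loss, rather than bare logits, is what the Trainer expects.

```python
import torch
from torch import nn
from transformers.modeling_outputs import SequenceClassifierOutput

class ImageTextClassifier(nn.Module):
    """Toy stand-in for the custom image+text classifier described above."""

    def __init__(self, hidden_size=768):
        super().__init__()
        self.classifier = nn.Linear(2 * hidden_size, 1)

    def forward(self, image_embeds=None, text_embeds=None, labels=None):
        logits = self.classifier(torch.cat([image_embeds, text_embeds], dim=-1))
        loss = None
        if labels is not None:
            loss = nn.functional.binary_cross_entropy_with_logits(
                logits.squeeze(-1), labels.float()
            )
        # Returning a ModelOutput (or a plain dict) keeps Trainer happy;
        # with a tuple, the loss must be the first element.
        return SequenceClassifierOutput(loss=loss, logits=logits)
```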
transformers
25,141
closed
use_auth_token deprecation in pipeline
### System Info I noticed that `pipeline` uses `use_auth_token` argument which raises `FutureWarning: The use_auth_token argument is deprecated and will be removed in v5 of Transformers.`. Replacing `use_auth_token=True` with `token=True` argument does not yet work in `pipeline` (will raise an error). Sys Info ``` - `transformers` version: 4.31.0 - Platform: Linux-5.15.0-78-generic-x86_64-with-glibc2.31 - Python version: 3.10.12 - Huggingface_hub version: 0.14.1 - Safetensors version: 0.3.1 - Accelerate version: 0.20.3 - Accelerate config: not found - PyTorch version (GPU?): 2.0.0+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ``` ### Who can help? @narsil ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ``` import torch from transformers import pipeline model_name = "h2oai/h2ogpt-gm-oasst1-en-2048-falcon-7b-v3" generate_text = pipeline( model=model_name, torch_dtype="auto", trust_remote_code=True, use_fast=True, device_map={"": "cuda:0"}, use_auth_token=True) res = generate_text( "Why is drinking water so healthy?", min_new_tokens=2, max_new_tokens=256, do_sample=False, num_beams=1, temperature=float(0.3), repetition_penalty=float(1.2), renormalize_logits=True ) print(res[0]["generated_text"]) ``` ### Expected behavior pipeline handles `token=True` argument.
07-27-2023 13:49:24
07-27-2023 13:49:24
Thanks for the report! cc @ydshieh Looks like it comes from the PR from yesterday.<|||||>Thanks for reporting, I will work on this. Not very easy to hanlde this history it turns out 👀 <|||||>Hi @maxjeblick Could you share the full error log. So far I don't get error when using `token=True`. (For `generate_text = pipeline(...)` part)<|||||>Sure @ydshieh : ``` from transformers import pipeline model_name = "facebook/opt-125m" # small model for testing purposes generate_text = pipeline( model=model_name, torch_dtype="auto", trust_remote_code=True, use_fast=True, device_map={"": "cuda:0"}, token=True) res = generate_text( "Why is drinking water so healthy?", min_new_tokens=2, max_new_tokens=256, do_sample=False, num_beams=1, temperature=float(0.3), repetition_penalty=float(1.2), renormalize_logits=True ) print(res[0]["generated_text"]) ``` get's ``` Xformers is not installed correctly. If you want to use memory_efficient_attention to accelerate training use the following command to install Xformers pip install xformers. Traceback (most recent call last): File "/home/max/.config/JetBrains/PyCharm2023.2/scratches/scratch_2.py", line 13, in <module> res = generate_text( File "/home/max/.virtualenvs/h2o-llmstudio-0x1AIe7C/lib/python3.10/site-packages/transformers/pipelines/text_generation.py", line 200, in __call__ return super().__call__(text_inputs, **kwargs) File "/home/max/.virtualenvs/h2o-llmstudio-0x1AIe7C/lib/python3.10/site-packages/transformers/pipelines/base.py", line 1122, in __call__ return self.run_single(inputs, preprocess_params, forward_params, postprocess_params) File "/home/max/.virtualenvs/h2o-llmstudio-0x1AIe7C/lib/python3.10/site-packages/transformers/pipelines/base.py", line 1129, in run_single model_outputs = self.forward(model_inputs, **forward_params) File "/home/max/.virtualenvs/h2o-llmstudio-0x1AIe7C/lib/python3.10/site-packages/transformers/pipelines/base.py", line 1028, in forward model_outputs = self._forward(model_inputs, **forward_params) File "/home/max/.virtualenvs/h2o-llmstudio-0x1AIe7C/lib/python3.10/site-packages/transformers/pipelines/text_generation.py", line 261, in _forward generated_sequence = self.model.generate(input_ids=input_ids, attention_mask=attention_mask, **generate_kwargs) File "/home/max/.virtualenvs/h2o-llmstudio-0x1AIe7C/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context return func(*args, **kwargs) File "/home/max/.virtualenvs/h2o-llmstudio-0x1AIe7C/lib/python3.10/site-packages/transformers/generation/utils.py", line 1282, in generate self._validate_model_kwargs(model_kwargs.copy()) File "/home/max/.virtualenvs/h2o-llmstudio-0x1AIe7C/lib/python3.10/site-packages/transformers/generation/utils.py", line 1155, in _validate_model_kwargs raise ValueError( ValueError: The following `model_kwargs` are not used by the model: ['token'] (note: typos in the generate arguments will also show up in this list) ``` <|||||>Thanks! So the error only happens at the generation time, very strange it has been passed to that method! Definitely need a fix. I am on it.
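Until the fix is merged, a sketch of the workaround is to keep passing the deprecated argument, which still works and only emits a `FutureWarning`:

```python
from transformers import pipeline

generate_text = pipeline(
    model="h2oai/h2ogpt-gm-oasst1-en-2048-falcon-7b-v3",
    torch_dtype="auto",
    trust_remote_code=True,
    device_map={"": "cuda:0"},
    use_auth_token=True,  # deprecated but functional; `token=True` is rejected downstream for now
)
```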
transformers
25,140
open
add docs TypicalLogitsWarper
# What does this PR do? Added some doc string to TypicalLogitsWarper with some examples as well. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Related to #24783 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
07-27-2023 12:54:22
07-27-2023 12:54:22
@gante let me know the changes<|||||>You also need to run `make fixup` before your next commit, so that our CI becomes happy :D <|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25140). All of your documentation changes will be reflected on that endpoint.
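For context, a minimal usage sketch of the class being documented: passing `typical_p < 1.0` to `generate` makes it add a `TypicalLogitsWarper` internally.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("Typical decoding keeps tokens whose information content", return_tensors="pt")
# typical_p < 1.0 triggers typical sampling (TypicalLogitsWarper) during generation.
outputs = model.generate(**inputs, do_sample=True, typical_p=0.9, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```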
transformers
25,139
open
Seq2SeqTrainer.prediction_step does not support model.generation_config.max_length to be null
### System Info Although it is recommended to use max_new_tokens instead of max_length, if we set max_length to None, in the model's generation config, then in the following lines, we will get a "TypeError: '<' not supported between instances of 'int' and 'NoneType'" In transformers/trainer_seq2seq.py:290-296 ```python # Retrieves GenerationConfig from model.generation_config gen_config = self.model.generation_config # in case the batch is shorter than max length, the output should be padded if generated_tokens.shape[-1] < gen_config.max_length: generated_tokens = self._pad_tensors_to_max_len(generated_tokens, gen_config.max_length) elif gen_config.max_new_tokens is not None and generated_tokens.shape[-1] < gen_config.max_new_tokens + 1: generated_tokens = self._pad_tensors_to_max_len(generated_tokens, gen_config.max_new_tokens + 1) ``` Should be ```python # Retrieves GenerationConfig from model.generation_config gen_config = self.model.generation_config # in case the batch is shorter than max length, the output should be padded if gen_config.max_length is not None and generated_tokens.shape[-1] < gen_config.max_length: generated_tokens = self._pad_tensors_to_max_len(generated_tokens, gen_config.max_length) elif gen_config.max_new_tokens is not None and generated_tokens.shape[-1] < gen_config.max_new_tokens + 1: generated_tokens = self._pad_tensors_to_max_len(generated_tokens, gen_config.max_new_tokens + 1) ``` ## Versions - transformers: 4.31.0 - python: 2.11.3 - platform: macOS 13.4.1 ### Who can help? @sgugger ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction 1. Set the model's generation config to have max_length as None, to ensure it is consistent with the recommendations of max_length being None and max_new_tokens to be used. 2. Set `predict_with_generate` to True 3. Call trainer.train(eval_dataset=val) 4. 
See it blow up ``` trainer.train(train, eval_dataset=val) File "/Users/antonioalegria/Developer/hyperml/scripts/../hyperml/trainer.py", line 927, in train return self.hf_trainer.train() ^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/homebrew/lib/python3.11/site-packages/transformers/trainer.py", line 1539, in train return inner_training_loop( ^^^^^^^^^^^^^^^^^^^^ File "/opt/homebrew/lib/python3.11/site-packages/accelerate/utils/memory.py", line 136, in decorator return function(batch_size, *args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/homebrew/lib/python3.11/site-packages/transformers/trainer.py", line 1916, in _inner_training_loop self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval) File "/opt/homebrew/lib/python3.11/site-packages/transformers/trainer.py", line 2226, in _maybe_log_save_evaluate metrics = self.evaluate(ignore_keys=ignore_keys_for_eval) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/homebrew/lib/python3.11/site-packages/transformers/trainer_seq2seq.py", line 159, in evaluate return super().evaluate(eval_dataset, ignore_keys=ignore_keys, metric_key_prefix=metric_key_prefix) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/homebrew/lib/python3.11/site-packages/transformers/trainer.py", line 2934, in evaluate output = eval_loop( ^^^^^^^^^^ File "/opt/homebrew/lib/python3.11/site-packages/transformers/trainer.py", line 3123, in evaluation_loop loss, logits, labels = self.prediction_step(model, inputs, prediction_loss_only, ignore_keys=ignore_keys) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/homebrew/lib/python3.11/site-packages/transformers/trainer_seq2seq.py", line 293, in prediction_step if generated_tokens.shape[-1] < gen_config.max_length: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ TypeError: '<' not supported between instances of 'int' and 'NoneType' ``` ### Expected behavior It should check for None, like in the issue description is exemplified.
07-27-2023 12:29:41
07-27-2023 12:29:41
cc @gante <|||||>Hi @antonioalegria -- your issue and suggested fix make complete sense 👍 Would you like to open a PR with the fix?<|||||>Sure, I'll do it! Thanks!
transformers
25,138
closed
How to return detected language using whisper with asr pipeline?
### System Info - `transformers` version: 4.31.0 - Platform: Linux-5.15.0-67-generic-x86_64-with-glibc2.31 - Python version: 3.10.12 - Huggingface_hub version: 0.16.4 - Safetensors version: 0.3.1 - Accelerate version: not installed - Accelerate config: not found - PyTorch version (GPU?): 2.0.1 (False) - Tensorflow version (GPU?): 2.11.0 (False) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help? @sanchit-gandhi, @Narsil ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Hello, I'm trying to use the ASR pipeline with Whisper in order to detect an audio's language and transcribe it. I get the transcribed audio successfully, but I have not found a way to return the detected language too. I searched the GitHub issues, and it seems this was added by [#21427](https://github.com/huggingface/transformers/pull/21427), but I don't know how to return the detected language. Here is my code: ``` from transformers import pipeline import torch speech_file = "input.mp3" device = "cuda:0" if torch.cuda.is_available() else "cpu" whisper = pipeline("automatic-speech-recognition", max_new_tokens=448, model="openai/whisper-small", device=device) whisper_result = whisper(speech_file) print(whisper_result) ``` ### Expected behavior Be able to return the detected language.
07-27-2023 10:51:31
07-27-2023 10:51:31
Probably the easiest approach here is to use the `processor` + `model` API: ```python from transformers import WhisperProcessor, WhisperForConditionalGeneration from datasets import load_dataset model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny") processor = WhisperProcessor.from_pretrained("openai/whisper-tiny") librispeech_dummy = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation") sample = librispeech_dummy[0]["audio"] input_features = processor(sample["array"], sampling_rate=sample["sampling_rate"], return_tensors="pt").input_features pred_tokens = model.generate(input_features, max_new_tokens=448) pred_text = processor.batch_decode(pred_tokens, skip_special_tokens=True) pred_language = processor.batch_decode(pred_tokens[:, 1:2], skip_special_tokens=False) print(pred_text) print(pred_language) ``` **Print Output:** ``` [' Mr. Quilter is the apostle of the middle classes and we are glad to welcome his gospel.'] ['<|en|>'] ``` The pipeline discards the 'special' task/language tokens from the predictions when merging chunks, so we lose this information. <|||||>OK. I will try that. Thank you.
transformers
25,137
closed
Incorrect backward pass in the four bits LLaMA 2 70B
### System Info This issue should be treated as a code review comment. ### Who can help? @TimDettmers @ArthurZucker @younesbelkada It seems that when `config.pretraining_tp` is greater than one, the projections of the keys, queries, and values in `LlamaAttention` are implemented using `torch.nn.functional.linear`. Hence, in such cases the implementation bypasses `torch.nn.Linear.forward`, which, if I understand correctly, is problematic when the underlying linear module is replaced by `bitsandbytes`. ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction This issue should be treated as a code review comment, so I didn't implement concrete code to demonstrate the problem. ### Expected behavior A seamless integration with `bitsandbytes`.
07-27-2023 10:40:02
07-27-2023 10:40:02
Hi @noamwies For fine-tuning Llama-2 models that have `config.pretraining_tp > 1`, consider calling ```python model.config.pretraining_tp = 1 ``` before training. Make sure to use the main branch of `transformers` to include https://github.com/huggingface/transformers/pull/24906: ```bash pip uninstall transformers pip install git+https://github.com/huggingface/transformers ```<|||||>This is a duplicate of #24961, as well as https://github.com/facebookresearch/llama/issues/423 and https://github.com/TimDettmers/bitsandbytes/issues/610. This is not something that will be fixed in `transformers`, and I am not sure it needs to be fixed, since `pretraining_tp` should stay at 1 for most use cases
transformers
25,136
closed
fix delete all checkpoints when save_total_limit is set to 1
# What does this PR do? Fixes #25129. This fixes the bug where all checkpoints are deleted when `save_total_limit` in `TrainingArguments` is set to 1. More details can be found in #25129. ## Who can review? @ydshieh @sgugger
07-27-2023 10:16:12
07-27-2023 10:16:12
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25136). All of your documentation changes will be reflected on that endpoint.
transformers
25,135
open
In assisted decoding, pass model_kwargs to model's forward call
# What does this PR do? Previously, assisted decoding would ignore any additional kwargs that it doesn't explicitly handle. This was inconsistent with other generation methods, which pass the model_kwargs through prepare_inputs_for_generation and forward the returned dict to the model's forward call. The prepare_inputs_for_generation method can not be used directly in this case, as many implementations assume they should only keep the last input ID if a past_key_values is passed. Same goes for attention_mask etc. The prepare_inputs_for_assisted_generation method modifies the outputs from prepare_inputs_for_generation so that they are suitable for assisted generation. This should work for most models, but if necessary a model can override this method to implement custom logic. Fixes #25020 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? @gante
07-27-2023 09:41:19
07-27-2023 09:41:19
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25135). All of your documentation changes will be reflected on that endpoint.<|||||>@gante This is ready to review now. Thanks in advance.
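For reference, a minimal assisted-generation call of the kind this PR affects (checkpoints are placeholders; the point of the change is that extra model kwargs passed to `generate` now reach the model's forward call):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2-medium")
assistant = AutoModelForCausalLM.from_pretrained("gpt2")  # smaller draft model, same tokenizer

inputs = tokenizer("Assisted decoding lets a small model draft tokens", return_tensors="pt")
outputs = model.generate(**inputs, assistant_model=assistant, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```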
transformers
25,134
closed
Clarify 4/8 bit loading log message
If you enable 4-bit loading, you will get a message that the model is being loaded in 8-bit. This can be a tad confusing. This tiny PR simply distinguishes in the logging between 4-bit and 8-bit loading.
07-27-2023 08:45:38
07-27-2023 08:45:38
_The documentation is not available anymore as the PR was closed or merged._<|||||>cc @younesbelkada and @SunMarc<|||||>Thanks!
transformers
25,133
closed
make run_generation more generic for other devices
## What does this PR do? Currently, the example for text-generation is only available for cuda or cpu. This PR makes it work well on mps or npu devices. Verified on A100 and npu. Example usage: ``` python3 run_generation.py \ --model_type=gpt2 \ --model_name_or_path=gpt2 ``` Below are the output logs: - On GPU ``` 07/27/2023 11:23:51 - WARNING - __main__ - device: cuda, n_gpu: 8, 16-bits training: False Using pad_token, but it is not set yet. 07/27/2023 11:23:57 - INFO - __main__ - Namespace(model_type='gpt2', model_name_or_path='gpt2', prompt='', length=20, stop_token=None, temperature=1.0, repetition_penalty=1.0, k=0, p=0.9, prefix='', padding_text='', xlm_language='', seed=42, use_cpu=False, num_return_sequences=1, fp16=False, jit=False, device=device(type='cuda'), n_gpu=8) Model prompt >>> "I'm Jack The attention mask and the pad token id were not set. As a consequence, you may observe unexpected behavior. Please pass your input's `attention_mask` to obtain reliable results. Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation. === GENERATED SEQUENCE 1 === "I'm Jack Russell." "They said, 'Hey, you know, you have to get away with ``` - On NPU: ``` 07/27/2023 11:21:47 - WARNING - __main__ - device: npu, n_gpu: 8, 16-bits training: False Using pad_token, but it is not set yet. 07/27/2023 11:22:03 - INFO - __main__ - Namespace(device=device(type='npu'), fp16=False, jit=False, k=0, length=20, model_name_or_path='gpt2', model_type='gpt2', n_gpu=8, num_return_sequences=1, p=0.9, padding_text='', prefix='', prompt='', repetition_penalty=1.0, seed=42, stop_token=None, temperature=1.0, use_cpu=False, xlm_language='') Model prompt >>> "I'm Jack The attention mask and the pad token id were not set. As a consequence, you may observe unexpected behavior. Please pass your input's `attention_mask` to obtain reliable results. Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation. === GENERATED SEQUENCE 1 === "I'm Jack Dylan," guitarist Tim Farrell said, picking up Dylan's Lave Club guitar and in a playful styl ```
07-27-2023 07:39:25
07-27-2023 07:39:25
_The documentation is not available anymore as the PR was closed or merged._<|||||>@muellerzr Hey there! I've addressed some code quality check warnings. Could you take a look at this PR? Thank you!<|||||>> Thanks! There's some errors in here we need to fix, and we can probably also improve the mixed precision to use accelerate if we want to (though if not, that's okay!) @muellerzr I have resolved these errors. Would you kindly spare a moment to review this PR again? Thank you.<|||||>> As @sgugger pointed out, we don't want to wrap the mixed precision here actually during inference and just want to make sure the device is working. As a result we should use a different and simpler API, the [PartialState](https://huggingface.co/docs/accelerate/package_reference/state#accelerate.PartialState), which is designed for such situations. I've added suggestions for each change as a result, and appreciate your patience making sure this all will be great! Thanks for the suggestion to make it more reasonable. I refactored this PR with `PartialState`; would you mind taking a look again?<|||||>Very clean! Thanks!
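A minimal sketch of the `PartialState` pattern referenced above: it resolves the right device (cuda, npu, mps, cpu, ...) without configuring full training state.

```python
from accelerate import PartialState
from transformers import AutoModelForCausalLM, AutoTokenizer

# PartialState picks the appropriate device for the current process.
state = PartialState()

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").to(state.device)

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt").to(state.device)
outputs = model.generate(**inputs, max_new_tokens=10)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```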
transformers
25,132
open
Fine tuning TrOCR on 22 Indian Languages
Yeah, that definitely will change the behaviour. If you check ```python from transformers import TrOCRProcessor, VisionEncoderDecoderModel processor = TrOCRProcessor.from_pretrained("microsoft/trocr-base-stage1") model = VisionEncoderDecoderModel.from_pretrained("microsoft/trocr-base-stage1") print(model.config.decoder.decoder_start_token_id) ``` you'll see that it's set to 2. However, if you set it to `processor.tokenizer.cls_token_id`, then you set it to 0. But the model was trained with ID=2 as the decoder start token ID. _Originally posted by @NielsRogge in https://github.com/huggingface/transformers/issues/15823#issuecomment-1099151683_ ------------------------------------------------------------------------------------------------------------------------ Hi, I have been working on TrOCR recently, and I am very new to these things. I am trying to extend TrOCR to all 22 scheduled Indian languages. From my understanding, I have used the AutoImageProcessor and AutoTokenizer classes, and for the encoder and decoder I have used BEiT and IndicBERTv2 respectively, as IndicBERTv2 supports all 22 languages. In the above-mentioned reply, there seems to be a mismatch wherein the model was originally trained with decoder_start_token_id=2, but when fine-tuning it is being set to tokenizer.cls_token_id, which is 0. So should we explicitly set it to 2 before training? Because after running 3 epochs on a 20M-example dataset, when I'm running inference, it's generating dots and commas.
07-27-2023 07:27:45
07-27-2023 07:27:45
@AnustupOCR This question is better suited for the [Hugging Face Forum](https://discuss.huggingface.co/). The issue page here is for bug reports and feature requests. ------------------- However, it makes sense to try `decoder_start_token_id=2`, but monitor the generation results earlier (don't wait until 3 epochs on 20M examples). BTW, you use `microsoft/trocr-base-stage1`, which has a `RobertaTokenizer` (and an English-only vocabulary). It will be difficult for this model to learn the new languages. It may be better to use a TrOCR checkpoint with an `XLMRobertaTokenizer` if there is one on the Hub.<|||||>@ydshieh Sorry, I will surely shift to the Forum for my future queries. But, to clarify, I am not using microsoft/trocr-base-stage1 as the checkpoint; I will attach the model, tokenizer and image processor I am using. ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------- from transformers import VisionEncoderDecoderModel import torch device = torch.device("cuda" if torch.cuda.is_available() else "cpu") #device="cpu" enc='microsoft/beit-base-patch16-224-pt22k-ft22k' dec='ai4bharat/IndicBERTv2-MLM-only' model = VisionEncoderDecoderModel.from_encoder_decoder_pretrained(enc,dec) model.to(device) ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------- from transformers import AutoImageProcessor, AutoTokenizer,TrOCRProcessor,BeitFeatureExtractor image_processor = BeitFeatureExtractor.from_pretrained("microsoft/beit-base-patch16-224-pt22k-ft22k") tokenizer = AutoTokenizer.from_pretrained("ai4bharat/IndicBERTv2-MLM-only") processor = TrOCRProcessor(feature_extractor = image_processor, tokenizer = tokenizer) #processor = TrOCRProcessor.from_pretrained("microsoft/trocr-base-stage1") train_dataset = IAMDataset(root_dir='/home/ruser1/Anustup/synthtiger-1.2.1/results/bnnewtst/images/', df=train_df, processor=processor) eval_dataset = IAMDataset(root_dir='/home/ruser1/Anustup/synthtiger-1.2.1/results/bnnewtst/images/', df=test_df, processor=processor) ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Any kind of help would really mean a lot. Thank you so much<|||||>So it's not from a pretrained TrOCR (decoder) model, but just a `VisionEncoderDecoderModel`. Note that `ai4bharat/IndicBERTv2-MLM-only` is actually an encoder model (I believe so, but you can verify), not a decoder model for generation. But it should still be able to generate something. The best suggestions I could provide: - run the generation with a small example and see which token is used as the starting token. - run a dummy training, check a bit what the examples (after encoding) look like + check what the model receives as inputs (especially whether the first token is the same as the one seen above) - run the real training, but try to do generation at an earlier stage. You can use `predict_with_generate=True` (and set `do_eval`) to verify if there is some progress
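As a sketch of the extra configuration such a freshly composed encoder–decoder usually needs before training or generation (the token ids come from the chosen tokenizer, they are not fixed values):

```python
from transformers import AutoTokenizer, VisionEncoderDecoderModel

tokenizer = AutoTokenizer.from_pretrained("ai4bharat/IndicBERTv2-MLM-only")
model = VisionEncoderDecoderModel.from_encoder_decoder_pretrained(
    "microsoft/beit-base-patch16-224-pt22k-ft22k",
    "ai4bharat/IndicBERTv2-MLM-only",
)

# A newly composed VisionEncoderDecoderModel has no generation defaults,
# so these must be set explicitly to match the decoder's tokenizer.
model.config.decoder_start_token_id = tokenizer.cls_token_id
model.config.pad_token_id = tokenizer.pad_token_id
model.config.eos_token_id = tokenizer.sep_token_id
model.config.vocab_size = model.config.decoder.vocab_size
```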
transformers
25,131
closed
[`T5/LlamaTokenizer`] default legacy to `None` to not always warn
# What does this PR do? As a follow-up to the patch that introduced the `legacy` argument (#24622), this makes sure people are warned if `legacy` is not set. Since the models online were not changed, this does not really change much!
07-27-2023 07:17:48
07-27-2023 07:17:48
_The documentation is not available anymore as the PR was closed or merged._<|||||>Excellent - thank you for improving this feature, Arthur!
transformers
25,130
open
an inplace operation preventing TorchDistributor training
### System Info databricks ### Who can help? @ArthurZucker @younesbelkada Hi team, I got an error message when using TorchDistributor. I have checked the class BertEmbeddings (URL below): at line 238, `embeddings += position_embeddings` is an in-place operation. Would you be able to change it to `embeddings = embeddings + position_embeddings` to allow TorchDistributor to work? BertEmbeddings URL: https://github.com/huggingface/transformers/blob/main/src/transformers/models/bert/modeling_bert.py TorchDistributor sample code: https://docs.databricks.com/_extras/notebooks/source/deep-learning/torch-distributor-notebook.html Thank you very much! Ling ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ``` single_node_single_gpu_dir = create_log_dir() print("Data is located at: ", single_node_single_gpu_dir) def train_one_epoch(model, device, data_loader, optimizer, epoch): torch.autograd.set_detect_anomaly(True) model.train() for batch_idx, (data, labels) in enumerate(data_loader): inputs1, inputs2 = data[0], data[1] inputs1 = {key: val.to(device) for key, val in inputs1.items()} inputs2 = {key: val.to(device) for key, val in inputs2.items()} # labels = labels.float().to(device) labels = labels.to(device) optimizer.zero_grad() # Compute embeddings embeddings1 = model(inputs1)['sentence_embedding'] embeddings2 = model(inputs2)['sentence_embedding'] # Compute loss loss = cosine_similarity_loss(embeddings1, embeddings2, labels) loss.backward() optimizer.step() if batch_idx % log_interval == 0: print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format( epoch, batch_idx * len(data), len(data_loader) * len(data), 100.
* batch_idx / len(data_loader), loss.item())) if int(os.environ["RANK"]) == 0: mlflow.log_metric('train_loss', loss.item()) def save_checkpoint(log_dir, model, optimizer, epoch): filepath = log_dir + '/checkpoint-{epoch}.pth.tar'.format(epoch=epoch) state = { 'model': model.module.state_dict(), 'optimizer': optimizer.state_dict(), } torch.save(state, filepath) # For distributed training we will merge the train and test steps into 1 main function def main_fn(directory): #### Added imports here #### import mlflow import torch.distributed as dist from torch.nn.parallel import DistributedDataParallel as DDP from torch.utils.data.distributed import DistributedSampler ############################ ##### Setting up MLflow #### # We need to do this so that different processes that will be able to find mlflow os.environ['DATABRICKS_HOST'] = db_host os.environ['DATABRICKS_TOKEN'] = db_token # We set the experiment details here experiment = mlflow.set_experiment(experiment_path) ############################ print("Running distributed training") dist.init_process_group("nccl") local_rank = int(os.environ["LOCAL_RANK"]) global_rank = int(os.environ["RANK"]) if global_rank == 0: train_parameters = {'batch_size': batch_size, 'epochs': num_epochs, 'trainer': 'TorchDistributor'} mlflow.log_params(train_parameters) model = SentenceTransformer(modelname) filepath = "../../dbfs/mnt/path2data/" df_train = readData('train', filepath) df_train = df_train.head(10000) train_text = df_train[['sentA', 'sentB', 'score']].values.tolist() train_examples = [InputExample(texts=[a, b], label=s) for [a, b, s] in train_text] train_dataset = SentencesDataset(train_examples, model) #### Added Distributed Dataloader #### train_sampler = DistributedSampler(dataset=train_dataset) data_loader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size, sampler=train_sampler) ###################################### data_loader.collate_fn = model.smart_batching_collate model = model.to(local_rank) #### Added Distributed Model #### ddp_model = DDP(model, device_ids=[local_rank], output_device=local_rank) ################################# optimizer = torch.optim.AdamW(model.parameters(), lr=learning_rate) for epoch in range(1, num_epochs + 1): train_one_epoch(ddp_model, local_rank, data_loader, optimizer, epoch) if global_rank == 0: save_checkpoint(directory, ddp_model, optimizer, epoch) dist.destroy_process_group() return "finished" # can return any picklable object # single node distributed run to quickly test that the whole process is working with mlflow.start_run(): mlflow.log_param('run_type', 'test_dist_code') main_fn(single_node_single_gpu_dir) ``` ### Expected behavior below error disappear. ![image](https://github.com/huggingface/transformers/assets/48280760/54232ecc-29da-4d37-a0b5-7ca4b8a0d22a)
07-27-2023 05:28:20
07-27-2023 05:28:20
I would be very surprised if this famous `BERT` model has such an issue. Could you provide the system environment, like the PyTorch version? You can run the command `transformers-cli env` and copy-paste its output.<|||||>Actually @ydshieh I think this is pretty valid, and we have a bunch of issues with `inplace operations` preventing `fsdp` training. This is not limited to the embedding; I have seen other places where the code fails. See the linked issue for more details. <|||||>@ArthurZucker Thanks. I know there is such a problem, as I have been involved in #24525. My main concern here: is this issue (for BERT) only happening with `TorchDistributor` (or FSDP, as you said)? In #24525, it seems to happen without these other tools. And BERT has existed for so long, so I am somewhat confused about what exactly triggers this error. <|||||>@ydshieh the system environment is below: - `transformers` version: 4.29.2 - Platform: Linux-5.15.0-1040-azure-x86_64-with-glibc2.35 - Python version: 3.10.6 - Huggingface_hub version: 0.15.1 - Safetensors version: not installed - PyTorch version (GPU?): 1.13.1+cu117 (True) - Tensorflow version (GPU?): 2.11.0 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> <|||||>@ydshieh @ArthurZucker I am working in Azure Databricks. I used Horovod for distributed training, where the in-place operation does not cause any issue, but Horovod on 4 GPUs is only 1.6 times faster than 1 GPU. TorchDistributor can be nearly 4 times faster. However, TorchDistributor does not work due to the in-place operation. I tried subclassing to remove the in-place operations, but it is not easy :). Hopefully you can help release an update. Thanks a lot. <|||||>@ydshieh @ArthurZucker I would suggest doing a thorough check for all in-place operations and getting rid of them all :).
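For reference, here is a minimal, self-contained sketch of the change being requested — illustrative only, not the actual `modeling_bert.py` code: the in-place `+=` mutates the embeddings tensor in memory, while the out-of-place form builds a new tensor and leaves the original untouched, which is what autograd-sensitive distributed wrappers generally expect.

```python
import torch

embeddings = torch.randn(2, 4, 8)
position_embeddings = torch.randn(2, 4, 8)

# current pattern (in-place): mutates `embeddings` in memory
# embeddings += position_embeddings

# requested pattern (out-of-place): allocates a new tensor, original stays intact
embeddings = embeddings + position_embeddings
```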
transformers
25,129
closed
Delete all checkpoints when set save_total_limit=1
### System Info

- `transformers` version: 4.31.0
- Platform: Linux-3.10.0-1160.el7.x86_64-x86_64-with-glibc2.27
- Python version: 3.11.4
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.1
- Accelerate version: 0.21.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>

### Who can help?

@sgugger

### Information

- [X] The official example scripts
- [ ] My own modified scripts

### Tasks

- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)

### Reproduction

1. Set `save_total_limit` in `TrainingArguments` to 1
2. Set `output_dir` in `TrainingArguments`
3. Run `trainer.train()`
4. No checkpoint folder is saved (all are deleted)

I have found the reason and fixed the bug; I am just reporting it here. The cause is this block in [trainer](https://github.com/huggingface/transformers/blob/main/src/transformers/trainer.py#L1963):

```
if self.args.should_save and self.state.best_model_checkpoint is not None and self.args.save_total_limit == 1:
    for checkpoint in checkpoints_sorted:
        if checkpoint != self.state.best_model_checkpoint:
            logger.info(f"Deleting older checkpoint [{checkpoint}] due to args.save_total_limit")
            shutil.rmtree(checkpoint)
```

This line:

```
if checkpoint != self.state.best_model_checkpoint:
```

directly compares the two paths as strings and ignores how the paths are written. For example, in my case, when:

* checkpoint == 'outputs/' and
* self.state.best_model_checkpoint == './outputs',

the strings differ, so the comparison is `True` for every checkpoint and all checkpoints are deleted, including the best model. I fixed this bug by changing the above line to:

```
if str(Path(checkpoint)) != str(Path(self.state.best_model_checkpoint)):
```

### Expected behavior

The best model is saved and other checkpoints are deleted.
07-27-2023 03:57:33
07-27-2023 03:57:33
Hi @Pbihao ! Thank you a lot for reporting this issue. I can confirm it. Would you like to open a PR to help us fix this 🤗 ? <|||||>Yeah, I have submitted the PR #25136 . Many thanks.
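As a standalone illustration of why the plain string comparison misfires (this is a sketch, not the Trainer code itself):

```python
from pathlib import Path

checkpoint = "outputs/"
best_model_checkpoint = "./outputs"

# As raw strings, two spellings of the same directory look different,
# so the best checkpoint is treated as an older one and removed.
print(checkpoint != best_model_checkpoint)  # True

# Normalising both sides through pathlib collapses "./" and trailing slashes.
print(str(Path(checkpoint)) != str(Path(best_model_checkpoint)))  # False
```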
transformers
25,128
closed
make run_generation more generic for other devices
## What does this PR do? Currently, the example for text-generation is only available for cuda or cpu. This PR makes it work well on mps or npu devices. Verified on A100 and npu. Example usage: ``` python3 run_generation.py \ --model_type=gpt2 \ --model_name_or_path=gpt2 ``` Below are the output logs: - On GPU ``` 07/27/2023 11:23:51 - WARNING - __main__ - device: cuda, n_gpu: 8, 16-bits training: False Using pad_token, but it is not set yet. 07/27/2023 11:23:57 - INFO - __main__ - Namespace(model_type='gpt2', model_name_or_path='gpt2', prompt='', length=20, stop_token=None, temperature=1.0, repetition_penalty=1.0, k=0, p=0.9, prefix='', padding_text='', xlm_language='', seed=42, use_cpu=False, num_return_sequences=1, fp16=False, jit=False, device=device(type='cuda'), n_gpu=8) Model prompt >>> "I'm Jack The attention mask and the pad token id were not set. As a consequence, you may observe unexpected behavior. Please pass your input's `attention_mask` to obtain reliable results. Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation. === GENERATED SEQUENCE 1 === "I'm Jack Russell." "They said, 'Hey, you know, you have to get away with ``` - On NPU: ``` 07/27/2023 11:21:47 - WARNING - __main__ - device: npu, n_gpu: 8, 16-bits training: False Using pad_token, but it is not set yet. 07/27/2023 11:22:03 - INFO - __main__ - Namespace(device=device(type='npu'), fp16=False, jit=False, k=0, length=20, model_name_or_path='gpt2', model_type='gpt2', n_gpu=8, num_return_sequences=1, p=0.9, padding_text='', prefix='', prompt='', repetition_penalty=1.0, seed=42, stop_token=None, temperature=1.0, use_cpu=False, xlm_language='') Model prompt >>> "I'm Jack The attention mask and the pad token id were not set. As a consequence, you may observe unexpected behavior. Please pass your input's `attention_mask` to obtain reliable results. Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation. === GENERATED SEQUENCE 1 === "I'm Jack Dylan," guitarist Tim Farrell said, picking up Dylan's Lave Club guitar and in a playful styl ```
07-27-2023 03:34:20
07-27-2023 03:34:20
_The documentation is not available anymore as the PR was closed or merged._
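For context, the device selection this PR aims for looks roughly like the sketch below. The helper name is illustrative and not the exact code in `run_generation.py`; NPU support additionally relies on the optional `torch_npu` plugin, which is omitted here.

```python
import torch

def pick_device(use_cpu: bool = False) -> torch.device:
    # prefer CUDA, then Apple MPS, then fall back to CPU
    if use_cpu:
        return torch.device("cpu")
    if torch.cuda.is_available():
        return torch.device("cuda")
    if torch.backends.mps.is_available():
        return torch.device("mps")
    return torch.device("cpu")

device = pick_device()
```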
transformers
25,127
open
Trainer explodes with multiple validation sets used
### System Info - `transformers` version: 4.32.0.dev0 - Platform: Linux-5.15.0-76-generic-x86_64-with-glibc2.35 - Python version: 3.11.4 - Huggingface_hub version: 0.15.1 - Safetensors version: 0.3.1 - Accelerate version: 0.22.0.dev0 - Accelerate config: not found - PyTorch version (GPU?): 2.0.1 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no, running in notebook - Using distributed or parallel set-up in script?: no, running in notebook ### Who can help? @sgugger ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Using multiple datasets, the trainer looks for a key that doesn't exist and throws an error upon evaluation at the end of the epoch. ![image](https://github.com/huggingface/transformers/assets/2444926/bbc98fb0-b58d-42e0-8992-fef2139c5ad6) ### Expected behavior Metrics are returned just fine. There is no error.
07-27-2023 01:11:29
07-27-2023 01:11:29
Yes, multiple evaluation datasets are not supported in a notebook env, that is a known issue.<|||||>Thank you, Sylvain! 🙂 Didn't know that! Appreciate your reply, will close this now.<|||||>Ah ah you don't need to close it ;-) It's not been high-priority for us but we should fix it at some point.<|||||>Ok, sorry Sylvain 😊 Let me reopen the issue then! I thought it was a known issue as in "we know it doesn't work but it is meant to be broken in notebooks" and I just didn't realize that, but if it is a genuine issue then let me leave it open! 🙂
transformers
25,126
closed
[setup] fix min isort requirements
the current isort min version requirement doesn't match CI's reality even with isort 5.10.1 `make style` leads to modified code which doesn't match CI. I tested that 5.12 syncs with CI.
07-27-2023 00:40:02
07-27-2023 00:40:02
no, actually that didn't help :(<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25126). All of your documentation changes will be reflected on that endpoint.
transformers
25,125
closed
[DOCS] Add example and modified docs of EtaLogitsWarper
# What does this PR do?

See #24783

Added an example to `EtaLogitsWarper` and also modified its docstrings to make it more understandable.

## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?

## Who can review?

@gante
07-27-2023 00:20:53
07-27-2023 00:20:53
@ashishthomaschempolil thank you for iterating 🤗 <|||||>_The documentation is not available anymore as the PR was closed or merged._
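For readers landing here, the kind of usage the new docstring example illustrates is roughly the following (checkpoint choice is arbitrary); setting `eta_cutoff` during sampling activates eta sampling, which is implemented by `EtaLogitsWarper`:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("Hugging Face is", return_tensors="pt")
# do_sample=True enables sampling; eta_cutoff enables the eta warper
outputs = model.generate(**inputs, do_sample=True, eta_cutoff=0.001, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```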
transformers
25,124
closed
added compiled model support for inference
# What does this PR do? Adds support for torch.compile-ed models in pipelines. Basically, you cannot run torch.compile-ed model under `torch.inference_mode()` context and should use `torch.no_grad` instead Examples: <img width="1004" alt="image" src="https://github.com/huggingface/transformers/assets/22663468/04f0bde9-c7d0-4776-9fc4-bd224fadebce"> Here you see start of the super long traceback that ends with: ``` RuntimeError: Inference tensors do not track version counter. While executing %getitem : [#users=1] = call_function[target=operator.getitem](args = (%attention_mask, (slice(None, None, None), None, None, slice(None, None, None))), kwargs = {}) Original traceback: File "/home/alexander/jupyter_server/.venv/lib/python3.8/site-packages/transformers/modeling_utils.py", line 890, in get_extended_attention_mask extended_attention_mask = attention_mask[:, None, None, :] | File "/home/alexander/jupyter_server/.venv/lib/python3.8/site-packages/transformers/models/bert/modeling_bert.py", line 993, in forward extended_attention_mask: torch.Tensor = self.get_extended_attention_mask(attention_mask, input_shape) | File "/home/alexander/jupyter_server/.venv/lib/python3.8/site-packages/transformers/models/bert/modeling_bert.py", line 1758, in forward outputs = self.bert( ``` and ``` BackendCompilerFailed: debug_wrapper raised RuntimeError: Inference tensors do not track version counter. While executing %getitem : [#users=1] = call_function[target=operator.getitem](args = (%attention_mask, (slice(None, None, None), None, None, slice(None, None, None))), kwargs = {}) Original traceback: File "/home/alexander/jupyter_server/.venv/lib/python3.8/site-packages/transformers/modeling_utils.py", line 890, in get_extended_attention_mask extended_attention_mask = attention_mask[:, None, None, :] | File "/home/alexander/jupyter_server/.venv/lib/python3.8/site-packages/transformers/models/bert/modeling_bert.py", line 993, in forward extended_attention_mask: torch.Tensor = self.get_extended_attention_mask(attention_mask, input_shape) | File "/home/alexander/jupyter_server/.venv/lib/python3.8/site-packages/transformers/models/bert/modeling_bert.py", line 1758, in forward outputs = self.bert( ``` Possible solution right now: <img width="692" alt="image" src="https://github.com/huggingface/transformers/assets/22663468/f6b2c599-78b9-4e0c-89b7-31ded7532a52"> ## Who can review? @Narsil
07-26-2023 20:43:05
07-26-2023 20:43:05
_The documentation is not available anymore as the PR was closed or merged._
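A minimal sketch of the workaround shown in the screenshots above (the model choice is arbitrary): run the compiled model under `torch.no_grad()` rather than `torch.inference_mode()`, since the traceback shows the compiled graph cannot handle inference tensors.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)
compiled_model = torch.compile(model)

inputs = tokenizer("This movie was great!", return_tensors="pt")

# torch.inference_mode() would produce "inference tensors" that the compiled graph
# cannot handle here, so plain no_grad is used instead.
with torch.no_grad():
    logits = compiled_model(**inputs).logits
```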
transformers
25,123
open
Add Vocos model
### Model description

Vocos is a Fourier-based neural vocoder for audio synthesis. According to its [paper](https://arxiv.org/pdf/2306.00814.pdf), Vocos consistently outperforms [HifiGan](https://huggingface.co/docs/transformers/main/en/model_doc/speecht5#transformers.SpeechT5HifiGan), has 13.5M params and is significantly faster than competing vocoders! Moreover, it is also compatible with Bark and significantly improves audio quality, as shown [here](https://charactr-platform.github.io/vocos/#audio-reconstruction-from-bark-tokens). Vocos is composed of a backbone (ConvNeXt) and an inverse Fourier transform head (either STFT or MDCT).

### Open source status

- [X] The model implementation is available
- [X] The model weights are available

### Provide useful links for the implementation

Vocos code is available [here](https://github.com/charactr-platform/vocos/tree/main) and was mainly contributed by @hubertsiuzdak. Its weights are available on the HF hub [here](https://huggingface.co/charactr/vocos-mel-24khz) and [here](https://huggingface.co/charactr/vocos-encodec-24khz).
07-26-2023 17:20:17
07-26-2023 17:20:17
transformers
25,122
closed
Move center_crop to BaseImageProcessor
# What does this PR do? Moves center_crop to BaseImageProcessor as the logic is the same for almost all models' image processors (except [bridgetower](https://github.com/huggingface/transformers/blob/659829b6ae558dd2e178462a797bf8b1a749f070/src/transformers/models/bridgetower/image_processing_bridgetower.py#L229) and [owlvit](https://github.com/huggingface/transformers/blob/659829b6ae558dd2e178462a797bf8b1a749f070/src/transformers/models/owlvit/image_processing_owlvit.py#L184)), and is a standard transformation all processors might want to apply. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests?
07-26-2023 16:50:42
07-26-2023 16:50:42
_The documentation is not available anymore as the PR was closed or merged._
transformers
25,121
open
Add copied from for image processor methods
# What does this PR do?

Adds `# Copied from` headers to shared image processor methods to ensure any updates to e.g. docstrings are propagated across. This mainly applies to the methods resize, center_crop and rescale.

This is in part to prepare for any future adaptations to the handling of images with a different number of channels / ambiguous data formats, e.g. adding `input_data_format` arguments or handling an ImageArray object.

## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
07-26-2023 15:49:34
07-26-2023 15:49:34
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25121). All of your documentation changes will be reflected on that endpoint.
transformers
25,120
closed
Fix `.push_to_hub` and cleanup `get_full_repo_name` usage
A few (vaguely) related changes in this PR. The main goal was to fix a bug when `.push_to_hub` is used with a repo_id in the form `"organization/repo_name"` that is also a local working directory. The organization gets removed which pushes the model to `"username/repo_name"` instead of under the organization. This bug has been reported [on slack](https://huggingface.slack.com/archives/C02EMARJ65P/p1690366443112179) (private link) by @NathanHB. In addition to this fix, I also made some changes to get rid of `get_full_repo_name` in most cases. **List of changes:** - fix `src/transformers/utils/hub.py` to work with organization (bug above) - added some ValueError when using deprecated args in addition to existing args - get rid of `get_full_repo_name` in training scripts (no need for it when using `create_repo`), which saves 1 whoami call - removed `get_full_repo_name` from keras_callback.py, modeling_tf_utils.py and trainer.py - import `get_full_repo_name` from `huggingface_hub` instead of re-defining it. I expect nothing to be broken by those changes.
07-26-2023 15:20:23
07-26-2023 15:20:23
_The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks for the review @ydshieh. I added some comments and fix a bug where token was not used (well done finding this one! :wink:). So I think we're good to go now. I'll merge the PR once CI is green :) <|||||>@Wauplin Thanks for the update. Good to merge (you can re-run the failed tests and it should be fine). Regarding the comment, thanks a lot. (What I originally mean is that adding some comments on the PR pages, but it could be on the code too.) <|||||>~@ydshieh I don't think I have the permissions to rerun a failed test. Could you trigger it for me please :pray:~ **EDIT:** I was logged out :smile: I just triggered it.
transformers
25,119
closed
Support End LR for Cosine LR Scheduler
### Feature request

Customize the end learning rate of a cosine LR scheduler, as a non-zero end LR is commonly used now (e.g., LLaMA-2 uses 10% of the peak LR as the end LR).

![image](https://github.com/huggingface/transformers/assets/55196500/4a07777f-968c-47da-b836-0f84dfd07a4d)

### Motivation

A non-zero end LR is commonly used now (e.g., LLaMA-2 uses 10% of the peak LR as the end LR).

### Your contribution

Sorry, I don't think I can be of much help.
07-26-2023 14:50:49
07-26-2023 14:50:49
You can pass along your custom scheduler to the `Trainer` :-)<|||||>OK, Thank you.
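To make the suggested workaround concrete, here is a hedged sketch (names like `cosine_with_min_lr` and `min_ratio` are illustrative, not an existing `transformers` API) of a cosine schedule that decays to 10% of the peak LR and is handed to the `Trainer` through its `optimizers` argument:

```python
import math
from torch.optim.lr_scheduler import LambdaLR

def cosine_with_min_lr(optimizer, num_warmup_steps, num_training_steps, min_ratio=0.1):
    def lr_lambda(step):
        if step < num_warmup_steps:
            return step / max(1, num_warmup_steps)
        progress = (step - num_warmup_steps) / max(1, num_training_steps - num_warmup_steps)
        cosine = 0.5 * (1.0 + math.cos(math.pi * progress))
        # decays from 1.0 down to min_ratio instead of 0
        return min_ratio + (1.0 - min_ratio) * cosine
    return LambdaLR(optimizer, lr_lambda)

# optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)
# scheduler = cosine_with_min_lr(optimizer, num_warmup_steps=100, num_training_steps=10_000)
# trainer = Trainer(model=model, args=training_args, optimizers=(optimizer, scheduler), ...)
```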
transformers
25,118
closed
Fix ViT docstring regarding default dropout values.
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
07-26-2023 14:49:28
07-26-2023 14:49:28
Related to https://github.com/huggingface/transformers/issues/25108 @ydshieh <|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25118). All of your documentation changes will be reflected on that endpoint.
transformers
25,117
closed
EsmForMaskedLM no performance gain from batch processing
### System Info

Transformers version: 4.30.2
Python version: 3.9.16

This occurs on both:
MacBook Pro M2: MacOS 13.2.1 (22D68), ran using mps
AND
Debian 4.19.171-2 x86_64 GNU/Linux, ran using gpu

### Who can help?

@ArthurZucker and @younesbelkada

### Information

- [X] The official example scripts
- [ ] My own modified scripts

### Tasks

- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)

### Reproduction

I'm using the EsmForMaskedLM model with `facebook/esm2_t33_650M_UR50D` along with the EsmTokenizer. If I run inference on 200 sequences, it takes the same amount of time to run 10 forward passes with a batch size of 20 as 100 forward passes with a batch size of 2. This seems to indicate the model doesn't support batch processing under the hood? It seems strange that the interface would imply that it supports batch processing without actually supporting it properly.

```python
inputs = self.tokenizer(sequences, return_tensors="pt")
inputs = inputs.to(self.device)
self.model.base_model(**inputs)
```

### Expected behavior

I would expect running 10 forward passes to be much faster than 100 forward passes.
07-26-2023 14:42:05
07-26-2023 14:42:05
When I run a profiler it seems there is a bottleneck in the `apply_rotary_pos_emb` function called from the ESM sequence embedding. It could be that this is such a large bottle neck, that changing the batch size almost has no effect.<|||||>cc @Rocketknight1 <|||||>Hi It would be good to provide a full code snippet. Currently, it is not super clear that if you are also including the time spent on the tokenization (I guess that's not the case however). And in any case, with a code snippet, it's easier for us to help. Thank you in advance.<|||||>The following code was run on a `Tesla V100-SXM2-16GB`. With a batch size of 10, it executes in 14.96 seconds. With a batch size of 50, it executes in 13.6 seconds. I would expect a much larger change in execution time between the two batch sizes. I would have thought a batch size of 50 would execute five times faster than a batch size of 10. ```python import numpy as np import torch from torch.utils.data import DataLoader from transformers import EsmForMaskedLM, EsmTokenizer import time device = torch.device("cuda") tokenizer = EsmTokenizer.from_pretrained("facebook/esm2_t33_650M_UR50D") model = EsmForMaskedLM.from_pretrained("facebook/esm2_t33_650M_UR50D") model.eval() model = model.to(device) batch_size = 10 samples = 500 sequence_length = 250 tokens = list("ARNDCQEGHILKMFPSTWYV") sequences = ["".join(np.random.choice(tokens, sequence_length)) for _ in range(samples)] t0 = time.time() with torch.no_grad(): for batch_seqs in DataLoader(sequences, batch_size=batch_size): inputs = tokenizer(batch_seqs, return_tensors="pt") inputs = inputs.to(device) model.base_model(**inputs) print(f"Execution time: {time.time() - t0}") ```<|||||>Thanks for the code snippet 🤗 <|||||>Hi @M-J-Murray, I think there are a few confounding things here. Firstly, the ESM tokenizer is relatively unoptimized. This means tokenization takes longer than it does for other models. If performance is critical, I would strongly recommend tokenizing your sequences once, then saving the tokenized outputs, rather than tokenizing them on-the-fly in each loop. This applies to all models, but especially to ESM! Secondly, performance does not scale linearly with batch size. The same amount of computation has to be done for 100 batches of 2, or 2 batches of 100. The main reason to use larger batch sizes is that larger batches generally allow GPUs to do more work in parallel, which is helpful when the model is small, as small batch sizes in small models generally cannot use all the power of a high-end GPU at once. There is also the further benefit during training that fewer optimizer update steps are needed, but this does not apply when you're doing inference. In this case, though, the model has 650M parameters, which is reasonably large. I would guess that even smaller batch sizes are enough to saturate a V100 GPU for a model of this size, so the performance benefit of larger batches would not be that significant. I think this, combined with the additional constant time added to your measurements from running the tokenizer in the loop, is enough to explain the lack of benefit, and the model is actually working as expected!<|||||>@Rocketknight1 Thank you, I've just validated and it does seem the tokenizer is the main bottle neck here. I will write my own tokenizer for now.
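Building on the advice above, a hedged sketch of moving tokenization out of the timing loop — it reuses `tokenizer`, `model`, `device`, `batch_size` and `sequences` from the benchmark script posted in this thread, and since the random sequences all share one length, padding is effectively a no-op here:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Tokenize every sequence once up front, then batch the pre-built tensors,
# so the relatively slow ESM tokenizer is outside the timed loop.
encoded = tokenizer(sequences, return_tensors="pt", padding=True)
dataset = TensorDataset(encoded["input_ids"], encoded["attention_mask"])

with torch.no_grad():
    for input_ids, attention_mask in DataLoader(dataset, batch_size=batch_size):
        model.base_model(
            input_ids=input_ids.to(device),
            attention_mask=attention_mask.to(device),
        )
```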
transformers
25,116
closed
[`MptConfig`] support from pretrained args
# What does this PR do?

Adds a setter for `attn_config` to allow passing a dict when initializing, for backward compatibility.

Fixes https://github.com/huggingface/transformers/issues/25114
07-26-2023 14:20:48
07-26-2023 14:20:48
_The documentation is not available anymore as the PR was closed or merged._<|||||>cc @sgugger
transformers
25,115
closed
Fix beam search to sample at least 1 non eos token (#25103)
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
07-26-2023 13:29:44
07-26-2023 13:29:44
done<|||||>ok now done :) sorry for all the force pushes I just wasn't sure how you guys merge so I preferred to keep 1 clean commit<|||||>> sorry for all the force pushes I just wasn't sure how you guys merge so I preferred to keep 1 clean commit @yonigottesman no worries, we squash before merging ;)<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25115). All of your documentation changes will be reflected on that endpoint.
transformers
25,114
closed
MptForCausalLM.from_pretrained gives error 'dict' object has no attribute 'softmax_scale'
### System Info Creating model MPT: ``` model = MptForCausalLM.from_pretrained( model_args.model_name_or_path, cache_dir=training_args.cache_dir, torch_dtype=torch.bfloat16, use_cache=False, init_device=f"cuda:{local_rank}", attn_config=dict(attn_impl="flash", softmax_scale=None), # triton, flash ) ``` Gives the following error: ``` File "/home/anton/personal/stanford_alpaca-replit/env/lib/python3.10/site-packages/transformers/models/mpt/modeling_mpt.py", line 258, in __init__ self.attn = MptAttention(config) File "/home/anton/personal/stanford_alpaca-replit/env/lib/python3.10/site-packages/transformers/models/mpt/modeling_mpt.py", line 137, in __init__ self.softmax_scale = config.attn_config.softmax_scale AttributeError: 'dict' object has no attribute 'softmax_scale' ``` ### Who can help? ### Information ### Tasks ### Reproduction ``` model = MptForCausalLM.from_pretrained( model_args.model_name_or_path, cache_dir=training_args.cache_dir, torch_dtype=torch.bfloat16, use_cache=False, init_device=f"cuda:{local_rank}", attn_config=dict(attn_impl="flash", softmax_scale=None), # triton, flash ) ``` ### Expected behavior Should be using `MptAttentionConfig` instead is using a dict object
07-26-2023 13:05:54
07-26-2023 13:05:54
cc @younesbelkada and @ArthurZucker <|||||>Hey! Thanks for reporting. Indeed this does not work as the `attn_config` object does not seem to get converted to a `MptAttentionConfig` object. The `MptAttention` still needs the full config. Will open a PR to fix this as I guess this was previously working! <|||||>It is not something that we usually support. For composition models like CLIP, CLAP etc, this would not work: ```python from transformers import CLIPModel CLIPModel.from_pretrained("openai/clip-vit-base-patch16", text_config = dict(num_hidden_layers = 2)) .... │ /home/arthur_huggingface_co/transformers/src/transformers/models/clip/configuration_clip.py:411 │ │ in to_dict │ │ │ │ 408 │ │ │ `Dict[str, any]`: Dictionary of all the attributes that make up this configu │ │ 409 │ │ """ │ │ 410 │ │ output = copy.deepcopy(self.__dict__) │ │ ❱ 411 │ │ output["text_config"] = self.text_config.to_dict() │ │ 412 │ │ output["vision_config"] = self.vision_config.to_dict() │ │ 413 │ │ output["model_type"] = self.__class__.model_type │ │ 414 │ │ return output │ ╰──────────────────────────────────────────────────────────────────────────────────────────────────╯ AttributeError: 'dict' object has no attribute 'to_dict' ``` We allow: ```python from transformers import CLIPModel, CLIPTextConfig text_config = CLIPTextConfig(num_hidden_layers = 2) CLIPModel.from_pretrained("openai/clip-vit-base-patch16", text_config = text_config) ``` However for backward compatibility, #25116 will fix this for MPT
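A hedged sketch of the supported pattern described above — build the nested attention config as a config object rather than a plain dict. The checkpoint name is a placeholder, and the exact fields exposed by the HF-native `MptAttentionConfig` may differ from the original MosaicML config, so treat the keyword arguments as illustrative:

```python
import torch
from transformers import MptConfig, MptForCausalLM
from transformers.models.mpt.configuration_mpt import MptAttentionConfig

model_name_or_path = "mosaicml/mpt-7b"  # placeholder checkpoint

config = MptConfig.from_pretrained(model_name_or_path)
config.attn_config = MptAttentionConfig(softmax_scale=None)  # config object, not a dict

model = MptForCausalLM.from_pretrained(
    model_name_or_path,
    config=config,
    torch_dtype=torch.bfloat16,
)
```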
transformers
25,113
closed
Fix past CI after #24334
# What does this PR do? Fix dtype issue in past CI (pytorch 1.11 and 1.10) after #24334 cc @gante
07-26-2023 12:53:26
07-26-2023 12:53:26
@ydshieh thank you for fixing it 🙏 <|||||>_The documentation is not available anymore as the PR was closed or merged._
transformers
25,112
closed
`Gradient clipping` function is not compatible with upgrade
### System Info `Gradient clipping` function is not compatible with upgrade. transformers4.28.1: ![image](https://github.com/huggingface/transformers/assets/39549453/52744be1-ebaf-4259-aacd-438e495bf9d2) Transformers4.30.2: ![image](https://github.com/huggingface/transformers/assets/39549453/b8b9fe7a-ca3e-4b85-8b6d-0598391e9663) **The old and new versions do not support fp16 uniformly.** ### Who can help? @sgugger,@pacman100 ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction you can run with transformers. ```shell torchrun \ --nproc_per_node $NUM_GPU \ --master_port $PORT_ID \ run_bloom.py \ --model_name_or_path facebook/opt-125m \ --use_fast_tokenizer False \ --train_file $TRAIN \ --validation_file $VAL \ --test_file $TEST \ --max_seq_length 512 \ --output_dir $OUTPUT \ --do_train True\ --do_eval False \ --do_predict False \ --evaluation_strategy no \ --eval_steps 1000000000 \ --per_device_train_batch_size 32 \ --per_device_eval_batch_size 16 \ --learning_rate 1e-5 \ --optim adamw_torch \ --adam_beta2 0.95 \ --weight_decay 0.1 \ --num_train_epochs 1 \ --lr_scheduler_type constant_with_warmup \ --warmup_ratio 0.1 \ --logging_first_step True \ --logging_steps 10 \ --logging_nan_inf_filter False \ --save_strategy steps \ --save_steps 10000 \ --save_total_limit 3 \ --fp16 True \ --disable_tqdm False \ --log_on_each_node False \ --report_to tensorboard \ ``` **important configuration**: --fp16 True ### Expected behavior Will not report bugs.
07-26-2023 12:06:21
07-26-2023 12:06:21
The `unscale_gradients` is done in the first screenshot a couple of lines above. I'm not sure what it is you are reporting as a bug.<|||||>> unscale_gradients I ran the same code in the same environment; the only difference is the version of transformers. transformers 4.28.1 runs normally ![Pasted Graphic 3](https://github.com/huggingface/transformers/assets/39549453/1132d543-e653-457b-8b58-2afbb9599488) There is a problem with transformers 4.31.0 ![Pasted Graphic 4](https://github.com/huggingface/transformers/assets/39549453/05271b59-0d82-4dea-8fc3-2fb3b5fc5864) I found that the problem lies in the `clip_grad_norm_` function of accelerate. When it performs the `self.unscale_gradients()` calculation, it seems to support `amp` training but not `pure fp16` computation, and there is no guard for that case. What is odd is that the old and new versions are not compatible here. I don't know if my understanding is correct. ![image](https://github.com/huggingface/transformers/assets/39549453/1fde1d6d-603a-4a05-9e55-c32e394d44e0) @sgugger <|||||>Hello @Baibaifan, you shouldn't explicitly call model.half() or model.to(torch.float16) when using amp. See this PyTorch forum message: https://discuss.pytorch.org/t/valueerror-attemting-to-unscale-fp16-gradients/81372/14 Could you please make sure to remove such lines and rerun and see if that resolves the issue?<|||||>> https://discuss.pytorch.org/t/valueerror-attemting-to-unscale-fp16-gradients/81372/14 If I don't want to run under `amp` and just want to run under `pure fp16`, the 4.28.1 version is fine, but 4.31.0 reports an error. I want to know why. @pacman100 <|||||>@Baibaifan we don't support pure fp16 training in the `Trainer` as it doesn't converge. You can use pure `fp16` evaluation with the `--fp16_full_eval` flag.<|||||>> @Baibaifan we don't support pure fp16 training in the `Trainer` as it doesn't converge. You can use pure `fp16` evaluation with the `--fp16_full_eval` flag. OK, thanks. @sgugger
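For clarity, the configuration the maintainers point to above looks like the following sketch: mixed-precision (AMP) training, optionally combined with pure-fp16 evaluation, instead of casting the whole model to fp16 for training.

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",
    fp16=True,            # AMP training: weights stay fp32, autocast + GradScaler are used
    fp16_full_eval=True,  # evaluation runs in pure fp16
)
```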
transformers
25,111
open
ValueError: Connection error, and we cannot find the requested files in the cached path
### System Info transformers version: 4.8.1 python version: 3.8 platform: Ubuntu 20.04.1 LTS ### Who can help? @ArthurZucker, @younesbelkada ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Just run the below python script ``` from transformers import BertTokenizerFast tokenizer = BertTokenizerFast.from_pretrained('bert-base-uncased') ``` We can get error messages ``` Traceback (most recent call last): File "1.py", line 3, in <module> tokenizer = BertTokenizerFast.from_pretrained('bert-base-uncased') File "/home/jzwang/miniconda3/envs/torch1.10/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 1672, in from_pretrained resolved_vocab_files[file_id] = cached_path( File "/home/jzwang/miniconda3/envs/torch1.10/lib/python3.8/site-packages/transformers/file_utils.py", line 1329, in cached_path output_path = get_from_cache( File "/home/jzwang/miniconda3/envs/torch1.10/lib/python3.8/site-packages/transformers/file_utils.py", line 1552, in get_from_cache raise ValueError( ValueError: Connection error, and we cannot find the requested files in the cached path. Please try again or make sure your Internet connection is on. ``` And I tried to download the file in the terminal using `wget https://huggingface.co/bert-base-uncased/blob/main/vocab.txt` but also failed. I do not know if there are any safety policies to prevent access. I checked the error code which is 104. I guess maybe some services have malfunctioned. I think it works well before about 2023-07-26 4:00 P.M. ### Expected behavior Hope you can give me some advice or fix the bug.
07-26-2023 11:54:32
07-26-2023 11:54:32
Hey! Pretty sure this was just a temporary failure! It is back up for me! <|||||>Thanks for your reply. But the problem still exists for me. I debugged the code of Transformers 4.8.1 and found there is an variable called `vocab_files` in PreTrainedTokenizerBase.from_pretrained method. `vocab_files` contains {'vocab_file': 'https://huggingface.co/bert-base-uncased/resolve/main/vocab.txt', 'tokenizer_file': 'https://huggingface.co/bert-base-uncased/resolve/main/tokenizer.json', 'added_tokens_file': 'https://huggingface.co/bert-base-uncased/resolve/main/added_tokens.json', 'special_tokens_map_file': 'https://huggingface.co/bert-base-uncased/resolve/main/special_tokens_map.json', 'tokenizer_config_file': 'https://huggingface.co/bert-base-uncased/resolve/main/tokenizer_config.json'}. But the website `https://huggingface.co/bert-base-uncased/resolve/main/added_tokens.json` and `https://huggingface.co/bert-base-uncased/resolve/main/special_tokens_map.json` show `Entry not found`. I guess it is why the bug appears.<|||||>Pretty sure this has been fixed since then, you should consider using a more recent version of Transformers, 4.8.1 is more than 2 years old.<|||||>Hi! I encountered the same problem. When reproducing the code of this paper (https://github.com/rshaojimmy/MultiModal-DeepFake), I used the same version 4.8.1 transformers, and executed this line of code `tokenizer = BertTokenizerFast.from_pretrained(args.text_encoder)` prompts such an error: `ValueError: Connection error, and we cannot find the requested files in the cached path. Please try again or make sure your Internet connection is on.` I found it has access this link(https://huggingface.co/bert-base-uncased/resolve/main/vocab.txt) but it returns 404 And I upgraded the version of transformers to 4.10.1 and reported this error: `requests.exceptions.ConnectionError: (ProtocolError('Connection aborted.', ConnectionResetError(104, 'Connection reset by peer')), '(Request ID: 9709381c-b025-435d-aba5-aef8667c4d1a)') ` I saw [this blog](https://blog.csdn.net/weixin_43301333/article/details/128080461), I guess it's a network problem
transformers
25,110
open
Feature request: Adding einops as a Dependency
### Feature request I propose that the einops library be added as a dependency in the HuggingFace Transformers library. einops (Einstein Notation Operations) is a Python library that provides a more expressive language for tensor operations. This would offer highly readable and maintainable code for complex tensor reshaping and rearranging operations. einops GitHub page: https://github.com/arogozhnikov/einops ### Motivation The addition of einops as a dependency would greatly facilitate the integration of new models that already use it into the Transformers library. Adding einops would require less refactoring in adding these models, making the code easier to read, and will decrease the amount of time spent on understanding and maintaining the code. Given the potential benefits that the inclusion of einops could bring to the Transformers library, and considering its impressive community support highlighted by over 7,000 stars on GitHub, I think that this proposal merits serious consideration. I suggest that we put this feature request up for a community vote. This would allow all contributors to weigh in on the decision. ### Your contribution .
07-26-2023 11:38:28
07-26-2023 11:38:28
There is actually a (recent) discussion: > There have been lots of discussions on this in the past, the TLDR is that we aren't supporting it cause it causes issues with things like ONNX and the approach is more > just re-wrote that part in pure pytorch <|||||>For me there is no reason to add a dependency when `einops` would actually be detrimental to all the work done to optimize inference (quantization, ONNX etc.). We can revisit this if the support ends up being on par with classic PyTorch operations, but we shouldn't make the code easier to read if it's not as efficient/supported by the ecosystem.<|||||>einops has torch.compile support, so why should it be incompatible with onnx? https://github.com/arogozhnikov/einops/wiki/Using-torch.compile-with-einops<|||||>It might have been the case at some point in the past (I am not the one who was involved in this topic previously), and I am not sure of the status at this moment. A further investigation to make sure the whole ecosystem will run smoothly with models using `einops` is necessary before we add it as a dependency. But the teams have their own priorities, and we would like to see how the community reacts to this feature request first.
transformers
25,109
closed
🚨🚨🚨Change default from `adamw_hf` to `adamw_torch` 🚨🚨🚨
# What does this PR do? This PR changes the default from `adamw_hf` to `adamw_torch`, as noted in https://github.com/huggingface/transformers/issues/25006 which fixes some breaking issues. Note that https://github.com/huggingface/transformers/issues/22141 still needs to be fulfilled once torch 2.1.0 is released (sometime in the next few months I imagine, as we're on 2.0.1) and swap it to be `ADAMW_TORCH_FUSED`. Fixes # (issue) Solves #25006 ## Maintaining old behavior To keep the old behavior prior to this change, ensure that you pass `"adamw_hf"` as the `optim` in your `TrainingArguments` ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger @stas00
07-26-2023 11:33:25
07-26-2023 11:33:25
_The documentation is not available anymore as the PR was closed or merged._
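For anyone who wants to pin the previous behaviour after this default change, a minimal sketch:

```python
from transformers import TrainingArguments

# After this PR the default optimizer is "adamw_torch"; passing optim="adamw_hf"
# explicitly restores the old default.
args = TrainingArguments(output_dir="out", optim="adamw_hf")
```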
transformers
25,108
closed
Wrong default values according to docstrings?
https://github.com/huggingface/transformers/blob/d941f07a4e3bc7b61b7afbd25d6e2e8427fccc6d/src/transformers/models/vit/configuration_vit.py#L100 https://github.com/huggingface/transformers/blob/d941f07a4e3bc7b61b7afbd25d6e2e8427fccc6d/src/transformers/models/vit/configuration_vit.py#L101
07-26-2023 11:14:00
07-26-2023 11:14:00
Hi @ebezzam I am not sure I understand the question. Could you describe it more precisely? Thanks.<|||||>Hi @ydshieh thank you for your quick response. The docstring of [ViTConfig](https://github.com/huggingface/transformers/blob/d941f07a4e3bc7b61b7afbd25d6e2e8427fccc6d/src/transformers/models/vit/configuration_vit.py#L35) says that the default values for [`hidden_dropout_prob`](https://github.com/huggingface/transformers/blob/d941f07a4e3bc7b61b7afbd25d6e2e8427fccc6d/src/transformers/models/vit/configuration_vit.py#L58) and [`attention_probs_dropout_prob`](https://github.com/huggingface/transformers/blob/d941f07a4e3bc7b61b7afbd25d6e2e8427fccc6d/src/transformers/models/vit/configuration_vit.py#L60) are `0.1`, but in the code (linked above) they are set to `0.0`.<|||||>Thanks a lot @ebezzam , super clear now 🤗 Will take a look <|||||>Hi @ebezzam From [the original ViT codebase](https://github.com/google-research/vision_transformer/blob/ac6e056f9da686895f9f0f6ac026d3b5a464e59e/vit_jax/configs/models.py#L123), they should be `0.0` for the `ViT-B/16` model. The value `0.1` in the docstring is likely from a copy-paste. Would you like to open a PR to help us correct those 2 values in the docstring? Thanks.<|||||>@ydshieh thanks for looking into that. Yes I can open a PR and link to this issue.<|||||>Thanks a lot 🤗 <|||||>@ydshieh done!<|||||>Thank you again @ebezzam 🤗
transformers
25,107
open
add util for ram efficient loading of model when using fsdp
# What does this PR do? 1. Fixes an issue explained in https://github.com/pytorch/pytorch/issues/105840 when using FSDP for training very large models. Should be merged after https://github.com/huggingface/accelerate/pull/1777 Currently, when using FSDP, the model is loaded for each of the N processes completely on CPU leading to huge CPU RAM usage. When training models like Flacon-40B with FSDP on a dgx node with 8 GPUs, it would lead to CPU RAM getting out of memory because each process is loading 160GB (40B x 4Bytes (FP32)) in CPU RAM for a total of 160*8=1280GB requirement which results in script getting killed due to out of CPU RAM. To combat this, we load the model only on rank 0 and have it on meta device when rank!=0. Then use no-op param_init_fn along with sync_module_states=True for FSDP to properly init the weights on other ranks and broadcast the params from rank 0 to other ranks. Usage: 1. FSDP config with `sync_module_states` set to True as shown below (config.yaml): ```yaml compute_environment: LOCAL_MACHINE debug: false distributed_type: FSDP downcast_bf16: 'no' fsdp_config: fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP fsdp_backward_prefetch_policy: BACKWARD_PRE fsdp_forward_prefetch: true fsdp_offload_params: false fsdp_sharding_strategy: 1 fsdp_state_dict_type: FULL_STATE_DICT fsdp_sync_module_states: true fsdp_transformer_layer_cls_to_wrap: BertLayer fsdp_use_orig_params: true machine_rank: 0 main_training_function: main mixed_precision: 'no' num_machines: 1 num_processes: 2 rdzv_backend: static same_network: true tpu_env: [] tpu_use_cluster: false tpu_use_sudo: false use_cpu: false ``` 2. Code snippet changes: ```diff + from transformers import load_pretrained_model_only_on_rank0 - model = AutoModelForSequenceClassification.from_pretrained(args.model_name_or_path, return_dict=True) + model = load_pretrained_model_only_on_rank0( AutoModelForSequenceClassification, AutoConfig, args.model_name_or_path ) ``` 3. Pass the model to Trainer if using that. The example code repo leveraging this is here: https://github.com/pacman100/ram_efficient_fsdp. 
Without the util: ``` accelerator.process_index=1 CPU Memory before entering the train : 921 accelerator.process_index=1 CPU Memory consumed at the end of the loading (end-begin): 426 accelerator.process_index=1 CPU Peak Memory consumed during the loading (max-begin): 844 accelerator.process_index=1 CPU Total Peak Memory consumed during the loading (max): 1765 accelerator.process_index=0 CPU Memory before entering the train : 919 accelerator.process_index=0 CPU Memory consumed at the end of the loading (end-begin): 427 accelerator.process_index=0 CPU Peak Memory consumed during the loading (max-begin): 844 accelerator.process_index=0 CPU Total Peak Memory consumed during the loading (max): 1763 ``` With `load_model_from_pretrained_only_on_rank0`: ``` accelerator.process_index=1 CPU Memory before entering the train : 920 accelerator.process_index=1 CPU Memory consumed at the end of the loading (end-begin): 6 accelerator.process_index=1 CPU Peak Memory consumed during the loading (max-begin): 5 accelerator.process_index=1 CPU Total Peak Memory consumed during the loading (max): 925 accelerator.process_index=0 CPU Memory before entering the train : 918 accelerator.process_index=0 CPU Memory consumed at the end of the loading (end-begin): 427 accelerator.process_index=0 CPU Peak Memory consumed during the loading (max-begin): 844 accelerator.process_index=0 CPU Total Peak Memory consumed during the loading (max): 1762 ``` **So you can see that during loading Rank 1 doesn't take any more CPU RAM. And the performance between both setups matches.** To Do: - [ ] Add docs in the FSDP section
07-26-2023 10:50:49
07-26-2023 10:50:49
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25107). All of your documentation changes will be reflected on that endpoint.
transformers
25,106
closed
Fix `PvtModelIntegrationTest::test_inference_fp16`
# What does this PR do? Failing ```bash FAILED tests/models/pvt/test_modeling_pvt.py::PvtModelIntegrationTest::test_inference_fp16 - ValueError: PvtForImageClassification does not support `device_map='auto'`. To implement support, the modelclass needs to implement the `_no_split_modules` attribute. ``` A fix for this test will fix the other 2 shown in the report (due to bad GPU state or something else)
07-26-2023 10:12:30
07-26-2023 10:12:30
_The documentation is not available anymore as the PR was closed or merged._
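For context on the error message above, a toy sketch (illustrative only, not the actual Pvt code or the fix applied in this PR) of what declaring `_no_split_modules` looks like — `device_map="auto"` refuses to place a `PreTrainedModel` subclass unless it lists the submodule classes that must stay on a single device:

```python
from torch import nn
from transformers import PretrainedConfig, PreTrainedModel

class ToyConfig(PretrainedConfig):
    model_type = "toy"

class ToyBlock(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(4, 4)

    def forward(self, x):
        return self.linear(x)

class ToyModel(PreTrainedModel):
    config_class = ToyConfig
    # modules listed here are never split across devices by accelerate
    _no_split_modules = ["ToyBlock"]

    def __init__(self, config):
        super().__init__(config)
        self.block = ToyBlock()

    def forward(self, x):
        return self.block(x)
```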